At the Practising Law Institute’s 56th Annual Institute on Securities Regulation, panelists discussed how public companies are addressing cybersecurity and artificial intelligence (AI) related issues.

Cybersecurity Disclosure Landscape

As cyber threats continue to evolve and challenge companies, the SEC has honed its focus on corporate disclosures related to cybersecurity incidents, risk management, and governance practices.  In July 2023, the SEC adopted rules requiring disclosure of material cybersecurity incidents on Form 8-K and annual disclosure of cybersecurity risk management, strategy, and governance, with the goal of ensuring consistent and comparable disclosures across different companies.

Recent SEC comments emphasize the importance of disclosing management’s expertise in cybersecurity oversight and ensuring clear, accurate disclosures.  Companies are advised to avoid overstating compliance with cybersecurity frameworks, such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework, unless they are fully compliant.  Accurate disclosure can be achieved through ongoing collaboration with the information security team, which can help ensure that the appropriate terminology is used and that the company’s efforts are accurately represented.  Companies should also ensure consistency in their disclosures, particularly regarding third-party involvement in cybersecurity efforts.  Finally, risk factors should be updated regularly to reflect actual incidents, rather than hypothetical risks that may no longer be relevant.

Although the SEC’s cybersecurity disclosure rule does not require disclosures in proxy or registration statements, it remains relevant to those filings.  For example, cybersecurity issues may need to be addressed, where relevant, in various sections of a registration statement, such as the risk factors, MD&A, or business description.  Additionally, given the SEC’s heightened focus on insider trading, companies should consider how their insider trading policies would apply in the event of a cybersecurity incident.

AI Disclosures

Companies embracing generative AI face the challenge of balancing innovation with responsible disclosure, while avoiding pitfalls like overstating AI capabilities (sometimes known as “AI-washing”).  The SEC is also focusing on emerging risks related to AI and its evolving role in corporate operations.  Thus far, the regulatory approach to AI disclosure generally emphasizes avoiding AI-washing and addressing the potential risks associated with the technology.

AI-related disclosures are becoming more common in S&P 500 and Fortune 100 filings as companies consider whether and how to disclose AI-related risks.  These risks typically fall into five categories: (a) cybersecurity and AI risks, (b) regulatory risks, (c) ethical and reputational risks, (d) operational risks, and (e) competition risks.  Companies providing AI-related disclosure should review their disclosure processes to avoid inaccuracies and inconsistencies in their public statements, and should regularly revisit AI disclosures to reflect evolving risks, industry developments, and regulatory expectations.  Companies also should consider whether a specific committee or the full Board should be responsible for overseeing AI-related risks.  As illustrated by cases like SEC v. Raz and recent FTC enforcement actions, inaccurate AI-related claims could lead to SEC enforcement actions, securities class action lawsuits, and even criminal proceedings.