In recent wide-ranging remarks punctuated with movie references and analogies (some of which, I confess, were lost on me), Securities and Exchange Commission Chair Gensler spoke about various aspects of Artificial Intelligence (“AI”).  The Chair noted, as he has consistently, that AI may pose or aggravate systemic risk, pointing to herding and network interconnectedness.  To the extent, for example, that market participants overrely on a single AI model, or on a few models or data aggregators, risks may be magnified, and AI may therefore pose challenges to financial stability.  Chair Gensler also cautioned that AI models may take into account compliance with current rules and regulations but fail to update for changes in regulations, market conditions and disclosures.  He raised as well the possibility that AI models are learning, changing and adapting on their own, making it harder to implement appropriate guardrails for regulatory compliance.

The Chair touched on “AI washing,” which raises many of the same concerns as “greenwashing”: are public companies making false claims and trying to profit from the allure of AI-related technologies?  This was presumably an enforcement warning.  Chair Gensler reminded companies that the “basics of good securities lawyering still apply.”  Claims regarding future prospects should have a reasonable basis, and that basis should be explained to investors.  Material risks should be disclosed and tailored to the specific issuer.  Investment advisers and broker-dealers, as regulated entities, also should be clear about when they are using an AI model and the manner in which they are using it.  He raised the prospect that an AI model used by an investment adviser or broker-dealer may hallucinate an unsuitable or conflicted investment recommendation; to the extent these firms use AI models to render advice or make recommendations, they need to take steps to ensure that the advice or recommendations are not based on a hallucination or on inaccurate information.  Of course, Gensler commented on, and ended with, the use of predictive data analytics, suggesting once again that the proposed rule is intended to address potential or actual conflicts of interest across a range of interactions.

The full text of Chair Gensler’s remarks is available here.