Artificial Intelligence or Illusions: The SEC’s Crackdown on Misleading AI Claims
The U.S. Securities and Exchange Commission (“SEC”) has recently intensified its scrutiny of artificial intelligence (“AI”) fraud by targeting misleading claims about AI in the investment space. (SEC Press Release). This enforcement effort, focused on preventing a deceptive marketing tactic called “AI washing,” aligns with a broader regulatory trend of ensuring transparency in AI disclosures. Id. The SEC has pursued enforcement actions against public companies and investment advisers that exaggerate their AI capabilities or falsely claim to integrate AI into decision-making processes. (Kevin Friedmann, et al., Norton Rose Fulbright). As AI technology becomes more prevalent in the financial and corporate sectors, companies must navigate these regulations carefully to maintain compliance and investor confidence. This post examines the SEC’s enforcement actions, anticipates the agency’s future focus, and explores the implications for advisers and businesses that use AI.
In March 2024, the SEC settled charges against two investment advisers that misled investors about their AI usage, in violation of the Investment Advisers Act of 1940 (the “Advisers Act”). (SEC Press Release). The Advisers Act, a federal law enforced by the SEC, was enacted to protect the public from fraudulent and unethical practices in the investment industry by requiring advisers to register with the SEC, thereby ensuring transparency and accountability. (Roberta S. Karmel, BrooklynWorks). Over time, amendments expanded the SEC’s authority to enforce antifraud provisions, impose fiduciary duties, and regulate conflicts of interest, reflecting a broader policy goal of maintaining market integrity and protecting investors from exploitation. Id. The first AI washing settlement concerned Delphia (USA) Inc. (“Delphia”), an investment firm that operated as a “robo-adviser,” using automated, algorithm-driven software to provide financial planning and portfolio management with minimal human involvement. (Vanessa A. Countryman, Settlement Order). Delphia claimed to use social media, banking, and other account data to make its investment decisions “more robust and accurate.” Id. During a 2021 SEC examination, Delphia admitted it had not used any client data or created an algorithm that relied on such data to manage portfolios. Id. Despite this admission, Delphia continued to claim it used client data in emails to investors and on social media. Id. As a result, the SEC found that Delphia violated the Advisers Act’s marketing and compliance rules. Id.
The second settlement concerned Global Predictions, Inc. (“Global Predictions”), which falsely touted “expert AI-driven forecasts” and claimed to be the “first regulated AI financial advisor.” (Vanessa A. Countryman, Settlement Order). The SEC found these statements were deceptive and determined that Global Predictions had violated the Advisers Act. Id. As part of its remediation, Global Predictions retained a compliance consultant to review its marketing materials. Id.
These cases sent a clear message that firms making unsubstantiated AI claims risk severe legal consequences, including securities fraud charges and reputational harm. In response to growing AI washing concerns, the Federal Trade Commission (“FTC”) announced a new regulatory crackdown, Operation AI Comply, in September 2024. (FTC Press Release). In announcing the new operation, FTC Chair Lina M. Khan stated, “‘[t]he FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.’” Id. At the state level, attorneys general from Texas, Massachusetts, and California, among others, have echoed this stance, cautioning that “companies employing AI must ensure those uses comply with existing laws.” (Robert A. Cohen, et al., NYU).
The SEC’s focus on AI fraud will likely intensify as AI becomes more embedded in financial services and corporate operations. SEC Director Gurbir S. Grewal has pledged to protect investors from AI washing, warning the investment industry, “if you claim to use AI in your investment processes, you need to ensure that your representations are not false or misleading.” (SEC Press Release). Additionally, the SEC has issued investor alerts warning of exaggerated AI claims like “‘Use AI to Pick Guaranteed Stock Winners!’” or “‘Our proprietary AI trading system can’t lose!’” (SEC Investor Alert). These developments may lead to new reporting requirements or heightened scrutiny during routine SEC examinations, prompting firms to strengthen their internal oversight of AI-related disclosures. (Kevin Friedmann, et al., Norton Rose Fulbright). Moreover, the SEC will likely handle individual liability in AI cases much as it does cybersecurity disclosure failures. Id. The agency will look at whether individuals knew or should have known about misleading statements and what actions they took, or failed to take, to prevent them. Id. Managers who act in good faith and take reasonable steps to ensure accurate reporting are unlikely to face personal liability. Id. However, as more companies adopt AI and raise capital through securities offerings, executives now face heightened risks of individual liability if they fail to implement proactive compliance strategies to avoid AI washing. Id.
Investment advisers must ensure their AI-related disclosures are accurate and verifiable. (Comply). Regulators are actively examining whether AI is truly integrated into decision-making processes or merely serves as a marketing tool. Id. Compliance teams should conduct rigorous internal audits to validate AI statements before sharing them with investors. Id. Additionally, advisers may benefit from third-party audits that objectively verify AI capabilities, as these could provide an additional layer of credibility and regulatory protection. (Aaron Pinnick, et al., ACA Global).
More broadly, companies utilizing AI should focus on three key regulatory considerations. First, they must remain transparent about AI capabilities and avoid exaggerations or inflated promises. (Baker Botts). Second, they should thoroughly document AI-related processes to produce evidence of compliance when regulators conduct reviews. Id. Third, companies should perform periodic internal audits to confirm their claims keep pace with evolving AI technologies. (Leslie S. Cruz, et al., Mayer Brown). As AI evolves, both regulators and investors will continue to demand genuine proof of AI’s benefits, rather than speculative marketing.
The SEC’s actions highlight a growing regulatory concern over AI-related fraud and misinformation. As AI continues to reshape industries, the SEC and other regulatory bodies will remain vigilant in preventing deceptive practices and protecting investors. (Dylan Tokar, The Wall Street Journal). Companies that use AI must proactively ensure their claims are accurate and substantiated by evidence or risk significant enforcement action, including potential litigation and reputational damage. Id. However, these enforcement efforts unfold against the backdrop of increasing legal and political challenges, as some courts and policymakers question the authority of agencies like the SEC. (Varu Chilakamarri, et al., K&L Gates). If new laws or policies further weaken regulatory oversight, enforcing AI washing rules could become harder, placing more responsibility on investors and private parties to detect misleading claims. In this changing landscape, compliance and transparency will be critical for any business that seeks to leverage AI while preserving both regulatory trust and investor confidence.