
Securities Regulation and Artificial Intelligence: Rethinking Liability Architecture 

Aratrika Choudhuri

The SEC’s recent regulatory efforts around AI have largely centered on disclosure: pursuing enforcement actions against AI-washing and recommending robust disclosures of AI’s operational impact. The SEC has not, however, substantively addressed the more fundamental challenge that AI, as an autonomous actor, poses to the liability regime underpinning securities regulation. Because AI systems can generate outputs resembling deceptive or manipulative conduct without discernible human intent, the current framework, anchored in tracing scienter to identifiable decision-makers, is ill-equipped to govern AI-driven market activity. This Note argues that the SEC should adopt a risk-based, sliding-scale liability model, complemented by proactive supervision at key market touchpoints, to preserve market integrity while enabling responsible innovation.



AI and Its Challenges to Traditional Securities Law Frameworks


Conventional conceptions of scienter, manipulation, and securities fraud are premised on: (i) identifying violators who are individuals or distinct, known entities, and (ii) tracing the requisite state of mind to their misstatements, omissions, and conduct. The catch-all antifraud provisions codified in §10(b) of the Securities Exchange Act of 1934 and Rule 10b-5, which govern secondary markets, impose a demanding scienter requirement, necessitating a strong inference of intent to defraud. Courts have also circumscribed disclosure liability under Rule 10b-5(b) to the “maker” of a false statement with ultimate control over its content, foreclosing private suits against secondary actors and reinforcing securities law’s emphasis on anchoring liability in identifiable decision-makers.


However, owing to the “black-box” nature of AI, including deep reinforcement learning systems that evolve without human intervention, large language models can autonomously generate outputs that resemble insider trading, market manipulation, and other deceptive conduct, even in contravention of their programming. These systems can rapidly assess materiality, draft disclosures, automate quantitative investment, and expedite financial decision-making. Yet the inherent opacity and unpredictability of AI reasoning defy anthropomorphic frameworks that seek to pinpoint intent, necessitating a rethinking of liability allocation and risk management.


The scienter standard can be contrasted with the negligence-based liability standard under §11 of the Securities Act of 1933, which imposes onerous gatekeeping and due diligence obligations on issuers, underwriters, and experts for misstatements and omissions in registration statements. Ostensibly, the negligence-based standard, focused on foreseeable risk and effects rather than subjective intent, appears better suited to AI-related harms. The 2023 FAIRR bill similarly proposes strict liability for any person deploying AI, unless they took reasonable preventative steps. However, AI’s agentic behavior can, and regularly does, transcend its programmed constraints, and imposing blanket vicarious liability on human users or developers who could not have foreseen its conduct or outcomes risks chilling innovation and diminishing efficiency gains.


Rethinking Regulatory Frameworks and Enforcement Approaches


The current state of play requires a combination of regulatory approaches to address the risks posed by AI.


First, the SEC should incentivize proactive ex ante supervision and risk management rather than rely solely on ex post enforcement. This can be achieved by: (i) identifying and monitoring key touchpoints where AI interacts with capital markets, including trading firms, securities markets, and cryptocurrency platforms and exchanges; (ii) implementing periodic AI audits at these touchpoints, targeted at live detection and reporting of deceptive outputs before dissemination; and (iii) encouraging regulatory sandboxes to live-test AI-driven products and trading strategies under structured oversight.


Second, the SEC should adapt its regulatory playbook to evolving AI use cases. Its recent Rule 10b-5 actions against alternative data providers (‘ADPs’) for misrepresenting data-derivation practices and robustness of internal controls signal a shift beyond traditional issuer-focused disclosures to the regulation of alternative information intermediaries. That same logic should extend to AI-driven systems, whose scale and capacity to translate vast public and non-public datasets into market-moving insights pose heightened risks to informational integrity. 


Building on this, the SEC should consider a sliding-scale approach that assigns liability depending on: (i) the purpose of AI deployment and the predictability of harms; (ii) the system’s design intent and functional autonomy; and (iii) the degree of human supervision and transparency in decision-making.


For instance, at one end of the spectrum, where AI is utilized in preparing IPO registration statements, a context characterized by acute information asymmetry and investor reliance, strict liability should apply if issuers, underwriters, or experts use autonomous AI systems without meaningful safeguards, humans-in-the-loop, or transparent audit trails.


At the other end of the spectrum, where AI merely assists in drafting periodic disclosures or investor communications in secondary markets under substantial human supervision and is limited to verified historical data, liability should attach only upon proof of scienter. These illustrations delineate the spectrum of efficiency-enhancing applications of AI and are intended not to constrain its productive use, but to demonstrate how a sliding-scale approach can calibrate liability in a manner that preserves innovation while proportionately addressing AI-related harms.


Third, while courts have narrowed disclosure liability under Rule 10b-5(b), they have espoused a broader conception of scheme liability under Rules 10b-5(a) and (c), reaching disseminators who employ deceptive “devices” or “artifices.” AI fits squarely within this framework, both as a tool for manipulation and as an active disseminator propagating deceptive information across interconnected networks, generating ripple effects that disrupt market information flows. By employing the regulatory approaches outlined above, the SEC can proactively identify and supervise touchpoints where AI systems deployed by traders and other market intermediaries connect with the market, and apply a sliding-scale approach that holds those actors liable based on their deployment of AI’s autonomous capabilities in propagating manipulation and the degree of human supervision.


Finally, Regulation Best Interest, which requires broker-dealers to act in the best interest of retail customers, together with the fiduciary duties of loyalty and care owed by investment advisers, can be strengthened to require mandatory disclosure of AI systems used in managing client accounts, standardized compliance controls, and threshold-based limits requiring human authorization for significant transactions.


Conclusion

The emergence of AI as an autonomous actor challenges securities law’s long-standing emphasis on tracing subjective intent to human actors and identifiable entities. As AI systems evolve beyond predefined parameters and generate potentially deceptive or manipulative outputs without discernible scienter, prevailing liability doctrines under §10(b) and Rule 10b-5 must adapt. Imposing indiscriminate vicarious liability risks stifling the efficiency gains that AI can deliver across disclosure, compliance, and trading functions; a regulatory model grounded in proactive supervision at identified market touchpoints and sliding-scale liability instead offers a principled path forward, preserving market integrity and informational reliability while enabling responsible innovation.

     


Aratrika Choudhuri is an LL.M. student specializing in Corporation Law and an Arthur T. Vanderbilt Scholar at the NYU School of Law, where she serves as a Graduate Editor of the NYU Journal of Law & Business and a Graduate Student Research Fellow at the NYU Pollack Center for Law & Business. Prior to attending NYU, she was a Senior Associate in the Capital Markets practice at AZB & Partners, one of India's leading law firms, where she represented high-profile issuers, underwriters, and investors on some of the largest and most complex securities offerings in India and Asia. She graduated from the West Bengal National University of Juridical Sciences, Kolkata, a top-tier Indian law school, with a Bachelor of Arts and Bachelor of Laws (Honors) degree.

 
 
 
