Beyond Respondeat Superior: AI and the Future of Corporate Liability
- Tara Brady
The current corporate landscape is ever-evolving, and the newest addition to the evolutionary tree is Artificial Intelligence (AI). Companies now deploy proprietary, closed-network AI systems to manage compliance, pricing, hiring, and risk analysis. A 2025 survey by Resume.org found that almost 40% of companies aim to replace parts of their workforce with AI. As these algorithms assume decision-making authority, a fundamental legal question arises: when AI causes harm, is it merely a tool, or does it function as an employee of the corporation?
To Mihailis Diamantis, a prominent scholar of law and technology, the answer is clear: AI should be treated as an employee under his Labor Model. Diamantis argues that the Labor Model applies where the corporation and the algorithm share both substantial benefit and control. This reflects a functional, rather than formal, understanding of employment, one that focuses on the role an actor plays within the firm's operations. Under this approach, the absence of legal personhood or consciousness is not determinative. Corporations have long been treated as legal fictions; they possess no mind, yet the law attributes knowledge, intent, and criminal responsibility to them. When companies design, train, and deploy algorithms for profit-generating tasks, AI becomes deeply embedded in the firm's economic structure. In this sense, the relationship resembles employment within the firm, not because AI possesses consciousness, but because it performs functions that advance corporate objectives under corporate control. Nor are AI systems neutral tools: they operate through complex architectures shaped by training data, and their outputs reflect structured, decision-oriented processes that can resemble cognition for legal purposes.
The Supreme Court has interpreted the definition of "employee" broadly, encompassing a wide range of relationships beyond traditional, formal employment arrangements. Further, in Barfield v. New York City Health & Hospitals Corp. (2d Cir. 2008), the Second Circuit determined whether an employer-employee relationship exists by looking to the holistic "economic reality" of the relationship. If AI functions as an employee, then its actions would be imputed to the corporation and assessed under respondeat superior. The doctrine has two main prongs: the criminal act must have been committed by an employee or agent of the company acting within the scope of their employment, and it must have been intended, at least in part, to benefit the company. Courts have applied the doctrine to hold corporations liable even for the acts of rogue employees.
Yet respondeat superior sits uneasily in the AI context. AI does not 'intend' to benefit the corporation in a human sense; it executes instructions and optimizes toward programmed goals. Unlike human employees, whose actions may diverge from corporate design, AI systems operate as extensions of that design. Framing AI misconduct as the acts of rogue employees therefore stretches the doctrine beyond its conceptual limits. The key distinction is that AI's conduct is not independent of the corporation, but inseparable from the architecture the corporation designed.
Under current law, corporate liability assessed through respondeat superior alone remains limited, and the framework does not fully capture how corporations actually operate. Corporate culture permeates decision-making structures, shaping how employees behave and, increasingly, how AI behaves as well. AI can be manipulated and directed by the incentives and data the firm sets. While human oversight is vital, and this piece does not suggest removing it, it comes with its own problems: humans can grow inattentive during the mundane task of monitoring AI systems. Therefore, when evaluating corporate knowledge and corporate crime, the focus should be on all actors within the firm, including AI. A human overseeing AI can only do so much, much like a manager overseeing a rogue junior.
An increasing number of firms are developing their own AI: closed-network systems tailored to each firm's specific uses by in-house developers. This closed network creates a further avenue for the firm's culture to permeate.
Knowledge is a crucial element when it comes to assessing criminal liability, yet corporations themselves lack a "mind." Courts have addressed this by aggregating the knowledge of employees to attribute intent or awareness to the firm. In United States v. Bank of New England (1st Cir. 1987), for example, the court treated the bank's knowledge as the sum of what its employees knew, even though no single employee knew every relevant fact. If corporate knowledge can be constructed through human actors, a similar logic could extend to AI systems that function as integral components of the firm's decision-making processes.
As AI continues to displace human labor and assume decision-making authority, corporate law must evolve alongside it. The answer does not lie in stretching an already stretched doctrine of respondeat superior. Rather, AI should be understood as part of the corporate collective: a structural extension of the firm itself. This approach also avoids the risk of under-deterrence, in which harm is attributed to opaque systems rather than to the corporate structures that deploy them. In doing so, the law preserves doctrinal coherence while ensuring that corporate accountability adapts to technological change.
Tara Brady is an LL.M. candidate in Corporations at New York University School of Law and a Graduate Editor on the NYU Journal of Law & Business. She received her LL.B. (First Class Honours) from Trinity College Dublin and has experience in corporate law through internships at leading Irish firms.