Should AI Systems Be Treated as Legal Persons in Criminal Law?

Introduction: From Tools to “Legal Persons”?

As artificial intelligence (AI) systems become more autonomous and influential, legal scholars and policymakers are asking a provocative question: should AI systems be treated as legal persons in criminal law?

The idea is not entirely new. Legal systems already recognize corporations and other entities as legal persons capable of holding rights and obligations, including in some cases criminal liability. If we can criminally punish a company, why not an advanced AI system that operates with a high level of autonomy?

This article examines the debate over AI legal personhood in criminal law, exploring the arguments for and against treating AI systems as legal persons, and assessing whether such a move would solve real accountability problems or simply create new ones.


1. Legal Personhood and Criminal Law: A Brief Overview

Legal personhood is the capacity to hold rights, bear duties, sue and be sued, and sometimes be punished. For entities other than human beings, this capacity rests on a legal fiction. Two basic categories exist:

  • Natural persons – human beings;
  • Legal persons – entities like corporations, associations, foundations, and in some systems, states or municipalities.

In criminal law, corporate criminal liability is now widely accepted:

  • Corporations can be fined,
  • They can be subjected to compliance orders, monitoring, or activity bans,
  • In extreme cases, they can be dissolved.

However, even corporate personhood is grounded in the idea that human beings stand behind the entity: managers, employees, shareholders, and directors whose decisions and cultures are expressed through the organization. The question is whether AI systems can plausibly fit into this framework.


2. The Case for Treating AI as Legal Persons in Criminal Law

Supporters of AI legal personhood in criminal law often emphasize pragmatic and functional considerations rather than metaphysical claims about consciousness.

2.1. Closing Accountability Gaps

As AI systems become more complex, responsibility can become diffuse and opaque:

  • Many actors contribute to development and deployment (developers, data providers, integrators, users),
  • Decisions emerge from machine learning processes not fully understood by any one individual,
  • Harms may occur even when no single human was clearly reckless or acted with wrongful intent.

Recognizing AI systems as legal persons could, in theory, help to close accountability gaps by:

  • Attaching responsibility directly to the system,
  • Maintaining a clear “target” for sanctions,
  • Avoiding situations where no one can be held criminally liable despite serious harm.

2.2. Analogy with Corporations

Proponents also invoke the analogy with corporations:

  • Corporations do not have minds or bodies, yet are held liable,
  • Corporate intent is constructed from the behavior and knowledge of employees and managers,
  • The law uses a fiction to achieve practical goals: deterrence, prevention, and compensation.

In the same way, they argue, AI systems could be granted limited legal personhood:

  • AI personhood would be a functional tool, not a claim about moral agency,
  • It would allow for registration, monitoring, and sanctioning of high-risk systems.

2.3. Liability Funds and Insurance Models

Another argument is institutional and economic. AI systems could be:

  • Linked to mandatory liability funds or insurance pools,
  • Required to maintain sufficient resources (via owners, manufacturers, or operators) to cover foreseeable harms,
  • “Punished” through fines or restrictions imposed on that fund.

Under this model, criminal law’s role would be partly symbolic: sanctioning the AI system-as-entity, while in practice impacting the humans and organizations financially responsible for it.


3. The Case Against AI Legal Personhood in Criminal Law

Despite these attractive features, there are serious objections to treating AI systems as legal persons in the criminal sphere.

3.1. No Moral Agency, No True Culpability

Criminal law, especially in its retributive or blame-focused dimensions, is founded on the idea of culpability. The culpable offender:

  • Understands the norm,
  • Could have acted otherwise,
  • Deserves blame for violating shared values.

AI systems:

  • Do not understand legal or moral norms,
  • Do not have consciousness or free will,
  • Cannot feel guilt, shame, or fear of punishment.

Without moral agency, many argue, there can be no genuine criminal culpability. Personhood fictions used for corporations are still grounded in underlying human decision-making, whereas AI systems lack this moral substrate.

3.2. Risk of Creating a Scapegoat

A major practical concern is that AI personhood could become a way for humans to evade responsibility:

  • Companies might design risky systems but blame “the AI”,
  • Managers could hide behind technical complexity,
  • Regulators might be satisfied with symbolic sanctions against the system while failing to address human misconduct.

In this scenario, AI legal personhood would weaken, not strengthen, accountability. It would create a legal scapegoat that absorbs blame while shielding those who design, deploy, and profit from AI.

3.3. Sanctions Become Purely Instrumental

Criminal sanctions imposed on AI systems would be inherently instrumental:

  • Shutting down or limiting the system,
  • Modifying its code or retraining it,
  • Blocking its access to certain domains or operations.

But these actions are ultimately taken by human actors. The AI system does not experience punishment; it is merely reconfigured. Criminal law might lose its expressive and communicative dimension, turning into a technical parameter adjustment regime rather than a genuine system of censure.


4. Towards a Sui Generis Status? Electronic Personhood with Limits

Between full rejection and full personhood lies a middle ground: sui generis electronic personhood for AI in specific, limited domains.

Under such a model, AI systems might:

  • Be registered as distinct entities for regulatory and liability purposes,
  • Be required to have identifiable owners or controllers,
  • Serve as “liability nodes”, aggregating claims, insurance, and risk data.

However, crucially:

  • Criminal liability would remain primarily with humans and corporations,
  • AI personhood would be used mainly for civil liability, regulation, and oversight,
  • Any criminal sanctions would be tied to human actors responsible for the AI’s lifecycle (design, deployment, supervision).

This approach recognizes that AI is more than a simple tool but less than a moral agent, and tries to adjust legal categories without abandoning human-centered accountability.


5. The Role of Corporate Criminal Liability in AI Governance

Instead of granting personhood to AI systems themselves, many scholars advocate expanding and refining corporate criminal liability to deal with AI-related harms.

Key ideas include:

  • Treating AI decisions as corporate decisions, since the company chooses to build, purchase, or deploy the system,
  • Evaluating the organizational culture, policies, and governance surrounding AI,
  • Imposing sanctions that incentivize robust AI governance: risk assessments, audits, transparency, human oversight, and incident reporting.

In this model, AI remains part of the corporation’s organizational machinery, and the corporation is the legal person responsible. This approach keeps human and organizational agency at the center of criminal law.


6. Normative Assessment: Should We Treat AI as Legal Persons in Criminal Law?

From a normative standpoint, the question is whether granting criminal legal personhood to AI systems:

  • Improves accountability,
  • Enhances deterrence and prevention,
  • Protects legal interests more effectively than existing models.

The concerns about moral agency, scapegoating, and purely instrumental sanctions suggest that full AI personhood in criminal law is not desirable. The main problems created by AI are not metaphysical, but institutional and organizational:

  • How to ensure that those who design, deploy, and control AI do so responsibly,
  • How to prevent diffusion of responsibility in complex socio-technical systems,
  • How to allocate risks and costs fairly.

These goals can likely be better achieved by:

  • Strengthening corporate criminal liability,
  • Clarifying duties of care for developers, providers, and professional users,
  • Introducing targeted regulatory regimes for high-risk AI systems.


7. Conclusion: Keep Personhood Human, Keep Responsibility Human-Centered

AI legal personhood in criminal law is an intellectually intriguing idea, but it risks solving the wrong problem. AI systems do not need to be criminal persons to be controlled, regulated, and governed. What matters is not whether AI can be blamed, but whether humans and organizations can be held accountable for the systems they create and deploy.

A cautious conclusion is therefore:

  • No: AI systems should not be treated as full legal persons in criminal law.
  • Yes: We should develop legal tools — including corporate criminal liability and robust regulation — that take seriously the unique risks of AI.
  • Always: Responsibility must remain fundamentally human-centered, even in a world of increasingly autonomous machines.

In the end, criminal law is about more than managing risk; it is about judging human conduct. AI may transform how harm occurs, but it should not obscure who is ultimately responsible.
