Re-Defining “Human Conduct” in Criminal Law for AI-Driven Systems
Introduction: When Actions Are Mediated by Machines
Criminal law has traditionally centered on human conduct: a physical act or omission that can be attributed to a person who can be blamed, punished, and deterred. This is usually captured in the concept of actus reus, the external element of an offense.
However, in a world increasingly shaped by AI-driven systems, human influence over outcomes is often indirect, mediated, distributed, or partially automated. Algorithms decide which content users see, cars decide how to brake or steer, and software executes trades and transactions in milliseconds without human input at the moment of action.
This raises a fundamental question: how should “human conduct” be understood when AI systems perform the visible act in the world? This article argues that criminal law must re-define human conduct in a way that captures design, deployment, configuration, and oversight of AI systems as potential loci of action.
1. Classical Notion of Human Conduct in Criminal Law
Traditionally, criminal conduct has the following features:
- It is a voluntary human act or omission,
- It is performed in the external world,
- It is causally linked to a prohibited result (e.g., harm, damage, danger).
The law distinguishes:
- Act – a bodily movement controlled by the will,
- Omission – failure to act where there is a legal duty,
- Involuntary movement – reflexes, seizures, and similar bodily events, which generally cannot ground liability.
In this model, there is a relatively clear line: the physical movement or omission belongs to a human, and that is the conduct criminal law evaluates.
AI-driven systems undermine this neat picture because the final “movement” or decision may be executed entirely by software or a machine.
2. AI Systems as “Acting Interfaces”: Who Is Really Acting?
In AI environments, we might see:
- A self-driving car swerving and causing an accident,
- An algorithm blocking or approving transactions,
- A content recommender amplifying harmful material,
- A medical AI suggesting an incorrect diagnosis.
On the surface, it is the system that acts, not the human. But criminal law cannot punish machines; it must identify human conduct to which liability can attach.
This requires us to treat AI systems as acting interfaces through which human choices are projected into the world. The conduct may no longer be a simple finger pulling a trigger; it may be:
- Designing and releasing a model with foreseeable unsafe behavior,
- Configuring parameters, thresholds, or risk profiles,
- Deploying an AI system in an inappropriate context,
- Failing to supervise or override the system in critical situations.
In short, human conduct is increasingly upstream of the visible harm.
3. Re-Locating Human Conduct: From Physical Acts to System-Level Decisions
To keep criminal law meaningful in AI-driven environments, we must relocate the idea of conduct from the moment of physical impact to earlier, system-level decisions.
3.1. Design and Training as Conduct
Key choices in the design and training phase can constitute relevant conduct:
- Selecting training data that embeds obvious bias or harmful patterns,
- Failing to implement basic safety checks and constraints,
- Ignoring widely known best practices for high-risk AI systems.
Here, the “act” is not a single physical gesture, but a course of conduct culminating in the release of a system that is likely to cause harm.
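To make “a course of conduct culminating in release” concrete, consider the minimal sketch below. It assumes a hypothetical pre-release safety gate; the names (SafetyReport, may_release) and the numeric thresholds are invented for illustration and do not refer to any real framework.

```python
# Hypothetical pre-release safety gate. All names and thresholds are
# illustrative assumptions for this article, not any real framework's API.
from dataclasses import dataclass


@dataclass
class SafetyReport:
    bias_score: float          # e.g., outcome disparity across groups (lower is better)
    unsafe_output_rate: float  # fraction of red-team prompts yielding harmful output


def may_release(report: SafetyReport,
                max_bias: float = 0.05,
                max_unsafe_rate: float = 0.01) -> bool:
    """Return True only if the model clears the declared safety thresholds."""
    return (report.bias_score <= max_bias
            and report.unsafe_output_rate <= max_unsafe_rate)


report = SafetyReport(bias_score=0.12, unsafe_output_rate=0.03)
if not may_release(report):
    # Shipping the system despite a failed gate is a human decision; it is
    # precisely this upstream conduct that the article locates liability in.
    print("Safety gate failed: releasing anyway would be a deliberate course of conduct.")
```

The point is not the code itself but the decisions it records: defining thresholds, running the gate, and choosing whether to ship despite a failure are all human acts that occur long before any physical harm.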
3.2. Deployment and Context as Conduct
Similarly, decisions about where and how to deploy AI are crucial:
- Using a non-validated AI tool for medical diagnosis,
- Introducing experimental autonomous vehicles in crowded areas,
- Deploying facial recognition in sensitive, high-stakes contexts without safeguards.
These deployment choices are human actions that shape the risk landscape and can be framed as conduct for criminal law purposes.
3.3. Configuration and Oversight as Conduct
Even after deployment, there are ongoing human decisions:
- Setting aggressiveness or risk thresholds (e.g., in trading algorithms or content moderation),
- Deciding the level of human-in-the-loop control,
- Responding (or failing to respond) to warning signals and near-misses.
In AI-driven systems, configuration and monitoring are themselves acts or omissions that may ground liability.
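The following minimal sketch, assuming a hypothetical moderation or trading configuration (the parameter names risk_threshold and require_human_review are invented for this illustration), shows how configuration choices translate directly into what the system does.

```python
# Illustrative configuration for a hypothetical moderation or trading system.
# Every value below is set by a person and directly shapes system behaviour.
from dataclasses import dataclass


@dataclass(frozen=True)
class DeploymentConfig:
    risk_threshold: float       # scores at or above this trigger action
    require_human_review: bool  # human-in-the-loop before acting
    alert_on_near_miss: bool    # surface warning signals to operators


def decide(score: float, cfg: DeploymentConfig) -> str:
    """Map a model risk score to an action under a given configuration."""
    if score < cfg.risk_threshold:
        return "allow"
    if cfg.require_human_review:
        return "escalate_to_human"
    return "act_automatically"


cautious = DeploymentConfig(risk_threshold=0.5, require_human_review=True, alert_on_near_miss=True)
aggressive = DeploymentConfig(risk_threshold=0.9, require_human_review=False, alert_on_near_miss=False)

print(decide(0.7, cautious))    # escalate_to_human
print(decide(0.7, aggressive))  # allow
```

The same model score is escalated to a human reviewer under one configuration and allowed through under the other; the divergence results entirely from human parameter choices, which is why configuration can sensibly be treated as conduct.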
4. Omissions and Duties in AI-Driven Systems
The concept of omission becomes particularly important. Criminal liability for omission requires:
- A legal duty to act,
- A failure to fulfill that duty,
- A causal link between the omission and the result.
In AI contexts, duties may arise for:
- Developers and providers of high-risk AI systems,
- Companies that operate critical infrastructure using AI,
- Professionals (e.g., doctors, pilots, financial advisors) who rely on AI tools.
Examples of culpable omissions could include:
- Failing to update systems when known vulnerabilities emerge,
- Failing to deactivate a system after repeated harmful outputs,
- Failing to implement mandated human oversight in high-risk scenarios.
Here, the omission is human, even if the visible harm comes from AI behavior.
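A rough sketch can illustrate where such an omission would sit. It assumes hypothetical oversight hooks; OversightMonitor, the three-incident threshold, and the 24-hour window are invented for this example.

```python
# Sketch of an oversight duty, using invented names and thresholds.
from collections import deque
from datetime import datetime, timedelta


class OversightMonitor:
    """Tracks harmful outputs and signals when a duty to intervene arises."""

    def __init__(self, max_incidents: int = 3, window: timedelta = timedelta(hours=24)):
        self.max_incidents = max_incidents
        self.window = window
        self.incidents: deque = deque()  # timestamps of recent harmful outputs

    def record_harmful_output(self, when: datetime) -> bool:
        """Log an incident; return True once intervention is clearly required."""
        self.incidents.append(when)
        while self.incidents and when - self.incidents[0] > self.window:
            self.incidents.popleft()
        return len(self.incidents) >= self.max_incidents


monitor = OversightMonitor()
now = datetime.now()
must_intervene = False
for i in range(3):
    must_intervene = monitor.record_harmful_output(now + timedelta(hours=i))

if must_intervene:
    # The legally relevant conduct is what the operator does (or fails to do)
    # once this signal is raised: ignoring it is the culpable omission.
    print("Repeated harmful outputs: the operator must review or deactivate the system.")
```

The code merely surfaces a warning signal; the legally relevant conduct is the operator's response, or lack of response, once that signal is raised.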
5. Causation in AI-Driven Conduct: Breaking or Extending the Chain?
Re-defining human conduct for AI-driven systems also requires rethinking causation. When an AI system behaves in complex ways, it might be argued that human conduct is too remote to count as the cause of the harm.
However, if the system behaved in a way that was:
- Foreseeable given the training data and architecture,
- Typical for the system’s design and deployment context,
- Insufficiently controlled due to human choices,
then the human conduct (design, deployment, omission) can still be seen as causally significant. AI is an intermediate mechanism, not an independent agent that breaks the chain.
Criminal law may need to adopt a more systemic view of causation, recognizing chains of decisions and design choices as part of the conduct, rather than insisting on a direct physical act immediately preceding the harm.
6. Collective and Distributed Human Conduct
AI systems are often the product of teams, departments, and organizations, not individuals. No single person may:
- Write the whole code,
- Understand every technical detail,
- Control all deployment decisions.
This challenges the traditional image of a solitary actor. Human conduct becomes distributed across:
- Development teams,
- Management structures,
- Governance bodies.
In response, criminal law may need to:
- Rely more on corporate criminal liability,
- Recognize organizational conduct (policies, cultures, structural decisions) as a form of human action,
- Accept that responsibility can stem from collective failures to act reasonably in the design and operation of AI systems.
7. A New Conceptual Frame: “Systemic Human Conduct”
To integrate AI-driven systems into criminal law without losing the centrality of human agency, we can adopt the notion of “systemic human conduct”:
- Conduct includes not only immediate physical acts, but creating, configuring, and maintaining systems that act in the world.
- Human conduct is evaluated over the lifecycle of AI systems, not just at the moment of harm.
- Responsibility attaches to roles and duties within AI ecosystems (developers, operators, decision-makers).
This allows us to keep the fundamental idea that only humans (and human organizations) can be criminals, while acknowledging that the forms of their conduct have evolved.
8. Conclusion: Preserving Human Responsibility in Automated Environments
AI-driven systems do not eliminate human conduct; they transform how and where it occurs. If criminal law clings to a narrow, purely physical notion of action, it risks becoming blind to the real sources of risk and harm in automated environments.
Re-defining “human conduct” for AI-driven systems means:
- Looking upstream to design, deployment, configuration, and oversight,
- Recognizing omissions and systemic decisions as genuine forms of conduct,
- Accepting that conduct can be distributed and organizational, not only individual and immediate.
Ultimately, the goal is not to make AI into a criminal actor, but to ensure that those who design, deploy, and profit from AI remain answerable when their systems cause wrongful harm. Human conduct is still at the heart of criminal law — it simply wears more technological clothing.