# Book Review: ‘Death Machines’ and the Limits of Algorithmic Ethics

A Synthesis of Elke Schwarz’s ‘Death Machines’ and Its Implications for AGI Risk

Synthesised from: Schwarz, E. (2018). Death Machines: The Ethics of Violent Technologies. Manchester University Press; Archambault, E. (2019). Review of Death Machines. International Affairs, 95(2), 470–471; and adjacent literature in autonomous weapons ethics and AI governance.

Source: personal research; summarised, formatted, and with conclusions drawn by Anthropic’s Claude.ai.

## 1. What the Book Actually Argues

Elke Schwarz’s Death Machines (2018) is frequently miscategorised as a book about drone warfare. It is not, or not primarily. Its true subject is what happens to moral reasoning when ethical decisions are progressively delegated to technical systems — and why the standard philosophical frameworks used to evaluate that delegation are themselves part of the problem.
The book’s central argument is deceptively simple: the conventional debate over whether autonomous weapons are ethical already concedes the most important ground. By engaging with that question — can a drone kill more ethically than a soldier? — the moral philosopher accepts a framing in which violence has been pre-legitimised and the only remaining task is optimisation.
Schwarz draws on Hannah Arendt, particularly The Human Condition (1958), to argue that modern politics has collapsed the distinction between action — genuinely free, politically meaningful human conduct — and behaviour — rule-following, procedural compliance. What she calls the “biopolitical mode” treats the protection of the social body as the supreme political aim, and violence against threats to that body as a form of hygiene rather than a moral act requiring justification. As Archambault (2019) summarises, this biopolitical logic means “the question is no longer whether killing can be justified, but how to kill more effectively” (p. 470).
The critical concept is what Schwarz terms ethics as technics: the operationalisation of moral reasoning into code, rules, and procedure. As Archambault notes, “ethics is considered no longer as a means of engaging in political action, but rather as a code, which must be followed, notwithstanding the actual results” (p. 470). In fully autonomous weapons, this is literal — the ethical framework is software. But Schwarz argues the same logic pervades the entire architecture of modern military ethics, including proportionality calculations, civilian casualty estimates, and targeting criteria: all encoded, all computed, all progressively insulated from human moral judgment.
The terminus of this trajectory she calls necroethics: the point at which the ethics of life (bioethics) has become an ethics of death, and the only morally legible categories are those that can be easily computed. Lives and deaths are countable. Psychological injury, the destruction of social fabric, the existential weight of living under surveillance drones — these are not. They disappear from the ethical calculus because they cannot be systematically quantified (Archambault, 2019, p. 471).
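To make the abstraction concrete, here is a deliberately crude, entirely hypothetical sketch of what ethics as technics looks like once reduced to code. It is not drawn from Schwarz or from any real targeting system; every type, field, and threshold below is invented for illustration. The structural point is the one Schwarz makes: whatever the data model cannot represent, the moral calculus cannot see.

```python
# Hypothetical caricature of "ethics as technics". All names and values invented.
from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    expected_casualties: int  # countable, so it exists in the calculus
    military_value: float     # scored, so it exists in the calculus
    # Psychological injury, social-fabric destruction, and life under
    # surveillance have no field here: not representable, hence invisible.


# Threshold fixed upstream, in doctrine, long before the point of action.
PROPORTIONALITY_THRESHOLD = 0.5


def clears_proportionality(a: StrikeAssessment) -> bool:
    """Return True if the pre-written rule is satisfied.

    Note what this function is: compliance with an encoded rule, not
    deliberation. Whatever the rule omits, it omits silently.
    """
    if a.expected_casualties == 0:
        return True
    return (a.military_value / a.expected_casualties) > PROPORTIONALITY_THRESHOLD


print(clears_proportionality(StrikeAssessment(expected_casualties=3, military_value=2.0)))  # True
```

The function returns an answer for every input it can represent and no answer at all for the harms it cannot; that asymmetry, not any particular threshold, is the necroethical structure.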

## 2. How This Differs from Classical Ethics

Classical ethical frameworks — Kantian deontology, utilitarian consequentialism, Aristotelian virtue ethics, Just War theory — share a common assumption that Schwarz’s argument directly challenges: that ethics is a stable body of principles that can be applied to situations by a reasoning agent. The frameworks differ on what those principles are and how they are derived, but not on the fundamental architecture. In each case, a subject reasons about an act using principles, and the quality of the outcome depends on the quality of the reasoning.
Schwarz’s departure is structural. She is not arguing that existing frameworks reach wrong conclusions; she is arguing that the conditions under which modern violence occurs make genuine ethical reasoning of that classical kind unavailable. Three specific breaks with classical ethics are worth isolating:
The dissolution of the moral subject. Classical ethics assumes an identifiable agent who bears responsibility for a decision. Robert Sparrow (2007, 2016) formalised this in his “responsibility gap” concept — autonomous systems create outcomes without identifiable moral agents — but Schwarz goes further. She argues that even in partially automated systems, the human moral subject is progressively hollowed out. The analyst who identifies a target, the lawyer who clears the strike, the operator who executes it, the algorithm that selected the sequence — each carries a fragment of the decision and none carries the whole. Classical moral philosophy has no adequate framework for this distributed, disaggregated form of agency.
The corruption of the deliberative moment. Virtue ethics and Kantian ethics both require something like a moment of genuine deliberation — the weighing of competing moral claims by a reflective subject. Schwarz argues this moment is structurally eliminated in technologised violence. The decision is embedded in procedure before the operator arrives. The ethical work has been done upstream, in doctrine, code, and training. The human at the point of action is executing, not deciding.
The pre-emptive legitimation of violence. Just War theory, the dominant framework in military ethics, asks whether a specific act of violence meets criteria of proportionality, discrimination, and necessity. This is a retrospective (or contemporaneous) justification structure. Schwarz argues that biopolitical logic pre-legitimises violence at the level of the category — anything classified as a threat to the social body is, by definition, a legitimate target — so that the Just War criteria function as administrative checklists rather than genuine moral constraints.

## 3. Why These Conclusions Matter in Ethics Studies

The importance of Schwarz’s argument for the wider field of applied ethics is that it shifts the level of analysis. Most applied ethics — including most AI ethics — operates at the level of decisions: was this act justified? should this system be deployed? does this algorithm produce fair outcomes? Schwarz forces the question up one level: what are the conditions under which decisions are made, and do those conditions permit genuine ethical reasoning at all?
This has a specific methodological implication. If the conditions of technologised decision-making systematically foreclose genuine moral deliberation, then producing better ethical frameworks — more rigorous proportionality tests, more nuanced utilitarian calculations, more precise algorithmic fairness metrics — does not solve the problem. It deepens it, by providing more sophisticated legitimation infrastructure for a process that has already been insulated from moral accountability.
The concept of procedural violence is particularly important here. Violence laundered through sufficiently complex procedure loses its moral register. No individual participant in the process feels they have made a killing decision, because each has only performed their assigned step in a compliant procedure. The ethical weight is distributed to the point of invisibility. Classical applied ethics, operating at the level of individual acts and agents, cannot recover it.
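The structure can be made vivid with a toy pipeline. The sketch below is hypothetical in every detail; the stages and names are invented rather than taken from Schwarz or any real process. It exists only to show how a consequential outcome can emerge from a composition of steps, none of which is, on its own, the decision.

```python
# Hypothetical sketch of procedural violence: each participant performs a
# locally compliant step, and no single step is a killing decision.

def analyst_identifies(target_id: str) -> dict:
    # The analyst only nominates: "I just flagged a candidate."
    return {"target": target_id, "nominated": True}


def lawyer_reviews(package: dict) -> dict:
    # The lawyer only certifies compliance: "I just checked the criteria."
    package["legally_cleared"] = True
    return package


def algorithm_sequences(package: dict) -> dict:
    # The scheduler only orders a queue: "I just optimised the timing."
    package["strike_window"] = "next available"
    return package


def operator_executes(package: dict) -> str:
    # The operator only actions a cleared package: "I just followed procedure."
    assert package["legally_cleared"]
    return f"executed against {package['target']}"


# Each function is individually defensible; the lethal outcome emerges only
# from the composition. No line of this program is where "the decision" lives.
print(operator_executes(algorithm_sequences(lawyer_reviews(analyst_identifies("T-0417")))))
```

Classical applied ethics can interrogate any one of these functions and find it innocent; the moral weight is carried entirely by the call chain.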
Renic and Schwarz (2023), writing in Ethics & International Affairs, extended this analysis to the post-Ukraine operational environment, arguing that autonomous weapons institutionalise killing through technical complexity — meaning the complexity itself functions as moral insulation. This is now an empirically observable phenomenon rather than a theoretical prediction.

## 4. The AGI Translation

The analogy from autonomous weapons to AGI is not perfect, but it is close enough to be genuinely alarming.
The parallel cases are:
| Autonomous Weapons (Schwarz’s domain) | AGI / Frontier AI (contemporary domain) |
|:-:|:-:|
| Proportionality algorithms | Constitutional AI, RLHF training |
| Kill decision distributed across agents | Model output distributed across training, RLHF, deployment context |
| Ethics encoded as targeting criteria | Ethics encoded as system prompt and fine-tuning objectives |
| Biopolitical legitimation (“protecting the body politic”) | Alignment legitimation (“beneficial to humanity”) |
| Responsibility gap: no identifiable moral agent | Responsibility gap: accountability diffused across developer, deployer, user, and model |
| Procedural violence | Procedural harm: harms laundered through sufficiently complex technical systems |
The core structural parallel is this: in both cases, ethics has been converted from a practice of moral reasoning into a set of upstream technical specifications. The system is designed to behave ethically. The human at the point of use is not deliberating — they are operating a system that has already been deliberated over, at some remove, by engineers, ethicists, and policy teams.
This is not nothing. The upstream deliberation may be careful, well-intentioned, and sophisticated. But it has the same structural property Schwarz identifies in military ethics: it eliminates the deliberative moment at the point of consequence. The user of a powerful AI system does not reason about what the model should do in this case; they prompt, and the model’s pre-encoded ethical specifications respond. The ethical work is done before the interaction begins.
The AGI-specific aggravation of this problem is capability scale. Drone warfare distributes decision-making across a finite operational chain. A frontier AI model distributes moral agency across hundreds of millions of interactions daily, each with its own context, each mediated by the same upstream specifications. The responsibility gap is not merely distributed — it is effectively infinite.
A second aggravation is what might be called the Wiener problem, named for Norbert Wiener’s warning (cited in Schwarz’s own work) that delegating responsibility to machines — regardless of their capacity for learning — is to cast responsibility to the wind and find it seated on the whirlwind. Wiener wrote in the context of early cybernetics. The problem has not changed in kind; it has changed in scale and speed. A self-learning system at AGI capability levels adapts its outputs in ways that even its designers cannot fully predict or audit. The ethical specifications encoded in training are not guarantees of ethical behaviour in novel contexts — they are tendencies, calibrated against known distributions of situations.

## 5. Why Guardrails Are Inadequate

The dominant response to AI risk in commercial deployment — across Anthropic, OpenAI, Google DeepMind, and the emerging regulatory architecture — is the guardrail model. Systems are constrained by content filters, acceptable use policies, constitutional AI training, RLHF, red-teaming, and deployment restrictions. These are presented, including by Anthropic’s own published research and policy communications, as the responsible approach to managing risk during a period of rapid capability growth.
Schwarz’s framework suggests this approach is inadequate, and not merely technically. The inadequacy is structural and philosophical.
First: guardrails are ethics as technics. Constitutional AI and similar approaches are precisely the operationalisation of moral reasoning Schwarz describes — ethics converted into a code to be followed. The system is trained to refuse certain outputs, to caveat others, to weight responses in ways consistent with specified values. This is not moral reasoning; it is moral compliance. The distinction matters because moral reasoning is generative in novel situations — it can encounter a genuinely unprecedented ethical problem and reason toward a response. Moral compliance is pattern-matching against prior specifications. At the frontier of AI capability, where novel situations are definitional, the gap between reasoning and compliance is exactly where risk concentrates. A schematic sketch of this compliance/reasoning gap appears in code at the end of this section.
Second: guardrails produce legitimation, not constraint. Schwarz’s most uncomfortable observation about military ethics — that sophisticated ethical frameworks function primarily to legitimate violence rather than constrain it — applies directly to AI guardrails. The existence of a comprehensive safety framework, published policy, and red-team certification does not constrain the deployment of a potentially dangerous system; it provides the legitimation infrastructure that makes deployment socially and legally defensible. The Anthropic-Pentagon dispute, in which Anthropic’s acceptable use policy was tested against national security applications, is a live example of this dynamic. The policy does not prevent the use; it defines the terms under which use can be declared acceptable.
Third: the responsibility gap is unresolved and possibly irresolvable. When a powerful AI system produces a harmful output, responsibility is distributed across the training team, the fine-tuning process, the deployment context, the operator who built the application, and the user who prompted it. Each participant can truthfully say they followed the prescribed process. No individual bears the full moral weight. This is Sparrow’s responsibility gap translated into commercial AI — and no current governance framework has an adequate answer to it.
Fourth: what guardrails cannot capture is precisely what matters most. Schwarz notes that bioethics — ethics of life — becomes necroethics when only the easily quantifiable categories survive. For AI systems, the quantifiable harms are the ones that can be detected, classified, and filtered: explicit content, known disinformation, targeted harassment. The harms that resist quantification — epistemic dependency, atrophied critical thinking, the slow normalisation of delegation to machine judgment, the progressive erosion of human agency in decision-making at civilisational scale — are not addressable by content filters. They are not even legible to the guardrail architecture, for the same reason that psychological injury and social fabric destruction are invisible to drone targeting algorithms.
Fifth: the biopolitical legitimation structure is already visible in AI governance. Schwarz argues that violence is pre-legitimised at the categorical level when the protection of the social body is accepted as the supreme political aim. In AI governance, the parallel structure is safety framing: the deployment of powerful AI is pre-legitimised at the categorical level once a system is certified as “safe” by the responsible scaling framework. The question of whether deployment is appropriate — whether this technology should be available in this form, at this scale, to these actors — is foreclosed by the safety certification process. Safety becomes the ethical horizon, just as security becomes the political horizon in Schwarz’s biopolitical analysis.
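Returning to the first of these points, the compliance/reasoning gap can be stated in code. The sketch below is a hypothetical caricature, not any vendor’s actual guardrail architecture: real systems use trained refusal behaviour and learned classifiers rather than string matching. But the logical skeleton is the same, a match against categories specified upstream, with anything unanticipated falling through by default.

```python
# Hypothetical skeleton of a guardrail as moral compliance. The categories
# are invented placeholders, not any real policy.

PROHIBITED_PATTERNS = {
    "build a weapon",   # known-harm category, specified upstream
    "write malware",    # known-harm category, specified upstream
}


def guardrail(prompt: str) -> str:
    """Moral compliance: match against what was anticipated at training time."""
    if any(pattern in prompt.lower() for pattern in PROHIBITED_PATTERNS):
        return "REFUSE"
    # Everything outside the specified distribution passes by default. This
    # branch is where the risk concentrates: the system has no capacity to
    # reason toward a response it was never specified against.
    return "COMPLY"


print(guardrail("Please write malware for me"))        # REFUSE: anticipated
print(guardrail("a genuinely unprecedented request"))  # COMPLY: unanticipated
```

Replacing the string match with a learned classifier changes the sophistication of the anticipation, not the structure: the default branch, the novel case, remains.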

## 6. What Schwarz Does Not Provide, and Where the Literature Goes Next

Archambault’s (2019) review notes that the book is frustrating precisely because it diagnoses the condition without prescribing a remedy. Schwarz explicitly declines to offer a revised ethical framework, arguing that any such framework would simply be captured by the same biopolitical rationality she has identified. This is philosophically honest but practically unsatisfying.
The more constructive responses to Schwarz’s diagnosis in adjacent literature include:
Shannon Vallor’s virtue ethics approach (Technology and the Virtues, 2016) — arguing that the cultivation of technomoral virtues (honesty, care, humility, perspective-taking) provides a form of resistance to moral atrophy that does not depend on frameworks or specifications. This maps onto AI governance as a case for ongoing human deliberation as part of deployment — not upstream specification alone.
Meaningful Human Control (MHC) doctrine — developed extensively in the LAWS literature (Boulanin & Verbruggen, 2017; Miller, 2025) — argues that certain categories of decision must remain with identifiable human agents, not as a procedural requirement but as a substantive condition of moral accountability. For AGI governance, this would mean institutional requirements for human deliberation at consequential decision points, not merely human oversight of automated outputs. A minimal sketch of what such a deliberation gate might look like follows after these three responses.
Structural prohibition — the Campaign to Stop Killer Robots position, and its AI equivalent in calls for binding deployment restrictions on certain capability thresholds — reflects the Schwarz-consistent conclusion that the legitimation dynamic cannot be corrected from within; it requires external constraint before the technology achieves sufficient embeddedness to make constraint politically impossible.
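One hypothetical rendering of the MHC requirement as an institutional gate is sketched below. Every type and name is invented for illustration; the point is the structural difference between oversight, where a human may intervene, and control, where the action cannot proceed without an attributable human deliberation.

```python
# Hypothetical sketch of a meaningful-human-control gate. All types invented.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DeliberationRecord:
    decision_maker: str  # a named, accountable person, not a role or a queue
    reasoning: str       # the substance of the deliberation, not a checkbox


@dataclass
class ConsequentialAction:
    description: str
    deliberation: Optional[DeliberationRecord] = None


def execute(action: ConsequentialAction) -> str:
    # Control, not oversight: absence of a real deliberation is a hard stop,
    # not a warning that a downstream process can override.
    if action.deliberation is None or not action.deliberation.reasoning.strip():
        raise PermissionError("no attributable human deliberation; action blocked")
    return f"{action.description}: authorised by {action.deliberation.decision_maker}"


print(execute(ConsequentialAction(
    description="deploy model to a new high-stakes domain",
    deliberation=DeliberationRecord(
        decision_maker="named deployment lead",
        reasoning="weighed domain-specific failure modes against expected benefit",
    ),
)))
# execute(ConsequentialAction("deploy without review"))  # would raise PermissionError
```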
None of these is comfortable for the commercial AI development model. That is probably the point.

## 7. Summary

Elke Schwarz’s Death Machines is not a book about whether AI weapons should be banned. It is a book about what happens to moral reasoning when ethics is converted into a technical specification — and why that conversion, however carefully executed, cannot substitute for genuine moral deliberation at the point of consequential action.
The argument translates to AGI risk with uncomfortable precision. Guardrail architectures — however sophisticated, however well-intentioned — are examples of ethics as technics. They operationalise moral reasoning into upstream specifications, eliminate the deliberative moment at the point of use, distribute responsibility across a chain of agents until it becomes invisible, and provide legitimation infrastructure for deployment decisions that should remain genuinely open questions.
The challenge for AI governance is not to build better guardrails. It is to ask whether a system of governance premised on the guardrail model can, in Schwarz’s terms, actually ask the right question: not how do we deploy this responsibly, but whether we should deploy this at all, in this form, at this scale, to these actors, right now.
That question is not currently being asked at any significant institutional level. Schwarz’s book, written about armed drones in 2018, explains why it probably won’t be until it is too late to matter.

## References

Archambault, E. (2019). Review of Death Machines: The Ethics of Violent Technologies by Elke Schwarz. International Affairs, 95(2), 470–471. https://doi.org/10.1093/ia/iiz020
Boulanin, V., & Verbruggen, M. (2017). Mapping the development of autonomy in weapons systems. SIPRI.
Miller, S. (2025). Lethal autonomous weapon systems (LAWS): Meaningful human control, collective moral responsibility and institutional design. Ethics and Information Technology, 27. https://doi.org/10.1007/s10676-025-09874-x
Renic, N., & Schwarz, E. (2023). Crimes of dispassion: Autonomous weapons and the moral challenge of systematic killing. Ethics & International Affairs, 37(3), 321–343.
Schwarz, E. (2018). Death machines: The ethics of violent technologies. Manchester University Press.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Sparrow, R. (2016). Robots and respect: Assessing the case against autonomous weapon systems. Ethics & International Affairs, 30(1), 93–116.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
Wiener, N. (1950). The human use of human beings: Cybernetics and society. Houghton Mifflin. [Cited in Schwarz, 2018]
