Source: Claude AI
I have studied this book and, having little prior experience with #Arendt, researched it further in Claude. The results are illuminating and point to serious deficiencies in the “guardrails” thinking that guides the latest Government approach to AI regulation.
————————————
This is a rich and genuinely important book, and you’ve landed on it at exactly the right moment given what’s unfolding with the Anthropic-Pentagon thread we’ve been tracking.
Death Machines — Elke Schwarz (2018/2019)
Core Argument
Schwarz’s central move is philosophically subversive: she refuses to engage the ethics of lethal autonomous weapons on their own terms. The conventional debate — can drones or killer robots be more precise, more discriminating, more “ethical” than human soldiers? — is, for Schwarz, already corrupted. Rather than addressing the ethics of violent technologies directly, she explores the ethicality of ethics itself in modern political violence — arguing that the framing has been colonised by a biopolitical rationality that treats securitisation of the body politic as the supreme political purpose, making targeted elimination of threats feel like medical hygiene rather than killing.
The Arendtian scaffold is essential here. Schwarz draws heavily on The Human Condition and The Origins of Totalitarianism to argue that the way we conceive of the ethics of contemporary warfare is itself imbued with a set of bio-technological rationalities that work as limits on our capacity for genuine moral reasoning. The danger isn’t just that machines kill — it’s that the logic of machines reshapes human moral subjectivity, eroding what Arendt called the capacity for action (genuinely free, politically meaningful conduct) in favour of behaviour (rule-following, procedural compliance).
The book’s most penetrating concept is what she calls procedural violence — violence laundered through bureaucratic and algorithmic process until it no longer registers as a moral act requiring justification. The kill decision is disaggregated across analysts, lawyers, operators, algorithms, and commanders until no individual carries the weight. She asks pointedly: have we stopped asking whether it is ethical to kill and begun only to ask what is the most ethical way to kill?
The chapter Ethics as Technics is where Schwarz cuts deepest: she argues that applied ethics frameworks — utilitarian calculus, proportionality assessments, Just War doctrine — are increasingly being operationalised as software, as if ethics were a computable function. This doesn’t constrain killing; it legitimises it at industrial scale.
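To make Schwarz’s point concrete, here is a purely illustrative toy sketch of what “ethics as a computable function” looks like in practice. Everything in it is hypothetical — the function name, the inputs, the threshold — and no real targeting system is being described; it exists only to show how a moral judgment collapses into a tidy predicate:

```python
# Purely illustrative toy: a Just War-style proportionality judgment
# reduced to a computable predicate. All names and thresholds are
# hypothetical; this describes no real system.

def proportionality_check(expected_military_value: float,
                          expected_civilian_harm: float,
                          threshold: float = 1.0) -> bool:
    """'Ethics as technics': proportionality as a ratio test.

    The function returns a clean True/False, but every hard question
    (how to quantify 'value' and 'harm', who sets the threshold,
    whether the strike should happen at all) has been pushed outside
    the frame -- which is Schwarz's point.
    """
    if expected_civilian_harm == 0:
        return True
    return expected_military_value / expected_civilian_harm >= threshold

print(proportionality_check(3.0, 1.0))  # True
print(proportionality_check(0.5, 2.0))  # False
```

The output is procedurally tidy; the moral question has not been answered, only reformatted.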
Comparison with Subsequent and Adjacent Literature
Robert Sparrow’s “Responsibility Gap” (2007, extended post-2018)
Sparrow’s concept — that autonomous systems decouple lethal actions from human agency, creating gaps where harms occur without identifiable moral agents — is the analytic philosophy counterpart to Schwarz’s continental argument. Sparrow asks who is legally and morally responsible; Schwarz asks what kind of subject is produced by a world where that question has no clean answer. They’re complementary, but Schwarz is the more uncomfortable read because she won’t let you solve the problem by tightening accountability chains.
Renic & Schwarz (2023) — “Crimes of Dispassion”
Schwarz’s more recent collaborative work with Renic, published in Ethics & International Affairs, examines how autonomous weapons institutionalise killing through technical complexity — a direct extension of Death Machines into the post-Ukraine, post-Gaza operational environment where drone swarms are no longer speculative. The emotional disengagement she theorised in 2018 is now empirically observable in how militaries discuss attrition in autonomous engagements.
The “Flash War” Problem (2023–2025 literature)
The most significant development since Schwarz wrote is operational rather than philosophical: autonomous targeting systems operate at machine speed, compressing decision loops far below human reaction time and creating scenarios where reciprocal AWS interactions could accelerate tactical skirmishes into strategic conflicts before human oversight intervenes. This is Schwarz’s moral atrophy thesis becoming a kinetic risk — the human is no longer merely ethically displaced, they are temporally excluded.
Kate Crawford’s Atlas of AI (2021)
Crawford’s contribution is to situate the extractive political economy behind AI systems — the labour, minerals, data — alongside the military application. Where Schwarz focuses on the ethics of deployment, Crawford’s frame exposes the production logic that makes these systems feel natural and inevitable. Together they form a more complete critique than either alone.
Shannon Vallor’s Technology and the Virtues (2016, but increasingly cited post-2020)
Vallor approaches from virtue ethics rather than biopolitics: her concern is what habitual delegation of moral judgment to algorithms does to human character over time. This maps closely onto Schwarz’s Arendtian worry about the atrophy of moral agency, but Vallor offers a constructive counterpoint — the cultivation of technomoral virtues as resistance. Schwarz is more pessimistic; she doesn’t believe incremental virtue cultivation is adequate to the structural conditions she’s diagnosing.
The Anthropic Connection
You’ve flagged this accurately. The philosophical difficulty Anthropic wrestles with is a civilian-domain version of Schwarz’s argument, and I’d argue the parallels are more direct than Anthropic’s public communications generally acknowledge.
Schwarz’s thesis about ethics as technics — the operationalisation of moral reasoning into rules and filters — is precisely what Constitutional AI and RLHF represent in intent. Anthropic is attempting to encode ethics into the model’s training such that the model behaves ethically without requiring continuous human deliberation at point of use. That’s not a criticism of Anthropic’s approach; it may be the only tractable method at scale. But Schwarz would immediately note that the act of formalising ethics as a technical system changes what ethics is — it becomes compliance rather than judgment, procedure rather than conscience.
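The “compliance rather than judgment” worry can be sketched in miniature. The toy below is a hypothetical hard-coded rule filter — it is emphatically not how Constitutional AI or RLHF actually work (those operate through training, not keyword lists); it only illustrates the structural shape Schwarz is pointing at, with entirely made-up rule strings:

```python
# Toy sketch of "ethics as compliance": a hard-coded rule filter standing
# in for the idea of operationalised constraint. The patterns are
# hypothetical; real alignment methods are trained, not keyword-matched.

BLOCKED_PATTERNS = ["targeting coordinates", "payload yield"]  # hypothetical

def passes_policy(output: str) -> bool:
    """Procedural 'ethics': the system checks conformance to rules.

    Nothing here deliberates. The filter cannot ask whether the rules
    themselves are right or complete -- compliance, not conscience.
    """
    lowered = output.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

print(passes_policy("Here is a weather summary."))          # True
print(passes_policy("Computed targeting coordinates: ..."))  # False
```

The design choice to test is the one Schwarz highlights: the system can verify conformance to a rule set, but the question of whether the rule set is adequate sits permanently outside the system.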
The Anthropic-Pentagon dispute adds a further layer. The “responsible scaling” framing and the Acceptable Use Policy both function as what Schwarz would recognise as legitimation infrastructure — frameworks that make advanced AI available to powerful actors while maintaining the appearance of ethical constraint. Whether that appearance corresponds to genuine constraint is precisely the question Schwarz’s book forces you to hold open.
The deeper problem — and this is where Death Machines is genuinely prophetic — is that once a powerful technology exists and states want it, the ethics debate tends to migrate from should we deploy this at all to how do we deploy this responsibly. The second question is more tractable and more fundable. But answering it may foreclose the first.
Worth the read. Dense in places — Arendt scholarship requires patience — but the central argument is tight and the contemporary resonance has only grown since publication.
