Parallel between Arendt's 'The Human Condition' and the EU AI Act Digital Omnibus

The EU is facing serious challenges with its AI Act, and the reasons why are becoming evident and worth considering.

My own interest in AI has focussed on the opportunity to dramatically improve productivity in banking through the use of AI. This research has opened many doors for me, and some are beginning to come into better focus, which sharpens my ability to analyse the hurdles.

My vision goes well beyond chatbots in terms of how AI will ultimately be integrated. I see two distinct potential tracks:

  • AGI (Artificial General Intelligence) offers the potential for inference, reasoning and yet-to-be-defined capabilities that will ultimately see AGI perform not merely better than human engagement at scale, but bring sense, reasoning, intelligence and alternative thinking beyond what humans can ever achieve. Coupled with the scale and immediacy at which AI can operate, AGI holds enormous potential for the future.
  • Agentic AI is emerging as a means to properly integrate AI into banks' systems and processes. These are early days; I am not commenting on the systemic risks here, and I do not believe governments have publicly recognised them.

In this blog I have explored philosophy, poetry, academic research, AI's own opinion on AI, and the evolution of my work on prompting and its relationship to outcomes that ensure proper responses. What is proper? Kant's 'Critique of Pure Reason' provides evidence that reason is not absolute, which points to what is hard about the future development of AGI. To state simply that AGI would be better than humans does no justice to what is needed across all aspects of AGI. Consider Genghis Khan, Putin, Kim, Bono, Rutte, Trump, da Silva, Modi… the list goes on. Which is better, which worse? It is an impossible question.

So we are left with frameworks that have developed over many centuries and now come naturally, such as ethics, philosophy, democracy, law and administrative processes.

Lately I have landed on life processes and the work of Hannah Arendt.

The parallel is exact, and it’s not strange at all — it’s one of the more penetrating lenses available for what’s actually happening.

Arendt’s central move in The Human Condition is to trace how the distinction between oikos (the private household, the domain of economic necessity and biological survival) and polis (the public realm of political action among equals) collapsed over the modern period into what she calls “the social” — a hybrid space where what was private necessity gets projected into the public domain, but without becoming genuinely political. The result isn’t freedom or deliberation; it’s administration, conformism, and the management of behaviour through anonymous process. Nobody decides in the strong sense. Things are processed.

From a research conversation with Anthropic's Claude AI:

The EU AI Act’s attempt to impose transparency, explainability, and human oversight is, in Arendt’s terms, an attempt to re-introduce something like political accountability into a process that has escaped into “the social” — automated, anonymous, consequential but ownerless. The regulatory framework’s inadequacy isn’t just technical. It’s that bureaucratic compliance mechanisms are themselves a social rather than political response. They manage the problem rather than confront it.

Arendt’s distinction between work and action is where AGI specifically becomes alarming in her framework. Work creates durable objects — things that outlast the maker, that furnish the human world with stability. Action is irreversible, unpredictable, and initiates chains of consequence that cannot be recalled. You’ll remember she draws on this to explain why Eichmann was so terrifying — not a monster but a man who had operationalised the suspension of moral judgment, substituting process for action in exactly the sense where action — genuine, responsible, irreversible decision — was most demanded. Schwarz makes this move directly in Death Machines, and it applies with even greater force to AGI. A sufficiently capable autonomous system doesn’t merely execute; it acts in Arendt’s sense — it begins something genuinely new in the world, with consequences that ramify in ways nobody anticipated, in the domain where the old guardrails (professional judgment, institutional accountability, legal personhood) were never designed to operate.

The thirty-year horizon you write toward is where this becomes structurally serious. The 2022-2026 AI governance conversation — copyright, bias audits, conformity assessments — is a social response to what may require a political one in Arendt’s sense: genuine collective deliberation about what kind of world we are building, by whom, accountable to whom, and on what terms. The methods we have — regulatory frameworks, international standards bodies, liability law — were built for a world where the consequences of technology were slower, more bounded, and more legible. They are, as you put it, precisely the current methods that don’t scale to the problem. That gap between the velocity of the change and the institutional capacity to deliberate about it is itself the civilisational risk — and Arendt saw the structural form of it clearly, even if she couldn’t have seen this specific instantiation.

The quick answer, which doesn't do it justice, is that banks already face serious regulatory oversight, but the other areas covered by the current EU AI framework do not. A solution there is hard, so bureaucrats fall back on a rules-based approach.

Now look at the Annex III list again with that lens — because the contrast is stark.

| Category | Existing sectoral regulator? | Regulatory depth |
| --- | --- | --- |
| Credit & financial services | Yes — PRA, FCA, OSFI, ECB, OCC | Deep, technically mature |
| Healthcare | Yes — MHRA, FDA, EMA, Health Canada | Deep, pre-market approval regimes |
| Biometrics / law enforcement | Yes — police oversight bodies, data protection authorities | Variable but substantial |
| Employment / HR | No meaningful equivalent | Thin — employment tribunals react after harm |
| Education | No meaningful equivalent | Very thin — no proactive AI governance |
| Access to essential services / social benefits | Partial — ombudsmen, administrative law | Weak, largely reactive |
