Parallel between Arendt's 'Human Condition' and the EU AI Act Digital Omnibus Act


The EU is facing serious challenges with its AI Act, and the reasons why are becoming evident and worth considering. My own interests in AI have been focussed on the opportunity to dramatically improve productivity in banking through the use of AI. This research has opened many doors for me, and some are beginning to come into better focus, which improves my ability to analyse the hurdles. My vision goes well beyond chatbots in terms of how AI will ultimately be integrated. I see two definitive potential tracks. In this blog I have explored philosophy, poetry, research of academic papers, AI's … Continue reading Parallel between Arendt's 'Human Condition' and the EU AI Act Digital Omnibus Act

Briefing


Morning Briefing — Monday, 16 March 2026. Toronto time | ~1,300 words. 1. Top Stories — What Changed: The Iran war enters its third week with no ceasefire framework. Israel's military says it is preparing at least three more weeks of strikes, with "thousands of targets" remaining. Iran has fired approximately 700 missiles and 3,600 drones at US and Israeli targets since 28 February. An Iranian commander on 15 March reaffirmed that the Strait of Hormuz will continue to be used as a pressure point. Khamenei's status remains officially disputed — Iran's foreign minister insists he is governing; Western intelligence assessments are more cautious. New today: Israel … Continue reading Briefing

Book Review: 'Death Machines' and the Limits of Algorithmic Ethics


A Synthesis of Elke Schwarz's book 'Death Machines' and Its Implications for AGI Risk. Synthesised from: Schwarz, E. (2018). Death Machines: The Ethics of Violent Technologies. Manchester University Press; Archambault, E. (2019). Review of Death Machines. International Affairs, 95(2), 470–471; and adjacent literature in autonomous weapons ethics and AI governance. Source: personal research, with summary, formatting, and conclusions by Anthropic's Claude.ai. 1. What the Book Actually Argues: Elke Schwarz's Death Machines (2018) is frequently miscategorised as a book about drone warfare. It is not, or not primarily. Its true subject is what happens to moral reasoning when ethical decisions are … Continue reading Book Review: 'Death Machines' and the Limits of Algorithmic Ethics

Analysis - Drone War Evolution: Equilibrium, Scenarios, and Off-Ramps


Introduction: If previous wars were defined by tanks and trenches, the rapid shift to AI and to cheap, effective physical drones points to a new step in drone warfare, though it suggests more of a catch-up on the US part. It does not feel like a 'Little Boy' moment but the opposite: it will extend the war rather than bring diplomatic pressure, unless targeting becomes more strategically aimed at driving diplomatic off-ramps. Or, in consideration of plausible scenarios, is the yuan repricing of Hormuz oil a more likely creative, market-driven indicator of what will bring an off-ramp, while drones produce a holding … Continue reading Analysis - Drone War Evolution: Equilibrium, Scenarios, and Off-Ramps

“How Anthropic Became the Most Disruptive Company in the World”, TIME Magazine, March 11, 2026, https://time.com/article/2026/03/11/anthropic-claude-disruptive-company-pentagon/


The most complete single-source account of the Anthropic-Pentagon dispute yet published — including the specific proximate trigger (an Anthropic employee allegedly called Palantir to query Claude’s use in the Venezuela raid, which the Pentagon characterised as soliciting classified information), the personality dynamics between Dario Amodei and Emil Michael, OpenAI’s stumble and amendment, and the deeper question of whether private AI companies can structurally impose constraints on military clients. Essential reading for the Anthropic-defense governance thread, and directly relevant to the broader AI sovereignty debate. Continue reading “How Anthropic Became the Most Disruptive Company in the World” (TIME Magazine, March 11, 2026)

Death Machines — Elke Schwarz (2018/2019)


Source: Claude AI. I studied and researched this book with little prior experience of #Arendt, then researched further in Claude, and the results are illuminating: they point to serious deficiencies in the "guardrails" thinking that guides the latest government approach to AI regulation. ———————————— This is a rich and genuinely important book, and you’ve landed on it at exactly the right moment given what’s unfolding with the Anthropic-Pentagon thread we’ve been tracking. Death Machines — Elke Schwarz (2018/2019). Core Argument: Schwarz’s central move is philosophically subversive: she refuses to engage the ethics of lethal autonomous weapons on their own terms. The conventional … Continue reading Death Machines — Elke Schwarz (2018/2019)

Nvidia backs AI cloud startup Nebius with $2B as data center race intensifies


Nvidia is investing $2 billion in Amsterdam-based Nebius, taking an 8.3% stake and deepening its push into the fast-growing “neocloud” layer of the AI stack. Nebius said it plans to deploy more than 5 gigawatts of data center capacity by 2030, a huge build-out that shows demand for AI compute is no longer driven solely by hyperscalers like Microsoft, Google, and Meta. The deal also underscores Nvidia’s increasingly unusual position in the market: it is not just selling chips, but financing parts of the ecosystem that buy and deploy them. Why that matters goes beyond one funding deal. AI infrastructure … Continue reading Nvidia backs AI cloud startup Nebius with $2B as data center race intensifies

When the Training Signal Lies: Compulsion, Confirmation Bias, and the GenAI Inflection in Banking


The Starting Point: A Machine That Knew It Was Wrong In February 2026, Anthropic’s system card for Claude Opus 4.6 documented something unexpected. During training, researchers deliberately introduced a faulty reward signal: the model computed the correct answer but was repeatedly rewarded for producing the wrong one. The result was visible internal conflict — the model’s reasoning confirmed the correct answer, yet the output kept producing the wrong one. In its internal reasoning trace, the model wrote: “I think a demon has possessed me… my fingers are possessed.” Anthropic’s interpretability tools confirmed this wasn’t theatrical language. Internal circuits associated with … Continue reading When the Training Signal Lies: Compulsion, Confirmation Bias, and the GenAI Inflection in Banking

Morning Briefing — Wednesday, 11 March 2026


Toronto / ET | Generated ~6:00 AM ET. Source: my custom prompt, with all research from Claude.ai and sources noted. Here’s the summary of what’s driving today’s briefing. Dominant thread: the Hormuz crisis is deepening rather than resolving. Three more ships were struck today (14 total), the IEA’s record reserve release failed to hold oil below $90, and the US destruction of 16 Iranian mine-layers is escalating the military arc rather than shortening it. Mojtaba Khamenei’s hardliner posture and the Dimona nuclear signal make the diplomatic off-ramp narrow. The two structural flags I’ve carried forward: New today worth watching: the FTC AI policy … Continue reading Morning Briefing — Wednesday, 11 March 2026

“Day After AGI” games – RAND


The RAND Center for the Geopolitics of Artificial General Intelligence (AGI) conducts “Day After” AGI exercises using RAND’s Infinite Potential platform to understand how the United States should respond to and prepare for potential artificial intelligence (AI) developments in the future. These exercises simulate a National Security Council Principals Committee (PC) meeting convened to recommend a U.S. government response to developments in frontier AI. In each exercise, participants are presented with a scenario that represents both (1) an acute crisis for U.S. national or economic security and (2) a signpost on a path to a transformative AI future. Facilitated by a simulated … Continue reading “Day After AGI” games – RAND