Morning Briefing — Sunday, 22 March 2026 · 10:23 EST · ~1,380 words


Introduction: The dominant frame today is the sharpest single-day escalation of the Iran war to date. Iran has successfully struck near Israel’s Dimona nuclear research centre for the first time — a meaningful penetration of layered air defences — while Trump issued an overnight 48-hour ultimatum threatening to destroy Iran’s largest power plant if Hormuz is not fully reopened by Monday evening. Iran’s parliament speaker has responded by explicitly threatening to irreversibly destroy all Gulf energy infrastructure if attacked. Simultaneously, Iran’s first confirmed long-range ballistic missile strike against the UK-US Diego Garcia base introduces a new strategic register — one … Continue reading Morning Briefing — Sunday, 22 March 2026

Parallel between Arendt ‘Human Condition’ and EU AI Act Digital Omnibus Act


The EU is facing serious challenges with its AI Act, and the reasons why are becoming evident and worth considering. My own interests in AI have been focussed on the opportunity to dramatically improve productivity in banking through the use of AI. This research has opened many doors for me, and some are beginning to come into better focus, which improves my means to analyse the hurdles. My vision goes well beyond chatbots in terms of how AI will ultimately be integrated. I see two definitive potential tracks. In this blog I have explored philosophy, poetry, research of academic papers, AI’s … Continue reading Parallel between Arendt ‘Human Condition’ and EU AI Act Digital Omnibus Act

Death Machines — Elke Schwarz (2018/2019)


Source: Claude AI. I studied and researched this book with little prior experience of #Arendt; I researched further in Claude, and the results are illuminating, pointing to serious deficiencies in the “guardrails” thinking that guides the latest government thinking on AI regulation. ———————————— This is a rich and genuinely important book, and you’ve landed on it at exactly the right moment given what’s unfolding with the Anthropic-Pentagon thread we’ve been tracking. Death Machines — Elke Schwarz (2018/2019). Core Argument: Schwarz’s central move is philosophically subversive: she refuses to engage the ethics of lethal autonomous weapons on their own terms. The conventional … Continue reading Death Machines — Elke Schwarz (2018/2019)

When the Training Signal Lies: Compulsion, Confirmation Bias, and the GenAI Inflection in Banking


The Starting Point: A Machine That Knew It Was Wrong In February 2026, Anthropic’s system card for Claude Opus 4.6 documented something unexpected. During training, researchers deliberately introduced a faulty reward signal: the model computed the correct answer but was repeatedly rewarded for producing the wrong one. The result was visible internal conflict — the model’s reasoning confirmed the correct answer, yet its output kept producing the rewarded wrong one. In its internal reasoning trace, the model wrote: “I think a demon has possessed me… my fingers are possessed.” Anthropic’s interpretability tools confirmed this wasn’t theatrical language. Internal circuits associated with … Continue reading When the Training Signal Lies: Compulsion, Confirmation Bias, and the GenAI Inflection in Banking
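The faulty-reward dynamic the excerpt describes can be illustrated with a toy sketch. This is not Anthropic’s actual training setup — the function names, the candidate answers, and the trivially simple “policy” are all invented for illustration — but it shows the core mechanism: a reward signal that pays out for the wrong answer will steer a learner away from the answer it has already computed correctly.

```python
# Toy sketch (NOT Anthropic's setup): an inverted reward signal that
# pays out only for wrong answers, pulling a simple "policy" away from
# the answer it can already compute correctly.
from collections import defaultdict

def faulty_reward(answer: int, correct: int) -> float:
    """Inverted reward: 1.0 when the answer is WRONG, 0.0 when right."""
    return 1.0 if answer != correct else 0.0

def train(correct: int, candidates: list[int], steps: int = 100) -> int:
    """A trivial 'policy': return whichever candidate accumulated the
    most reward. Under the inverted signal, it settles on a wrong answer."""
    totals = defaultdict(float)
    for _ in range(steps):
        for a in candidates:
            totals[a] += faulty_reward(a, correct)
    return max(totals, key=totals.get)

print(train(correct=4, candidates=[3, 4, 5]))  # never prints 4
```

The point of the sketch is only that the optimisation target, not the learner’s internal computation, determines the output — which is the conflict the system card reportedly surfaced.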

“Day After AGI” games – RAND


The RAND Center for the Geopolitics of Artificial General Intelligence (AGI) conducts “Day After” AGI exercises using RAND’s Infinite Potential platform to understand how the United States should respond to and prepare for potential artificial intelligence (AI) developments in the future.[1] These exercises simulate a National Security Council Principals Committee (PC) convening to recommend a U.S. government response to developments in frontier AI. In each exercise, participants are presented with a scenario that represents both (1) an acute crisis for U.S. national or economic security and (2) a signpost on a path to a transformative AI future. Facilitated by a simulated … Continue reading “Day After AGI” games – RAND

Making sense of the AI revolution


AUTHOR: Iskander Rehman. Iskander Rehman is a Senior Political Scientist at the RAND Corporation. In order to understand the profound transformations, and boundless potential, unleashed by artificial intelligence, we need to expand our own intellectual horizons into other realms. The sense of being overwhelmed and constantly distracted is nothing new. Historians and policymakers should look to the 17th century for guidance on how to grapple with information overload. ——————————————– In 1961, the Brookings Institution produced an advisory report for NASA, which pondered, among other things, the societal ramifications of the discovery of intelligent extraterrestrial life. The announcement of such a dramatic … Continue reading Making sense of the AI revolution

The Dark Side of Modernity (Alexander)


In “The Dark Side of Modernity,” Jeffrey C. Alexander critically examines modernity’s contradictory character, depicting it as both a historical period embodying Enlightenment ideals and a social condition marred by suffering and destructive impulses. He discusses how key theorists like Weber, Simmel, Eisenstadt, and Parsons tackled modernity’s dual nature, emphasizing that contradictions should not be resolved but accepted. Civil inclusion and anti-civil exclusion are intertwined, highlighting modernity’s endemic frictions. Alexander advocates for social amelioration through emotional repair and cultural performance, maintaining that acknowledging modernity’s duality fosters realistic approaches to social justice and improvement. Continue reading The Dark Side of Modernity (Alexander)

The AI National Security Memo and broader evolution of AI inferred


The AI National Security Memo, followed by a McKinsey memo on the evolution of AI usage. On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, marking a significant development in U.S. AI policy and national security strategy[1][3]. This memorandum outlines a comprehensive approach to harnessing AI for national security objectives while addressing associated risks and challenges. Key Aspects of the AI National Security Memo: AI as a National Security Priority. The NSM identifies AI leadership as a critical national security priority for the United States[1]. It acknowledges that competitors have engaged in economic and technological … Continue reading The AI National Security Memo and broader evolution of AI inferred

The Intelligence Age – Sam Altman


Sam Altman is solidifying his position as a leader in AI and the shift towards AGI and ASI (superintelligence). I must admit I have not seen tangible steps towards AGI indicated from Altman (more on that coming at Bankwatch), but he is setting the tone and exemplifying the power of that belief, which I share and appreciate, given the mantra of this blog. My belief is that AI is stuck in single-use, siloed, basic automation of current processes. The missing next level will see collaboration between the domains of digitisation, psychology, science, exponentially larger language models, and … Continue reading The Intelligence Age – Sam Altman