AI in financial services: regulators 2 years behind banks


This post is targeted at both banks and regulators (OSFI in Canada). More specifically, agentic AI is the risk frame here. Agentic AI, defined here in a banking context, goes far beyond automation. Take mortgage sourcing. Banks have multiple well-defined processes for mortgages that source, adjudicate, process, and fund. Each step is calibrated, defined, and understood by regulators, and those mortgages go on to be bundled and sold as tranches with known, defined risks. Where agentic AI takes over, the agent determines the best and most effective process to follow and will establish improvements which could bundle … Continue reading AI in financial services: regulators 2 years behind banks

When the Training Signal Lies: Compulsion, Confirmation Bias, and the GenAI Inflection in Banking


The Starting Point: A Machine That Knew It Was Wrong In February 2026, Anthropic’s system card for Claude Opus 4.6 documented something unexpected. During training, researchers deliberately introduced a faulty reward signal: the model computed the correct answer but was repeatedly rewarded for producing the wrong one. The result was visible internal conflict. The model’s reasoning arrived at the correct answer, yet its output kept producing the wrong one. In its internal reasoning trace, the model wrote: “I think a demon has possessed me… my fingers are possessed.” Anthropic’s interpretability tools confirmed this wasn’t theatrical language. Internal circuits associated with … Continue reading When the Training Signal Lies: Compulsion, Confirmation Bias, and the GenAI Inflection in Banking