AI in financial services: regulators 2 years behind banks

This post is targeted at both banks and regulators (OSFI in Canada).

More specifically, Agentic AI is the risk frame here. Agentic AI, defined in the banking context, goes far beyond automation. Take mortgage sourcing. Banks run multiple well-defined mortgage processes that source, adjudicate, process, and fund each loan. Every step is calibrated, documented, and understood by regulators, and those mortgages go on to be bundled and sold as tranches with known, defined risks.

Where Agentic AI takes over, the agent determines the best and most effective process to follow and will identify improvements, which could bundle separate processes into one.
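The contrast can be sketched in a few lines of code. This is an illustrative toy only, not any bank's actual system: all step names and functions here are invented. It shows the structural difference between a fixed, regulator-calibrated pipeline and an agent that plans its own path at run time, possibly merging steps.

```python
from typing import Callable

# Classic automation: a fixed, auditable sequence of calibrated steps.
# Regulators understand each step because the sequence never changes.
FIXED_PIPELINE = ["source", "adjudicate", "process", "fund"]

def run_fixed(application: dict) -> dict:
    for step in FIXED_PIPELINE:
        application[step] = "done"  # each step is separately reviewable
    return application

# Agentic behaviour: the agent supplies its own plan, so the executed
# sequence can differ from anything regulators have calibrated.
def run_agentic(application: dict, plan: Callable[[dict], list]) -> dict:
    for step in plan(application):  # path is chosen at run time
        application[step] = "done"
    return application

# A hypothetical agent plan that collapses adjudication and processing
# into one merged step, as described above.
def merged_plan(app: dict) -> list:
    return ["source", "adjudicate+process", "fund"]

fixed = run_fixed({"id": 1})
agentic = run_agentic({"id": 2}, merged_plan)
```

The point of the sketch is that the audit trail of the agentic run no longer matches the step-by-step trail regulators calibrated against, even when the output (a funded mortgage) looks the same.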

The output mortgages will be bundled and sold into the market as risk tranches. These tranches could exhibit new, unexpected risk characteristics.

The inherent new risk comes from agentic behaviour that is unexpected and does not correspond to pre-agentic deployments, i.e. old-style automated processes.

An additional risk arises where the amalgam of agentic outputs, or the individual outputs themselves, is not understood by the top-level AI supervisory model. This could be the catastrophic risk.

I have some detailed research and analysis that I will publish.

——

From today’s News briefing.

AI in financial services: regulators 2 years behind banks
A Cambridge Centre for Alternative Finance report (released yesterday) finds more than 80% of financial services firms are now using AI, with 52% already experimenting with agentic AI. Of 130 regulatory authorities surveyed, 48% are still in exploration or have no AI engagement at all. Separately, reports surfaced that US Treasury Secretary Bessent and Fed Chair Powell convened a closed-door meeting with Wall Street CEOs in mid-April over cybersecurity threats posed by a new Anthropic model — a first of its kind systemic-risk designation.
New today: Cambridge report released April 29; data privacy cited by 73% of respondents as top AI risk.
Why it matters: ⚑ The regulatory gap in agentic AI adoption inside banking is structural, not cyclical. If autonomous agents are making credit, fraud, and treasury decisions at scale without commensurate regulatory oversight, the next operational failure will arrive faster than governance can catch it. The model-risk-as-systemic-risk framing from Treasury/Fed is a precedent.
Sources: Yahoo Finance/Cambridge report
