New tensions for banks from regulators on both sides of the Atlantic, with deadlines

US Treasury released an AI-based plan to regulate core financial services functions

Artificial intelligence is increasingly embedded in core financial services functions – from fraud detection and cybersecurity to credit underwriting and operational risk management. As adoption accelerates, regulators and institutions must ensure that governance, supervisory approaches, and market practices evolve alongside technological capability.

This will undoubtedly create tension within banks as they work to meet the requirements. I must add my own view: the Treasury initiative is moving at such speed that I doubt it can meet its own deadline while incorporating all of its own objectives.

There is a parallel and possibly greater tension from the EU AI Act.

… the EU AI Act’s high-risk provisions — covering credit scoring, AML monitoring, and automated lending — apply from August 2, 2026. Non-compliance penalties run to 7% of global turnover. A Wolters Kluwer Q1 survey found only 12.2% of banks describe their AI/ML strategy as “well-defined and resourced,” while 31.8% have deployed AI into production. A parallel KPMG figure: 99% of firms plan to deploy AI agents; only 11% have done so. 

Why it matters to Bankwatch.

Certain core functions that banks do not handle effectively today, including AML, will see some strange outcomes. The survey indicates a lack of preparation among banks with regard to AI/ML strategy writ large, let alone for core functions. Further, the encroachment into automated lending, something banks take some pride in, whether justified or not, will create new headaches.

These parallel activities (EU and US Treasury) escalate tension among banks as they seek to manage broad issues of financial stability, economic uncertainty, and FX shifts that have the potential to move markets (Hormuz/yuan).

______________________________________________________________

EU AI Act August Deadline: Banking Sector Unprepared

The EU AI Act’s high-risk provisions — covering credit scoring, AML monitoring, and automated lending — apply from August 2, 2026. Non-compliance penalties run to 7% of global turnover. A Wolters Kluwer Q1 survey found only 12.2% of banks describe their AI/ML strategy as “well-defined and resourced,” while 31.8% have deployed AI into production. A parallel KPMG figure: 99% of firms plan to deploy AI agents; only 11% have done so. Separately, US Treasury launched an “AI Innovation Series” (April 1) — a public-private initiative framing AI adoption as a national economic security matter. Treasury Secretary Bessent: “failure to adopt productivity-enhancing technology is its own risk.”

New today: Treasury’s AI Innovation Series launched; the August EU AI Act deadline is now less than four months away.

PRESS RELEASES

Treasury Launches the Artificial Intelligence (AI) Innovation Series

March 23, 2026

WASHINGTON — The Office of the Financial Stability Oversight Council (FSOC) and the Treasury Department’s Artificial Intelligence Transformation Office (AITO) launched the AI Innovation Series this week, a public-private initiative to support the continued strength and resilience of the U.S. financial system in an era of accelerating technological change.

Artificial intelligence is increasingly embedded in core financial services functions – from fraud detection and cybersecurity to credit underwriting and operational risk management. As adoption accelerates, regulators and institutions must ensure that governance, supervisory approaches, and market practices evolve alongside technological capability.

“Economic security – the condition of having secure and resilient domestic production capacity – is core to financial stability, and leadership in AI adoption is a crucial component of economic security,” said U.S. Treasury Secretary Scott Bessent. “We are optimizing regulation to support growth for both Main Street and Wall Street: moving from a posture focused on constraint toward one that recognizes failure to adopt productivity-enhancing technology as its own risk. The Treasury Department will continue evaluating regulatory frameworks and enforcement policies to enable the U.S. financial sector’s leadership in AI adoption while preserving national security and long-term economic resilience.”

Over four roundtables, the AI Innovation Series will convene financial institutions, technology firms, regulators, and specialized experts to explore the highest-value AI use cases and identify practical approaches to scaling innovation while preserving safety and soundness.

“AI adoption is not merely a question of technological modernization—it is critical to America’s financial stability and a precondition to economic growth,” said Deputy Assistant Secretary for FSOC Christina Skinner. “When institutions cannot deploy tools that improve fraud detection, credit allocation, and operational resilience, the system becomes less efficient and less secure.”

“AI is moving from experimentation to enterprise-wide integration, and disciplined implementation will determine its impact,” said Paras Malik, Treasury’s Chief AI Officer and Counselor to the Secretary. “The priority now is on operationalization, embedding AI into core workflows in ways that measurably enhance risk management and resilience. Through the Innovation Series, we are convening regulators and industry leaders to ensure governance frameworks evolve alongside deployment and remain fit for purpose as AI becomes embedded across financial markets.”

Why it matters: The gap between AI deployment ambition and governance readiness in financial services is now a regulatory exposure, not just an operational lag. The August date is fixed.

Sources: Wolters Kluwer survey · US Treasury
