Exploration of thesis: “SaaS shift to GaaS”. What are the impacts on core banking software vendors and regulatory regimes?

(GaaS – Agentic AI as a Service – source: NVDA)

Here is some real-time research that emanates from today’s Morning Briefing. The core of this discussion is the shift to Agentic AI and the provision of core services, which goes to the heart of commoditisation for traditional vendors. The scope of the discussion here is core banking software vendors and banking regulatory regimes (OSFI).

Explanation

1. Prompt: my comments and questions

2. Output: results from Claude.ai

This is raw, real-time thinking. The space is moving fast, driven by frontier development, with Anthropic’s Claude Mythos exemplifying the direction of GaaS, which first (to me at any rate) gained traction at GTC (Jensen Huang).

OpenAI, too, are refocusing, cutting the breadth of development and apparently shifting back to the core of ChatGPT (time to rename and lose the ‘chat’ moniker, Sam). The speed of development requires me to keep track, and I use my banking blog for that.

————————

Prompt

re: “US Treasury launches AI Innovation Series for financial sector. Treasury’s FSOC and its new Artificial Intelligence Transformation Office (AITO) launched a four-roundtable public-private series to accelerate AI deployment across fraud detection, credit underwriting, and operational risk management in US financial institutions. Chief AI Officer Paras Malik framed it as moving “from experimentation to enterprise-wide integration.”

Why it matters: The first formal US government-convened framework specifically targeting AI deployment in core banking functions — aligns with the SaaS-to-GaaS thesis and signals regulatory direction of travel.

Source: US Treasury”

“How does government standardising proprietary core banking processes align with GaaS?” I think there are possibly two issues here: 1. government involvement in business processes; 2. GaaS deployment, which is a pure business process, and its evolution.

Output

You’re right to push back. The framing in the briefing conflated two distinct tracks, and the “aligns with GaaS” claim doesn’t hold on examination.

Track 1 — Government involvement in business process

What Treasury is actually doing is regulatory and supervisory: establishing oversight frameworks for AI deployment in systemically important financial institutions. That’s governance of AI in banking, not acceleration of it in any commercially meaningful sense. The FSOC’s mandate is safety and soundness — its instinct is to slow and standardise, not to enable competitive differentiation. Historically, when regulators convene public-private roundtables on technology adoption, the output is guidance, model risk management frameworks, and audit requirements — all of which impose compliance overhead that tends to favour incumbent vendors and established architectures. This could as easily retard GaaS adoption as support it.

Track 2 — GaaS as commercial business process evolution

The SaaS-to-GaaS thesis — as Huang framed it at GTC — is a market structure argument: that AI infrastructure (GPU clusters, frontier models, inference APIs) will displace the SaaS application layer as the primary point of enterprise value capture. Banks stop buying loan origination software from Finastra or FIS and start consuming intelligence directly from hyperscaler or frontier AI provider infrastructure, with their own workflow logic sitting on top. That’s a competitive and architectural shift driven by cost, capability, and vendor economics — nothing to do with Treasury roundtables.

Where the conflation occurred

The only legitimate connection is indirect: regulatory clarity can reduce one inhibitor of enterprise AI adoption. If FSOC eventually produces clear guidance on model validation, explainability requirements, and audit trails for AI-driven credit decisions, that removes one source of institutional hesitation. But that’s a second-order effect — and it’s as likely to constrain GaaS architectures (which are inherently less auditable than deterministic SaaS workflows) as to enable them.

Corrected framing for the briefing item

The Treasury initiative is better read as regulators beginning to engage with AI deployment in banking before it outpaces their supervisory frameworks — which matters as a risk signal rather than as an enabler. For GaaS specifically, the regulatory direction of travel is more likely to produce friction (explainability mandates, model risk governance, vendor concentration concerns) than alignment.

Worth noting separately: the GaaS transition in banking is happening independently of Treasury, driven by procurement economics and capability gaps in incumbent SaaS vendors — Salesforce, ServiceNow, the core banking platforms. Those dynamics are what merit tracking, not the regulatory scaffolding.

Prompt

Beautiful reframe – thank you. Treasury are rightly staying ahead of new tools, processes, and players within their regulatory purview. I have a follow-up question, though, and it goes to the role and architectural placement of frontier AI in 2050 and beyond.

Jensen is on record referring to future AI as a utility like electricity. [{Ed.} Pushed enter accidentally – see Prompt (follow-up) right after this Output.]

Output

Colin is asking a substantive analytical question about the long-term architectural placement of frontier AI — comparing Jensen Huang’s “AI as utility” framing to electricity as a historical analogy, and projecting to 2050 and beyond. This is right in his wheelhouse — geopolitics, technology, societal transformation with a 30+ year horizon. Let me think through this carefully.

The electricity analogy is well-worn but worth interrogating seriously:

The electricity utility model — Electricity became a utility when it was commoditised, standardised, and delivered over shared infrastructure. The value migrated from generation to application. Insull’s grid model, then regulated utilities, then deregulation. The “intelligence” in electricity is minimal — it’s a commodity input.

Jensen’s framing — He’s positioning AI inference as a utility: intelligence consumed from a grid of data centres and GPU clusters, much like drawing power from the electrical grid. It’s the GaaS thesis scaled to civilisational scope.

The 2050 question — I need to think about what frontier AI’s architectural position actually looks like that far out. The core tension is whether intelligence can truly commoditise like electricity did, or whether capability differentiation keeps frontier models proprietary and tiered rather than undifferentiated utilities. There’s also the regulatory dimension to consider — how does governance shape whether this becomes a genuine utility or remains concentrated.

The generation analogy has real limits too. Electricity can be distributed across many sources, but frontier AI compute may naturally consolidate into monopolies or tight oligopolies just by the physics of scale. And nations won’t tolerate foreign AI utilities the way they do energy dependence, which pushes toward sovereign AI stacks and fragmentation. The further question is whether AGI or post-AGI systems even fit the utility model at all.

By 2050, if AI becomes true infrastructure, it might be embedded so deeply in governance, finance, and medicine that “utility” undersells what it is. The real architectural battle is over who controls the inference layer — hyperscalers, sovereign operators, or distributed open networks. And energy becomes the hard constraint. AI at civilisational scale needs so much power that inference infrastructure probably gets physically fused with energy generation, especially nuclear and geothermal.

For banking specifically, this means the core platforms everyone uses today become almost irrelevant. The actual “core” becomes an intelligence layer sitting on top of minimal data storage, with credit decisions, risk assessment, compliance, and customer service all pulled from the AI utility grid.

Now I’m pushing back on the electricity analogy itself. It works for explaining commoditisation and how capital concentration drives supply, but it breaks down in crucial ways. AI systems have agency in ways electricity doesn’t, so governance gets fundamentally different. Intelligence isn’t homogeneous like electricity — models will probably stay differentiated. And AI consumption is interactive and recursive, not passive like flipping a switch. The feedback loops change everything. Plus, electricity became a natural monopoly because physics made duplication wasteful, but AI might not follow that same logic across all scales.

The real 2050 question isn’t whether AI becomes a utility — it’s whether it becomes so embedded in legal, financial, medical, and governance systems that it’s inseparable from how civilisation actually functions. That’s a different category entirely than electricity. It’s not just powering machines; it’s making decisions and generating knowledge at a scale where human agency over those decisions becomes the actual problem. This connects to the autonomy question Elke Schwarz raised — at civilisational scale, the meaningful question shifts from “is it regulated?” to “do the people nominally in control actually retain agency over what it decides?”

The electricity analogy is Jensen’s most powerful public framing — and it’s right in its structural logic but incomplete in ways that matter considerably at the 2050 horizon.

Where the analogy holds

The historical parallel is sound. Electricity followed a recognisable arc: proprietary generation (Edison’s Pearl Street), infrastructure standardisation (Insull’s grid), regulatory capture as natural monopoly, then commoditisation with value migrating entirely to the application layer. Jensen is mapping that arc onto AI infrastructure: GPU clusters as generation, hyperscaler networks as transmission grid, inference APIs as the wall socket. Consumption displaces ownership. The GaaS thesis is just this applied to enterprise software over the next decade.

Where the analogy breaks — and this is the more interesting territory

Electricity is undifferentiated. A watt from nuclear and a watt from solar are identical at point of consumption. Intelligence is not, and may never be. If frontier model capability continues to compound — and the Mythos/Capybara leak this week is one data point suggesting it will — then the “utility” is perpetually tiered in ways electricity never was. There is no equivalent in electricity of one utility’s current being qualitatively superior to another’s. This means the natural monopoly dynamics may be far more concentrated and durable than the electricity analogy implies.

More fundamentally: electricity has no agency. At the 2050 horizon, frontier AI systems will have increasing and possibly substantial agency — generating knowledge, mediating legal and financial decisions, operating within governance structures. That is a categorically different infrastructure type. The better frame at that horizon is not utility but infrastructure of cognition — embedded so deeply in legal, financial, medical, and scientific systems that it is no longer separable from them, in the way that writing or mathematics are no longer separable from civilisation. You don’t “use” writing as a utility. It constitutes the architecture of thought.

The sovereignty problem the utility frame obscures

Nations accepted energy import dependence — with significant political cost, as 1973 and the current Hormuz crisis illustrate. They will not accept cognitive infrastructure dependence at civilisational scale without either regulatory capture of foreign providers or the development of sovereign alternatives. This is already visible: EU AI Act, China’s domestic model stack, the Anthropic-Pentagon litigation. By 2050 the geopolitical architecture of AI will look more like nuclear energy — a handful of sovereign or quasi-sovereign operators, strict export controls, and technology transfer as a primary instrument of statecraft — than like electricity, which flows relatively freely across borders.

The banking-specific implication at 2050

If you take the utility frame seriously and extend it to its logical conclusion over 25 years, core banking platforms as currently conceived — Temenos, FIS, Finastra, Mambu — are vestigial architecture. The “core” becomes a thin data persistence substrate with identity, ledger, and regulatory record-keeping functions, while credit, risk, compliance, pricing, and customer interaction are all consumed from the AI inference layer. The bank as institution becomes a regulatory licence and a balance sheet wrapped around an intelligence consumption contract. That is GaaS taken to its structural conclusion — and it implies a radical reconcentration of power toward whoever controls the inference infrastructure.

The question the utility frame doesn’t answer

Who governs the grid? Electricity utilities are regulated as natural monopolies because transmission physics makes duplication inefficient. The AI equivalent is compute concentration — and it is far more concentrated than any electricity utility has ever been. Three or four entities — likely Microsoft/OpenAI, Google DeepMind, Anthropic, and a Chinese state-backed equivalent — may control the majority of civilisationally significant AI infrastructure by 2050. The governance question is not whether AI becomes a utility but whether democratic institutions retain the capacity to regulate an infrastructure that is simultaneously more powerful and less legible than any prior utility. That is where the Schwarz/Arendt frame re-enters: the banality of outsourced cognition at scale.

Jensen’s framing is the right commercial architecture for the next decade. At the 2050 horizon, the more precise formulation may be: AI is to cognition what electricity was to physical work — but cognition, unlike physical work, is the substrate of governance, law, and meaning-making. That asymmetry changes everything about what “utility regulation” would need to accomplish.

Prompt (follow up)

so when we say

Banks stop buying loan origination software from Finastra or FIS and start consuming intelligence directly from hyperscaler or frontier AI provider infrastructure, with their own workflow logic sitting on top. That’s a competitive and architectural shift driven by cost, capability, and vendor economics — nothing to do with Treasury roundtables.

that discounts Finastra, FIS, Temenos building their own layer and selling that.

Correct — and that’s a significant gap in the framing worth closing properly.

What the incumbent vendors will actually do

Finastra, FIS, Temenos, and Mambu are not going to sit still while hyperscalers commoditise their application layer. Their most probable response is to become AI orchestration intermediaries — wrapping frontier model inference in domain-specific banking context: regulatory compliance, GL logic, Basel capital treatment, IFRS 9 provisioning, AML typologies, jurisdiction-specific lending rules. That is genuinely defensible territory because it is not something Azure OpenAI or Anthropic’s API will provide out of the box, and it is expensive and slow to build from scratch inside a bank.

The value proposition inverts slightly but remains intact: instead of selling workflow software, they sell pre-configured intelligence layers — trained on banking-specific data, pre-validated against regulatory frameworks, with audit trails built in. Temenos Infinity becomes less a front-end application and more a domain adaptation and governance wrapper around whatever the current frontier model happens to be. The software vendor becomes the model integrator and compliance abstraction layer.

Why this matters structurally

It means the GaaS transition in banking is unlikely to be a clean architectural displacement. It is more likely to produce a three-tier stack:

Frontier model / hyperscaler infrastructure at the base (the utility layer — Jensen’s frame)

Domain-specific orchestration and compliance wrapper in the middle (the incumbent vendor’s survival play)

Bank-owned workflow, identity, and ledger substrate at the top (the thin core)
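The three-tier stack above can be made concrete with a minimal sketch. This is purely illustrative: every class and method name is invented for this post, and no real vendor product or frontier-model API is implied. It simply shows where the audit trail and domain validation (the incumbent vendor’s defensible middle layer) sit between raw inference and the bank’s thin core.

```python
# Illustrative sketch of the hypothetical three-tier GaaS banking stack.
# All names are invented for illustration; no real vendor or model API is implied.

class FrontierModel:
    """Tier 1: the utility layer -- raw inference, no banking context."""
    def infer(self, prompt: str) -> str:
        # Stand-in for a frontier model inference call.
        return f"raw-assessment({prompt})"

class ComplianceWrapper:
    """Tier 2: the incumbent vendor's survival play -- domain rules,
    regulatory validation, and an audit trail around raw inference."""
    def __init__(self, model: FrontierModel):
        self.model = model
        self.audit_log: list[str] = []

    def credit_decision(self, application: str) -> str:
        result = self.model.infer(application)
        # Domain adaptation would apply jurisdiction-specific lending
        # rules, Basel capital treatment, AML typologies, etc.
        validated = f"validated[{result}]"
        self.audit_log.append(validated)  # explainability / audit record
        return validated

class ThinCoreBank:
    """Tier 3: the bank as thin core -- identity, ledger, and a
    consumption contract with the intelligence layer."""
    def __init__(self, wrapper: ComplianceWrapper):
        self.wrapper = wrapper
        self.ledger: list[str] = []

    def originate_loan(self, application: str) -> str:
        decision = self.wrapper.credit_decision(application)
        self.ledger.append(decision)  # data persistence substrate
        return decision

bank = ThinCoreBank(ComplianceWrapper(FrontierModel()))
decision = bank.originate_loan("SME loan, CAD 250k")
```

The point of the sketch is structural: if Tier 1 models become capable enough to perform Tier 2’s validation natively, the wrapper class collapses into the model call, which is exactly the commoditisation risk discussed below.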

The incumbent vendors’ survival depends entirely on whether that middle layer is defensible. Two factors determine this:

First, whether frontier models become sufficiently capable and context-aware that the domain adaptation layer can be commoditised away — which is precisely what Mythos/Capybara suggests is approaching. If a model can reason through Basel IV capital treatment without a Finastra wrapper, the wrapper loses its value.

Second, whether regulators — including Treasury’s FSOC initiative — mandate the kind of explainability, audit, and model validation requirements that make a managed intermediary layer not just commercially attractive but legally necessary. This is the scenario where regulatory friction becomes the incumbent vendors’ most durable competitive moat, which is an uncomfortable irony: Finastra’s best business case in 2030 may rest on regulatory complexity rather than software capability.

The historical precedent worth noting

IBM did not disappear when the PC commoditised hardware. It pivoted to services and middleware — exactly the orchestration intermediary role described above. It survived but was permanently reduced in strategic importance. The parallel for Finastra or Temenos is plausible: they persist, they remain relevant to procurement decisions at tier-two and tier-three banks, but they are no longer the architectural centre of gravity. The frontier AI provider and the bank’s own AI team bracket them from both ends.

The wildcard

The vendors who move earliest and most aggressively to acquire or deeply integrate frontier model access — rather than building their own models, which they lack the capital and talent for — may carve out durable positions as trusted domain intermediaries. The risk is that they become resellers with thin margins rather than platform owners with pricing power. The distinction between those two outcomes is whether the domain IP they accumulate — the regulatory logic, the training data from client deployments, the compliance ontologies — is proprietary and non-replicable, or whether it gets absorbed into the frontier model layer through training data at scale.

That last question is unresolved and is probably the most strategically important question facing every core banking vendor over the next five years.

OSFI, Canada, Bank Regulation, Banking Technology
