Here is result of research on Agentic State of play outlining currently understood risks which could develop into issues. In fact there are multiple indications of Agent AI deployments that will be cancelled due to the evolving landscape of risks. Financial Services in particular are seeing gaps in compliance and regulatory areas as protocols which assumed human employee engagement bump up against Agents which will act on what they observe, and have no way to act on what they cannot see. My take is that insufficient attention is being paid to formal and informal data linkages.
Prepared for Splunk session May 12th, 2026 — Colin Henderson
Splunk / Cisco summary of session considerations
Together, we’ll dive into the most urgent challenges leaders are facing today, including:
- Scaling AI responsibly while reducing complexity, cost overruns, and operational blind spots
- Strengthening resilience with AI and for AI to detect threats faster and protect a rapidly expanding attack surface
- Uniting SecOps, ITOps, and Engineering with real-time visibility across data, applications, networks, and AI agents
- Building an AI Center of Excellence that creates governance, structure, and enterprise-wide alignment
- Modernizing your digital operations with agentic SOC and agentic observability capabilities powered by Splunk + Cisco
ToC
- The Inflection Point
- Risk Category 1: Cybersecurity
- Risk Category 2: Regulatory Compliance
- Risk Category 3: Deployment Difficulties
- Risk Category 4: Strategic Deployment Conflicts
- Financial Services: Sector Deep Dive
- Emerging Sector Activity
- The Splunk Relevance Frame
- Summary Assessment
The Inflection Point
2025 was declared the year of agentic AI. 2026 is execution year. Gartner projects 40% of enterprise applications will embed task-specific AI agents by year-end — up from less than 5% in 2025. The shift is from copilots that suggest, to agents that act: multi-step autonomous workflows that query databases, trigger transactions, send communications, and orchestrate other agents — with limited human involvement.
The governance frameworks, identity systems, and regulatory regimes that enterprises rely on were built for human actors and deterministic software. They weren’t built for this.
Risk Category 1: Cybersecurity
The headline number: In a Dark Reading poll, 48% of cybersecurity professionals identify agentic AI as the top attack vector for 2026 — ahead of deepfakes, ransomware, and passwordless adoption failures.
Why agents expand the attack surface differently:
- Every deployed agent is a non-human identity — requiring API access, machine-to-machine credentials, and elevated permissions across multiple systems. Legacy IAM was never designed for this at scale.
- Prompt injection is the defining new attack class: malicious instructions embedded in data the agent reads (emails, documents, vendor feeds) cause the agent to execute attacker intent while appearing to operate normally. A realistic scenario — an agent summarising vendor email executes an injected instruction, exfiltrates 60,000 customer records to an offshore server. No human credential touched. Firewall logs show nothing unusual.
- Multi-agent cascade risk: In orchestrated agent architectures, a compromised orchestration agent holds API keys for all downstream agents. One breach = full stack compromise. A confirmed 2026 supply chain attack on an OpenAI plugin ecosystem harvested credentials from 47 enterprise deployments, accessing customer data and financial records for six months before detection.
- Shadow AI compounds everything. Employees importing unsanctioned AI tools create unmonitored agent deployments with no security governance. Over one-third of data breaches now involve unmanaged shadow data.
- Agent-to-agent impersonation: In multi-agent systems, implicit trust between agents creates a new attack surface — session smuggling, capability escalation, and identity spoofing between agents that legacy security architecture cannot detect.
Cisco’s State of AI Security 2026 frames this as AI-driven operations now connecting directly to core business systems — ticketing, source code, cloud dashboards, databases — with the ability to open pull requests, book services, and trigger automated workflows.
Risk Category 2: Regulatory Compliance
The governance gap is documented and widening. EY’s 2026 Global Financial Services Regulatory Outlook reports more than 70% of banking firms are using agentic AI to some degree, but there is a general lack of robust governance frameworks.
Core compliance problems with agentic systems:
- Accountability vacuum: When an agent sends a communication, executes a transaction, or acts on client data — who is accountable? The agent’s multi-step reasoning chain executes in seconds; a compliance officer cannot reconstruct it with conventional review tools.
- Existing rules apply, no carve-outs: FINRA Regulatory Notice 24-09 is unambiguous — books and records obligations, supervision requirements, and Reg BI apply fully to AI-generated actions. No carve-out for autonomous systems.
- FINRA’s 2026 Regulatory Oversight Report identifies specific agentic risk vectors: agents acting without human validation; scope and authority exceeding user intent; auditability challenges in multi-step reasoning; potential misuse of sensitive data.
- Regulatory fragmentation: US, EU, UK, and Canada are taking divergent approaches. The EU AI Act is the most prescriptive; North American regulators are still working from principles. Global institutions face a patchwork compliance environment with no unified standard.
- Data foundations: Agentic AI amplifies data quality risk. Poor or fragmented data leads to hallucinations at scale — not in test environments, but in live regulatory workflows.
The audit trail requirement is now a hard procurement gate in financial services, healthcare, and public sector. Procurement teams are explicitly asking for: continuous evidence of control operation, traceable approvals for agent-initiated actions, and workflow-level auditability — not just platform-level logging.
Risk Category 3: Deployment Difficulties
Failure rate: Gartner projects more than 40% of agentic AI projects will be cancelled by 2027. The causes are structural, not technical.
Primary failure patterns in production:
- The 80/20 data blind spot: Agents deployed on structured data alone see ~20% of the relevant information environment. They process invoices without seeing contracts. They recommend pricing without seeing competitor intelligence in analyst reports. They trigger procurement without seeing the email thread where terms were modified. The agent performs correctly on the data it can see — and causes compounding damage at speed across hundreds of transactions before anyone catches it.
- Ungoverned autonomy: Agents given authority to act without governance rules to act by. Not a model problem — an organisational problem. Business owners deploy agents, then the agents drift from intended scope.
- Legacy integration friction: Most enterprises run on heterogeneous, API-inconsistent infrastructure. Agents fail in production where RPA also struggled — inconsistent data schemas, non-standardised APIs, siloed operational data. McKinsey identifies security and risk concerns as the #1 barrier to scaling, but legacy integration is the #1 operational failure mechanism.
- Performance in complex tasks: Agents succeed on approximately 50% of complex real-world tasks. Quality and latency remain the leading production barriers. This creates selective deployment pressure — agents work well in bounded workflows (coding, reconciliation, structured customer support) and fail unpredictably in edge-case or judgement-heavy workflows.
- Runaway cost: Without token-level and API-call-level monitoring mapped to business outcomes, agentic deployments consume budget on open-ended experimentation with no ROI visibility. Enterprise ROI data shows $3.50 per $1 invested when properly scoped — but that average conceals wide dispersion.
Risk Category 4: Strategic Deployment Conflicts
The MIT Sloan / BCG 2025 AI Strategy report identifies four strategic tensions for enterprises deploying agentic systems. The most operationally significant:
- Standardisation vs. adaptability: Agentic systems work best in standardised processes — but over-standardisation eliminates the humanlike adaptability that makes agents valuable in edge cases and system failures. Organisations optimising for efficiency are finding they’ve locked themselves out of the adaptive responses that justify the investment.
- Speed vs. governance: Enterprises are deploying agents faster than they can control, explain, or audit them. The governance deficit is not incidental — it is a deliberate trade-off made under competitive pressure, with the expectation that governance can be retrofitted. That assumption is increasingly challenged by regulators and by production failures.
- CIO vs. business unit ownership: Business units are deploying agentic tools outside IT governance (shadow AI at the agent layer). IT is simultaneously trying to establish enterprise agent platforms. The collision between decentralised experimentation and centralised governance is now a C-suite conflict at many large organisations.
- Vendor lock-in concentration: The consolidation of agentic infrastructure around a small number of platforms (Microsoft Copilot Studio, Salesforce Agentforce, Google Cloud, OpenAI enterprise) is creating strategic dependency risk. The supply chain attack on OpenAI’s plugin ecosystem in 2026 demonstrated that vendor concentration creates single points of failure at enterprise scale.
Financial Services: Sector Deep Dive
Financial services is the leading agentic AI adopter with ~53% of institutions already running agents in production, and a $50 billion global market spend recorded in 2025.
Top use cases in production:
- Fraud detection and AML monitoring
- KYC automation
- Loan origination and underwriting
- Compliance reporting and audit trail generation
- Back-office reconciliation, dispute resolution, invoice management
- Wealth management personalisation
- Customer support automation (23% of current deployments)
Sector-specific risk profile:
Regulatory: The accountability question is acute. When an AI agent executes a trade, approves a loan, or files a regulatory report, the supervisory model assumes a human decision-maker. FINRA, OCC, FCA, and OSFI are all working from existing rules that predate autonomous action. Firms that cannot demonstrate agent-level supervision are exposed on books and records, Reg BI, and suitability obligations simultaneously.
Operational: Canadian Big Six banks present a specific dynamic. Mainframe-core architecture creates both a natural constraint on agent deployment (agents cannot easily penetrate legacy transaction processing layers) and an accidental governance hedge — the automation surface available to agents is bounded by architecture. Mid-tier banks on modern platforms (Temenos, FIS SaaS tiers) have greater agentic surface area and correspondingly greater exposure.
Systemic: Multi-agent fraud detection and trading systems operating simultaneously across institutions create potential for correlated behaviour. Agents trained on similar data, responding to similar signals, may act in concert in ways that amplify market stress rather than absorb it. Regulators have not yet addressed this systemic dimension.
Emerging Sector Activity
Healthcare: Fastest-growing regulated deployment — patient intake, documentation, clinical research automation, scheduling. The constraint is data residency and HIPAA. Sovereign AI (locally hosted models) is emerging as the compliance response.
Manufacturing: Framing agentic AI as an institutional memory tool in the context of the “silver tsunami” — mass retirement of experienced engineers. Predictive maintenance, root cause analysis, procurement automation. The human-in-the-loop requirement for safety-critical actions is the defining governance principle.
Telecommunications: 48% of companies actively deploying (NVIDIA State of AI 2026). Concentrated in customer-facing automation and network operations.
Retail/CPG: 47% adoption rate. Inventory intelligence, customer support, supply chain coordination.
Energy/Utilities: Smart grid monitoring, anomaly detection, energy optimisation agents. Security risk is elevated — these are critical infrastructure environments.
The Splunk Relevance Frame
Splunk’s core platform is observability and security information — precisely the capabilities enterprises need to govern agentic AI deployments. The emerging market need maps directly to Splunk’s stack:
- Agent action logging at tool-call and API-call granularity
- Anomaly detection for agent behaviour drift
- Audit trail generation for regulatory compliance (FINRA, EU AI Act, OSFI)
- Identity and access monitoring for non-human identities (NHI) at scale
- Cross-agent telemetry in multi-agent orchestration architectures
The governance crisis in agentic AI is, from Splunk’s perspective, a pipeline story.
Summary Assessment
| Risk Category | Current Status | Trajectory |
|---|---|---|
| Cybersecurity | Active, underestimated | Accelerating — NHI identity gap is unresolved |
| Regulatory compliance | Exposure widening | Enforcement action in 2026–27 likely |
| Deployment failure | 40%+ project failure projected | Improving with governance frameworks |
| Strategic conflicts | C-suite level, unresolved | Consolidating around platform choices |
| Systemic (financial sector) | Emerging, unaddressed | Regulatory attention expected 2027 |
The core message for 2026: The technology works well enough. The governance, identity management, auditability, and regulatory frameworks do not yet match what the technology can do. The gap is where the risk lives.
Sources: Gartner, FINRA 2026 Regulatory Oversight Report, EY Global Financial Services Regulatory Outlook 2026, Cisco State of AI Security 2026, Dark Reading, MIT Sloan / BCG AI Strategy Report 2025, NVIDIA State of AI 2026, McKinsey, Fortune/Yale CELI
#briefing #agentic-ai #cybersecurity #regulatory #financial-services #splunk
