Machine vs machine: Defending critical financial systems in the era of Agentic AI
The industry has embraced AI faster than it has defined its guardrails, raising an unavoidable question: are humans really still in control?
AI has evolved from a novelty to a necessity in financial services. From reconciling accounts to spotting fraud, it sits at the centre of critical relationships between consumers, providers, and third-party ecosystems. While the promise of scalability and efficiency is immense, the industry has embraced AI faster than it has defined its guardrails. This leaves a critical question: who is truly in control - human or machine?
The next wave of cyberattacks will not rely on human operators probing defences. Instead, attackers will deploy AI agents that interrogate systems continuously, adapt in real time, and move faster than defenders can react. Once an agent compromises one system, it can impersonate others and chain access without oversight. Because these agents mimic legitimate behaviour, they blend into normal activity, making detection extremely difficult.
This governance gap is the industry's greatest liability. Without clear, enforceable boundaries across the agentic lifecycle - creation, identification, monitoring, and retirement - autonomous systems can overreach by design, not just by malice.
The innovation dilemma
Financial institutions cannot stand still. AI-driven personalisation and streamlined operations are already setting new competitive standards. However, a dangerous imbalance has emerged: most institutions maintain strong identity controls for people while providing weak or non-existent controls for non-human identities, including AI agents.
In this environment, efficiency becomes a risk multiplier. When an AI agent is granted standing access to high-value systems, a single error can trigger a machine-speed chain reaction that pulls sensitive data, approves payments, and changes entitlements instantaneously.
If the AI agent is over-restricted, its utility evaporates, but if it is left unchecked, the potential for systemic damage is profound. The goal, therefore, is to build foundations that allow for both speed and safety.
Why today’s agents risk overreach
The risk of overreach is exacerbated by the way many businesses currently treat agentic AI. By viewing it as a trusted internal tool, organisations often grant broad access to data lakes and APIs via standards-based mechanisms like the Model Context Protocol (MCP). This approach assumes AI behaves predictably, yet AI follows patterns rather than intent; it generalises, infers, and guesses. This leads to several primary vulnerabilities.
Attackers no longer need to "break in" if they can compromise an AI agent that already holds the keys to protected APIs or PII. Furthermore, a model trained for simple tasks may over-generalise, applying the same logic to high-risk transfers without the necessary checks. Finally, high-impact actions often lack the "human-in-the-loop" escalation triggers needed to stop a flawed inference before it is executed.
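To make that last point concrete, here is a minimal, purely illustrative sketch in Python of the kind of escalation trigger that is often missing. The action names and function are assumptions made for this example, not a reference to any specific platform: a simple gate that refuses to complete high-impact actions without a person in the loop.

```python
# Hypothetical sketch of a "human-in-the-loop" escalation trigger.
# Action names and the helper function are illustrative assumptions,
# not any vendor's API.

HIGH_IMPACT_ACTIONS = {"approve_payment", "change_entitlement", "export_customer_pii"}

def execute_agent_action(action: str, params: dict, approved_by_human: bool = False) -> dict:
    """Run an agent-requested action, pausing high-impact ones for human review."""
    if action in HIGH_IMPACT_ACTIONS and not approved_by_human:
        # Stop a flawed inference before it is executed: queue the action for a
        # person instead of letting the agent close the loop on its own.
        return {"status": "pending_human_review", "action": action}
    return {"status": "executed", "action": action, "params": params}

# A low-risk metadata read proceeds; a payment approval waits for a person.
print(execute_agent_action("read_transaction_metadata", {"account": "a-123"}))
print(execute_agent_action("approve_payment", {"amount": 250_000}))
```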
Delegation entitlements: The new standard
To govern these machine-led decisions, the industry must adopt a new layer of control: delegation entitlements. These are not mere permissions; they are enforceable contracts for AI behaviour that define exactly what an agent can do, under what conditions, and for how long.
This requires a shift toward granular access, where an agent might only read metadata for fraud detection rather than accessing full histories. It also necessitates ephemeral permissions, ensuring that agents receive access only for the duration of a specific task, expiring in minutes rather than remaining open-ended.
Furthermore, these controls must be context-aware, shifting with data sensitivity and risk level and automatically triggering human intervention if an agent deviates from established patterns.
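One simplified way to picture such an entitlement is as a small, machine-checkable grant rather than a standing role. The sketch below is illustrative only; the class, field names, and thresholds are assumptions made for this example rather than an industry schema. It combines the three properties above: granular scope, a short expiry, and a context-aware escalation to a human.

```python
# Illustrative delegation entitlement: scoped, time-boxed, and context-aware.
# Class and field names are assumptions for this example, not a standard.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DelegationEntitlement:
    agent_id: str
    allowed_actions: set          # granular scope, e.g. metadata reads only
    expires_at: datetime          # ephemeral: minutes, not open-ended access
    max_risk_score: float = 0.3   # context threshold that triggers a human

    def evaluate(self, action: str, risk_score: float) -> str:
        if datetime.now(timezone.utc) >= self.expires_at:
            return "deny: entitlement expired"
        if action not in self.allowed_actions:
            return "deny: outside delegated scope"
        if risk_score > self.max_risk_score:
            return "escalate: human review required"
        return "allow"

# Grant a fraud-detection agent metadata reads for ten minutes only.
grant = DelegationEntitlement(
    agent_id="fraud-agent-7",
    allowed_actions={"read_transaction_metadata"},
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=10),
)
print(grant.evaluate("read_transaction_metadata", risk_score=0.1))  # allow
print(grant.evaluate("read_transaction_metadata", risk_score=0.8))  # escalate
print(grant.evaluate("approve_payment", risk_score=0.1))            # deny
```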
The path forward
Every agentic AI decision must remain explainable and auditable. Financial institutions must be able to log what the agent acted on, why it had permission, and what guardrails shaped its behaviour. This transparency supports both safety and evolving regulatory expectations.
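As a rough illustration of what such a record could capture, the sketch below logs the three things named above. The field names and values are assumptions made for this example, not a regulatory or vendor schema.

```python
# Illustrative audit record for an agentic decision: what was acted on,
# why the agent had permission, and which guardrails shaped the outcome.
# Field names and values are assumptions for this sketch, not a defined schema.
import json
from datetime import datetime, timezone

audit_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "agent_id": "fraud-agent-7",
    "action": "read_transaction_metadata",
    "data_touched": ["txn-8841", "txn-8842"],        # what the agent acted on
    "entitlement_id": "delegation-0012",              # why it had permission
    "guardrails_applied": ["scope_check", "expiry_check", "risk_score_threshold"],
    "outcome": "allow",
}
print(json.dumps(audit_record, indent=2))
```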
The financial services industry does not need to slow AI adoption, but it must govern it. Institutions must transition from treating AI as a trusted tool to treating it as a dynamic entity governed by identity-centric security. The future belongs to those who build expiry into every access pathway and plan for agentic threats today. This is the only way to innovate with confidence while keeping the global financial system secure.
Adam Preis is Director of Product Solutions Marketing at Ping Identity