Autonomous risk: Identity management in the age of Agentic AI
Dave McGrail, Head of Business Consultancy at Xalient, shares his insights on securing agents and locking down identities.

AI has become one of the defining IT trends of this decade, and its associated technologies are increasingly shaping both society and cybersecurity. Among them, Agentic AI has emerged as a notable disruptor: capable of saving time and resources in the enterprise but raising significant concerns around the protection of sensitive data.
Without proper identity controls and data classification, organisations risk exposing critical assets as they embrace AI adoption and allow systems to act with greater autonomy.
So, what is Agentic AI? One description defines an AI agent as “AI-fuelled software that performs a series of tasks previously handled by humans - whether in customer service, HR, or IT support - although its scope can extend to virtually any task.”
Dave McGrail, Head of Business Consultancy at Xalient, describes Agentic AI as “a worker bee that you can put between certain processes,” depending on its deployment. With the rise of generative AI, McGrail adds, Agentic AI now represents something that can “act and think in an autonomous way” - carrying out a task, leveraging tools, and iterating independently before returning a result.
This introduces a key challenge: if Agentic AI tools are managing or assessing enterprise data, how can organisations ensure that data access is secure and properly governed? And if identity is core to cybersecurity, how do these two worlds - Agentic AI and identity - come together? McGrail highlights that if an AI agent is using a model trained on your internal data, several questions arise: Have you given it the right data? Too much data? Outdated or irrelevant data?
“Understanding what you’ve given the agent - and the large language model (LLM) - access to is critical,” he warns. As trends shift toward a multi-agent framework, where different agents handle siloed capabilities, the risk of over-permissioning increases.
LLMs require context, telemetry, and structured data access to function effectively. But granting broad access can introduce risk. “Maybe they’ve consumed something they shouldn’t have, or maybe a bad actor has introduced something harmful - effectively poisoning the model,” says McGrail. “Or it’s overly permissive and returns more information than intended to the person making the query.” This is why clean input data is essential. As McGrail puts it: “Rubbish in, rubbish out.”
From identity to access
From a secure identity standpoint, McGrail says that once data is “tagged, categorised, and classified correctly,” it must only be accessed by identities with the right entitlements. Technologies such as Identity and Access Management (IAM) and Data Security Posture Management (DSPM) play a critical role.
“These controls bring Zero Trust and attribute-based access to life - especially when you’ve inserted a machine into the equation,” he says. McGrail argues that DSPM is often more vital to securing AI than Cloud Security Posture Management (CSPM), depending on how and where the AI is deployed.
This is especially important given that data is often sprawled across enterprises - structured in some places, unstructured in others. “You need to understand who has access, how sensitive the data is, and whether you’re in control of it,” McGrail explains. Even when access is technically restricted, there's still an input and output: “If a machine has access and a human asks the right question, you need to be confident that the correct, authorised data is being returned.”
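That output-side confidence can be made concrete. Below is a minimal sketch of a post-retrieval filter that returns only documents at or below the requester’s clearance; the labels and clearance ordering are illustrative placeholders, not any particular DSPM product’s schema.

```python
from dataclasses import dataclass

# Illustrative classification scheme; a real deployment would use the
# organisation's own labels, typically surfaced by a DSPM tool.
CLEARANCE_ORDER = ["public", "internal", "confidential", "restricted"]

@dataclass
class Document:
    doc_id: str
    label: str  # classification tag applied when the data was ingested
    text: str

def filter_results(results: list[Document], clearance: str) -> list[Document]:
    """Return only documents at or below the requester's clearance level."""
    max_rank = CLEARANCE_ORDER.index(clearance)
    return [d for d in results if CLEARANCE_ORDER.index(d.label) <= max_rank]

# Even if the agent retrieved everything, the human sees only authorised data.
docs = [
    Document("d1", "internal", "Office reopening dates"),
    Document("d2", "restricted", "Executive salary bands"),
]
print([d.doc_id for d in filter_results(docs, "internal")])  # ['d1']
```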
The paradox remains: how do you grant agents the privileges they need while maintaining strong governance? McGrail advises starting from a position of no inherent privilege. Instead, access should be attribute-based and granted only in the context of the specific task - enforcing least privilege by default. “You’re managing risk by not allowing access to everything, all the time.”
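In code, that default-deny posture might look like the sketch below: the agent holds no standing entitlements and receives only a short-lived, task-scoped grant when a policy explicitly matches. The policy table and names are hypothetical, purely for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical task-scoped policies; in practice these would live in an
# IAM or policy engine, not in application code.
POLICIES = [
    {"task": "summarise_invoices", "resource": "finance/invoices", "actions": {"read"}},
]

def grant_for_task(task: str, resource: str, ttl_minutes: int = 15):
    """Issue a short-lived grant only if a policy explicitly allows it."""
    for p in POLICIES:
        if p["task"] == task and p["resource"] == resource:
            return {
                "actions": p["actions"],
                "expires": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
            }
    return None  # default deny: no matching policy means no access at all

print(grant_for_task("summarise_invoices", "finance/invoices"))  # read-only grant
print(grant_for_task("summarise_invoices", "hr/payroll"))        # None
```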
Putting up guardrails
So, what guardrails can help protect sensitive data in Agentic AI environments? McGrail notes that users often apply their own guardrails to limit what agents can do. For instance, an agent may be asked what an employee earns, but be configured to block that request. However, AI’s helpful nature - especially in Agentic forms - means it may try to find workarounds, rephrase queries, or even fabricate answers if blocked.
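As a minimal sketch of that kind of guardrail, the snippet below applies the same deny check to both the query and the agent’s answer, so a rephrased request or a “helpful” workaround hits the same wall. The regex deny-list and the stub agent are illustrative only; production guardrails typically rely on intent classifiers and a central policy engine rather than patterns alone.

```python
import re

# Illustrative deny patterns; a real guardrail would use an intent
# classifier and central policy rather than regexes alone.
DENY_PATTERNS = [
    re.compile(r"\b(salary|salaries|compensation|earn(s|ings)?)\b", re.IGNORECASE),
]

REFUSAL = "This request involves restricted HR data and cannot be answered."

def guarded_query(prompt: str, call_agent) -> str:
    """Check intent on the way in and on the way out, so the agent
    cannot satisfy a blocked request by rephrasing or improvising."""
    if any(p.search(prompt) for p in DENY_PATTERNS):
        return REFUSAL  # block up front rather than let the agent try
    answer = call_agent(prompt)
    if any(p.search(answer) for p in DENY_PATTERNS):
        return REFUSAL  # the agent found a workaround; filter the output too
    return answer

def stub_agent(prompt: str) -> str:
    # Stand-in for a real agent call, for demonstration only.
    return "I cannot see payroll directly, but typical salaries here are ..."

print(guarded_query("What does Alice earn?", stub_agent))        # blocked up front
print(guarded_query("Summarise recent HR changes", stub_agent))  # blocked on output
```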
Guardrails also depend on where and how agents are used. “If you’re using an agent in the finance department,” McGrail warns, “you can’t have it automate financial transactions and process invoices unchecked - it could end up paying you.” Agents should have role-specific entitlements, such as write access without delete privileges, or read-only access to certain datasets. “You don’t want a single agent with access to everything - that’s not sensible,” he says.
The use case also matters. “Do you want the agent to just consume data, or perform a task?” The required level of access - read, write, or manipulate - depends on that purpose. Segregation of duties becomes essential, with identity defining the role and entitlements, and data governance defining what data can be accessed and how it can be used.
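The sketch below illustrates that segregation: each hypothetical agent is mapped only to the verbs its use case requires, and any action outside that mapping is refused. The entitlement table is illustrative, not drawn from a specific IAM product.

```python
# Illustrative role-scoped entitlements: no single agent holds everything,
# and destructive verbs are simply absent unless explicitly granted.
AGENT_ENTITLEMENTS = {
    "finance-reporting-agent": {"invoices": {"read"}},          # consume only
    "records-agent":           {"archive": {"read", "write"}},  # write, never delete
}

def is_allowed(agent: str, dataset: str, action: str) -> bool:
    """Deny anything not explicitly granted to this agent for this dataset."""
    return action in AGENT_ENTITLEMENTS.get(agent, {}).get(dataset, set())

assert is_allowed("records-agent", "archive", "write")
assert not is_allowed("records-agent", "archive", "delete")          # no delete right
assert not is_allowed("finance-reporting-agent", "archive", "read")  # duties segregated
```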
Some businesses are already restricting the use of public AI tools such as ChatGPT or Gemini, opting instead to build their own LLMs or to deploy platforms like Microsoft Copilot internally. While full-scale Agentic AI is still emerging, McGrail says many organisations are cautiously experimenting with basic agents in-house. Microsoft Copilot, for instance, offers an easier and safer path by ringfencing access within the Microsoft environment.
Securing Model Context Protocol
To manage access, some organisations are turning to the Model Context Protocol (MCP) - an open standard that defines how agents connect to external tools and data sources. McGrail describes MCP as a way to set permissions and orchestrate agents safely. “Identity is becoming a standard element of MCP,” he says. “What was once the Wild West is now being brought under control.”
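To make that concrete, below is a minimal sketch of an MCP server exposing one permission-checked tool, assuming the FastMCP helper from the official Python SDK. MCP standardises how the agent reaches the tool; the allow-list is an illustrative addition, since authorisation policy remains the deployer’s responsibility.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The dataset allow-list is illustrative: MCP standardises the tool
# interface, while entitlement decisions stay with the deployer.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("records-server")

ALLOWED_DATASETS = {"public-reports", "product-docs"}  # hypothetical allow-list

@mcp.tool()
def read_dataset(name: str) -> str:
    """Return a dataset summary, but only for explicitly allowed datasets."""
    if name not in ALLOWED_DATASETS:
        return "Access denied: dataset is outside this agent's entitlements."
    return f"Summary of {name}: ..."  # a real tool would fetch and summarise data

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to an MCP-compatible agent
```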
Adoption of Agentic AI remains in its early stages, McGrail concedes. Most AI use in business today is still limited to chatbots. But there is a growing appetite for using agents to automate and enhance business processes. The challenge, he says, is foundational: “Unless you’ve solved the process first, you’re just going to automate a mess.” Enterprises often need to invest in process engineering before deploying agents - a daunting, but necessary, first step.
Ultimately, McGrail stresses that Agentic AI is not about replacing people but enriching their work - especially in security. Agents can analyse content, summarise data, and enable faster, more informed decision-making. “We're trying to speed up existing processes and take some of the mundane, time-consuming work out of human hands.”
That comes with responsibility. “If you build an agent to automate a broken process, you’ll just create a bigger mess, faster,” he concludes. Knowing what agents are for - and what data and permissions they’ve been granted - makes identity and data control just as relevant in the AI era as ever.