"AI agents are like cyborg teenagers": How to responsibly raise autonomous models

"Without appropriate rules and boundaries in place, agentic AI will be left to run riot, with speed and independence spiralling into risk."

Just like real humans, AI agents need to be given the right guidance on their way to adulthood

AI agent adoption is soaring across the enterprise. 82% of companies already use AI agents, and 92% plan to expand their AI agent deployments in the next year. From data analysis to workflow automation to augmenting decision-making processes, the efficiencies agents offer enterprises are undeniable.

But there’s a catch. AI agents often demand broad access permissions, move at machine speed, and interact with data and systems in unpredictable ways. Left unmanaged, these tools can create a compliance nightmare: accessing privileged systems and sharing data without authorisation.

It’s not that enterprises should avoid AI agent adoption. Deployed responsibly, these systems can and will deliver significant productivity gains. That’s why we need to start viewing AI agents as ‘cyborg teenagers’: capable and intelligent, but still learning, and liable to mess up from time to time unless the right rules are put in place.

Why agents need a ‘responsible adult’

Traditional identity security was built for a world where identities were largely human. Although the emergence of machine identities, such as service accounts and virtual assistants, has complicated the picture somewhat, they remain far easier to control because they operate within predefined rules and follow specific instructions.

Now, AI agents have changed all that. These systems are human-ish - able to reason, plan and execute tasks independently. They may act on behalf of users one moment, then autonomously access data or trigger workflows the next. They can freely interact with other systems and even delegate tasks to other agents. But much like teenagers, agents need a responsible adult to watch over them. Worryingly, 80% of organisations report that their AI agents have already taken unintended or rogue actions - including accessing or sharing data in ways they weren’t intended to.

READ MORE: "Where risk and invisibility collide": Mapping Shadow AI in the enterprise

Here are the three steps organisations need to take in order to ensure their AI agents don’t ‘run riot’ in the enterprise.

Step one: Implement proper discovery mechanisms

You can’t control what you can’t see. That means enterprises should start by gaining visibility of every agent in their systems. Security and IT teams need to understand who can use each agent and what each agent is capable of, including what data it can access. From HR to operations to customer service, agents now interact with data that touches every part of the business. Without proper discovery mechanisms, organisations can quickly lose track of their agents, resulting in an ‘identity explosion’. A worrying thought, when you consider 98% of organisations are planning to deploy new AI agents within the year.
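To make the discovery step concrete, here is a minimal sketch of what a single entry in an agent inventory might capture. The `AgentRecord` structure, its field names, and the example agent are illustrative assumptions, not the schema of any particular identity security product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentRecord:
    """One entry in an enterprise AI agent inventory (illustrative schema)."""
    agent_id: str              # unique identifier for the agent
    owner: str                 # the 'responsible adult' accountable for it
    allowed_users: list[str]   # who may invoke this agent
    data_scopes: list[str]     # datasets and systems it may touch
    discovered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: registering a newly discovered HR-facing agent.
inventory: dict[str, AgentRecord] = {}
record = AgentRecord(
    agent_id="hr-onboarding-bot",
    owner="alice@example.com",
    allowed_users=["hr-team"],
    data_scopes=["hr.employee_profiles"],
)
inventory[record.agent_id] = record

# Flag agents with no owner on file -- the 'orphans' that step two addresses.
orphans = [a for a in inventory.values() if not a.owner]
print(f"{len(inventory)} agents tracked, {len(orphans)} orphaned")
```

Even a simple record like this answers the three discovery questions at once: who can use the agent, what it can do, and who is accountable for it.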

Step two: Ensure no agent is ‘orphaned’

Every AI agent should be assigned a ‘responsible adult’ who watches over it. These owners need to understand and control who can use the agent and what it can do, and monitor its behaviour to ensure it isn’t going off the rails. Additionally, if the responsible adult leaves the company or goes on holiday, security teams need succession planning in place, so that agents aren’t left orphaned.
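One way to encode that succession plan is a simple ownership lookup that falls back to a pre-assigned successor when the primary owner is no longer an active employee. The function and data shapes below are hypothetical, shown only to illustrate the idea.

```python
def effective_owner(agent_id: str,
                    primary_owner: str,
                    active_employees: set[str],
                    successors: dict[str, str]) -> str | None:
    """Resolve who is currently accountable for an agent.

    If the named owner is no longer an active employee, fall back to a
    pre-assigned successor; None means the agent is orphaned and should
    be flagged for review.
    """
    if primary_owner in active_employees:
        return primary_owner
    return successors.get(agent_id)


# Alice has left the company; Bob was designated as her successor.
active = {"bob@example.com", "carol@example.com"}
successors = {"hr-onboarding-bot": "bob@example.com"}
print(effective_owner("hr-onboarding-bot", "alice@example.com",
                      active, successors))  # -> bob@example.com
```

The important design point is that a `None` result is an alert condition, not a silent default: an orphaned agent should be suspended or escalated until a new owner is assigned.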

Step three: Classify data and apply stringent policy controls

Access control does not stop at coarse-grained permissions. Agents are hungry for data and will try to consume anything they can get their hands on. Data must be governed and secured at a fine-grained level to ensure that agents operate within their bounds. Compliance policies that apply to humans must also apply to the agents acting on their behalf. Identity security platforms can automate the classification of data that is available to agents and apply policy controls to curb access and ensure a zero-standing privilege posture for highly sensitive data.
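As a rough sketch of zero-standing privilege, the snippet below denies agents any permanent access to data classified as restricted and honours only short-lived, just-in-time grants. The classification tiers, grant table, and function names are assumptions for illustration; real identity platforms implement this through their own policy engines.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sensitivity tiers produced by automated classification.
CLASSIFICATION = {
    "public.product_docs": "public",
    "hr.employee_profiles": "restricted",
}

# Time-boxed, just-in-time grants: (agent_id, dataset) -> expiry time.
jit_grants: dict[tuple[str, str], datetime] = {}


def grant_temporary_access(agent_id: str, dataset: str,
                           minutes: int = 15) -> None:
    """Issue a short-lived grant instead of standing access."""
    expiry = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    jit_grants[(agent_id, dataset)] = expiry


def may_access(agent_id: str, dataset: str) -> bool:
    """Zero-standing privilege: restricted data needs a live JIT grant."""
    tier = CLASSIFICATION.get(dataset, "restricted")  # unclassified: fail closed
    if tier != "restricted":
        return True
    expiry = jit_grants.get((agent_id, dataset))
    return expiry is not None and expiry > datetime.now(timezone.utc)


print(may_access("hr-bot", "hr.employee_profiles"))   # False: no grant yet
grant_temporary_access("hr-bot", "hr.employee_profiles")
print(may_access("hr-bot", "hr.employee_profiles"))   # True, for 15 minutes
```

Note the fail-closed default: anything the classifier has not yet labelled is treated as restricted, so an agent can never reach new data faster than governance can catch up with it.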

As well as zero-standing privilege, leaders need an ‘emergency brake’ they can pull to shut down AI agents instantaneously if the worst happens and an agent does go rogue. Many identity security platforms now offer this capability via a centralised control plane, making it possible to quarantine an agent in seconds and capture a full audit trail for investigation.
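A toy version of that emergency brake might look like the sketch below. In practice the quarantine call would revoke the agent’s tokens and sessions through the identity provider; the stand-in here only shows the shape of the control and the audit record it should leave behind.

```python
import json
from datetime import datetime, timezone

audit_log: list[str] = []       # stand-in for an append-only audit store
quarantined: set[str] = set()   # agents whose credentials are suspended


def quarantine_agent(agent_id: str, reason: str) -> None:
    """Emergency brake: suspend an agent and record why (illustrative).

    A real deployment would revoke the agent's credentials and live
    sessions via the identity provider; here we just mark it and log.
    """
    quarantined.add(agent_id)
    audit_log.append(json.dumps({
        "event": "agent_quarantined",
        "agent_id": agent_id,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    }))


quarantine_agent("hr-onboarding-bot", "bulk export of employee records")
print(quarantined)       # {'hr-onboarding-bot'}
print(audit_log[-1])     # full audit trail entry for investigators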

READ MORE: Your SBOM won't save you: Closing the provenance gap in software security

AI agents have fundamentally changed the ways in which we work, unlocking efficiencies and simplifying employee workstreams. However, as things stand, organisations are still running before they can walk - adopting AI agents before the right tools are in place to secure them.

Much like teenagers, AI agents need guidance. Without appropriate rules and boundaries in place, they will be left to ‘run riot’, with speed and autonomy spiralling into risk. Good governance means monitoring every AI agent’s access to sensitive data, assigning clear ownership, and enforcing approval workflows before access is granted or expanded.

Safe, effective AI adoption must be underpinned by visibility, human oversight, and zero-standing privilege. With these three elements in place, enterprises can continue to innovate, without losing control.
