Non-humans in the loop: AI agents and a shift to autonomous threat response
"By 2028, agentic AI will be embedded in a third of all enterprise software, with a growing share of operational decisions made autonomously."

The integration of AI into cybersecurity technologies and processes has come a long way in a short space of time. Initially, the focus was on AI assistants, where the technology proved valuable in supporting threat research, processing intelligence at speed and easing the burden on analysts. Despite these benefits, assistants are inherently limited.
Because they keep a human at the centre of every step, they require constant prompting and manual intervention to turn insights into action. It is that gap between understanding and execution that is now driving the next phase of AI development, in which automation plays a far greater role in the form of AI agents.
In this context, AI not only informs decisions but also acts on them. Unlike assistants, agents operate much more independently, analysing threats, making decisions and initiating responses across the threat lifecycle. They offer a way to close the loop between detection and action, allowing security operations to become faster and less reliant on human input for routine tasks.
This shift is already underway. According to Gartner, by 2028, agentic AI will be embedded in a third of all enterprise software, with a growing share of operational decisions made autonomously. For defenders, this represents a big step toward the holy grail of proactive cyber defence, where intelligent systems help organisations stay ahead of threat actors.
Agents on the front line
But what does this look like in practice? Put simply, AI agents are designed to take action with minimal human input. Their role is not just to suggest what should happen, but to ensure it actually does, efficiently and at scale.
Embedded across the security stack, AI agents can ingest large volumes of threat data, triage alerts, correlate intelligence and distribute insights in real time. For instance, agents can automate threat triage by filtering out false positives and flagging high-priority threats based on severity and relevance. They also enrich threat intelligence by cross-referencing multiple data sources to add meaningful context and track Indicators of Behaviour (IoBs) that could otherwise go unnoticed.
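To make the triage step concrete, the sketch below shows one way such filtering logic might look. It is a minimal illustration only: the Alert fields, severity scale and thresholds are assumptions for the example, not any specific product's schema.

```python
# Minimal sketch of agent-style alert triage: discard known false positives and
# surface only alerts that clear severity and relevance thresholds.
# All field names and threshold values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    severity: int            # 0-100, as scored by the detection layer
    relevance: float          # 0.0-1.0, match against the organisation's asset profile
    known_false_positive: bool

def triage(alerts: list[Alert], severity_floor: int = 70, relevance_floor: float = 0.6) -> list[Alert]:
    """Return only the alerts worth an analyst's attention, highest severity first."""
    high_priority = []
    for alert in alerts:
        if alert.known_false_positive:
            continue  # already matched to a benign pattern, so drop it
        if alert.severity >= severity_floor and alert.relevance >= relevance_floor:
            high_priority.append(alert)
    return sorted(high_priority, key=lambda a: a.severity, reverse=True)

# Example: only the first alert survives triage
print(triage([Alert("a1", 85, 0.9, False), Alert("a2", 40, 0.2, False)]))
```

In practice the scoring would come from the detection and enrichment layers rather than fixed fields, but the shape of the task is the same: a repeatable, rule-based filter that an agent can run continuously.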
What makes these tasks well-suited to automation is their structure. Each follows a clear, rule-based process that lends itself to repetition, an area where AI performs strongly without compromising accuracy or control. In these circumstances, agents are designed to operate within well-defined parameters, enhancing decision-making without displacing the human element.
That distinction is important. While the capabilities of AI agents are expanding, their value today lies in augmenting security professionals, not replacing them. By handling high-volume, low-risk tasks, they free up analysts to focus on more strategic challenges at a time when speed and the ability to scale are crucial.
Embracing complexity
A big part of the current challenge for security teams is the inherent complexity they have to deal with. In many organisations, this isn’t about a lack of data or tools; what it comes down to is a lack of coordination. Intelligence is often fragmented across systems, teams and workflows, creating delays that adversaries can exploit. Addressing this gap requires more than just automation; it requires orchestration at scale.
This is where AI agents come into their own. Operating well beyond simple input-output models, they integrate directly with detection systems, threat intelligence platforms, SOC tools and incident response playbooks to coordinate activity across the security lifecycle. Their value lies not just in analysing threats, but in turning that analysis into action across multiple domains in real time.
By continuously collecting and correlating data from disparate sources, for example, AI agents can identify connections that human analysts might miss. More importantly, they can trigger appropriate workflows, such as updating blocklists, generating incident tickets or escalating alerts, and do so without manual intervention at every step. This removes operational bottlenecks and enables security teams to act at machine speed.
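As an illustration of that fan-out from analysis to action, the sketch below maps a single correlated finding to several workflows. The workflow names and dispatch rules are hypothetical and stand in for whatever integrations a given SOC stack exposes.

```python
# Minimal sketch of an agent turning one correlated finding into coordinated actions.
# The finding fields and workflow names are illustrative assumptions, not a real API.
def dispatch(finding: dict) -> list[str]:
    """Map a correlated finding to the workflows an agent could trigger automatically."""
    actions = []
    if finding.get("confirmed_malicious_ip"):
        actions.append(f"update_blocklist:{finding['confirmed_malicious_ip']}")
    if finding.get("severity", 0) >= 70:
        actions.append("create_incident_ticket")
    if finding.get("matches_active_campaign"):
        actions.append("escalate_to_analyst")
    return actions

# Example: one finding fans out into three coordinated steps
print(dispatch({"confirmed_malicious_ip": "203.0.113.7",
                "severity": 85,
                "matches_active_campaign": True}))
# -> ['update_blocklist:203.0.113.7', 'create_incident_ticket', 'escalate_to_analyst']
```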
Agents enable organisations to deliver a hyper-orchestrated workflow model, where threat data moves efficiently between systems and decisions are executed with consistency and context. Instead of relying on predefined scripts or static playbooks, AI agents adapt to dynamic threat environments to orchestrate responses in a way that is both intelligent and autonomous.
Crucially, human oversight remains a key component: analysts continue to set the rules and review high-impact decisions. But with AI agents managing routine tasks and ensuring interoperability across the stack, security teams are much better placed to focus and scale their efforts.
Staying in control
Clearly, these technologies will become even more capable over time – something that raises a fundamental question: who’s in control? The key issue here is not just what AI can do, but what it’s permitted and trusted to do within operational workflows. The need to square this circle is driving the emergence of new partnership models that blend automation with oversight.
For instance, in the 'AI-in-the-loop' model, humans remain in control, using AI to process data, identify patterns and make preliminary assessments. This represents a low-risk route for organisations starting out with AI, where analysts validate every action before it's executed. By contrast, the 'human-in-the-loop' model reverses that relationship: the AI operates largely independently, bringing analysts in only when confidence thresholds drop or specific circumstances arise.
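One way to picture that hand-off is a simple confidence threshold: above it the agent acts on its own, and below it the decision is queued for an analyst. The sketch below shows the idea; the threshold value and action names are illustrative assumptions rather than a recommendation.

```python
# Minimal sketch of the escalation logic described above: autonomous execution for
# high-confidence, routine decisions, human review for everything else.
# The threshold and action names are illustrative assumptions.
AUTONOMY_THRESHOLD = 0.85

def decide(action: str, confidence: float) -> str:
    if confidence >= AUTONOMY_THRESHOLD:
        return f"execute:{action}"           # routine, high-confidence: agent proceeds
    return f"queue_for_analyst:{action}"     # low confidence or unusual case: human decides

print(decide("quarantine_host", 0.93))   # -> execute:quarantine_host
print(decide("disable_account", 0.61))   # -> queue_for_analyst:disable_account
```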
Both approaches have value, but striking the right balance depends on clearly defining responsibilities. For most, a hybrid model will be the best fit because it allows AI agents to scale routine tasks while keeping humans in control of complex, high-stakes decisions. Looking ahead, getting this balance right will determine how effectively AI is integrated across current and future security operations.
Dan Bridges is Technical Director – International at Cyware