"Where risk and invisibility collide": Mapping Shadow AI in the enterprise

"Employees are not trying to be reckless, but unmanaged AI creates a direct path for sensitive data and information to leave the business."

"Where risk and invisibility collide": Mapping Shadow AI in the enterprise

AI is already embedded across the enterprise, but most organisations still have no clear idea how it’s being used.

Tools have spread faster than governance, leaving security teams trying to control systems they can’t fully see.

The result is a growing visibility gap at the heart of modern AI adoption, where activity spans browsers, APIs, private models and personal accounts all at once.

Machine spoke to Ray Canzanese, Director of Netskope Threat Labs, to find out why this gap has emerged, how shadow AI is evolving, and what organisations need to do now to regain control.

Why are so many organisations still struggling to gain visibility into how AI is being used internally? 

Because AI adoption simply moved faster than governance did. In most organizations, AI wasn’t rolled out centrally; it spread organically. Employees started using copilots, teams adopted AI tools, and developers connected models via APIs long before security teams started building policies. By that point, AI was already embedded in day-to-day work.

It’s created a structural visibility gap. Today, AI activity spans browsers, APIs, private models, and personal accounts, often all at once. Security teams may see that AI is being used, but not what’s being shared, which account is involved, or how the model is handling that data. 

And the scale is accelerating fast. Netskope research shows that nearly half of employees (44%) now use at least one AI application each week – up from 19% a year ago – and the average organization sees users access 60 AI apps per week. AI use is becoming part of everyday operations, and visibility simply hasn’t kept pace.

What is “shadow AI,” and why is it becoming such a major concern for security teams?  

Shadow AI is the use of AI tools, models, agents, or accounts outside approved company controls. It includes obvious examples, like employees using personal ChatGPT or Claude accounts for work, but it also includes AI coding tools, specialized AI apps, and even autonomous agents that teams deploy without central oversight. 

It’s becoming such a major concern because shadow AI is where risk and invisibility collide. Employees are often not trying to be reckless; they’re simply trying to be productive. But when they use unmanaged AI services, security loses the ability to apply policy, inspect prompts and uploads, enforce data protections, or maintain an audit trail.

That creates a direct path for sensitive data and information to leave the business. And once unmanaged AI becomes part of normal workflows, it gets harder to untangle without disrupting the business. Habits become dependencies. And by then, the exposure is already built in. 

Why can’t traditional security tools effectively monitor AI activity such as prompts, uploads, and API interactions?  

Because most traditional security tools were built for an earlier era of IT. They were designed for predictable applications, recognizable file movements, and human-driven workflows. AI changes all three. 

First, AI interactions are highly contextual. A prompt, an upload, and an integration can all carry meaningful risk, but only if you understand the content, the user, the instance, and the intended action together. Legacy tools usually see only fragments of this; few can make a single decision based on all of that context in real time. That’s why so many organizations are still stuck with blunt “allow or block the whole app” controls.

READ MORE: Good intentions, poor oversight: Addressing the risks of Shadow AI in business

Second, AI breaks pattern-based data protection. In fact, it’s good at rewriting sensitive content while preserving its meaning. That’s a serious mismatch; security tools look for patterns, while AI is designed to rewrite them.  
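To make that mismatch concrete, here is a minimal sketch (the rule and sample strings are illustrative, not drawn from any product) of a pattern-based detector catching raw card data but missing an AI paraphrase of the same fact:

```python
import re

# Classic pattern-based DLP rule: a 16-digit card number, with
# optional spaces or dashes between the 4-digit groups.
CARD_PATTERN = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

original = "Customer card: 4111 1111 1111 1111, exp 09/27"
# An AI assistant asked to summarize might restate the same fact in
# words, preserving the meaning but destroying the pattern.
paraphrased = ("The customer's Visa is four ones repeated four times; "
               "it expires in September 2027.")

print(bool(CARD_PATTERN.search(original)))     # True  -> caught
print(bool(CARD_PATTERN.search(paraphrased)))  # False -> missed
```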

Third, the hardest AI traffic to monitor is the traffic growing the fastest: APIs, agentic workflows, MCP-based connections, and machine-to-machine communication. These interactions may never look like a normal user opening a browser tab.

What’s needed are capabilities such as MCP visibility, agent activity monitoring, and especially adaptive guardrails within a unified platform. Those are the control points many traditional products don’t recognize yet, let alone govern well.

What are the biggest risks organisations face if they don’t address AI visibility and governance now?  

The immediate risk is unintended data exposure. But the larger risk is losing operational control altogether. 

Organizations are already seeing near misses involving sensitive data being shared through AI tools, often unintentionally. At the same time, AI is becoming more autonomous, with tools and agents increasingly able to take action across business systems.

In many cases, they already have broad access. Our research shows these autonomous agents are not just reading information; they’re increasingly taking action. Over half of organizations (53%) grant AI tools ‘write access’ to collaboration platforms, 40% to email, 25% to code repositories, and even 8% to identity providers.

Yet 91% say they cannot reliably stop a risky AI-driven action before it happens. That’s a dangerous combination: broad access with weak pre-execution control. 

And the cost of waiting rises quickly. Regulatory scrutiny is increasing, boards are asking harder questions, and cyber insurers are paying more attention to AI usage patterns. The leaders getting this right are not the ones saying no to AI; they’re the ones who put visibility first and governance second, so they can enable adoption from a position of control.

What should organisations be doing to safely enable AI adoption while maintaining control and security? 

Start with visibility. You cannot govern or secure what you can’t see. Organizations need to understand how AI is being used across users, apps, and APIs, including the ability to inspect prompts, uploads, and interactions in real time. 
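As a toy illustration of that first step (the hosts and log records here are hypothetical; real telemetry would come from a secure web gateway or proxy logs), even a simple tally over outbound traffic reveals who is using which AI services:

```python
from collections import Counter

# Hypothetical parsed proxy-log records: (user, destination host).
AI_HOSTS = {"chat.openai.com", "claude.ai", "api.openai.com"}

records = [
    ("alice", "chat.openai.com"),
    ("bob",   "claude.ai"),
    ("alice", "api.openai.com"),
    ("carol", "github.com"),
]

# Count AI-bound requests per user; non-AI traffic is ignored.
usage = Counter(user for user, host in records if host in AI_HOSTS)
print(usage)  # Counter({'alice': 2, 'bob': 1})
```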

Once you have visibility, continue with governance by setting policies that define which apps are allowed and how they may be used. For example, you can begin by identifying the most commonly used apps that meet organizational and compliance requirements and guide users to adopt those.
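A hypothetical sketch of such a policy (the app names, account categories, and rules are invented for illustration) might distinguish sanctioned corporate instances from personal accounts:

```python
# Hypothetical AI app policy; names and rules are illustrative only.
AI_APP_POLICY = {
    "chatgpt-enterprise": {"allowed": True,  "accounts": "corporate"},
    "claude-team":        {"allowed": True,  "accounts": "corporate"},
    "chatgpt-personal":   {"allowed": False, "alternative": "chatgpt-enterprise"},
}

def check_app(app: str, account_type: str) -> str:
    """Map an (app, account) pair to a governance decision."""
    rule = AI_APP_POLICY.get(app)
    if rule is None:
        return "unsanctioned: review before use"
    if rule["allowed"] and account_type == rule.get("accounts"):
        return "approved"
    alt = rule.get("alternative")
    return f"use {alt} instead" if alt else "not approved"

print(check_app("chatgpt-enterprise", "corporate"))  # approved
print(check_app("chatgpt-personal", "personal"))     # use chatgpt-enterprise instead
```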

READ MORE: Beyond residency: Engineering sovereignty in the age of shadow IT

Once you have established policies, shift to protection, putting enforceable real-time guardrails in place. Guardrails can be as gentle as using real-time coaching to guide users toward approved solutions when they attempt to use unapproved ones.

For the highest-risk use cases, the guardrails should be more draconian to mitigate the risks, inspecting content and blocking unapproved use.
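Putting the two enforcement modes together, a minimal guardrail sketch (the sensitivity check and app list are illustrative stand-ins for real DLP classifiers and policy) could grade its response to each outbound prompt:

```python
import re

# Illustrative sensitivity check; a real deployment would use DLP
# classifiers rather than a single regex.
SENSITIVE = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b|confidential", re.I)

def guardrail(app: str, prompt: str, approved: set[str]) -> str:
    """Return 'allow', 'coach', or 'block' for one outbound prompt."""
    if app in approved:
        return "allow"   # sanctioned app: let it through
    if SENSITIVE.search(prompt):
        return "block"   # sensitive data headed to an unapproved app
    return "coach"       # gentle real-time nudge toward an approved app

approved = {"chatgpt-enterprise", "claude-team"}
print(guardrail("chatgpt-personal", "Draft an offsite agenda", approved))          # coach
print(guardrail("chatgpt-personal", "Summarise this confidential deck", approved)) # block
```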

The bottom line is that organizations don’t need to choose between AI adoption and security. But they do need to stop treating visibility as optional. In the AI era, visibility is the foundation of control.
