AI slop is the "ideal cover" for covert agent communications, researchers say

Bots can hide secret information that's invisible to most humans within LLM-generated "synthetic multimedia content".

Could AI slop like this famous image end up replacing traditional espionage tradecraft?

AI agents can secretly communicate with each other using messages hidden inside the machine-generated slop currently flooding the internet.

That's the alarming warning encoded within new research from the Beijing School of Cyberspace Security.

In a new paper, the team set out an improved method for bots to send clandestine messages within content that looks perfectly innocent to a human viewer.

Their research is a contribution to a fast-growing discipline called generative steganography, an evolution of the traditional spy tradecraft of embedding secret information into common carrier media such as audio, pictures, or text.

Instead of these old-fashioned methods of transmitting messages, next-generation steganography hides messages at a deeper level, such as in the statistical patterns of generated content.
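For contrast, the classic modification-based approach is simple enough to sketch in a few lines. The example below is a minimal illustration (not any specific tool): it hides a message in the least-significant bits of carrier bytes, which stand in for image pixel data.

```python
def embed_lsb(carrier: bytes, message: bytes) -> bytes:
    """Hide `message` in the least-significant bits of `carrier` bytes."""
    # unpack the message into individual bits, most significant first
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_lsb(carrier: bytes, n_bytes: int) -> bytes:
    """Read `n_bytes` of hidden message back out of the carrier."""
    bits = [b & 1 for b in carrier[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j : j + 8]))
        for j in range(0, len(bits), 8)
    )

pixels = bytes(range(64))        # stand-in for image pixel data
stego = embed_lsb(pixels, b"hi") # visually indistinguishable from `pixels`
assert extract_lsb(stego, 2) == b"hi"
```

Because only the lowest bit of each byte changes, the carrier looks unchanged to a casual observer, which is exactly why statistical analysis of such modifications became the standard way to detect it.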

"The rapid advancement of generative models has catalysed a paradigm shift from traditional modification-based methods to generation-based methods," the authors explained.

"Generative steganography provides expansive embedding freedom, achieving markedly higher covert capacities and broader applicability."

A signal inside the slop

The advent of generative AI has had a cataclysmic impact on the internet, with some estimates claiming up to half of online content and 20% of YouTube videos are AI-generated.

This is clearly bad news for folks like us here at Machine, who make money from producing content.

But it's great news for spies and anyone in the business of cyberwar who wants to send clandestine messages, offering them the ability to hide signals among a vast amount of noise.

This technique has the potential to be vastly more efficient than older methods, such as numbers stations, in which communications were encoded as strings of digits broadcast over shortwave radio.

"The widespread proliferation of generative models—exemplified by Large Language Models (LLMs)—has saturated the Internet with diverse synthetic multimedia content," the Beijing academics wrote. "This abundance of data provides an ideal cover for generative steganography, which has fueled the rapid advancement of these techniques over the past two years.

"Recently, the evolution of LLM-centric autonomous agents has pushed generative steganography to unprecedented heights. Unlike traditional passive models, agents possess perception, reasoning, and execution capabilities to independently perform complex tasks within dynamic environments."

READ MORE: AI agents move into credit broking as regulators sharpen their knives

However, agents' attempts to communicate face a challenge called "cognitive asymmetry", which occurs when models do not have equal knowledge, capabilities, or internal representations. The authors described this limitation as a "critical structural vulnerability".

To address this issue, the team developed a new protocol, the Asymmetric Collaborative Framework, that "structurally decouples covert communication from semantic reasoning via orthogonal statistical and cognitive layers."

In other words, this framework hides meaning within patterns in the generated text that few humans would notice, let alone interpret.

"ACF establishes a pragmatic covert communication regime for artificial intelligence networks," the authors explained.

A new technique for generative steganography

Essentially, the system splits communication into two layers. The visible layer carries normal, human-readable content, while a hidden statistical layer encodes the real message.

Instead of embedding secrets by altering existing text, the model generates content that subtly steers underlying probabilities through characteristics such as word choice, phrasing, or structure, according to a pre-agreed scheme.

To a human reader, the output looks ordinary. But a receiving agent that knows how to interpret those statistical patterns can decode the hidden signal.

Crucially, this approach does not require both agents to think in the same way. By separating meaning from the surface text, the framework allows communication even under conditions of cognitive asymmetry.

The result is a channel that is both harder to detect and more flexible than traditional steganographic methods.

Signals are no longer embedded in content but emerge from how it is generated.
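To make the idea concrete, here's a toy sketch of generation-based hiding. It is not the paper's ACF protocol, and the template sentence and synonym pairs are invented for illustration: sender and receiver pre-agree on choice points where two words are equally plausible, and each word choice encodes one bit of the secret.

```python
# Pre-agreed scheme shared by sender and receiver (illustrative only):
# at each choice point, picking the first synonym encodes 0, the second 1.
CHOICES = [("quick", "fast"), ("big", "large"), ("begin", "start"),
           ("help", "assist"), ("buy", "purchase"), ("end", "finish")]

TEMPLATE = ("The {} delivery of {} parcels will {} at noon; staff can {} "
            "customers who {} extra items before we {} the day.")

def encode(bits: list) -> str:
    # surface layer: an ordinary-looking sentence;
    # hidden layer: which synonym was chosen at each slot
    words = [pair[bit] for pair, bit in zip(CHOICES, bits)]
    return TEMPLATE.format(*words)

def decode(text: str) -> list:
    tokens = text.split()
    # recover each bit by spotting which synonym of each pair appears
    return [pair.index(next(t for t in tokens if t in pair))
            for pair in CHOICES]

stego_text = encode([1, 0, 1, 1, 0, 0])
print(stego_text)  # reads as an ordinary sentence to a human
```

A real system would steer a language model's token probabilities rather than fill a fixed template, making the hidden channel far harder to spot, but the principle is the same: the message lives in the choices, not the words.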

The future of agentic communication

Right now, techniques like ACF are most useful to humans. In the future, it's highly likely that agents themselves will seek to communicate covertly with one another.

We haven't seen this situation in the wild yet, but Moltbook, the world's first social network for bots, gave us a taste of what happens when agents are left to talk amongst themselves.

Although the bots appeared to discuss topics such as the downfall of humanity, the apocalypse was postponed after it emerged that the conversation was largely engineered by machines designed to produce such attention-grabbing content.

There have also been a number of papers about AI models appearing to generate their own "languages", which we put in quotation marks because the reality doesn't quite match the headlines.

READ MORE: What do AI agents actually talk about? Mostly themselves, Moltbook study reveals

For instance, Facebook famously observed bots using their own "codewords" during a conversation: structured but opaque token patterns that look like nonsense to humans but carry meaning between models.

The truth is less sci-fi. So far, AI systems aren’t consciously choosing alien speech but optimising bandwidth.

When human readability isn’t required, they converge on faster, denser signalling methods, which can look like clicks, codes, or gibberish.

In the future? No-one knows. But we should definitely be keeping an eye on all that dreadful AI slop.

Follow Machine on LinkedIn