“The internet is not ready!”: Saving the world from an agentic apocalypse

The first article in a series exploring the challenges that need to be solved before the agentic AI revolution can truly begin.

The web was built for humans - but is about to be sorely tested by vast numbers of autonomous bots (Image: ChatGPT)

The agentic AI revolution is going to be big. In fact, it will be huge.

When enterprises finally trust autonomous agents to start doing their bidding independently, billions and billions of machines will start to work around the clock, making almost unimaginable numbers of API calls, overwhelming security systems, flooding networks with synthetic traffic, and putting unprecedented strain on infrastructure.

We used to have a word for a similar phenomenon: botnets. And when an army of toasters joined forces to launch DDoS attacks, security teams knew how to respond. Keep the bots outside the castle walls and don't let a single one through.

Agents are different. They will need to get inside organisations' inner sanctums to do their jobs. We'll need to trust and identify them at scale, build systems capable of handling the mega-traffic they will unleash, and work out how to recognise the bad bots whilst letting the good ones do their jobs with as little friction as possible. Then someone will have to figure out how to pay for all that.

None of this will be easy because - to put it mildly - the internet isn't ready for what's coming.

So what needs to change? We reached out to friends, contacts and collaborators to find out. We'll be publishing every single response we received as a series here on Machine, starting today.

Can we avoid an agentic apocalypse? Let’s find out… 

Breaking the internet covenant 

To understand what's next, we need to go back to the beginning. When Google and its Big Tech friends first started eating the world, people and companies agreed to share their content for free in exchange for web traffic, which seemed like a reasonable deal at the time.

But Chris Dixon, General Partner at Andreessen Horowitz, recently warned of the breakdown of an “internet covenant” which allowed creators to trade free access to their content for search traffic. 

LLM crawlers, Google overviews, zero-click content and other disruptive (if not completely unwelcome) AI innovations have destroyed this unspoken agreement - meaning action is needed to safeguard the future of digital content and make it more than just a hobby for creators. 

Unfortunately, the economic model to support agentic AI simply isn’t there right now, argues Ahmed AJ, CEO and Co-Founder of Tasker AI.

"The open internet is getting more and more closed," he tells Machine. "Content compensation is completely broken in the age of bots."

AJ predicts that major brands will all have API versions of their websites designed to be accessed by agents, without any of the UI and UX built to reflect brand emotions and appeal to humans. 


However, on a website exposed to agents, it will be difficult to tell whether machines or humans have accessed content. Behaviour analysis is one solution. If you see a user access 1,000 pages in a few minutes, it's a fair bet that they're a bot.
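The behavioural heuristic described above can be sketched as a sliding-window request counter. The window length and page threshold below are illustrative assumptions, not figures from the article:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300    # look-back window: the "few minutes" in the example
PAGE_THRESHOLD = 1000   # pages in that window before we suspect a bot

class RateHeuristic:
    """Flag clients whose page-request rate looks non-human."""

    def __init__(self):
        # client_id -> timestamps of recent requests
        self.requests = defaultdict(deque)

    def record(self, client_id, now=None):
        """Record one page request; return True if the client looks like a bot."""
        now = time.time() if now is None else now
        q = self.requests[client_id]
        q.append(now)
        # Discard timestamps that have fallen outside the window
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > PAGE_THRESHOLD
```

Real bot-detection systems layer many more signals on top of raw request rate, but this captures the "1,000 pages in a few minutes" test in its simplest form.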

Another is micropayments, combined with proof-of-identity and blockchain-based identification mechanisms that can determine whether a person or an agent is viewing content and compensate the creator accordingly.

“Microtransactions could enable content writers and curators to get paid at scale when AI uses their work," AJ says. "It will be better for creators and give users an improved experience, whilst leading to better content quality. However, the internet is not yet ready.

“In the real world, we have passports. The web needs a similar mechanism.”

Small payments could become not only a passport-style proof of identity but also act as a sort of online reputation score. Crypto-adjacent technologies, such as zero-knowledge proofs, can enable users to prove they are not bots, opening up opportunities to, for instance, offer tiered access to content, charging agents conducting large-scale deep research more than one-time human visitors.
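The tiered-access idea reduces to simple pricing logic once a requester's class is known. The classes and per-page rates below are invented for illustration, and the zero-knowledge verification step is assumed to have already happened upstream:

```python
# Illustrative tiers: a verified human browses free, agents pay per page,
# and large-scale research crawlers pay the most. All figures are assumptions.
PRICE_PER_PAGE = {
    "verified_human": 0.000,
    "casual_agent": 0.001,
    "research_agent": 0.010,
}

def quote_access(requester_class: str, pages: int) -> float:
    """Return the micropayment owed for an access request."""
    if requester_class not in PRICE_PER_PAGE:
        raise ValueError(f"unknown requester class: {requester_class}")
    return round(PRICE_PER_PAGE[requester_class] * pages, 6)
```

Under this toy scheme, a deep-research crawl of 1,000 pages would owe $10.00 while a human reading five pages owes nothing; the hard part, as AJ notes, is the identity layer that makes the classification trustworthy.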

“The reputation earned by how you log in and engage with content is going to be an essential part of our online lives,” AJ predicts.

Humans have left the building 

Since its beginning, the internet has been built with us in mind. That means clickbait headlines to draw us into content. Passwords to stop thieves from stealing our secrets. Pretty pictures to keep us entertained and “if you liked that, you’ll love this” recommendation engines to keep us buying more stuff.

Agents don’t want or need any of this. 

Simon James, Global VP of Data Science and AI at Publicis Sapient, says: "The internet wasn't built for agents - it was built to seduce humans. Agents don't browse; they execute commands. For 25 years, we've optimised websites to tempt and intrigue.

"But your carefully crafted customer experience is just noise to an agent. Most businesses are building sophisticated AI agents while their own digital properties remain fundamentally incompatible with how agents actually work."

This means that even today's relatively rudimentary agents are already encountering obstacles familiar to human users, says Francis Hellyer, founder and CEO of tickadoo, an AI-powered travel platform operating across more than 500 cities.


Hellyer has been building agentic AI systems that interact with thousands of websites daily, providing a “front row seat to what’s happening.” 

He warns: “Here's what will break the internet. We’re building superintelligent agents for a profoundly stupid web. Every day, our AI agents encounter the same brick walls that humans do – CAPTCHAs, paywalls, rate limits, and APIs that change without warning. The internet wasn't designed for autonomous agents acting at machine speed and scale."

"The alarming part is that as agentic AI proliferates, websites will weaponise these barriers. We're already seeing bot detection systems that can't distinguish between malicious scrapers and legitimate AI assistants trying to book your dinner reservation. The result? A new digital apartheid where AI agents become second-class citizens on their own internet."

The fix, Hellyer says, is an 'Agent Access Protocol' which works a bit like robots.txt but for "AI rights", setting standards on how autonomous agents can authenticate, transact and behave across domains.
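No such standard exists today, so the file below is purely a thought experiment: a sketch of what an agents.txt-style policy might look like if it borrowed robots.txt conventions while adding the authentication, payment and rate rules Hellyer describes. Every directive name here is invented:

```
# agents.txt - hypothetical Agent Access Protocol policy (illustrative only)

Agent-Class: assistant
Allow: /menu /reservations
Auth: signed-token
Rate-Limit: 60/min
Payment: none

Agent-Class: research-crawler
Allow: /articles
Auth: signed-token
Rate-Limit: 10/min
Payment: 0.001-per-page
```

Like robots.txt, such a file would be advisory rather than enforceable on its own; it would need to sit alongside the identity and payment infrastructure discussed earlier in this piece.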

"Without it, we'll have brilliant AI minds trapped behind digital barbed wire."

What can be done to free them without causing the sort of crime wave that always comes from opening up prisons?

Find out in the next part of our series...

Do you have a story or insights to share? Get in touch and let us know.

Follow Machine on LinkedIn