Worried about Mythos and the AI apocalypse? Don't panic: The UK government has written a letter
Westminster responds to a new Anthropic model that's been (over)hyped as an omen of imminent cybersecurity doom.
The world has spent much of the past week debating whether Anthropic's latest model, Mythos, is about to break the internet or is just another example of apocalypse-bait hype marketing.
But there's no need to worry. The UK Government has stepped in to save the day by doing what it does best: holding a meeting and writing a letter.
Earlier this month, Anthropic announced that Mythos Preview has found "thousands of high-severity vulnerabilities," including bugs in "every major operating system and web browser". The AI firm claimed its new model could "reshape cybersecurity", which is why it's not been publicly released.
Mythos marks the dawn of an era in which AI models have "reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic wrote. It is now spearheading a new initiative called Project Glasswing, along with tech giants ranging from Apple to Palo Alto Networks, to "secure the world’s most critical software".
"The fallout – for economies, public safety, and national security – could be severe," it wrote.
Mythos autonomously identified and exploited a 17-year-old remote code execution (RCE) bug in FreeBSD (CVE-2026-4747), which "allows an attacker to obtain complete control over the server, starting from an unauthenticated user anywhere on the internet," in Anthropic's words.
The UK’s AI Security Institute (AISI) found that Anthropic’s Mythos is a "step up" from previous models, achieving a 73% success rate on expert-level hacking challenges that no model could complete before April 2025.
Mythos is capable of autonomously attacking "small, weakly defended and vulnerable enterprise systems where access to a network has been gained", the AISI wrote.
However, the test was quite different from real-world environments with advanced security features and active defenders, prompting the AISI to admit it "cannot say for sure whether Mythos Preview would be able to attack well-defended systems".
Industry reaction to Anthropic's new model has been mixed, edging towards scepticism. Patrick Garrity of VulnCheck analysed 75 CVEs attributed to Anthropic, reporting that just one publicly disclosed bug has been credited to Mythos so far.
Keep calm and carry on spending public money
The announcement of Mythos's alleged capabilities triggered an extraordinary response on both sides of the Atlantic. In America, the US Treasury, the Federal Reserve, and the CEOs of systemically important banks held urgent, closed-door meetings.
Meanwhile, in the UK and Europe, regulators, central banks, and finance leaders have held parallel talks with banks and industry groups, warning that Mythos could pose risks to financial stability and national security.
Andrew Bailey, governor of the Bank of England, said: "It is a very serious challenge for all of us. It reminds us how fast the AI world moves."
Meanwhile, Liz Kendall, Secretary of State for Science, Innovation and Technology, and Dan Jarvis, Minister of State for the Cabinet Office, put their names to an open letter to business leaders which claimed the nation was not "standing still" in response to a potential threat to businesses, government systems and critical infrastructure.
The government is, of course, monitoring the situation and probably standing by to unleash a quango or two if things get too hairy. To be fair, it's also launched the AISI, which gives Westminster the "most advanced capability of any government in the world for understanding frontier AI systems".
"This ensures that your government can have an independently verified, robust assessment of current capabilities," ministers wrote.
The government didn't just stop at one letter. It went full shock and awe by unleashing a second missive penned by the National Cyber Security Centre (NCSC).
Dr Richard Horne, CEO, struck a more positive tone and wrote: "A step change in frontier AI models’ capabilities to find vulnerabilities in code can ultimately be a good thing for our cyber security."
Yes, AI will make it easier to discover vulnerabilities, he argued, posing a growing threat to companies that are unprepared for this new reality.
But "by getting the fundamentals right and carefully adopting frontier AI models for good", defenders will be able to "retain an advantage and help keep the UK safe online".
Beyond this pair of letters, Westminster has been working to support some businesses by providing funding, whilst slowing others down with well-intentioned new rules.
British companies should be reassured that the Cyber Security and Resilience Bill, currently working its way through Parliament, will soon help them to forget all the risks lurking in an ever-grimmer threat landscape: they'll be too busy dealing with hefty new compliance burdens.
The government has also launched a £500 million Sovereign AI Unit to help scale British AI startups.
Industry reaction
Here's what experts and Machine contacts are saying about the launch of Mythos and the UK government's response.
Aaron Beardslee, Threat Security Researcher at Securonix: "Anthropic looked at what Mythos could do and decided broad release was a bad idea. Attackers don’t need a perfect autonomous system. They need leverage. Give them something that speeds up recon, sharpens phishing, shortens exploit development, or helps a mid-tier operator punch above his weight, and it gets used."
Jamie Akhtar, CEO and Co-Founder of CyberSmart: "It’s good to see ongoing efforts to raise awareness of Cyber Essentials, as awareness remains low despite clear evidence of its effectiveness from the 10-year impact study. Fundamentals like patching, access controls and logging matter more than ever."
Oliver Simonnet, Lead Cybersecurity Researcher at CultureAI: "AI doesn't just introduce new threats, but fundamentally changes the speed and scale at which existing ones can operate. Models might not yet be able to invent entirely new attack techniques, but they compress years of technical expertise into something far more accessible and efficient."
Lee Sult, Chief Investigator at Binalyze: "The uncomfortable truth about Mythos is that most people haven't seen it, used it or had access to anything beyond the marketing. Leaders reacting to hype rather than evidence risk distorting priorities and misallocating resources, with the knock-on effect of erosion of trust with their teams."
Martin Kraemer, CISO Advisor at KnowBe4: "The significantly new development is not vulnerability discovery. What is new is autonomous exploit chaining at scale. In Anthropic's own framing, the model surpasses all but the most skilled human security researchers. In other words, the model does not only 'find bugs' but also 'writes working exploits without human intervention'. That's the real news."
Julian Totzek-Hallhuber, Senior Solutions Architect, Veracode: "What’s really striking here is the pace. Project Glasswing is about connecting vulnerabilities into far more complex attack paths in a fraction of the time it used to take. In some cases, that’s already surfacing issues that have been missed for years, which shows how quickly risk can build."
Martin Riley, Chief Technology Officer at Bridewell: "Claude Mythos Preview is not a glimpse of the future, it is a warning about the present. This changes everything for security teams as the patch window has collapsed. AI-generated exploit chains will bypass detection tools built on known indicators. Organisations still running quarterly vulnerability cycles or relying solely on endpoint detection are already behind."
Feedback on the UK Sovereign AI initiative
The government described its £500 million funding pot as a "bet to back homegrown AI founders, drive growth and create jobs across the UK".
It will be used to help AI firms in areas like supercomputing and drug discovery, giving startups "access to support normally reserved for the biggest players in tech".
This will include free access to the UK’s largest AI supercomputers, with up to 1 million GPU hours available per startup to train AI models, and visa decisions within one working day for companies receiving funding.
Here's what our industry contacts are saying:
Greg Hanson, Group Vice President and Head of EMEA North at Informatica (part of Salesforce): "As AI moves toward more autonomous, agentic use cases, sovereignty will increasingly be defined by the data that powers those systems, much of which sits across public sector and enterprise environments. It’s not just about who owns the models or where datasets sit, but about building a trusted data foundation that governs how models are trained, how they operate, and the decisions they make."
Harshul Asnani, President and Head of Europe Business at Tech Mahindra: "As countries move to build sovereign AI ecosystems, it is equally important that this momentum does not come at the expense of global collaboration. AI is inherently a cross-border technology and true potential will be realised when nations combine local strengths with shared innovation and collective learning."
Ash Gawthorp, CTO and Co-founder at Ten10: "If the Unit is to deliver meaningful economic value, it needs to go beyond supporting innovation and focus on helping organisations to operationalise AI at scale. That means building the skills, structures and accountability required to move from pilot to production, and ensuring AI is embedded into day-to-day decision making, not just isolated use cases."
Sam Robinson, Head of AI at the Social Market Foundation: "The UK needs to strengthen the whole ecosystem for AI. That has to involve making it easier for data centre projects to get built quickly through faster, more flexible grid connections; making the most of the UK's rich data by connecting public sector services through a modern data exchange; and ensuring Government can act as a key player that can rapidly scale innovative businesses and ideas by streamlining procurement and subsidy routes."
George Tziahanas, VP of Compliance and Associate General Counsel at Archive360: “Sovereign AI investments are smart, but countries shouldn’t over-index on building fully domestic AI supply chains. Not only will they be difficult to achieve at speed, but they also risk falling behind the ongoing innovations in other countries. Countries should also consider prioritising flexibility to support the use of multiple AI tools to ensure individuals and companies are not locked into any one model or one tech company.”