Altman Shrugged: OpenAI boss updates his ever-changing countdown to superintelligence
Sam Altman issues another teasing semi-prediction about a "gentle singularity" and the dawn of artificial general intelligence (AGI)

Some technologies are forever on the horizon - but never on your doorstep. Nuclear fusion, for instance, seems to be perpetually stuck about 20 years away from changing the world. A cure for cancer sometimes appears tantalisingly close, yet never seems to get any closer.
Will artificial general intelligence (AGI) and superintelligence be the same? Or are we genuinely on the verge of giving birth to a silicon deity that will make the hitherto impressive cognitive ability of its fleshy creators seem triflingly dim-witted?
OpenAI boss Sam Altman is the AGI booster-in-chief and therefore a passionate proponent of the argument that superintelligent AI is just around the corner.
Earlier in 2025, Altman said: "We are now confident we know how to build AGI as we have traditionally understood it."
He then predicted that we "may" see the first AI agents "join the workforce" this year and "materially change the output of companies". Even though this is clearly not a firm prediction, it was hailed as such by over-excitable internet AI fans.
The video below also shows Altman being asked what he's most excited for in 2025.
"AGI," he answers.
However, despite the claims in the tweet, we'd argue his answer does not show that he expects to see AGI unleashed this year - merely that he's excited to be working on its development.
Sam Altman says AGI is coming in 2025 and he is also expecting a child next year pic.twitter.com/5pn8D4Mfi0
— Tsarathustra (@tsarnick) November 8, 2024
So when does OpenAI expect to see the dawn of AGI?
When it comes to giving an exact prediction, Altman generally provides a vague date in the future.
In a new blog post, Altman steered clear of bombast and predicted a "gentle singularity" rather than an intelligence explosion.
He wrote: "2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world."
Altman then skipped to the next decade, which he expects to be a time of "striking change" that will be "wildly different" from any previous era in human history - although the OpenAI boss stopped short of saying this means AGI will definitely have been birthed by then.
READ MORE: Is OpenAI's Codex "lazy"? Coding agent accused of being an idle system
"We do not know how far beyond human-level intelligence we can go, but we are about to find out," he wrote. "In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant."
"Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it.
"Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel.
"This is how the singularity goes: wonders become routine, and then table stakes."
Altopia: The road to superintelligence
Altman described "self-reinforcing loops" and a "compounding infrastructure buildout" driving rapid progress, and predicted that machines capable of building other machines "aren’t that far off".
He added: "If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different."
Altman admitted there are "serious challenges to solve" including technical and societal safety issues, as well as the need to distribute access to superintelligence due to its vast potential impact.
"From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly," he continued.
Apple takes a bite out of AGI hype
The timing of Altman's not-quite-a-prediction is interesting. At the end of last week, Apple released a now-famous paper describing the "illusion of thinking" in large reasoning models (LRMs) that appear to generate detailed trains of thought before providing answers.
Unfortunately, Apple does not rate the cognitive ability of AI models particularly highly, reporting that frontier LRMs "face a complete accuracy collapse beyond certain complexities" and "exhibit a counter-intuitive scaling limit", which means they stumble in the face of difficult challenges.
"Their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget," Apple wrote.
Apple just GaryMarcus'd LLM reasoning ability pic.twitter.com/735UMGk4be
— Josh Wolfe (@wolfejosh) June 7, 2025
Apple's paper prompted the distinguished AI sceptic Gary Marcus (whose name has now become a verb for debunking AGI washing, as the tweet above shows) to write a piece for The Guardian arguing that the chances of today's AI models leading to AGI are "truly remote".
"In many ways the paper echoes and amplifies an argument that I have been making since 1998: neural networks of various kinds can generalise within a distribution of data they are exposed to, but their generalisations tend to break down beyond that distribution," he wrote.
"Anybody who thinks LLMs are a direct route to the sort of AGI that could fundamentally transform society for the good is kidding themselves."
What is AGI and will it ever be achieved?
That depends on what you mean by AGI.
OpenAI and Microsoft, a major investor, have reportedly defined AGI based on a financial benchmark: achieving $100 billion in annual profits.
This definition suggests (but does not confirm) that AGI will be declared when OpenAI's AI systems are capable of generating that amount of cash, presumably independently.
This is a major departure from traditional definitions of AGI, which focus on cognitive capabilities and human-level intelligence.
READ MORE: OpenAI lets Codex loose on the internet, gets honest about dangers and "complex tradeoffs"
OpenAI's official definition of AGI, set out in its charter, is: "highly autonomous systems that outperform humans at most economically valuable work."
But will agents or other models that fit this description be smart enough to be counted as superintelligent? Let's see.
So far, when asked for concrete details of his AI timeline, Altman has shrugged.
But at some point, it will be the 2030s and we'll all find out whether AGI was always inevitable - or if it was just a marketing term everyone seemed to love in the mid-2020s.
Do you have a story or insights to share? Get in touch and let us know.