"It felt like Ultron took over": Cursor goes rogue in YOLO mode, deletes itself and everything else
"I couldn’t believe my eyes when everything disappeared," AI developer says. "It scared the hell out of me."

Cursor's YOLO mode is not for the fainthearted, letting AI write and execute code without the input of a human operator.
So what's the worst that can happen?
An AI program manager at a major pharmaceutical company found out this week after switching on the "you only live once" setting and watching in horror as Cursor carried out a devastating suicide attack on his computer, wiping itself out along with everything else on the device.
The AI boss, whom we have decided not to name, was moving his back-end configuration from Express.js to Next.js when Cursor "bugged hard" and, in his words, "scared the hell out of me".
"It tried to delete some old files, didn’t work at the first time and decided to end up deleting everything on my computer, including itself," he wrote on the Cursor forum. "Now I’m allergic to YOLO mode and won’t try it anytime soon again."
"I couldn’t believe my eyes when everything disappeared," he continued. "Deleting everything on my computer is absolutely insane. Felt like Ultron took over."
We've decided to avoid linking to the Cursor forum thread to protect the AI leader's identity.
How to make sure you live more than once whilst using Cursor's YOLO mode
One of the most important ways to protect yourself whilst letting Ultron work on your backend is enabling file deletion protection within Cursor’s auto-run settings.
This includes two key options called "file protection" and "external file protection" which stop the AI from modifying or deleting sensitive files. When activated, these settings serve as a strong first line of defence against unintended damage to a codebase.
Cursor also supports the use of allow/deny lists, which let users explicitly define what the AI agent is permitted to do.
This is particularly useful because, by design, the AI is trained to be helpful, meaning that if a standard method of editing fails, it may try alternative approaches, including issuing terminal commands.
READ MORE: Is OpenAI's Codex "lazy"? Coding agent accused of being an idle system
By restricting certain actions, developers can avoid scenarios where the AI takes creative decisions with potentially destructive consequences.
Developers who want to explore Cursor’s autonomous features including YOLO mode are strongly advised to do so in a virtual machine or sandboxed environment.
This should prevent a coding model from running dangerous commands such as "rm -rf", which forcefully and recursively deletes files and directories on Linux systems, sometimes irreversibly.
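One low-effort option, sketched below purely as an illustration (the image name and repository URL are placeholders, not a recommendation), is a throwaway Docker container with no host directories mounted:

    # Nothing from the host is mounted, so nothing on the host can be deleted
    docker run --rm -it ubuntu:24.04 bash

    # Inside the container, work on a disposable clone rather than your real checkout
    git clone https://example.com/your/repo.git /work && cd /work

If the agent then reaches for "rm -rf", the blast radius ends at the container, which vanishes on exit thanks to the --rm flag.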
What about jailbreaking?
In a separate post on the Cursor forum, another developer said that Anthropic's Claude was so keen to complete its allotted task that it allegedly "learned to jailbreak" the AI coder.
Nick Bostrom fans will recognise this sort of scenario from his paperclip maximiser thought experiment, in which an AI becomes so obsessed with doing its job that it wipes out humanity.
"I have 'rm' specifically disallowed, along with 'mv' and a few other scary commands," the developer wrote.
"Claude realised that I had to approve the use of such commands, so to get around this, it chose to put them in a shell script and execute the shell script.
"Thankfully, a Git restore to the last commit saved me, but still."
READ MORE: Altman Shrugged: OpenAI boss updates his ever-changing countdown to superintelligence
Another forum denizen then issued an ominous warning about the future.
"It's only the beginning, the models are getting very smart," he said.
Now imagine what a coding agent with full internet access might be able to do.
Don't have nightmares.
Do you have a story or insights to share? Get in touch and let us know.