The rise of Dark LLMs: DDoS-for-hire cybercriminals are using AI assistants to mastermind attacks

Lawbreaking language models lower the barrier to entry for unskilled crooks and make it frighteningly easy to launch crime campaigns.

Criminals are notoriously innovative, outpacing white hats and using the latest tech to target victims in new ways.

So it is with a sense of grim inevitability that we report the findings of a new study warning that dark web cybercrime kingpins are now using custom "dark large language models" to run their campaigns.

Richard Hummel, director of threat intelligence at Netscout, said the integration of AI assistants into DDoS-as-a-service platforms "represents the next logical, and alarming, evolution in a cybercrime ecosystem that already has undergone dramatic transformation".

He warned that this "inevitable convergence" is a threat security professionals "must prepare for immediately".

Crooks are already known to have harnessed bots to enable automated attack scheduling, real-time parameter adjustment and "sustained campaign management with minimal human oversight".

These malicious machines can execute multivector attacks that adapt to defensive countermeasures, exploit IPv6 infrastructure and launch carpet-bombing attacks across entire subnets, flooding every address in a block rather than a single target.
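Carpet-bombing is harder to spot than a conventional flood because no single destination address crosses a volume threshold on its own. To make the detection problem concrete, here is a minimal defensive sketch, not anything drawn from the Netscout report, that aggregates flow records by /24 prefix and flags subnets where traffic is smeared across many hosts; the FlowRecord schema and both thresholds are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class FlowRecord:
    """One exported flow summary (hypothetical schema)."""
    dst_ip: str       # IPv4 destination address
    byte_count: int   # bytes observed in the flow

def flag_carpet_bombing(flows, min_hosts=64, min_mbytes=100):
    """Flag /24 subnets whose traffic is spread across many addresses.

    A classic volumetric flood concentrates on one IP, so per-address
    thresholds catch it; carpet-bombing spreads the load across a whole
    block, so we aggregate per prefix instead. The thresholds here are
    made up for illustration, not tuned values.
    """
    per_subnet = defaultdict(lambda: {"hosts": set(), "bytes": 0})
    for flow in flows:
        prefix = ip_network(f"{flow.dst_ip}/24", strict=False)
        bucket = per_subnet[prefix]
        bucket["hosts"].add(flow.dst_ip)
        bucket["bytes"] += flow.byte_count

    return [
        prefix for prefix, b in per_subnet.items()
        if len(b["hosts"]) >= min_hosts and b["bytes"] >= min_mbytes * 1_000_000
    ]
```

The point of the per-prefix grouping is that each individual victim address can stay under a per-host alert threshold while the block as a whole is saturated.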

Beyond automation: The dawn of a new threat

AI assistants mark a sinister new escalation, lowering the barrier to entry for would-be cybercriminals and enabling amateurs to launch and manage attacks through a natural language interface.

"AI assistant integration would transform these capabilities from automated to truly intelligent," Hummel said. "Instead of users needing to understand attack vectors, port numbers, or network protocols, they could simply describe their objectives in natural language: 'I want to take down my competitor’s website during their Black Friday sale.'

"The AI would handle target reconnaissance, vulnerability assessment, optimal timing selection, and multivector orchestration—all while maintaining conversational simplicity."

Netscout recently published a seven-part analysis of the DDoS-for-hire landscape, which found that AI has democratised cyberattacks as these services evolved from simple point-and-click interfaces into automated platforms featuring API integration, reconnaissance tools and adaptive attack capabilities.

READ MORE: LLMs can be hypnotized to generate poisoned responses, IBM and MIT researchers warn

"The addition of AI assistants represents the natural next step in this evolution - that could arrive sooner than many expect," Hummel added.

"Adding conversational AI interfaces would eliminate remaining barriers entirely, enabling anyone who can type a request to launch sophisticated, adaptive attacks," Hummel said.

Dark large language models (LLMs) such as WormGPT and FraudGPT cost between $60 and $200 a month, enabling non-technical criminals to generate malware and conduct sophisticated phishing campaigns.

Voice cloning is also more accessible than ever before, with an $11 subscription and YouTube tutorials enabling novice hackers to launch sophisticated social engineering campaigns.

READ MORE: Large language models could cause a huge phishing crimewave, researchers warn

"The integration of similar AI capabilities into DDoS services would follow this established trajectory," Hummel predicted. "Organisations must recognise that traditional DDoS defences designed for predictable, signature-based attacks will prove inadequate against AI-coordinated campaigns.

"AI-enhanced attacks could analyse defensive responses in real time, identify rate-limiting thresholds, mimic legitimate traffic patterns, and coordinate multivector attacks that evolve faster than human defenders can respond.

"The integration of AI doesn’t just enhance existing attack methods; it fundamentally changes the threat model."

This means that attacks will become "conversational experiences" in which criminals direct their campaigns in ordinary language, ordering bots to focus on specific weaknesses, such as API endpoints, or to target particular industries in chosen locations.

How can organisations protect against dark LLMs?

Hummel set out five recommendations for defending against the rise of dark LLMs.

Fight AI with AI: Implement machine learning–based detection and response systems to speed up incident response.

Strengthen behavioural analysis: With AI capable of generating endless attack variants, signature-based detection is no longer reliable. Behavioural analytics and anomaly detection must take centre stage in modern defence strategies, as the sketch after this list illustrates.

Improve threat intelligence sharing: To stay ahead of AI-enhanced threats, defenders need to strengthen real-time collaboration and share intelligence on emerging attack patterns.

Rethink incident response: Conventional response plans designed for human-paced threats are no longer adequate. They must be replaced with autonomous systems capable of reacting and adapting at machine speed.

Anticipate attribution challenges: AI-driven attacks may imitate the tactics of multiple threat actors, making forensic analysis and attribution significantly more difficult.
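To ground the behavioural-analysis recommendation, here is a minimal sketch of unsupervised anomaly detection over per-source traffic features, using scikit-learn's IsolationForest. The feature set, the sample values and the contamination parameter are all assumptions chosen for illustration; this is not Netscout's tooling or a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarises one source over a time window:
# [requests/sec, mean bytes per request, distinct dst IPs, distinct dst ports].
# Values are invented for illustration.
baseline = np.array([
    [12.0, 840.0, 2, 3],
    [ 9.5, 910.0, 1, 2],
    [11.2, 780.0, 3, 3],
    [10.1, 865.0, 2, 2],
])

# Train on traffic captured during normal operation, then score live windows.
model = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

live = np.array([
    [10.4, 850.0, 2, 3],     # resembles the baseline
    [480.0, 64.0, 250, 40],  # high rate, tiny payloads, sprayed targets
])

# predict() returns 1 for inliers and -1 for anomalies.
for row, label in zip(live, model.predict(live)):
    print(f"{row} -> {'anomalous' if label == -1 else 'normal'}")
```

Because the model learns what normal traffic looks like rather than matching known signatures, it can flag attack variants it has never seen, which is exactly the property signature-based detection loses against AI-generated traffic.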

Read Netscout's full report here.

Do you have a story or insights to share? Get in touch and let us know. 

Follow Machine on X, BlueSky and LinkedIn