The commoditisation of cybercrime: Inside the AI arms race
"The barriers to entry have dropped dramatically."

Once upon a time, becoming a hacker required deep technical knowledge, coding skills, and a strong understanding of networks.
Now, it's frighteningly easy for an amateur to become a hardened cyber warrior. With GenAI tools, even non-technical users can launch sophisticated attacks or write malicious code with just a few prompts.
The proliferation of offensive AI capabilities bears comparison to the spread of drone warfare, which now poses an increasing threat both on the battlefield and within civilian spaces across the world. All attackers need to do to cause mass casualties is buy or make a cheap drone, load it up with bombs or noxious substances and aim it at their enemies.
No need to spend the $70 billion it took to develop the F-35 Lightning II when you can do a pretty good job of killing people for less than $20,000.
So are defenders at risk of being outgunned - or can they too leverage the power of AI to level up their own capabilities in response to a changing threat landscape?
We spoke to Josh Jacobson, Director of Professional Services at HackerOne, to find out.
How are threat actors using AI and why should security professionals be worried?
"AI is having a multi-faceted impact on the risk landscape. Threat actors are utilising the technology to create well-written, compelling messages and content, mimicking regional vernacular, internal corporate language and the levels of professionalism expected in official marketing or communications materials. The same can also be said for video and voice-recording-based scams. Armed with GenAI capabilities, deepfake video, audio and imagery can be extremely hard to differentiate from genuine interactions, tricking existing cybersecurity defences.
"Advanced GenAI technologies are also being used to automate attacks, identify new vulnerabilities, and massively increase the volume of threats. As a result of these various innovations, the barriers to entry for cybercrime have dropped dramatically. What once required significant technical and subject matter expertise has been commoditised, with some criminal organisations now offering an ‘as-a-service’ approach. This has a significant impact on IT and cybersecurity teams, with our recent research revealing that 48% of security leaders consider AI to be one of the greatest risks to their organisations."
How are security teams responding? Are they building AI into their defences?
"Security teams are also harnessing AI to address these threats and, in the process, making themselves faster, smarter, and better placed to identify and mitigate risks at scale. AI technologies are delivering significant efficiency benefits. For example, vulnerability reports that previously required detailed technical remediation instructions can be analysed using AI to create clear, actionable steps. These capabilities help security teams focus their time on tasks of greater strategic importance.
"AI also has the potential to help address the chronic skills gap that exists across the global cybersecurity industry, particularly as routine but time-consuming tasks are automated on a larger scale. AI also enables security teams to employ advanced behavioural analytics to flag potential attacks for faster incident response, automate threat detection in real-time, spot phishing attempts and identify vulnerabilities."
AI is known to ‘lower the barrier of entry’ for cybercriminals, but can it also do the same for security teams?
"Yes. Beyond the benefits I mentioned earlier that free up time for security teams, security researchers are also getting a boost. Security researchers actively search for vulnerabilities within software and systems and responsibly disclose threats to companies before cybercriminals can act on them. AI is lowering the barriers for the researcher community, meaning they can match similar advanced exploits bad actors might employ to find threats before cybercriminals. In this context, the role of AI is to augment human experience and expertise.
"At the same time, hackbots can be used to automate testing and vulnerability assessments so security weaknesses can be addressed before being exploited by malicious actors. These tools are an important addition to a researcher’s arsenal. Our research suggests that 38% of security researchers are using AI, with 20% already viewing it as essential."
We’re right at the top of the AI hype cycle, with speculation rising that the AI "bubble" may burst. What do you think?
"Following the DeepSeek AI stock selloff, the industry is under greater scrutiny to demonstrate how its technologies deliver bottom-line benefits. While this market turbulence is still playing out, upcoming earnings reports from the likes of Nvidia will play an important role in validating the sector's health, particularly in the short term. As far as the cybersecurity industry is concerned, however, investment in AI remains an important innovation focal point for the years ahead. While there will be winners and losers, the greatest risk is for those employing an all-or-nothing approach — avoiding AI entirely will leave businesses lagging behind their competitors but adopting without concern for the risks AI introduces will result in incidents and lost customer trust."
As cybersecurity risk remains high, what would you like to see from the industry?
"AI systems require robust security measures to minimise the risk of unauthorised access and data breaches. If malicious attackers breach these systems, they can gain access to confidential data from training datasets or manipulate the training dataset by injecting malicious inputs. This could result in decisions, predictions, and recommendations that reflect the injected bias or lead to unintended consequences, compromising the AI system's integrity and trustworthiness.
"As organisations innovate in AI, they need to work to minimise these risks by embracing emerging best practices surrounding AI safety and security, including conducting regular, external testing of AI systems. Keeping security front of mind throughout the development process is key. The security researcher community already understands how cybercriminals think and are up-to-date on their latest tactics. With their input, organisations can help secure deployments, staying two steps ahead of the bad guys."
Josh Jacobson is the Director of Professional Services at HackerOne, where he leads the implementation and security advisory teams. With over a decade of experience in ethical hacking and information security, he began his career in network and hardware penetration testing.
In 2015, Josh was responsible for designing, building, and managing the successful bug bounty program at United Airlines. This initiative utilised the expertise of ethical hackers worldwide to enhance the airline's security and rewarded them with millions of miles for their contributions. Before joining HackerOne, Josh oversaw the vulnerability management program for Sony Pictures, where he managed application and endpoint security testing.