The dark side of vibe-coding: AI-written code now the biggest security risk facing developers

“The danger isn’t obvious breakage but the illusion of correctness. Code that looks polished and professional can conceal serious security flaws.”

Vibe-coding is now the most serious security threat facing organisations, according to data from a study describing itself as the "largest and longest-running study of real-world software security practices".

In the 16th edition of the Building Security In Maturity Model (BSIMM) report, Black Duck warned that AI-written code is creating a huge security risk internally, whilst externally, threat actors are using AI to level up their attacks.

The research found that AI-generated code has "overtaken all other forces in reshaping security priorities" to become the biggest force reshaping application security, surpassing cloud, DevSecOps, and software supply chain risk.

A big part of the problem is that AI tools can generate code that looks professional and appears to work well - at a fraction of the cost of traditional processes.

But although the vibes look good for this machine-created code, it often hides deep security flaws that require expensive human attention to fix.

Vibe-coding is cheap, sure. But buy cheap and you'll probably have to buy twice - or much, much worse if the dodgy code causes a breach.

“The real risk of AI-generated code isn’t obvious breakage—it’s the illusion of correctness. Code that looks polished and professional can still conceal serious security flaws,” said Jason Schmitt, CEO of Black Duck. 

“We’re witnessing a dangerous paradox: developers increasingly trust AI-produced code that lacks the security instincts of seasoned experts."

Bad vibes in AI coding workflows

Black Duck's study is based on assessments of 111 organisations across multiple industry verticals, including financial services, healthcare, technology, and independent software vendors. It provides insights into real-world application security practices protecting approximately 91,200 applications developed by 223,700 developers.

It found that organisations are simultaneously securing AI-powered coding assistants and defending against AI-enabled attacks, highlighting three major shifts:

  • A 10% rise in teams using attack intelligence to track emerging AI vulnerabilities.
  • A 12% increase in the use of risk-ranking methods to determine where LLM-generated code is safe to deploy.
  • A 10% uptick in applying custom rules to automated code review tools to catch issues unique to AI-generated code.

New threats facing software developers

The BSIMM16 report also found that government regulation is now one of the biggest forces reshaping application security. New global mandates are pushing organisations to spend more, move faster, and prove what they are doing - especially around software supply chain transparency and locked-down development environments.

Almost 30% more organisations were producing SBOMs - a Software Bill of Materials listing all the software components and dependencies inside an application - to meet these transparency demands.
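The report doesn't prescribe a particular SBOM format, but CycloneDX and SPDX are two widely used standards. As an illustration only (the component shown is hypothetical), a minimal CycloneDX SBOM might look like this:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13"
    }
  ]
}
```

In practice, SBOMs like this are usually generated automatically by build tooling rather than written by hand, with one entry per direct and transitive dependency.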

The report recorded a surge of more than 50% in automated infrastructure security verification, alongside roughly 40% growth in efforts to streamline responsible vulnerability disclosure, with momentum driven by the EU Cyber Resilience Act and evolving U.S. government requirements.

BSIMM16 also found that software supply chain security was rapidly moving from a “nice to have” to a core priority. Organisations were no longer treating security as something that applied only to the code they wrote themselves.

Instead, they were expanding their focus to cover the entire ecosystem of third-party components, tools, and dependencies that modern software relies on. Alongside the jump in SBOM adoption for deployed software, the report observed more than 40% growth in organisations establishing standardised technology stacks — a clear signal that firms were trying to reduce supply chain sprawl and regain control.

Application security training was shifting, too. The era of multi-day security courses was fading, replaced by just-in-time, bite-sized learning designed to fit real development workflows.

BSIMM16 reported a 29% increase in organisations delivering security expertise through open collaboration channels, giving developers instant access to guidance when they actually needed it. And after years of decline, traditional awareness training was beginning to rebound — suggesting that, under regulatory pressure, companies were realising culture still mattered. 

For the first time in its history, BSIMM16 introduced no changes to the framework structure, which Black Duck described as "signaling the maturity and stability of application security practices across the industry."

You can download the BSIMM16 report here or read the detailed blog post.
