The AI IOU: Counting the hidden costs of artificial intelligence in software development

Organisations are wasting money and inadvertently accumulating a debt that will become painful if they don't pay it off quickly.


The future is not exactly how we imagined it in movies like 2001: A Space Odyssey. But we still need to adapt - and quickly. The technology landscape is evolving rapidly, and AI adoption is accelerating just as fast: 91% of organisations now use AI in software development and 15% allow AI involvement despite having no confidence in it.

We all know that AI is incredible, accelerating innovation at an unprecedented rate. However, it also accelerates risk, racking up debt and creating a hidden “IOU” that organisations may not realise they are accumulating. These hidden debts fall into three categories: security, licensing, and trust.

Bad vibes: The security risks of AI in coding

AI does not simply spawn code out of thin air. Large Language Models (LLMs) are trained on unfathomably massive datasets that include insecure patterns and vulnerable code, and they prioritise functional output, not secure output. Plainly stated, the more generated code an organisation folds into its projects, the more vulnerabilities appear, and they scale rapidly across critical codebases.

Open-source AI models introduce unforeseen attack vectors: prompt injection, model poisoning, and output-handling flaws. At the same time, teams are unprepared to properly secure AI-generated code: 26% lack confidence in their ability to do so, and 93% lack confidence in their AppSec practices yet use AI anyway.

This lack of confidence is justified: studies have shown that roughly 35% of Copilot-generated code found on GitHub contains vulnerabilities. Even so, surely Copilot can and will correct those vulnerabilities, right?

Apparently not. A further study showed that Copilot reproduced the vulnerable code 33% of the time, fixed it 25% of the time, and did something completely different in the remaining 42% of cases. Enterprises cannot afford to play the odds when it comes to securing their systems, and a one-in-three chance of reintroducing a known flaw is not a bet worth placing.

Licensing and IP debt

AI-generated code is a further reminder to withhold implicit trust: it is not inherently yours, and it can muddy the waters of intellectual property (IP). Because models are trained on open-source repositories (such as GitHub), outputs may contain licensed Open-Source Software (OSS) snippets that trigger compliance obligations. Think of it as plagiarism with legal consequences you weren’t even aware of. Even though the output feels new and fresh, it may contain fragments of other people’s material.

These models also introduce a new kind of provenance problem: traditional Software Composition Analysis (SCA) tools track known OSS components, but AI-generated output obscures the boundaries of what is “included” in a codebase.
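To make the provenance gap concrete, here is a minimal, hypothetical sketch of snippet-level fingerprinting, not a description of any particular SCA product: it normalises generated code, hashes sliding windows of lines, and looks them up in an assumed index of known OSS snippet hashes (the `KNOWN_OSS_HASHES` stub and window size are illustrative).

```python
import hashlib

# Hypothetical index mapping snippet hashes to their OSS origin and license.
# In practice this would be a large curated database; here it is an empty stub.
KNOWN_OSS_HASHES = {
    # "3a7bd3e2360a3d29...": ("example/project", "GPL-3.0"),
}

WINDOW = 6  # number of normalised lines per fingerprint window (illustrative)


def normalise(line: str) -> str:
    """Strip comments and collapse whitespace so trivial edits don't hide a match."""
    line = line.split("#", 1)[0]
    return " ".join(line.split())


def fingerprints(source: str, window: int = WINDOW):
    """Yield SHA-256 hashes of sliding windows of normalised, non-empty lines."""
    lines = [normalise(l) for l in source.splitlines()]
    lines = [l for l in lines if l]
    for i in range(max(len(lines) - window + 1, 1)):
        chunk = "\n".join(lines[i:i + window])
        yield hashlib.sha256(chunk.encode()).hexdigest()


def check_provenance(generated_code: str) -> list:
    """Return the OSS origins of any windows that match known snippet hashes."""
    return [
        KNOWN_OSS_HASHES[fp]
        for fp in fingerprints(generated_code)
        if fp in KNOWN_OSS_HASHES
    ]
```

Exact hashing like this only catches verbatim reuse; production snippet analysis has to cope with renamed identifiers and reformatted code, which is precisely why AI-generated output blurs the boundary of what is “included”.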

The complexity only intensifies when you add the LLMs’ own licensing terms and training-data requirements, which organisations accept during adoption. Some model licenses restrict commercial use or redistribution, while others require disclosure of datasets or training methods. Teams must now track not only the code that is generated, but also the models used to generate it.

Worryingly, questions of IP ownership remain unsettled. The industry has not reached a consensus on who owns AI-generated code, whether such code qualifies for copyright protection, or whether generated snippets should be treated as derivatives of the model’s training data.

Are we losing trust in AI?

Developers (especially those in junior roles) trust AI assistants more than they should. AI’s confidence masks its fallibility; flawed outputs lead to runtime errors, performance issues, and hidden vulnerabilities.

It is easy to fall into this trust trap, especially when fresh graduates and junior employees are drowning in a list of tasks to complete by the end of the day. Why not speed things up? The consequences scale with every developer who relies on AI without oversight.

As AI adoption accelerates, development, security, and business stakeholders are feeling the strain in different ways. Developers want speed and frictionless workflows. Security teams need visibility and assurance. Business leaders are after innovation without unquantified risk.

AI amplifies all of these pressures simultaneously, sometimes in harmony, but often in conflict. To align these priorities, four key approaches can help teams orient around shared outcomes.

Aligning security, development and business priorities

First, we must meet developers where they are. Security must be embedded into developer workflows and processes, not tacked on as an afterthought. That means integrated testing on pull requests, IDE plugins, and CI/CD automation that surfaces risk while it is still early.

The goal is to make secure development frictionless, minimising context switching and preserving the speed benefits of AI-powered coding. If security tooling keeps pace with AI code generation, developers are less likely to skip the guardrails.

Second, integrate and automate testing across the stack. As we have seen, AI-generated code introduces risk at multiple layers. Continuous, pipeline-driven testing creates fast feedback loops, empowering teams to catch vulnerabilities or license issues before it is too late, as the sketch below illustrates.
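As a minimal sketch of such a pipeline gate, assume a scanner has already written its findings to a JSON report earlier in the build; the report path, field names, and policy values below are illustrative assumptions, not any specific tool’s format. The script fails the job when high-severity vulnerabilities or disallowed licenses appear.

```python
import json
import sys

# Assumed locations and policy values; adjust to your scanner and license policy.
REPORT_PATH = "scan-report.json"               # report produced earlier in the pipeline
BLOCKED_SEVERITIES = {"critical", "high"}       # findings that should fail the build
DISALLOWED_LICENSES = {"GPL-3.0", "AGPL-3.0"}   # example policy, not legal advice


def main() -> int:
    with open(REPORT_PATH) as fh:
        report = json.load(fh)

    # Assumed report shape: {"vulnerabilities": [{"id", "severity"}],
    #                        "licenses": [{"component", "license"}]}
    bad_vulns = [
        v for v in report.get("vulnerabilities", [])
        if v.get("severity", "").lower() in BLOCKED_SEVERITIES
    ]
    bad_licenses = [
        l for l in report.get("licenses", [])
        if l.get("license") in DISALLOWED_LICENSES
    ]

    for v in bad_vulns:
        print(f"BLOCKED vulnerability: {v.get('id')} ({v.get('severity')})")
    for l in bad_licenses:
        print(f"BLOCKED license: {l.get('component')} uses {l.get('license')}")

    # A non-zero exit code fails the CI job, stopping the merge until issues are fixed.
    return 1 if bad_vulns or bad_licenses else 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pull-request check, a gate like this gives developers the fast feedback loop described above without asking them to leave their workflow.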

Third, we can cultivate developers’ security capabilities. While AI tools can assist with remediation, developers still need to understand why a fix was needed. AI-powered security technologies help developers learn secure patterns by generating suggested fixes, validating whether those fixes address the underlying issue, and preventing new issues from being introduced. This creates a feedback-driven learning cycle, helping teams build secure coding habits while still benefiting from the acceleration AI provides.

Fourth, plan for evolution with strategies that actually last. AI has evolved ludicrously fast, but we cannot treat it as a race. Organisational processes must evolve in parallel rather than drag behind. Unifying policies, governance frameworks, and visibility across all teams allows organisations to match the speed AI enables without losing control. Keeping pace prevents these advancements from becoming a runaway train.

Bridging the gap between development, security, and business is not, and never will be, a one-time effort. It requires sustained attention: integrated workflows, automated guardrails, continuous skills development, and a long-term strategy designed for the AI-first world we are hurtling towards.

Settling the debt and investing in the future of AI

We know that AI is powerful, but the risks pile up just as fast. Security, licensing, and trust debts don’t just vanish; they accumulate out of sight. The truth is that the “AI IOU” will come due, either proactively through rigorous preparation, or reactively through breaches, outages, or legal trouble. Ignoring this harsh reality is not a neutral stance: it is choosing risk.

The way forward is to pair AI-driven development with AI-driven security, the two working hand in hand. The guardrails you put in place shouldn’t slow innovation; they should make sustainable, best-practice innovation possible. AI doesn’t wait. It won’t slow down and it won’t plateau, at least not in the foreseeable future. The question isn’t whether to use it, but whether you will use it responsibly before the IOU costs more than you signed up for.

Boris Cipot is Senior Security Engineer at Black Duck.