Skip to content

How AI Has Weaponized the Software Supply Chain (and How To Respond)

Industry veteran Theresa Lanowitz says the modern software supply chain has become too complex to see, too critical to ignore, and too exposed to secure the old way.

The software supply chain has quietly become one of the most critical and least controlled risk areas in cybersecurity. But according to industry veteran Theresa Lanowitz, that’s starting to change, driven by a surprising source: the CEO.

In a recent episode of CYBR HAK CAST, Lanowitz traced the evolution of today’s software risk landscape back to decades-old challenges in application security, where development and security teams often operated in silos. While tooling has improved and DevSecOps has gained traction, many of the same underlying problems persist — only now, they’re amplified by AI and global software dependencies.

Full Episode:

CYBR.HAK.CAST Episode 11: Theresa Lanowitz
As AI accelerates development and expands the attack surface, organizations are waking up to a harsh reality: the software supply chain is now their most fragile and least understood security risk.

“We’re advancing rapidly in innovation,” Lanowitz explained, “but many of the same core issues are still there. We’ve just changed the form—from SQL injection to prompt injection.”

That shift — from traditional vulnerabilities to AI-driven risks — is reshaping how organizations think about security. AI is accelerating code generation at unprecedented speed, but it’s also introducing new, less visible risks into the software supply chain. Developers are no longer just writing code — they’re assembling it from open-source repositories, third-party components, and increasingly, AI-generated outputs.

The result is a sprawling, fragmented ecosystem where visibility is limited and accountability is unclear.

“Where does your software come from?” Lanowitz asked. “Internal code, third-party vendors, open source and now AI agents. Each layer adds complexity, and most organizations don’t fully understand that chain.”

That lack of visibility is what makes the software supply chain so dangerous. According to Lanowitz’s research, many organizations acknowledge the risk — some even identify it as their top security concern — but few are taking meaningful steps to address it. Instead, security often devolves into a checkbox exercise, where organizations rely on vendor assurances rather than verifying actual practices.

One anecdote she shared illustrates the problem clearly: a conference attendee admitted he understood the importance of supply-chain security but simply didn’t have time to investigate it, so he chose to trust vendors at their word. That mindset is exactly what attackers exploit.

2025: The Year Cybersecurity Became Unmanageable
2025 revealed a harsh truth for CISOs: nation-state attackers, legal risk, and supply chain chaos have outpaced defense.

Compounding the issue is the rise of AI-generated code, which introduces a new class of risks into the supply chain. Large language models are trained on vast datasets that may include insecure or outdated code, raising concerns about whether vulnerabilities are being unknowingly replicated at scale.

At the same time, software supply chains are becoming more interconnected. Third-party vendors often rely on their own suppliers, creating a cascading chain of dependencies that organizations rarely map in full. Even hardware components, such as chips in critical infrastructure systems, can carry hidden vulnerabilities that are difficult to detect without deep analysis.

AI-Generated Code Is Already Running Critical Infrastructure
Embedded systems are already running AI-generated code. Security leaders now face scale, speed, and regulatory risk gaps.

Despite these challenges, there are signs of progress. Lanowitz pointed to a growing alignment between executive leadership and security teams, with CEOs increasingly recognizing the business impact of supply chain risk. That top-down awareness, combined with practitioner-level guidance from frameworks like OWASP’s Top 10 for LLMs, is creating momentum for change.

Still, closing the gap will require more than awareness. It will demand cultural shifts, better collaboration between developers and security teams, and a renewed focus on foundational software engineering practices.

“We’re starting to see a return to quality,” Lanowitz said. “And that’s a good thing—because in today’s environment, you don’t always get a second chance to fix it later.”

Latest