AI-Generated Code Is Already Running Critical Infrastructure: Can AppSec Keep Up?

Traditional security tools were designed when code changes were measured in hundreds of lines per sprint and development cycles lasted weeks. Today, AI accelerates code production to thousands of lines daily with fundamentally different patterns than human-written code.

In embedded systems development, AI-generated code is already near-ubiquitous in production systems controlling medical devices, power grids, automotive platforms, and industrial control systems.

RunSafe Security's 2025 AI in Embedded Systems report, "AI Is Here. Security Isn't," surveyed 200 professionals across the US, UK, and Germany who work on embedded systems in critical infrastructure. It found that roughly 84 percent of organizations have already deployed AI-generated code, despite 73 percent citing considerable risk concerns.

The report presents a picture of the embedded systems industry at an inflection point. AI adoption has been swift, but the security controls designed for human-written code may not keep pace with the volume and velocity of machine-generated output. This gap between AI adoption and security readiness could pose a risk for enterprises managing critical infrastructure.

The Confidence Paradox

The survey highlighted a contradiction that security leaders should take note of: 96 percent of respondents expressed confidence in their ability to detect vulnerabilities in AI-generated code, yet 73 percent simultaneously rated the cybersecurity risk as "moderate or higher."

"Respondents recognize there are very strong cybersecurity risks associated with AI-generated code," Joe Saunders, founder and CEO at RunSafe Security, told CYBR.SEC.Media. "Despite the risks, their high confidence levels indicate that they believe their existing tooling and processes will be able to detect vulnerabilities AI coding tools may introduce. As AI is incorporated more fully in the coming years, we will see if the confidence level holds up, or if there may be an overconfidence bias that hasn't been tested yet by real-world incidents," he said.

Saunders explained that the survey did not specifically ask respondents if they have put their detection tools to the test against AI-generated vulnerabilities. "However, more than 80 percent reported deploying AI-generated code in at least some systems over the past year. They are likely running this code through the same vulnerability discovery methods they rely on for all embedded software, including static analysis, dynamic testing, manual review, fuzzing, and runtime monitoring," he said.

"The confidence respondents express suggests a confidence in the maturity of their current security tooling, despite the known security risks of AI-generated code," he added.

That would make the underlying problem structural. Static analysis (SAST), dynamic testing (DAST), and manual code review were built for an era of hundreds of changed lines per sprint and week-long development cycles; they cannot scale to AI output of thousands of lines per day, written in patterns that differ fundamentally from human code.

Roughly one-third of organizations—34 percent—reported experiencing cyber incidents involving embedded software in the past 12 months. While AI has not been identified as the direct root cause of reported incidents, the increasing pace of AI-accelerated development is creating conditions in which software flaws reach production faster than security teams can identify and remediate them. The report notes that memory safety vulnerabilities alone account for 60-70 percent of all embedded software exploits. If AI systems are trained primarily on legacy C/C++ code, these flaws are likely to be reproduced at scale.
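
To make the memory-safety point concrete, here is a minimal sketch (our illustration, not code from the report; the function names are hypothetical) of the unchecked-copy pattern that dominates legacy C, which a model trained on such code could plausibly reproduce, alongside a bounds-checked alternative:

```c
#include <stdio.h>
#include <string.h>

/* The archetypal legacy-C flaw behind most memory-safety exploits:
 * an unchecked copy into a fixed-size stack buffer (CWE-787). */
void parse_device_id(const char *input) {
    char id[16];
    strcpy(id, input);             /* overflows whenever input exceeds 15 bytes */
    printf("device: %s\n", id);
}

/* The bounds-checked equivalent that review and analysis should steer toward. */
void parse_device_id_safe(const char *input) {
    char id[16];
    snprintf(id, sizeof id, "%s", input);  /* truncates instead of overflowing */
    printf("device: %s\n", id);
}

int main(void) {
    parse_device_id_safe("sensor-0042-temperature-controller");  /* safely truncated */
    return 0;
}
```

If training corpora are dominated by the first pattern, code generation at AI speed turns a familiar flaw class into a scale problem.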

Regulatory Fragmentation Creates Uncertainty

A finding particularly relevant for enterprise risk managers: 44 percent of organizations rely primarily on internal security standards because no single authoritative framework adequately addresses AI-generated code in critical infrastructure. While automotive has ISO/SAE 21434 (adopted by 41 percent), industrial systems reference IEC 62443 (28.5 percent), and the EU Cyber Resilience Act applies to only 24.5 percent of respondents, the vast majority operate in a regulatory gray zone with inconsistent guidance.

This fragmentation creates uneven security postures across the supply chain. Enterprises should expect regulatory pressure to intensify, particularly in medical devices and energy sectors, within the next 2-3 years.

What Enterprise Security Managers Should Do

Survey respondents indicate that organizations are preparing to increase security investments significantly. Ninety-four percent plan to increase spending over the next two years, with 38 percent expecting significant growth. Their priorities are clear: code analysis automation (62 percent), AI-assisted threat modeling (51 percent), and runtime exploit mitigation (44 percent).

This investment trajectory reveals an industry consensus that defense must evolve from preventing vulnerabilities in development to containing their exploitation at runtime. The shift from "we verified the code is correct" to "even if the code has flaws, the system remains secure and operational" represents a fundamental rethinking of embedded security architecture.
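
As a rough sketch of what that shift means at the toolchain level (our example, using standard GCC/Clang hardening options, not the report's or any vendor's specific technology): the code below keeps its flaw, but common runtime mitigations turn exploitation into a controlled failure.

```c
/* Build with widely available runtime exploit mitigations, e.g.:
 *   cc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong \
 *      -fPIE -pie -Wl,-z,relro,-z,now demo.c -o demo
 * The overflow below remains a bug, but _FORTIFY_SOURCE replaces
 * strcpy with a size-checked variant that aborts on overflow, the
 * stack canary catches smashing that the check misses, and PIE/ASLR
 * randomizes the addresses an exploit chain would need. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    char buf[16];
    if (argc > 1) {
        strcpy(buf, argv[1]);  /* still flawed, but now fails safe rather than being silently exploitable */
        printf("%s\n", buf);
    }
    return 0;
}
```

Containment of this kind does not depend on first finding every flaw, which is why runtime mitigation ranks alongside detection in the survey's investment priorities.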

The data suggests three immediate priorities. RunSafe advised organizations first to assume AI-generated code is in their critical systems and establish traceability to identify which components used AI tools and what additional validation they received. Second, prioritize runtime protections and exploit mitigation—60 percent of organizations already recognize this as essential, particularly for memory safety. Third, begin conversations with suppliers about their AI tool usage and validation processes; third-party risk assessments must now explicitly address AI code-generation practices.

Organizations that move proactively on runtime resilience, automation, and supply chain visibility now will have significant advantages as regulatory requirements crystallize around AI-generated code security. Those who continue to rely on pre-AI security architectures will find themselves increasingly exposed to both technical compromise and regulatory violations.

The RunSafe report's core message is clear: AI is transforming embedded systems development faster than security practice can adapt, and the window to establish adequate controls is narrowing.
