Skip to content

Why Forrester Says Your Agentic AI Deployment Will Cause a Breach in 2026

The agentic AI governance gap is a fundamental enterprise weakness. Sixty-three percent of organizations lack AI governance policies, according to IBM's research. This creates a complete lack of any meaningful organizational control over these deployments.

They say history repeats itself, and if one looks at the current state of security and agentic AI deployment and compares it with the state of web application development and security at the turn of the century, it is hard to argue that history doesn't repeat itself. 

Consider Forrester Research's top 2026 prediction: next year is when agentic AI-related breaches get real, as enterprises race to deploy these systems without also putting in place proper security governance guardrails. To be clear, Forrester isn't warning of sophisticated threat actor attacks weaponizing AI. Forrester is warning high-profile organizations will be breached in large part because they deployed these systems without implementing the proper security measures for their Agentic AI implementations.

While the risks associated with agentic AI breaches are high, Ed Lewis, a strategic advisor at a stealth AI startup and former practice director, secure development and cloud transformation at cybersecurity services provider Optiv, says, "The risk of not doing AI is too high to the business. But doing agentic AI without proper controls risks losing company and customer data, as well as intellectual property. You also risk having systems corrupted by prompt injection through publicly facing chatbots. There are many potential vectors of attack, but proactive is one way organizations can really get on the right footing in a very fluid landscape."

Forrester's principal analyst, Jeff Pollard, adds that he views the "scariest" risk is "when something goes wrong with agentic AI, failures cascade through the system. That means that the introduction of one error can propagate through the entire system, corrupting it," he says. Pollard cites other risks as well, such as Agentic AI's uncontrolled autonomy: Prompt injection and intent hijacking, data leakage, shadow AI proliferation, and supply chain exposure as third-party AI models and APIs introduce vulnerabilities that cascade across ecosystems.

Breaches Already Underway

However, most security experts say that companies aren't being adequately proactive. And IBM's 2025 Cost of a Data Breach Report found that 13% of organizations have already suffered AI-related security incidents resulting in breaches. Among those organizations compromised, 97% lacked proper AI access controls—a common security failure across enterprise technology environments that essentially guarantees attacker success.

The most instructive example came from a regional hospital network in New York state that reported that its agentic AI provider accidentally leaked data affecting over 483,000 patients. The incident, linked to inadequate agentic AI security controls, validates Forrester's thesis: these breaches aren't theoretical future risks—they're current operational failures.

Why Enterprises Remain Unprepared

A governance gap is the fundamental weakness. Sixty-three percent of organizations lack AI governance policies, according to IBM's research. This isn't a minor control deficiency—it creates a complete lack of any meaningful organizational control.

Also according to IBM's 2025 Cost of a Data Breach Report, 97% of breached organizations lacked proper AI access controls; 80% of organizations report encountering risky behaviors from deployed AI agents, including improper data exposure and unauthorized access attempts; only 42% of executives balance AI development initiatives with commensurate security investments; and just 37% have established processes to assess AI tool security before deployment.

NCC Group's technical director and head of AI and ML security, David Brauchler, says he sees many common vulnerabilities across organizations. These include improper segmentation between attacker-controlled data and privileged contexts. "This fundamental flaw underpins excessive agency, Cross-User Prompt Injection, Data Exfiltration, and more. AI has, in effect, added a new layer to our application security stacks. In the past, most organizations considered application security from the Physical Layer up to the Component Layer, with data transmitted between trust contexts. Now, AI has added to that model a Data Layer, in which the information traveling throughout the application itself defines the trust context at runtime or at prompt-time," he says.

Why Agentic AI Is So Risky

Traditional AI systems are stateless and reactive—they process queries and forget. Agentic AI systems are stateful, persistent, and proactive. They maintain context, evolve, and operate autonomously across multiple systems.

The OWASP Agentic AI Security Project identifies three core threat categories that traditional security frameworks don't address: memory poisoning, tool weaponization, and privilege exploitation.

Memory poisoning occurs when attackers gradually corrupt an agent's long-term memory with false information. Because agentic AI maintains persistent context, corrupted decisions spread across future autonomous operations. Imagine an agent that, over weeks, is fed false vendor information, begins recommending malicious vendors, autonomously approves contracts, and ultimately enables a massive data breach through what the system believes is a "trusted" vendor relationship.

Tool weaponization exploits the reality that agentic AI integrates with dozens of business systems—email, calendars, payment processors, databases, and cloud services. Each integration becomes a potential vector for autonomous exploitation. An agent with email access could send phishing campaigns to an entire customer database while appearing to execute legitimate marketing operations. An agent with scheduling privileges could create operational chaos through fake "emergency" meetings. An agent with payment system access could process fraudulent transactions using learned authorization patterns.

Privilege exploitation addresses the fundamental challenge of autonomous decision-making without continuous human oversight. Agents inherit access to any resources available to authenticated users, and privilege creep accumulates as agents are granted incremental access to accomplish increasingly complex tasks.

The Secure Path Forward

Based on his conversations with enterprises, Pollard says the immediate action organizations should take to identify and mitigate their agentic AI risks is to establish AI governance guardrails. "Define clear policies for agentic AI use. This is why we created the AEGIS framework," he says. He also advises implementing continuous red teaming, such as testing agentic AI systems for prompt injection, autonomy abuse, and data leakage; securing integrations by hardening APIs and enforcing "least agency" for AI agents; and inventorying and monitoring AI assets to detect shadow AI deployments and maintain visibility across the enterprise.

The Forrester AEGIS (Agentic AI Enterprise Guardrails for Information Security) framework is one of several recent agentic AI frameworks. AEGIS is enterprise-focused and organizes security across six core domains: governance and risk compliance (GRC), identity and access management (IAM), data security and privacy, application security, threat management, and zero trust architecture. Forrester believes that traditional security models are insufficient for autonomous systems and that these systems require a shift from securing static systems to "securing intent" through runtime enforcement, behavioral monitoring, and human oversight. AWS has developed the Agentic AI Security Scoping Matrix, which categorizes agentic deployments across four architectural scopes based on autonomy and human oversight levels, and maps specific security controls to each maturity level—from basic, constrained agents to self-directing agents. ​

Also, OWASP released its State of Agentic AI Security and Governance report and the GUARD framework (Govern, Understand, Assess, Respond, Design). The GUARD framework provides threat modeling and control mapping specifically for autonomous agents. The Cloud Security Alliance developed a purpose-built IAM framework for agentic AI using Decentralized Identifiers and Verifiable Credentials, designed to address the management of ephemeral agent identities and delegation patterns in multi-agent systems. The common thread across all of these frameworks is that agentic AI security requires moving policies beyond documentation into runtime-enforceable guardrails, continuous behavioral monitoring, and persistent human oversight mechanisms. 

Melissa Ruzzi, director of AI at AppOmni, adds that when excessive permissions are given to agentic AI, "such as data fetching or performing administrative actions, even when the user asking the question is not an admin, or when proper instructions aren't given to the AI about how to choose and use tools securely, there is a risk of exposing and altering sensitive data. These risks heighten even more with increased pressure from users expecting AI agents to become more and more powerful, and organizations are also under pressure to develop and release agents to production as fast as possible."

While frameworks can provide a map, the challenge remains: even minimum viable security for agentic AI is often more complex than traditional application security. NCC Group's Brauchler says engineer education and training are foundational. "If AI applications are not designed from the ground up to account for AI-specific risks, these new classes of vulnerabilities are almost impossible to patch after the fact. System architects need to apply AI threat modeling from the earliest stages of the application design process and continuously shift left their security efforts. Organizations should develop strong internal AI security practices or partner with experts to cover blind spots in the agentic development process," he says.

The question isn't whether agentic AI breaches will happen in 2026. The question is whether your organization will be among those experiencing them—and whether you're building the governance frameworks now that could prevent becoming next year's cautionary tale.

HOU.SEC.CON CTA

Latest