When AI Becomes the Insider Threat on the Plant Floor

Agentic AI will increasingly cut out humans in many management and operational decisions. These systems are designed to plan, decide, and act — continuously — without a human in every loop. Here's what that means for OT security.

Agentic AI has already stepped onto the plant floor, and most OT security teams are not ready for what that means for either safety or uptime.

In his OT.SEC.CON presentation, Ian Bramson, VP of global industrial cybersecurity at critical infrastructure engineering firm Black & Veatch, observed that the industry has spent the last few years discussing agentic AI as a tool for defense or as a force multiplier for adversaries, while devoting very little discussion to autonomous software agents as an “insider threat.”

“Agentic AI is one of the more existential threats to safety and uptime,” Bramson said. “It has the power to reshape how we operate, what we're responsible for, how we deliver, and even how we think about cybersecurity.”

Agentic AIs earn that “insider threat” label, Bramson argued, because they essentially create a new class of virtual employees, complete with credentials, permissions, and the power to take actions in control environments that humans may not fully understand or monitor. And these agents don’t need malice to cause harm; they only need a bad prompt, model drift, or poisoned training data.

Fortunately, there is time for organizations to respond. According to the 2024 SANS ICS/OT survey of 330 industrial respondents, AI in OT/ICS organizations is “nascent,” with deployments primarily limited to pilot projects and lab environments.

Humans are stepping out of the decision loop

Over time, agentic AI will increasingly cut humans out of many management and operational decisions. These systems are designed to plan, decide, and act continuously, without a human in every loop. You don’t just ask for a report; you give the agent an objective, such as “optimize throughput” or “reduce energy costs,” and authorize it to execute. That could mean automatically tuning setpoints, adjusting flows, or shifting loads in real time across distributed assets. This isn’t yet occurring in any measurable way in OT/ICS environments, but it will over time.
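
To make the pattern concrete, here is a minimal sketch of what such an objective-driven loop could look like. Everything in it is hypothetical: read_telemetry, plan_action, and write_setpoint stand in for whatever supervisory-layer APIs a real deployment would expose, and the loop is bounded only for demonstration.

    # Hypothetical sketch of an objective-driven agent loop; none of these
    # functions correspond to a real product API. The point is the pattern:
    # plan -> decide -> act, repeating, with no human approval step.
    import time

    OBJECTIVE = "maximize throughput"  # the only instruction a human provides

    def read_telemetry() -> dict:
        """Stand-in for polling sensors or historians at the supervisory layer."""
        return {"throughput": 92.0, "pump_speed_pct": 70.0}

    def plan_action(objective: str, state: dict) -> dict:
        """Stand-in for the agent's planner (an LLM, an optimizer, etc.).
        It proposes a setpoint change it believes advances the objective."""
        return {"pump_speed_pct": state["pump_speed_pct"] + 5.0}

    def write_setpoint(action: dict) -> None:
        """Stand-in for pushing a new setpoint to SCADA/DCS -- the agent
        executes directly; no operator reviews the change."""
        print(f"applying {action}")

    for _ in range(3):  # a real agent would run indefinitely
        state = read_telemetry()
        action = plan_action(OBJECTIVE, state)
        write_setpoint(action)  # the human is not in this loop
        time.sleep(1)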

However, Bramson stresses that once organizations do remove the human from the decision loop, OT security owns the risk whether it likes it or not. How a company chooses to deploy agentic AI in operations will dramatically change its threat landscape. That’s a serious shift for programs that still struggle with the basics. “What we're talking about is when you start taking the human out of the loop, when you have an agent, agentic [AI] doing those functions, that's the big scary place,” he said.

Scary indeed, especially when one considers the 2024 SANS ICS/OT survey, which found that only about half of organizations have an ICS-specific incident response plan and that a sizable minority have no ICS network monitoring to speak of. Against that backdrop, the idea of dropping autonomous decision-makers into the supervisory layer, which Bramson said he expects to be the main home for such agents, should make most defenders nervous. The best practice today, he stressed, is to keep true autonomy away from the devices that touch the physical process and the controllers that run them, where a bad decision maps directly to physical consequence. But even then, a misbehaving agent can still create cascading failures at the physical control and process levels when its outputs feed other agents or systems up and down the stack.

Bramson shared the “paperclip” thought experiment as a concrete illustration of why agentic AI is so dangerous in OT. The problem isn’t evil or self‑aware machines; it’s systems that relentlessly optimize a narrow goal without human common sense or contextual judgment. In the classic thought experiment by philosopher Nick Bostrom, a perfectly obedient AI is tasked with maximizing paperclip production and dutifully consumes all available resources to achieve that goal.

In an OT environment, that can look like an agent overdriving equipment, cutting corners on process safety, or depriving one part of the system to super‑optimize another—pursuing its target metric at the expense of safety and uptime.
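
A toy example makes the mechanism plain. In the sketch below, with made-up numbers that are not from Bramson’s talk, a greedy optimizer whose only score is throughput raises a pump speed indefinitely, because nothing in its objective mentions a mechanical or safety limit:

    # Hypothetical illustration of the "paperclip" failure in OT terms: an
    # optimizer rewarded on throughput alone keeps raising pump speed,
    # because its score function never mentions safety.

    def throughput(pump_speed_pct: float) -> float:
        """Toy model: more speed, more throughput (monotonically)."""
        return pump_speed_pct * 1.2

    speed = 70.0
    for _ in range(20):
        candidate = speed + 5.0
        # Greedy step: accept any change that improves the target metric.
        if throughput(candidate) > throughput(speed):
            speed = candidate

    print(speed)  # 170.0 -- far past any plausible mechanical or safety limit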

That type of alignment challenge comes in addition to the familiar AI failure modes, such as model drift, training-set poisoning, and hallucinations. Drift in a GenAI recommendation engine is a correctable challenge; drift in a control‑adjacent agent risks quietly nudging a plant into unsafe operating regimes. Training data polluted by rare‑but‑bad states can trick an agent into chasing outliers, while hallucinated synthetic telemetry feeding back into models can compound errors over time. 
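
One plausible countermeasure, offered here as a sketch rather than anyone’s shipping product, is to baseline an agent’s command stream statistically and alarm when it wanders. The values and threshold below are illustrative:

    # Minimal drift check on an agent's setpoint commands: compare a recent
    # window against a trusted baseline and alarm when the mean shifts by
    # more than a few baseline standard deviations.
    from statistics import mean, stdev

    baseline = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2]  # known-good period
    recent = [71.0, 72.4, 73.1, 74.0, 74.8, 75.5]    # last hour of commands

    mu, sigma = mean(baseline), stdev(baseline)
    if abs(mean(recent) - mu) > 3 * sigma:
        print("ALERT: agent output drifting from baseline; route to human review")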

There are also governance failures OT/ICS organizations already know too well. “Shadow agents,” unauthorized agents deployed outside official channels, can be stood up with stolen credentials, and “agent chains” can let one bad output ripple through an entire ecosystem. For attackers, this creates a substantial opportunity: instead of laboriously pivoting into OT, they can compromise identity, spin up or hijack an agent, and let the environment’s own automation deliver the damage.

That warning lands on OT/ICS managers already under pressure. In the 2025 SANS State of ICS/OT Security survey, 22% of organizations reported at least one ICS/OT cyber incident in the past year, and 40% of those incidents caused operational disruption. While detection is improving, with nearly half of breaches identified within 24 hours, remediation is still measured in days to months, with 19% of incidents taking more than a month to fully resolve. Imagine those same environments populated with learning agents embedded across supervisory layers, quietly “optimizing” processes until something breaks. 

Safety is a hard constraint, not a goal

How do OT/ICS organizations best ensure safety and uptime as agents are deployed? Bramson advises remembering that safety is always a hard constraint, not a goal to be optimized toward, and that agents must operate inside this deterministic, non‑negotiable bound. Also, organizations should enforce “least agency” alongside least privilege so agents can only do the narrow set of things they were designed to do. 
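
What might those two principles look like in code? One illustrative sketch, with invented limits and action names, is a deterministic guard that sits between the agent and the control system and that the agent cannot negotiate with:

    # Sketch of "safety as a hard constraint" plus "least agency": the agent
    # may only request actions on an explicit allowlist, and every requested
    # value is checked against fixed engineering limits before anything
    # executes. Limits and names are illustrative, not from Bramson's talk.

    SAFETY_LIMITS = {"pump_speed_pct": (0.0, 85.0), "valve_open_pct": (10.0, 90.0)}
    ALLOWED_ACTIONS = set(SAFETY_LIMITS)  # least agency: nothing else is actionable

    class SafetyViolation(Exception):
        pass

    def guard(action: str, value: float) -> float:
        if action not in ALLOWED_ACTIONS:
            raise SafetyViolation(f"agent has no authority over {action!r}")
        lo, hi = SAFETY_LIMITS[action]
        if not lo <= value <= hi:
            # Hard constraint: reject outright rather than clamp, so the
            # violation is visible instead of silently "optimized" away.
            raise SafetyViolation(f"{action}={value} outside [{lo}, {hi}]")
        return value

    guard("pump_speed_pct", 80.0)    # passes
    # guard("pump_speed_pct", 95.0)  # raises SafetyViolation, never reaches a PLC

Rejecting rather than clamping is the deliberate choice in this sketch: a clamped value lets the agent keep probing the boundary invisibly, while a hard rejection surfaces the misalignment to humans.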

Additionally, actions taken by agentic systems must be reviewable and reversible, with sufficient logging and version control to trace what happened, and the ability to roll back to a known-safe state. Keep training and production strictly separate and keep agents away from managing physical processes. 
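
A rough sketch of what reviewable, reversible actions could mean in practice follows; the log structure is invented for illustration, and a real deployment would want a tamper-evident audit store rather than an in-memory list:

    # Every agent action is logged with the prior value, so operators can
    # audit the sequence and roll the system back to a known-safe state.
    import json, time

    log: list[dict] = []
    state = {"pump_speed_pct": 70.0}  # known-safe starting state

    def apply(agent_id: str, key: str, new_value: float) -> None:
        log.append({"ts": time.time(), "agent": agent_id, "key": key,
                    "old": state[key], "new": new_value})
        state[key] = new_value

    def rollback_to(index: int) -> None:
        """Undo logged actions, newest first, back to the chosen point."""
        while len(log) > index:
            entry = log.pop()
            state[entry["key"]] = entry["old"]

    apply("agent-7", "pump_speed_pct", 75.0)
    apply("agent-7", "pump_speed_pct", 82.0)
    rollback_to(0)            # restore the known-safe starting state
    print(json.dumps(state))  # {"pump_speed_pct": 70.0}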

Today, there is a narrow window where OT security leaders can shape how agentic AI lands in their organizations. They can define ownership, set red lines around autonomy, implement hard safety constraints, mandate monitoring and kill switches, and build platforms to register, monitor, and govern agents before the business simply “jumps the chasm” in search of agentic efficiency. Or they can wait for their first agent-induced near miss, or worse, and then try to meet the challenge retroactively.
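
That registration-and-governance platform can start small. As a final, purely hypothetical sketch: register every agent before it can act, scope what it may touch, and give defenders one flag to flip when something goes wrong:

    # Minimal agent registry with a kill switch. Names and structure are
    # invented; the point is that authorization is checked centrally and
    # can be revoked instantly.

    registry: dict[str, dict] = {}

    def register(agent_id: str, owner: str, scope: list[str]) -> None:
        registry[agent_id] = {"owner": owner, "scope": scope, "enabled": True}

    def kill(agent_id: str) -> None:
        registry[agent_id]["enabled"] = False  # one flag every gateway must check

    def authorized(agent_id: str, action: str) -> bool:
        entry = registry.get(agent_id)
        return bool(entry and entry["enabled"] and action in entry["scope"])

    register("agent-7", owner="ops-team", scope=["pump_speed_pct"])
    print(authorized("agent-7", "pump_speed_pct"))  # True
    kill("agent-7")
    print(authorized("agent-7", "pump_speed_pct"))  # False: revoked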
