Presenters: Lee Maccarone and Titus Gray, Sandia National Laboratories
Transcript
[ 00:00:09 ] This is the final Track 10 seminar of the conference. We have with us Lee Maccarone and Titus Gray, both with Sandia National Laboratories, and they will be talking on enhancing operational technology cybersecurity using modeling and simulation. Thank you. Yeah, so as he said, I'm Titus Gray. A lot of my background comes from IT cyber. I've got a couple of certifications there, but I am now a cybersecurity engineer at Sandia. And my name is Lee Maccarone. I'm also a cybersecurity engineer at Sandia, but my background is actually in mechanical engineering and control systems. I also do some risk analysis work at Sandia. A couple of our colleagues are part of this as well: Andrew Hahn, a cybersecurity engineer at Sandia, whose background is actually in nuclear engineering.
[ 00:01:10 ] Our group does a lot of cybersecurity R&D for advanced reactors and the existing fleet. And our colleague from INL, Scott Bowman, sitting up front, his background is in cybersecurity and cyber threat intelligence. So, the reason we showed this slide is really to highlight how multidisciplinary the team is, and to establish that that's really a basic requirement for conducting the kind of modeling and simulation we'll talk about for cybersecurity analysis of OT systems. In order to have a good model and a good simulation, you need people with experience in all of these backgrounds, so that each one can come in as an expert and talk about how to improve the system. So, the first question that we always get is: why are modeling and simulation needed?
[ 00:01:57 ] There are a lot of tools out there, and people say, 'Hey, well, we can use this instead.' So we'll walk through some of the reasoning that explains why we think it's a necessary process for designing and improving your system. The example we're starting off with is buying a brand-new house: it's a nice castle on a hill, you're working with your people, and you're trying to have an appraiser tell you the value of the house as you're buying it. And there are a couple of different methods that can be used to look at it. For the first one, we can look at the blueprints from when it was built, past sale prices, and a list of previous repairs. And this is a really great starting point.
[ 00:02:36 ] You can get some really good insights. You can get a sense of what shape the house may actually be in. But given the opportunity, a lot of people are going to want a detailed inspection of the current state, someone who can come in and tell you, 'Hey, here are some pieces that you're going to need to fix in the next couple of years, and here's what's a little bit farther down the road.' And so the key point here is that option one is great. But if we have the opportunity to look more in depth and actually kick the tires and see
what the current state is, that is a way we can add on to our current practices. So the same applies when we start moving into OT cyber analysis. And so we've got a list of questions.
[ 00:03:19 ] I'll kind of pop them up here as I talk through them. The first one is that when we're looking at indicators of compromise, a lot of times we only have historical data. And even that historical data can be lacking, either because the company that was breached is protecting the information, since it contains their details and they don't want the embarrassment of publishing, 'Hey, here's exactly how we were breached,' or simply because they didn't have the capability to log it at the time. And then there's the piece about physics data. Physics data is invaluable when we're looking at indicators of compromise because it's not something that an attacker can change. When an attacker is in your system, they can change what IP address they call back to.
[ 00:04:04 ] They can change signatures on files. They can change the names. They can't change how physics works. And so if you can look at your system and say, hey, when an attacker is in here, this is generally how it affects our system, you can start using that as an early warning sign that someone may be in your system. The next question is how we identify issues before they occur. If we're only operating on historical data, we have to wait for someone to be breached before we can start protecting against it. If we have these models and simulations, we can experiment with them, see how they react, see where an attacker may want to go and the effects they can have, and try to address those before the attacker even gets there.
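To make that physics-as-an-indicator idea concrete, here is a minimal sketch, assuming a hypothetical heated-water process: the reported outlet temperature is checked against a simple energy-balance prediction, and a large residual gets flagged for investigation. The process, function names, and tolerance are illustrative placeholders, not a tool described in the talk.

```python
# Illustrative sketch only: using a physics-model prediction as an attacker-independent
# cross-check on reported process values. All names and numbers are hypothetical.

def expected_outlet_temp_c(flow_kg_s: float, inlet_temp_c: float, heat_input_kw: float) -> float:
    """Toy energy balance for a heated water stream (cp of water ~4.18 kJ/kg-C)."""
    cp_kj_per_kg_c = 4.18
    return inlet_temp_c + heat_input_kw / (flow_kg_s * cp_kj_per_kg_c)

def physics_disagreement(reported_temp_c: float, flow_kg_s: float,
                         inlet_temp_c: float, heat_input_kw: float,
                         tolerance_c: float = 5.0) -> bool:
    """True if the reported value disagrees with the physics by more than the tolerance."""
    predicted = expected_outlet_temp_c(flow_kg_s, inlet_temp_c, heat_input_kw)
    return abs(reported_temp_c - predicted) > tolerance_c

# A reported value of 60 C when the energy balance predicts roughly 85 C is worth a look,
# whether it comes from sensor spoofing, HMI manipulation, or a real process upset.
if physics_disagreement(60.0, flow_kg_s=2.0, inlet_temp_c=25.0, heat_input_kw=500.0):
    print("Physics disagreement: possible manipulation or process upset")
```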
[ 00:04:47 ] We can also step through the design process. On the nuclear side, there are a lot of regulations and steps before a plant can be operational, and I know other areas have similar regulations. If you're able to come to the regulator and say, hey, we've done this analysis, we have acted as if we've breached each piece of this, and here's how our design has mitigated those risks, you have much greater evidence that your design is secure. Then we can step into identifying indicators of attack. And the last piece is AI and ML. Everyone's crazy for AI and ML right now, but in order to train those models and design those systems, you need a lot of data.
[ 00:05:37 ] And so, I mean, Lee has a couple of different projects he's worked on, and Scott as well, where we can look at past cases. But again, we need that data to train those models, and once we have it, we need to be able to verify that they're giving us the output that we need. With a modeling and simulation environment, you're able to do whatever you want and pull all of the data out, and then you can integrate that into other tools to make sure that they are working effectively. So, the big summary: modeling and simulation is risk-free and cost-saving. We're going to step through some of the design aspects of different physics simulators and different ways to do network emulation, and then we're actually going to show you some results we have from the nuclear side and where our current work is in oil and gas as well.
[ 00:06:29 ] And so as we move through that, this is the key point we want to emphasize: if you don't have a modeling and simulation environment, you're not going to be able to go in and see what happens when you turn everything off, because you can't have that outage. If you have a modeling and simulation environment, you can do whatever you would like and identify problems before they occur. So our key piece here is taking different models and simulations that exist and trying to bring them together into an environment where we can have cyber and physical systems
combined so that we can mess with the cyber side and see how physics responds. And I will pass it over to Lee as he talks about the physics piece. Thanks.
[ 00:07:13 ] So even though we're working on cyber-physical simulations, we're going to actually start on the physical side of things first. First we're going to talk about different types of physics simulators and what makes a good physics simulator for cybersecurity research. On the vertical axis here we have physics fidelity, and on the horizontal axis we have basically the scope, or how comprehensive the plant simulation is. And some of these simulators are going to be better suited for cybersecurity than others. Starting on the right-hand side with training simulators, these are really designed to give someone who's operating the facility a sense of how to interact with the facility and what courses of action to take in certain scenarios. So they're generally going to be pretty comprehensive in terms of their scope, but they may be missing some of the physics fidelity.
[ 00:08:07 ] There might be lookup tables for certain scenarios baked in rather than the actual governing equations of the physics. On the other side of the spectrum, we have research simulators. Those are often very high fidelity in terms of the physics because they're meant to answer a specific technical question. An example might be a fluid dynamics simulation that's studying how the fluid behaves through a certain section of piping. So the physics are very high fidelity, but the scope is very narrow. And that leads us to this middle ground of engineering simulators, where we have a pretty high degree of physics fidelity and also pretty comprehensive scope. So, we want to leverage these engineering simulators as part of a multifunctional simulator integrating cyber and physical analysis, which is that orange box on the upper right, where we have high fidelity, it's very representative of the real world, and it also covers the entire scope of our analysis.
[ 00:09:16 ] So going into a little bit more detail about an engineering simulator: it primarily models the physics domain, and it must include your control surfaces within the physical system. That doesn't mean it has models of PLCs; it just means that our actuators are modeled within the system. So our pumps, our valves, our motors, all of those items do need to be part of your physics modeling. And the breadth of operating conditions that we cover defines the valid scope of our cyber-physical analysis. We obviously need to cover our normal operating conditions, but if we also cover off-normal scenarios, that increases the breadth of insights that we can get from our cybersecurity analysis, where our adversary is trying to push us to those off-normal scenarios.
[ 00:10:07 ] And lastly, it does need to be capable of transient analysis, and we need to be able to operate it in real time so that it can interact with our cybersecurity emulations. I do have a quick figure on the right. This is actually a model of a pressurized water reactor called Asherah, which was developed by the International Atomic Energy Agency specifically to be used in this kind of multi-simulator environment for cybersecurity R&D. Getting into multifunctional simulators, we're really trying to connect the dots between different tools that might be used in a standalone sense to gain a greater insight. So we have complex interactions and interdependencies that we need to handle between the different components to make sure that they're exchanging data effectively.
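As a rough illustration of what "transient-capable and real-time" means for this kind of coupling, here is a minimal sketch of a fixed-timestep loop that advances a physics model and exchanges values with an emulated control layer each step. The stub functions and the first-order valve model are hypothetical placeholders, not the Asherah model or any Sandia tool.

```python
# Minimal sketch of real-time pacing for a coupled physics/cyber simulation.
# All functions and the toy valve dynamics below are illustrative placeholders.
import time

TIMESTEP_S = 0.1  # the physics step must also be achievable in wall-clock time

def read_setpoints() -> dict:
    # Placeholder: in a multifunctional simulator these would come from emulated PLCs.
    return {"valve_cmd": 0.8}

def step_physics(state: dict, setpoints: dict, dt: float) -> dict:
    # Placeholder first-order lag of valve position toward the commanded value.
    tau_s = 2.0
    pos = state["valve_pos"] + (setpoints["valve_cmd"] - state["valve_pos"]) * dt / tau_s
    return {"valve_pos": pos, "outlet_pressure": 100.0 * pos}

def publish_sensors(state: dict) -> None:
    # Placeholder: in practice these values are written back to the network emulation.
    print(f"valve={state['valve_pos']:.3f}  pressure={state['outlet_pressure']:.1f}")

def run_coupled(duration_s: float) -> None:
    state = {"valve_pos": 1.0, "outlet_pressure": 100.0}
    deadline = time.monotonic()
    elapsed = 0.0
    while elapsed < duration_s:
        state = step_physics(state, read_setpoints(), TIMESTEP_S)
        publish_sensors(state)
        elapsed += TIMESTEP_S
        deadline += TIMESTEP_S
        slack = deadline - time.monotonic()
        if slack > 0:
            time.sleep(slack)  # hold real-time pacing; negative slack means we fell behind

run_coupled(1.0)
```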
[ 00:11:00 ] And one example that we've been working on is actually an integrated physics model and cybersecurity analysis tool, with physical security as part of that as well, along with consequence analysis for nuclear facilities. I'll turn it back over to Titus. Yeah, so we've talked about some of the physics side. We're going to talk about some of the network and host emulation
pieces now. So when you're talking about something on this scale, you typically want to start small, build something, and then start trying to scale it up to however big you can make it. And when you get there, you end up with thousands of virtual machines in your environment, and you have to learn how to handle all of those.
[ 00:11:50 ] But you also need to know all of the data that happens in them. If you're using some of the commercial options, they may have security mechanisms built in that your system doesn't have. And so if you use those to do your research, you may find that certain attacks are being prevented not by your own system, but by protections that places like AWS or Azure have built in that slow them down. You also need to be able to pull all of the data out, and a lot of these commercial options aren't transparent. They're designed, again, for security, and some of that comes through obscurity in the network and how it actually operates. And so as we were evaluating these commercial options, we were struggling to use any of them for our system.
[ 00:12:40 ] So the tool we ended up settling on is called minimega. It is a virtual machine management tool that is designed entirely for research. It doesn't have security built in, it is completely open source, and its big thing is that everything in the environment is capturable. Any information you want, you can pull out. You can pull out hard drives at different points in the scenario that you're running, you can pull out all of the network traffic, and it's relatively easy to use. You can run it on a laptop or a cluster of nodes in a server farm. The only limitation is how much RAM you can give it, and it has features built in that handle some of the RAM management for you.
[ 00:13:24 ] So you may be able to have virtual machines whose total RAM is more than you actually have, but it is able to do some of the maneuvering there to allow you to run all of them. So this is an example tying into the Asherah model that Lee was talking about; this is the network that we have designed for it. Over here on the left we have the controller network. This is kind of what you would see above ground, and it's really tiny, but this is all of your engineering workstations, your PLCs, all of that is modeled here on the left. The management network on the right is how we essentially trick those PLCs into thinking that they're real PLCs and that they're communicating with other PLCs.
[ 00:14:07 ] Because all of these systems need to be operating as if they are normal, everyday devices. If you don't have that, they're not going to function properly, and that's going to ruin the fidelity of your environment. The other piece is PLC emulation. We need to be able to have thousands of PLCs, and it would be way too expensive and way too difficult to have a physical environment with all of those that you're constantly reprogramming and bringing up and down. This piece is actually where we need a lot of support from vendors, because each tool has its own limitations. Some may not be able to connect to external devices, and so that's going to cause problems. Some may need more optimization, and so a lot of our effort right now is identifying the solutions that we can use and then working with the vendors to see if we can get improvements to that software.
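For a feel of what lightweight PLC emulation inside one of those virtual machines can look like, here is a minimal sketch that serves a Modbus/TCP register map so the rest of the emulated network sees something that answers like a device. It assumes the third-party pymodbus library and a made-up register layout; it is not the vendor emulators or the Sandia environment discussed here.

```python
# Minimal emulated-device sketch: a Modbus/TCP server with a hypothetical register map.
# Assumes the open-source pymodbus package; check its docs for your installed version.
from pymodbus.datastore import (ModbusSequentialDataBlock, ModbusServerContext,
                                ModbusSlaveContext)
from pymodbus.server import StartTcpServer

# Hypothetical map: holding register 0 = valve position command (0-100 %).
block = ModbusSequentialDataBlock(0, [0] * 100)
store = ModbusSlaveContext(di=block, co=block, hr=block, ir=block)
context = ModbusServerContext(slaves=store, single=True)

# Serve on a non-privileged port so engineering workstations and HMIs in the
# emulated network can poll it like a real controller.
StartTcpServer(context=context, address=("0.0.0.0", 5020))
```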
[ 00:14:49 ] Finally, we have hardware-in-the-loop. Hardware-in-the-loop covers the cases where maybe you have something that can't be emulated; the manufacturer doesn't have an emulation for it, maybe it's older equipment, something like that. And we need to be able to look at that system and evaluate how it operates in your plant or your facility. That's where hardware-in-the-loop can be great. A lot of manufacturers don't have emulations for their equipment, at least publicly. They may have them internally, we're not sure, but we don't have those.
[ 00:15:36 ] And so we want to be able to test those devices in our environment, try attacks against them, and get into some of these physical attacks, because if you're emulating a PLC, you can't go in and try a physical attack on it; it's an emulated PLC. The photo over on the left is actually one of our environments at Sandia, where we're building out a wall of PLCs that we can integrate with hardware-in-the-loop, to where we can put a PLC into an environment immediately, take it back out, and use it in a new system. So now we can see we have the network, and we have the physics, and the key is then tying them together. When we're doing that, we have to figure out how they all need to communicate.
[ 00:16:20 ] We have our physics, for which internally we've used MATLAB and Simulink, and Flownex; those are the two software packages we've primarily worked with. Then we have the network, which can be any number of protocols: you're going to have Modbus, OPC UA, any protocol that your system needs to communicate. And then you have hardware-in-the-loop, where everything needs to think it's talking to a physical device, and it's super time-sensitive. We need to bridge the gap between all of these in real time so that every system works exactly how it will in your facility. The tool that we've developed, which we call the data broker, actually became entirely open source as of a couple of weeks ago. The key pieces are those connections and staying as close to real time as we can.
[ 00:17:09 ] With this system, we can go from a physics simulator into a PLC and back in the realm of about 10 milliseconds of communication. So you have your simulator, which again could be MATLAB and Simulink or anything like that, connected through shared memory into the data broker so that we can have that quick communication. A lot of the physics simulators don't have built-in ways to communicate externally, or if they do, they're going to be slow or have difficulties there. So we have these shared memory connectors that we've developed that connect into the data broker, and the data broker uses endpoints. The endpoints are there so the computation is distributed, because if the data broker is trying to communicate with a thousand devices at once, that's going to be difficult.
[ 00:17:57 ] And so we've built out these endpoints, and the endpoints are what handle all the communications with external devices. They reach out and communicate over Modbus, OPC UA, any sort of I/O that you need. And the key piece here, too, is that if you have a new protocol you want to communicate over, you don't have to modify the data broker. You go in, the endpoints are in Python, relatively easy, you just handle the communication there and then talk to the data broker; those communicate over UDP and ZMQ. We're trying to make it as easy as possible to use and expand on, because at the labs we're developing tools and then trying to hand them off to industry to use how they need them. So we build a couple of examples, hand it to you, and say, 'Develop what you need. Expand on this and let us know what you need from us.'
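To give a flavor of that endpoint pattern, here is a hypothetical sketch of a Python endpoint that subscribes to tag updates over ZMQ and forwards them to a field device. The data broker's actual message format, socket layout, and API are not shown in the talk, so the address, JSON schema, and write_to_device() stub below are assumptions for illustration only.

```python
# Hypothetical endpoint sketch in the spirit of the data broker's Python endpoints.
# The broker address, subscription scheme, and message schema below are illustrative guesses.
import json
import zmq

BROKER_PUB_ADDR = "tcp://127.0.0.1:5556"  # assumed broker publish socket

def write_to_device(tag: str, value: float) -> None:
    # Placeholder for the protocol-specific piece (Modbus, OPC UA, raw I/O, ...).
    print(f"writing {tag} = {value} to the field device")

def run_endpoint() -> None:
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect(BROKER_PUB_ADDR)
    sub.setsockopt_string(zmq.SUBSCRIBE, "")  # receive every tag update
    while True:
        msg = json.loads(sub.recv_string())   # e.g. {"tag": "SCV_position", "value": 42.0}
        write_to_device(msg["tag"], msg["value"])

if __name__ == "__main__":
    run_endpoint()
```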
[ 00:18:40 ] So now, I'm going to share with you some use cases that we've come across for advanced nuclear reactors. Advanced nuclear reactors are really in a unique spot. The current fleet of nuclear power generally consists of pretty
large facilities, usually using pressurized water as the coolant. The new reactor designs typically have very resilient fuel, leverage different types of coolants, and are generally smaller scale. So, a lot of the cyber and physical security strategies that we've applied at these large facilities aren't really economically viable when we try to scale down to smaller reactors. And so a lot of the work Titus and I do seeks to enable advanced reactor designers to ensure that their facilities can leverage the most current digital technologies in a cost-effective and secure way.
[ 00:19:56 ] So, our group has actually done some work with the Nuclear Regulatory Commission developing regulatory guides to achieve that kind of security by design, security built in from the start rather than a bolt-on solution after we've already done our engineering work. I'm going to give you a brief overview of that design process. It could be a whole other talk in itself, but I'm happy to answer questions later on once I go through this first walkthrough. The approach is called the tiered cybersecurity analysis, and it's basically a three-tier approach that seeks to leverage any security-by-design capabilities within your facility and use them to inform the application of cybersecurity controls. The first tier is called design analysis. In this tier we make the most conservative assumption that we possibly can: we assume that the adversary is already in the system, every digital device has been compromised, and the adversary can do whatever they want.
[ 00:21:04 ] And the only thing limiting the consequences of those cyber attacks is the physics of your facility itself. Why this assumption is important is that it can't be invalidated by any cybersecurity designs you make later on; we've already assumed that the facility has been breached. So if the attack is mitigated with network topology A, it's going to be mitigated if you have network topology B. It doesn't matter. Any accident sequences that we aren't able to mitigate through that first tier of analysis continue on to tier two, denial of access. In denial of access, we look at the key functions that need to be compromised to cause one of those unmitigated accident scenarios, and we design our network architecture to separate those functions into different security zones
and assign passive cybersecurity controls based on how critical those functions are. The primary output of that tier of analysis is your defensive cybersecurity architecture. And then finally, for scenarios where we aren't able to separate those functions, we look at denial-of-task analysis. In this tier, we're really developing very detailed adversary technical sequences and figuring out how we implement active cybersecurity controls in order to detect, delay, and respond to that adversary activity before a physical consequence occurs in the plant. Those are the three tiers. The results that I'm going to show you are mostly focused on tier one of the analysis: assuming that the adversary is already in and can do whatever they want, how do our plant physics mitigate the attacks?
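A simple way to picture the bookkeeping behind that three-tier triage is sketched below: tier-one results sort accident sequences into those that plant physics alone mitigates and those that move on to the architectural and active-control tiers. The sequence names, temperature numbers, and limit are invented placeholders, not results from the analysis described here.

```python
# Illustrative bookkeeping for the tiered triage described above; every value here is
# a made-up placeholder, not an actual analysis result.
from dataclasses import dataclass

@dataclass
class AccidentSequence:
    name: str
    peak_fuel_temp_c: float    # worst case from tier-one "adversary owns everything" runs
    functions_involved: tuple  # control functions that must be compromised

FUEL_TEMP_LIMIT_C = 1600.0     # hypothetical safe-operating limit for illustration

def tier1_mitigated(seq: AccidentSequence) -> bool:
    """Tier one: does plant physics alone keep the consequence below the limit?"""
    return seq.peak_fuel_temp_c < FUEL_TEMP_LIMIT_C

sequences = [
    AccidentSequence("uca_sequence_1", 950.0, ("turbine_control",)),
    AccidentSequence("uca_sequence_2", 1720.0, ("circulator_a", "circulator_b")),
]

to_tier2 = [s for s in sequences if not tier1_mitigated(s)]
print("Mitigated by physics alone:", [s.name for s in sequences if tier1_mitigated(s)])
# Tier two (denial of access) separates the remaining functions into security zones;
# anything that still cannot be separated moves on to tier three (denial of task).
print("Carried to denial of access:", [s.name for s in to_tier2])
```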
[ 00:23:01 ] And so we simulated a variety of unsafe control actions, or UCAs, that could be caused by an adversary for a high-temperature gas-cooled reactor, an HTGR. What makes this reactor design interesting is that it uses TRISO fuel, which is basically these very resilient fuel pellets, and they get cycled through the reactor like a big gumball machine. So you have your fuel bed, the fuel pellets come out, they get measured to see how burnt up they are, and if they can be reused, they're recycled through the reactor. And the reactor is actually cooled with helium, so it's very different from the traditional pressurized water reactor design. We want to look at the specific physics of the system and identify how our fuel resilience might mitigate the effects of an attack.
[ 00:23:59 ] So, we actually worked with X-energy and were able to leverage their existing engineering simulator in this modeling and simulation environment. So no extra work for X-energy: we took the physics modeling that they already had to do anyway and leveraged it as part of this multifunctional simulator. For the first set of UCAs that we looked at, we assumed the adversary had full control over the turbine control valve, and we had UCAs of setting the valve to be fully open or fully closed. We plotted the maximum fuel temperature and average fuel temperature over time as a result of those cyber attacks. And you can see in the bottom chart that we get our average fuel temperature rising at first, and then we get a drop-off due to the inherent physics of our plant.
[ 00:24:55 ] So whether or not we have cybersecurity program A or cybersecurity program B doesn't matter. We know that if this turbine control valve is compromised, the physics of the plant will mitigate those attacks. And we did a separate analysis of compromising the helium coolant circulators. So there are two coolant circulators, circulator A and B. We looked at manipulating circulator A in a couple different ways. And then we looked at manipulating both circulators. And we basically have similar results where we get some changes in our fuel temperature. But the inherent plant physics and non-safety systems mitigate the results of the attack. We actually have about a 700-degree difference between the max safe operating threshold of the fuel and the peak that's achieved from that attack scenario.
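For a sense of how an unsafe control action gets injected into a run like this, here is a minimal sketch where an adversary override simply replaces the controller's valve command during an attack window. The control law, timings, and values are illustrative placeholders, not X-energy's simulator.

```python
# Sketch of injecting a UCA into a simulated transient: the adversary override replaces
# the nominal controller output during an attack window. All values are illustrative.

def controller_command(t_s: float) -> float:
    # Placeholder nominal control law: hold the turbine control valve 60 % open.
    return 0.6

def applied_command(t_s: float, attack_start_s: float, attack_end_s: float) -> float:
    """Valve command actually applied to the physics model, with the UCA forced."""
    if attack_start_s <= t_s <= attack_end_s:
        return 0.0               # UCA: adversary drives the valve fully closed
    return controller_command(t_s)

# Sample the applied command over a 100 s transient with the attack from t = 20 s to 80 s.
for t in range(0, 101, 20):
    print(f"t={t:3d} s  valve command={applied_command(t, 20.0, 80.0):.1f}")
```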
[ 00:25:59 ] And this is all good, but when an adversary is targeting a plant, they're likely not looking at causing just a single unsafe control action. Nuclear power plants tend to have defense in depth built in, meaning you need to overcome multiple layers of defense in order to cause a consequence. We investigate those scenarios by combining UCAs, or unsafe control actions. So we have both our turbine control valve manipulation as well as our helium circulator manipulation, and again, even in the scenario where those UCAs occur simultaneously, our plant physics mitigate those attacks. This is important to X-energy because it tells them that for devices that don't lead to an attack consequence even if they're compromised, maybe we don't need to assign our most stringent, and perhaps most expensive, controls.
[ 00:27:07 ] Maybe those devices can go in a lower security zone than originally planned. And I'll turn it back over to Titus. Yeah, so a lot of the current work we're doing, and the work that may be more relevant to this conference, is that we're developing this same methodology, but for an oil and gas compressor station. We start off with a physics model that we're building out, and moving left to right: we get gas from upstream; it goes through the filter and separation column to remove any contaminants that may be there; it moves through the turbine to be compressed, then the heat exchanger, and then out through the pipeline.
[ 00:27:45 ] One of the key pieces that we want to look at is right here, this surge control valve, because we want to look at the consequences that can happen if it is compromised. In the event that the turbine needs to be restarted, we don't want to have a pressure event, so if an attacker is able to control the surge control valve, we need to make sure that we can protect that valve. The next piece that we're moving into is building out the network, because we're doing that cyber and physical combination, so we need to make sure we have both. And so here, we
can see the different security zones, the blue and the red, and then we also have the engineering workstations and HMIs that connect into each of those controllers.
[ 00:28:33 ] Because attacks may be leveraged from an engineering workstation into the controller, which will then impact the physics. Then we have this next diagram, which is great to look at, where we can see where they connect in. We have each controller, and we tried to have the security zones highlighted here, so we can see where our physics and our network overlap. This is where a lot of our research is beneficial, in that we can look at how these systems interact, because if you just look at the physics or just look at the network, you're going to miss the edge cases where they work together and how affecting one can cause effects in the other. So this is what we're developing out.
[ 00:29:20 ] And this is one of the cases, too, where we are always looking for partners. So if you have a physics model or a network or things like that, you can come to us, and we can help you develop your system and move forward from there, just like we did with X-energy. All right, so our future work. Really one of our biggest priorities is completing verification and validation of the physics model that we've developed. Like Titus mentioned, we're always looking for industry partners, and this is really an area where we could use some more eyes on the physics model. We want input on what's representative of your facility, what improvements we can make, those types of things. We also want to integrate this modeling and simulation environment into a cyber analysis system.
[ 00:30:09 ] So we want to couple this with adversary emulations to be able to go through these detailed cyber attack scenarios. We also want to be able to test any cybersecurity tools that we've developed at the labs in this kind of environment. For example, the Cybersecurity for the Operational Technology Environment program, CyOTE, at INL has developed a suite of tools that range from journaling of human-identified observables to machine detection of cyber attack observables, and we want to be able to use this environment to test those tools in examples that are representative of the real world. Next, we also want to increase the scope of our modeling and simulation to the pipeline scale. So you can think of the model that we have now as one Lego piece, one building block, and we want to be able to take that and expand it to a larger scale.
[ 00:31:15 ] Before we do that, obviously, we need to do the V&V step to make sure the building block is sound, but we want to be able to use this modeling and simulation to drive decision making at a large scale as well. Next, we want to be able to integrate our physics analysis with 3D models for both tabletop exercises and human-in-the-loop experiments. For example, one of the tools that I mentioned for human perception of attack observables would really benefit from having this kind of real-world 3D interaction for testing. And the last item is identifying artificial intelligence and machine learning opportunities and integration points with this suite of tools. We know that we're going to have host data and network data, due to the transparency of minimega, as well as a vast amount of physics data coming from our physics model. And so we need to be able to handle that data in a sensible way and then leverage it for development of future tools. All right. Thank you all for your time and attention. Titus and I have our contact information up there, and we also have Scott in the front row, too, if you'd like to reach out to him for more information. Do we have time for some questions? Does anybody have any questions?
[ 00:32:55 ] Thank you so much. I really appreciate this because what you talk about is just what I'm doing. So first of all, when we are doing this modeling, I'm trying different PLC software to see which one can duplicate, say, 100 PLCs. Now I found an answer, that is great. What I'm interested in is the oil and gas industry, and the compressor is just one piece of that. So my questions are: number one, for this compressor model, you use MATLAB, okay, how many pieces of submodel do you need? For example, do you have all the compressor-related models in one MATLAB file, or do you have one file for each particular unit? And second, how many units do you have in that entire PLC and VM system?
[ 00:33:46 ] And the third thing is, let's say in the future, if we want to collaborate, do you have a standard, let's say, data protocol or interface you need so that you can integrate different pieces together? Yeah, so talking about the scale of the model and things like that, we're starting small and building it up from there. Right now we have it as one Simulink model with some MATLAB to set it up, and eventually it will scale; we're going to need different systems that are going to connect in from there. And then moving on to some of the collaboration, that's where our data broker tool comes in.
[ 00:34:24 ] And since that's open source, we're able to share that with people and say, 'Hey, if you want us to use your model, here are some ways that we can connect in.' So we have the Simulink and currently Flownex connections into the data broker. And if someone comes to us and says, 'Hey, we have another tool, can you investigate if it can connect in?' we can work from there on that collaboration. That way, we can take your physics model, you can use the data broker, drop it right in, connect it to your VMs, and see the results. Does that answer your question?
[ 00:35:01 ] You also mentioned hardware-in-the-loop. For a compressor, especially a gas-industry-scale compressor, it's hard to get it at lab scale for hardware-in-the-loop. How do you deal with that? Yeah, and that's another one. So you start with your physics model and the network, and then you can start getting one piece at a time, doing hardware-in-the-loop and building it out. That way, you don't have to have the entire lab up at once for hardware-in-the-loop; you can do it by component. And then, in the event you have the hardware, you can start putting together a bigger environment. Anyone else? Yes. Hey there. Thanks again, you guys, for the talk.
[ 00:35:47 ] I think a lot of asset owners at this point have realized the benefit of using digital twin environments. Have you all looked at integrating the inputs and outputs of something like that, something like a digital twin at a site or an operator training station, into these kinds of modeling and simulation environments? That's question number one; I'll do two after you answer that one. So one thing that we found from talking with different people throughout industry is that a lot of times they mean different things by digital twin depending on who you're talking to. So I think, going back to that first diagram that we showed of fidelity and scope: as long as the digital twin matches the level of fidelity that we need to be representative, which I would imagine it would by the name, then we'd be able to leverage it as part of this multifunctional simulator.
[ 00:36:45 ] The other question is just the scope of the digital twin. If it's a very comprehensive twin, then perfect; we would love to use something like that. Yeah, at least in my experience. I've been on
both sides of the fence, as an asset owner and as a DCS vendor digital solutions consultant. A lot of the time, our customers have digital twins; they have an exact replica of their current operating system on its own little island that doesn't touch anything else. So I would figure it would give you the level of fidelity that you would probably want. Second question would be: how do you avoid reinforcing the kind of survivorship bias inherent in using historical data in simulation and modeling? That is, analyzing the real-world data of the survivors who actually offer up data on cyber incidents, versus taking all the potential inputs and outputs of a cyber event
[ 00:37:49 ] that maybe don't get reported. So that's kind of one of the goals of the environment: if we build out these environments using digital twins and things like that, we can move on from the survivorship bias, and we can actually run attacks against our own system and see how it responds. That way we're not relying on other organizations reporting, 'Hey, here's how we mitigated this,' or something like that; we can actually see how it affects the system. And I think at that point it also comes down to having a robust experiment design, to make sure that you're covering a comprehensive set of scenarios that are representative of what's happened historically. We don't want to disregard what's happened in the past, but we don't want to limit ourselves to that.
[ 00:38:34 ] So we need to make sure that the set of cyber attacks that we're exposing our system to goes beyond just the historical TTP sequences. So I worked with these guys on this, and your question is really important to me because I come from a threat intelligence background. Whenever I create an intelligence product, I have some kind of general audience; usually it's, 'Hey, Scott, write this report about this thing.' But when you're a very specific operator or a very specific engineer doing a very specific task, if I gave you my typical general-audience product, it's not going to make any sense. So one of the processes we're working on, our workflow for cyber threat intelligence, is changing.
[ 00:39:30 ] We're looking at taking cyber threat intelligence and turning it into threat intelligence bundles, and then turning that into an adversary emulation that's very close to it. Then we run it against the model, and the model helps us define what actually happens within the physics, you know, when does the physics become affected? And then you can make adjustments for those precursor events that don't really have any effect on the physics. You can say the operator may not see anything happen right now, but if at this stage they have compromised your PLC, they may do a firmware update, and so you might not be looking for something in the physics; you may be looking for something like a file that has changed. And so we have other suites of tools where you can refine that threat intelligence to be more specific to the stage of the attack you're worried about.
[ 00:40:30 ] Like, an engineer is probably not going to be like, hey, I really want some PCAPs, you know, and they're not going to be like, I want some Snort rules, I want to go change the firewall. They're not going to do that. But for early in the attack, that may need to be coordinated, and you may need to say, 'Hey, engineer, I'm going to be changing these firewall rules,' because at this stage, after extensive experimentation on the physics model, we know it's not going to show in the physics; it's going to show at this level. So be aware that we aren't doing any kind of maintenance stuff, but it's kind of like we're
[ 00:41:02 ] going to have to make a quick change to our architectural design. And so, I come from a nuclear background too, and we would have safety minutes. One of the things that I always found really useful is when people were actually telling me about something where they got hurt or something like that, and they talked about how they worked back through their lessons learned. I have been promoting the idea of a cyber safety minute kind of thing, where you say, yeah, the physics may not be affected and you may not be at risk, but there may be an announcement that this specific ticket to change these rules, or something like that, could have downstream effects if it's not done.
[ 00:41:40 ] So you have a shared awareness of the potential physical effects that could occur. That way you don't just escalate to the worst-case scenario before it's necessary, but you build up the anticipation of something that could happen. We think that threat intelligence is really missing that full picture of, 'We don't care because it's not messing with the physics right now, but you might need to care, just not yet,' you know what I mean? It's those kinds of precursor things, so we can get people aware of how it could eventually matter. And with the transparency of these tools, you can create your own version for your specific environment. If you have a digital twin and it's separate, you could run the adversary emulation within that environment and you could know exactly what would happen, you know what I mean?
[ 00:42:31 ] And then you can create your detection rules and your architectural change recommendations there, you know what I mean? And it's like, we've already got the plan, here's the playbook, so if we see these things happening, because it's been happening to our buddies, then we know. If you've heard of Volt Typhoon, everybody says Volt Typhoon, what do you do? What's going to be different for your environment, because you don't have the same living-off-the-land tools that everybody else does? So those are the scenarios and the use cases that we're seeing. And I really appreciate you saying that people actually do have the digital twin; I don't know if that's everywhere. But kind of my thought, and why we want partners so badly, is because I'd love to have some of our team setting up some of the open source tools and some of the stuff that we're pre-commercial with, and have that kind of risk-free zone where we know exactly what
[ 00:43:21 ] you have, and it's not so intrusive for us to work with you that we're going to disrupt you. We're actually going to be like, hey, this is literally how we would do this if we were going to do an assessment of your architecture or physics. Maybe you can consider redesigning things over time so that it operates in a better way, kind of like they discovered with the reactor, right? The physics protects it, so you can start rewriting your playbooks with engineers and your CTI analysts, who don't always know that, and people will get a shared understanding. And your board will appreciate that you went to that level of planning. And so that's why, whenever you said digital twins are everywhere, dude, I was like, I would...
[ 00:44:06 ] So that picture that had the blue lines everywhere, that's all the pipes in Texas, by the way. And so my kind of vision, and why I love working with these guys, is: what if we wanted to recreate the entire state of Texas's infrastructure? Like, we just took all the compressor stations, used them as Legos, built out Texas's compressor stations, and then ran this massive adversary emulation or different types of scenarios. What would that look like? When do we get to a state of emergency? Because people don't know; it was only when Colonial happened that we were like, oh, this could be a real thing.
[ 00:44:47 ] So, we would love to have that kind of emergency planning type of tabletop, where it's a joint, you know, industry public-private partnership where you get to keep your data, right? You're not exchanging your data, but we use this process to make sure that you can take some kind of CISA advisory and quickly turn it into something that applies to your specific environment. It's a lot to say, but I'm glad that you brought up what you said. So thank you. Any other questions? Yes. I cannot reach across two different lanes. Hello. A very cool presentation, I liked it a lot. One question I did have is, when we're talking about the physics base, does that take into account effects over time? So the set point might say that 100%, that's too much, right?
[ 00:45:50 ] But if I run it at 99 for a long period of time, can that degrade the machinery as well? Does it take into account running at that level over time? It really depends on what you've included in your specific physics model. I know we do not have degradation effects over time built into the model we currently have. But if that's something that your company has built into your physics simulators, then that would just increase the scope of the analysis that we could do.