
Moderator:

Panelists:

Transcript:

We're going to start at the end here with Trey, the CISO for Bugcrowd, and do quick intros kind of coming down the line. Trey. Howdy, folks. Trey Ford. I'm the CISO of the Americas for Bugcrowd. Came out of private equity. Used to help produce the Black Hat conference. Been kind of all over the industry on the vendor and the delivery side.


Excited to tear into the machines. I, for one, welcome our new AI overlords. All right. Great. You know, allegedly saying please and thank you to the AI burns millions of dollars a year of overhead, just trying to be polite to the overlords, but. Okay. Clint, would you like to introduce yourself? No. Hi. I'm Clint.


I'm an AI addict. I'm actually the founder of The Kitchen. I'm also the director of cyber innovation at MorganFranklin Cyber. President of the AI Overlords Welcome Committee. Wrote a couple books. 30 years in cyber. 12 years in AI development. Thank you. Ron Chichester. Ron Chichester, I'm the token attorney on the panel. How I got into AI was genetic algorithms, back in the 80s.


I learned genetic algorithms from the guy who invented genetic algorithms, John Holland. He was a professor at the University of Michigan, where I did my undergrad and grad school. I was fortunate that I was able to take genetic algorithms and employ them in aircraft design when I was an aerospace engineer at General Dynamics, just over in Fort Worth.


That's how I ended up in Texas. And then I built my first neural network in Fortran, back in the 1980s. We were doing flutter suppression on the F-16. So that was my intro to AI. Fortunately, I stuck with it after I went to law school back in the 1980s and got my law degree in 1991.


I've been an attorney ever since, doing mostly intellectual property. Because I had a little experience with cybersecurity back in the defense industry, I got into that too, especially with trade secret misappropriation. So with the engineering background and the technical background, I've always been working with technology companies, and that's been my bailiwick the whole time.


My last job, right before I retired, I was combining law and engineering and computer science. So I had the dream job. And I still do AI, automating the legal practice right now. If there's one industry that is more risk averse than cyber, it would be the law.


Yes. Yeah. Thank you. And then Justin Hutchens. Yeah. Justin Hutchens, I also go by Hutch. I am an innovation principal at Trace3, so focused largely in my day-to-day job on emerging technology. I work closely with various different VCs as well as startups, and I'm also the author of the book The Language of Deception: Weaponizing Next Generation AI, which, of course, focuses on the adversarial misuse capabilities of emerging AI technology.


Brilliant. A lot of diverse perspectives here, and we're going to tear it open. So the first question is, what do you envision for the future of AI? I mean, we're being a little broad here, obviously. But particularly for business. And how do you see it evolving? So, Trey, I want to start with you, and then we're going to jump around a little bit.


Wow. Start with the cold call. So at Bugcrowd, we often talk about how we view AI as a tool, as a threat, and as a target. And so, regardless of how you're interacting with AI: developing in-house, making use of technology providers, implementing it via a feature in your technology stack, or, on the research side, weaponizing agentic or other patterns.


I think we're going to see this touch us in a lot of ways. What I'm most excited about is, we spent a lot of time on human experience, the UI, the UX, and how we interact with applications and technology today. I don't think we've spent nearly enough time thinking about how we're exposing APIs and our applications to agentic partners, and so how we're going to allow the AI to work laterally across our tech stack.


And that sort of thing. We were referencing Star Trek in the first session. You know, you talk to the computer and it's talking to the helm, it's talking to the air conditioning, it's talking to the engines, talking to the weapons. One interaction point. And I think we're still suffering from that RSA experience: you know, you walk the RSA floor, which is coming up shortly.


You're asking the question: is this a feature? Is this a product, or is this a platform? And so when you think about the continuum of what these capabilities look like and how we're partnering with and leveraging them, I think it's going to evolve rapidly. Yeah. Fair point. Clint, what do you think? Especially on the OT side, how do you see AI intersecting?


I'll piggyback on what Trey said a bit, but first of all, everyone needs to expect that there are going to be pains before the gains. Like all technologies that get integrated, there's going to be resistance, which is futile. And there are going to be pains and discomfort while we're figuring out if and how to integrate AI technology, especially in OT, because OT is already, you know, still 10 to 20 years behind regular cyber, and, you know, it's safety above all, and we don't want to integrate things that can stop or disrupt the process.


But once that reluctance is over, I think it's inevitable that we're going to see AI. Well, let's kind of back up a little bit. Right. So AI in terms of machine learning and traditional AI has already been integrated into OT in many places, and especially in cybersecurity, you know, especially when Cylance kind of entered the field using AI in their algorithms for categorization and analysis and behavioral analysis.


So it's already here. If we're talking about generative AI, where we're using it to analyze and make sense of context, I think it's a little different. But I'll just end my little spiel there with: it's going to be uncomfortable, there's going to be pain, but inevitably we're going to see it used once we figure it out.


But it is helpful. So I'm curious, Ron, where does the law fall in a lot of the conversations around AI? Are you early in the conversation or are you playing catch-up, especially in companies today? Well, I used to be an adjunct professor. I taught law for 14 years at three different law schools. And what I always told my students was, law lags technology.


And so you have to first be able to describe it before you can legislate it, to turn it into a statute, to turn it into a law. So there's always a lag. Now, having said that, you know, students always ask me, you know, AI is all different, it's really new, isn't it? And what I tell them is, it's not exceptional just because the word artificial is in front of it.


That's an adjective. It's applying to something that already existed. The law is ahead on intelligence, on fraud, anything, you know, business, all of that. That's all well settled for the most part. And AI is just kind of a different flavor, a different aspect of it. Now, fundamentally, what's law? Right. The one-sentence explanation of what law is: it's the regulation and/or the effect of actions between people in a jurisdiction.


And AI is all about making decisions. Decisions are the predicates to action. And therefore, because they affect actions, they affect the law. So AI is absolutely, very much involved with this. Now, what I say to CEOs, actually one CEO one time told me the hardest thing he did was trying not to make mistakes. And what's going to happen is that every decision that's made in the company is going to be subject to automation through AI.


Right, every decision. Right. And the thing that you learn, at one time I was a CISA, a Certified Information Systems Auditor, one of the things you learn is that the owners of the company trust no one. And once the AI gets trained and they can trust it, they're going to use it.


Because AI is all about making decisions, and most of what companies do and the controls within a company are all about making decisions, it's going to be fundamentally embedded. It might not have an outside user experience, but within the company it's going to be paramount. And by the way, the person that should worry most is the CEO.


They're the ones that actually cost the most money and the ones that the AI is going to want to replace most quickly on a cost-benefit analysis, and then we'll work our way down. That's sobering. As a former CEO, that's very sobering. Do you see AI as challenging the law and actually rewriting law as we roll it out, especially around intellectual property rights?


No, I think, well, it's going to have an effect. First of all, intellectual property in the United States, it's under the Constitution that a human is the author and a human is the inventor. That's been established law since the 1800s. So that's not going anywhere. It's just, who's going to own it? Ultimately, that's the key.


But also, who's liable? Right. And a lot of corporate law is about shifting responsibility away from the company for something that the company did and allowing it to get out of liability. So there are obviously liability aspects. And the courts are very adept at figuring out, okay, who made the decision, and what are the ramifications from that decision.


Right. So if the AI made the bad decision, boom, whoever owns it or whoever's in charge of it, you're going to get it. So that's not going to change. Replacing the human with AI does not change the law one iota. Now look, a lot of people are out there trying to say, hey, it does.


In fact, they try to make it so that we can just shove it all off: oh no, the AI made the mistake, it's okay, you know, it's benign. No, it's not going to be that way. It's not going to change. Someone is still responsible. I like that. Yeah. Ultimately, should AI replace lawyers? Well, actually, go look at China.


They're way ahead on things like this. They actually have AI adjudication. They have AI judges. If you look at electronic discovery, 90-plus percent of the information is in digital form, right? And you just throw it through a neural network that goes through and, you know, here's the input and here's the output, crunch.


And they're doing that now. In fact, I think probably, eventually, and this is something I've been harping on for more than a decade, when I try to tell this to a bunch of trial lawyers, they go nuts, but there's going to be a whole different form of arbitration. Basically an AI form of arbitration.


And the AI ingests, you know, a certain amount of data and cranks out an output in, you know, minutes or seconds versus weeks or months. I mean, the last case I chaired, in Nebraska, was a breach of contract case, and it took a year and a half. Instead, you're going to get it in minutes. Somewhere between wonderful and terrifying.


Yeah. Hutch, what do you think? So I guess I should caveat my response by pointing out that when I talk about the future of AI in business, I'm generally looking a few years down the road. I think that anybody who tells you they know what this is going to look like more than a decade out is either lying to you or delusional, because things are moving too fast.


But what I do think the next few years hold in AI is a rapid transition from conversational artificial intelligence, which is largely what we're relying on right now, to agentic AI being deployed within the environment. We're rapidly seeing, in the startup community and in the investment community, investments into MCP, or Model Context Protocol, which is a relatively new protocol created by Anthropic that gives AI systems, and specifically clients, access to various different tooling in a way that is scalable and consistent.
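[Editor's note: as a rough illustration, not something shown on the panel, here is a minimal MCP tool server sketch using the FastMCP helper from Anthropic's Python SDK; the server name and tool are hypothetical.]

```python
# Minimal MCP tool server sketch (pip install mcp).
# Server name and tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("asset-inventory")

@mcp.tool()
def lookup_host(hostname: str) -> str:
    """Look up what we know about a host (stubbed for illustration)."""
    # A real server would query an inventory system here.
    return f"{hostname}: no inventory record (stub)"

if __name__ == "__main__":
    # Serves the tool over stdio; any MCP-capable client (an AI agent)
    # can discover and call it in a consistent, scalable way.
    mcp.run()
```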


We also just saw the release of the A2A protocol for agents to talk to each other. And I do think that as we get to a point where we can more reliably trust the outputs and the performance of AI systems, we're going to go beyond just having conversations with them, and they're actually going to be doing things and taking on roles and responsibilities within our organizations.


And I think that itself is going to drive another interesting trend, which is moving away from these general large language models and moving towards custom, purpose-tailored, smaller language models. There's been a lot of research showing that you can use these large general models for purposes in your organization, and they're going to fail anyway in the area of 3 to 5% of the time.


Well, that's a huge problem for operational use cases. And it turns out that with these massive models, GPT-4 was thought to be in the trillions of parameters, you can take something like a 4-billion-parameter Llama instruct model, and if you fine-tune it for a particular task, that 4-billion-parameter model will outperform even the largest general models.
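[Editor's note: a hedged sketch of what that task-specific fine-tuning can look like, using LoRA via Hugging Face's peft library; the model ID and hyperparameters are illustrative assumptions, not the panel's recipe.]

```python
# Sketch: parameter-efficient (LoRA) fine-tuning of a small instruct model.
# Model ID and hyperparameters are placeholders, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-3B-Instruct"  # placeholder small open-weight model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all base weights,
# which is what makes tuning a small model for one task cheap.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights
# ...then train on the task-specific dataset with a standard Trainer loop.
```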


So there will still be a place for scaling, because those smaller models are largely distilled from the frontier models. There will still be a huge market for these massive frontier large language models. But I think as we move into agentic AI in the enterprise, we're going to be moving towards those smaller, purpose-fine-tuned models that perform particular, specific tasks.


Well, and that's interesting, because as you shift to smaller models, you're also probably going to shift to more on-prem, local compute as opposed to someone running it for you in the cloud. Yeah, I agree with that. And I think that actually is a driving factor in itself, because many organizations don't want their data going back and forth between them and OpenAI or Microsoft or Anthropic, and so being able to keep it within their own boundaries and keep full control of their data is actually a positive point for a lot of organizations.


So one more risk before we move on to the next one. I'm curious about this, especially in the context of moving AI into your data center and repatriating the data center, part of what Scott talked about a little while ago. Where do you see IT organizations and their ability to actually repatriate the models? I mean, this is a whole new skill set that most organizations aren't ready for.


Yeah, I think this is one of those areas we talk about with reskilling. I do think we are going to see some level of, maybe not job displacement, but at least skill displacement, where we are going to have to be more adaptive as individuals and as organizations. And I think this is one of the areas where we do need to be reskilling and upskilling our teams to address how to run those small-scale artificial neural networks and those GPU processing systems within our own data centers, in order to enable our organizations for the future.


Definitely a different skill set. All right, let's jump to the next topic here, kind of riffing on the future and thinking about where we're going with this. Do you think AI and quantum computing timelines are converging, especially around threat vectors? Right. This is kind of a big question, right? Let's start with Trey, though.


I'm curious, especially from, you know, the vendor side of it, where you're addressing threats in the marketplace and thinking about the future. Where do you see this going? That's two for two. I'm gonna get all the cold calls. So, AI and quantum. Look, with AI again, when I think about the tool-target-threat concept, there's going to be a lot of use, both from the research and the prep side.


There's a lot of work in fingerprinting and targeting and post-exploitation, in identifying movement patterns, both offensively and defensively. And I think AI is going to support and move a lot of that forward. I think of Jimmy Neutron: Boy Genius. Really smart, really fast, lacking wisdom. We're going to be leaning on AI to do a lot of things to inform us, to make informed trade-offs: data, information, knowledge, wisdom.


We're going to apply wisdom back into the lower layers. So from an attack standpoint, whether we're talking about crowdsourced delivery or in-house resources, maximizing impact from our employees, I think that's fairly straightforward. When I think about quantum computing, we're talking about defensibility, the CIA triad, but also confidentiality and possession, integrity, authenticity. The idea of being able to break or decompose what was a custodial protective mechanism, that's going to change things. The cost of computing is going to be driven down.


I think the nation-state concept, depending on who's in your attack model, or what your threat model is, or what value you're providing an attacker, is going to be radically different. You think about the cost of computing being so fast and so cheap. Has Bugcrowd started tackling kind of the quantum proofing as far as, you know, your transport mechanisms, your encryption keys, things like that?


Like, where are you all in the conversation? So we're on the FedRAMP journey today, moving towards FedRAMP Moderate, and there are specific controls and all that stuff, you know, FIPS 140 and that stuff. In terms of quantum, there are a number of researchers and customers playing in that space. But in terms of our core infrastructure, right now we're aligning towards the federal market space. Fair.


Clint, what do you think, especially around OT? I mean, that's a weird area. I don't think there's a specific answer for OT here, and here's why. I think the reason this question exists, you know, what do we think of AI and quantum computing and whether the timelines are converging, the reason that question is asked is because there's both excitement and fear around what you can produce when you combine the speed of quantum computing with large language models, because now compute power is no longer the limitation.


However, I don't think they are converging on a timeline as soon as people would think, or even hope or fear. And that's because AI is evolving faster, much faster, than quantum computing is. Just so you know how big of a nerd I am, I read quantum mechanics books for fun. And so I've given this a lot of thought. Where quantum computing is today is the equivalent of room-sized WOPR computers.


Right? That's the equivalent of where we are with quantum computing today. AI technology is compute, heat, and energy in terms of graphics cards and things like that, whereas with quantum computing you have so many other physical factors involved, like keeping everything cold enough, how many qubit chips you can put in an area, and all that good stuff. So at the end of the day, yes, they are converging, but not as soon as I think will make a difference to anybody here.


We're a ways off from having quantum computing and AI converging to where it gets really scary or really exciting. I'm looking for the AI to drive the quantum. I think that would just solve a lot of problems for us. So, Justin, I want to give you a word. And then, Ron, I want you to have the last word here from the law perspective.


Yeah. So my answer may not be extremely helpful, I guess. When you ask the question of whether AI and quantum computing timelines are converging, my answer would be not yet. And I think that then begs the question of when they will converge, and my answer to that is I don't know. Now, there are a lot of people who are very close to this technology that I've had conversations with, and even there you get no real assurance of what the timeline looks like.


You will hear estimates anywhere from this technology being available, stable, and scalable within the next couple of years, all the way up to speculation as to whether we're ever able to really harness the full power of quantum computing at scale. But what I do think is an interesting discussion here is less about when the timelines converge and more about the implications of what happens when they do. And why that matters is, I think it's fascinating to take a step back here, because all computing technology throughout history has been built on binary.


It is built on semiconductors that have two possible states: yes or no, 1 or 0, on or off. And then we have these mad physicists like Schrödinger and others who have literally pierced the veil of reality and determined that at the most fundamental level of matter, the atomic and subatomic level, we have particles that exist in superposition and can exist in a theoretically infinite number of states simultaneously.
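[Editor's note: a small numerical illustration of superposition, added for this write-up; it shows why describing n qubits classically takes 2^n amplitudes, the scaling discussed below.]

```python
import numpy as np

# One qubit in superposition: a|0> + b|1>, with |a|^2 + |b|^2 = 1.
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of 0 and 1
print(np.abs(plus) ** 2)                  # [0.5 0.5] -> 50/50 measurement odds

# Describing n qubits classically takes 2**n complex amplitudes.
for n in (8, 32, 64):
    print(f"{n} qubits -> 2**{n} = {2**n:,} amplitudes")
```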


And so I thought it was fascinating. Scott mentioned the power of exponential scaling in his keynote this morning, and he talked about IP addresses: two to the power of n. We went to two to the power of 32 for 32-bit addresses, and we're running out of IP addresses. If it wasn't for network address translation, we would have already run out of them.


And then we went to IPv6, with two to the power of 128, and we now have more IP addresses available than there are grains of sand on Earth. That shows you how significant a difference it is just going from two to the power of 32 to two to the power of 128. Well, what happens when that base number is no longer two?


It's no longer 1 or 0; that base number is a theoretically infinite number at the very base unit level. And then you add the exponential scaling on top of that and ask the question, what does that look like? Well, the answer is it's impossible to even explain what that looks like. Wrapping your brain around the true potential of scaling within quantum computing, it's unfathomable.
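[Editor's note: the 2^32 versus 2^128 comparison above, worked out in a few lines for illustration.]

```python
ipv4 = 2**32   # 32-bit address space
ipv6 = 2**128  # 128-bit address space

print(f"IPv4 addresses: {ipv4:,}")            # 4,294,967,296
print(f"IPv6 addresses: {ipv6:.3e}")          # ~3.403e+38
print(f"IPv6 / IPv4:    {ipv6 // ipv4:.3e}")  # ~7.923e+28 times larger
# Common estimates put Earth's grains of sand around 1e18 to 1e19,
# so the IPv6 space dwarfs them by roughly twenty orders of magnitude.
```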


And so it's fascinating to take a step back and understand what this technology is. I think that also contributes to some of the speculation as to whether we can ever really, truly harness this power, and if we do, what that means for something like artificial intelligence, which is already moving fast enough that it is straining the elasticity of our society and our ability to adapt as fast as the transformation is happening.


So, yeah, I think that we may see convergence at some point. If we do, I think that will result in transformation far beyond what we're already seeing with artificial intelligence. I think most people have a tendency to envision the future within the context of the current paradigm, and this is a complete paradigm shift. And so it's almost hard to get your arms around what it's actually going to look like.


But I think, you know, to Trey's point, right now a lot of it is just hardening systems and preparing, especially for the post-quantum era around encryption and cryptography. But it's going to be a brave new world. So, Ron, can you bring us home on this one, thinking about the intersect with the law? And then I'm going to double-click on you for the next one too. As far as the law is concerned, it sees this coming, and to the extent that it prevents or precludes or restricts the government's ability to surveil people and/or companies, the government will try to make up for that loss somehow.


I'm not sure how, but it's going to try. Governments all over the world have gotten used to being able to have a substantial amount of surveillance on people, and they love it. And companies want to have the same thing. You know, they're kind of mini governments, if you think about it.


And so not being able to surveil is going to be a problem for them, because they're used to it. And so they're going to look for other things.


Are you signaling me, or reaching? If I was signaling, it was yes or no, not telling you to leave. So just to clarify real quick, to simplify what quantum computing really is at a basic level: all chips, all processors, are built on transistor technology. It can be either 1 or 0. Okay. It's a gate.


And with a qubit in quantum computing, instead of being 1 or 0, it can be one, it can be zero, and it can be one and zero at the same time. That's the really basic fundamental of what it is. That's why you can get so many more permutations at the same time.


So that's what speeds up the compute. That's just for those of you that may be wondering, what is quantum computing? So thank you. All right. Let's move on here, before Michael dings this little bell at me, to the next slide. How do you see the laws and regulations impacting AI? And this kind of, Ron, goes back to my question: is it before or is it after?


How does it intersect? Are companies specifically discussing governance and risk for AI? Like, what's the level of the conversation today at the CEO, at the board level, around AI? Ron, can you take us out? I haven't been seeing nearly enough discussion about it, I think because of the laws, especially the AI Act in the EU and some of the other laws and such that have been enacted in the US. My view is companies are going to go through and use AI internally.


On things that don't front-face to the customer base or the public, and focus on that. For one, it gets them out of that issue for governance and risk. In fact, it actually is going to lower risk tremendously. So I think that's where it's going to go eventually. You know, as I tell my students, it used to be that e-discovery was the big thing.


It was a really crazy thing. Now it's just standard, what we call civil procedure. And AI is going to be incorporated just like any other technology. It's just a vastly better methodology than regular algorithms. What you used to be able to do with a regular algorithm was a small subset of problems.


AI enables you to handle many more problems, but it's still just problem solving. It's not all that different. And yes, there is this tendency toward a lot of panic when AI does things, but really a lot of it is, you know, the lawmakers and such just don't know what it's capable of. Eventually it's going to settle down, and it'll be subsumed into everyday life, and people will get used to it.


And then, you know, you have the other issues. The other big issue, actually, there's a bill in Texas about requiring companies to disclose: are they using AI? And then the issue, of course, is, you know, what language model are they using? How are they using it? How did they prompt it? That kind of transparency.


That's all about informed consent. And ultimately that leads to agency, and so we can talk about the law of agency. But that kind of consent, that kind of control, it always comes down to a matter of control, a matter of risk, a matter of allocating, you know, who's going to be responsible and who's going to be liable for it.


It's just another one. So you're triggering an idea in my head. Is anyone building, like, the George Washington model, an LLM that goes all the way back to the Constitution, the Federalist Papers? Basically, to your point, you know, if China is adjudicating based on their laws, could we build a model like that? The Founding Fathers model?


It is well on the way. It's already being done. We've been talking about that for years. It's fun. Hutch, what do you think? So, breaking down the two different questions: on the regulation side, I think we're going to see, once again, similar to privacy, Europe leading the charge in terms of more stringent regulations. I think here in the United States we are going to see very limited regulation because of the broader geopolitics of this, and let's call it what it is.


This is essentially an arms race between the United States and China for AI supremacy. And because of that, there is going to be reluctance to do any kind of heavy handed regulation here for the betterment of society at large because of the fact that we want to be the AI superpower in the world. So I think we're going to see limited regulation here.


And to the extent that we do see regulation, I think it's largely going to be toothless, unenforceable, or vague and ambiguous. Now, on the other side of it, are companies thinking about governance and risk? I think a lot of companies and organizations are at least thinking about it, having the discussions. And I think the NIST AI Risk Management Framework is a fantastic strategic framework for how organizations should be modeling risk with artificial intelligence.


The problem with AI RMF is just that: it's a strategic framework. And translating from that high-level strategy to something that's tactical and useful for each of the models in your organization has proven to be something that's very difficult to do and is going to take a significant amount of effort. But I do think the fact that a lot of AI technologies are still kind of hanging out in R&D and not moving to production is a testament to the fact that there is concern over risk, and senior leaders do want to address that risk.


So I think a lot of it is just rolling up our sleeves and figuring out how to apply that strategic framework to something that is tactical and useful at the foundational level. Trey. Oh man, I've got thoughts. So in the private equity space, when we first started rolling into this AI stuff, there was a lot of conversation specifically around intellectual property.


I know that we spoke to this and identified that it may not be that big a deal. Funny thing is, put three attorneys in a room and you have ten opinions coming out of the room. What we wrestled with was this notion that it's a lot like FOSS, free open source software, and how you license open source: if you're using the wrong license model incorrectly in a third-party library, if you're not disclosing that you're using code.


You may have to open source your code base. Right? That's a really big deal. There are a couple of sponsors out there that protect us from this. We have to keep track of what code has been contributed to our code base. When we're pair developing with AI, we have to keep track of that so we understand where that model is trained from.


Are we admitting open source software suggestions into our codebase? We have to track that. How do you do that at scale? How do you do that across your portfolio? How do you do that within your own organization? That's a non-trivial question. The second piece I'd push on is this notion of corporate vicarious liability. We think of AI as a decision-making process.


But they're non-deterministic, they're probabilistic, and that's very different from how humans operate. Now, economists will remind us that we are not as rational as we'd like to think as humans, but we lack traceability into the decision-making process that comes out of a model or an agent. And so, "the AI told me to do it." I think counsel was representing that.


Ultimately the CEO, the company, the executives, the leadership team are responsible for the decisions a company made. But how do you go back and point to a misinterpretation of the data that we made a decision on? In aviation, we're cross-checking all of our instruments all the time. Same thing in OT: we're cross-checking all of our indicators all the time.


And that's a non-trivial problem space. The last thing I'd point to is the third-party doctrine. So, there was a lot of hullabaloo around, I'm drawing a blank on the Chinese model, DeepSeek R1. So the third-party doctrine is interesting. This is that concept of, if you leave something in your mailbox for 30 days, the feds can come grab it.


They don't have to notify you. I think this was the Microsoft Ireland case. You were leaving information in an SMTP mailbox, leaving stuff in your mailboxes on the platform, and the government could go request access to it because it's not in your custodianship. They could actually access that without necessarily serving you a warrant; they could serve the service provider.


And so when we start federating our data, sending information in for training, queries, uploads, we're losing custodianship of this information. There's also the inference, the cognitive biases we become subject to based upon the model we're using. So there's selection bias, recency bias. There's decision making that's going to be happening by the model on behalf of our employees, who are human and open to influence. And so there are layers where I think we're going to be decomposing both regulation and law within the company. This is a minefield of very interesting layers, from philosophy to intellectual property law, all the way across the spectrum.


There are layers to this. I think it's going to challenge a lot. Clint, 60 seconds, sorry. Ah, no, no, I can't pass. Yeah. So I think that the conversation about governance for risk in AI is happening more from an opportunistic perspective. And what I mean by that is, I'm seeing a lot more vendors out there pushing the message, hey, there's this new thing, AI, we're GRC consultants, you need us, more than I'm seeing executives actually pushing for it.


I bet you pretty much every executive in here is asking the question: what is this? Should we be looking at it from a GRC perspective? What does it mean? What is the risk? That's the right place to start. You should be asking. But for a lot of reasons, which I'm not going to get into because I have probably less than 10 seconds now.


Thanks, John. It's that the risk of local-only, on-prem-only AI models is not nearly what people would have you think, but we can talk about that later if we don't get to it here. Right on. All right, let's bump a slide here. As CISO, what is your biggest concern regarding AI, and how do you plan to adapt and support the business while protecting it?


So I'm going to start back on the end with you, Mr. Bugcrowd. How do you secure an AI, and what is an AI? There were a lot of folks who said we're not going to open these ports, we're not going to run these services, back in the day of the Cisco SAFE model and fitting out our data centers.


There are a lot of peers I'm talking to now who say, we're not going to use AI. And I don't know if that's a tenable long-term career decision. So the question is, how do we enable the business? How do we enable folks to use things rationally? And the answer is, we can't stop it. But what we can do is monitor.


And so most of us have moved towards zero trust networking models. We've got to. So we've got an ability to proxy and monitor and keep track of usage. We can have conversations with our folks, train them, but ultimately we've got to keep track of how things are being used. And so, as embarrassing as it is going back to, like, the Blue Coat chapter of history with rich proxies and application awareness, we need to understand how our folks are using it, and then take stock of if that's acceptable, how that's acceptable, and then clean up behind it.


We're going to have to put guardrails in. This is a complicated one. That's actually really consistent with what Scott was talking about, the three-layer model on the slide, the top being innovate, where basically you can't say no. But how do you think about the stack to allow it to run but protect the business? That's a good answer. Clint.


What do you think, in 63 seconds? Oh, I get three more seconds. Cool. All right. My biggest concern is everyone blindly using the magic without understanding the magic. And so I think education is the key: understanding what happens on the back end, understanding what's a real threat, what's not a real threat.


And this can largely be done by, first of all, there are LLM observability tools out there, like Weights & Biases, LangSmith, Pydantic Logfire, that give you massive back-end insight into the prompts, into what it's doing, what it's checking, what the neural network has produced when it's been trained. There's a lot of visibility in open source models.


And so I think that as a business, you should try to opt for local-only models as much as you can, where that fits your use case. And you should look at the open source weights. You should fine-tune it to your preferences if you can, when necessary, but ultimately get the transaction logs, all of these Weights & Biases reports, and these tools that can check what's going on in the back end.
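[Editor's note: one concrete version of the "local first" pattern described here, added for illustration; it assumes a local runner such as Ollama, which exposes an OpenAI-compatible endpoint, so the URL and model name are assumptions.]

```python
from openai import OpenAI

# Point the standard OpenAI client at a locally hosted,
# OpenAI-compatible endpoint (Ollama's default shown; adjust to yours).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused-locally")

resp = client.chat.completions.create(
    model="llama3.2",  # whatever open-weight model you've pulled locally
    messages=[{"role": "user", "content": "Summarize our OT patching policy."}],
)
print(resp.choices[0].message.content)
# Prompts and outputs never leave your boundary, and every transaction
# can be logged locally for the back-end visibility discussed above.
```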


I think it goes back to monitoring and visibility, understanding what's really going on. It's not a nebulous black box. You can actually get this. And that's why, for your business, you shouldn't just blindly be allowing, even with guardrails in place, you shouldn't blindly just be using GPT models, OpenAI or whatever. Put guardrails in place, use local when possible. But at the very minimum, ask your vendor, your provider, your consultant, the app, whoever you're getting it from, ask them if you can see a report of the LLM itself and the output and the transactions, all these weights and balances and everything.


That's a big discussion. But visibility is possible, and it's not a big nebulous black box. That's a really good answer: not just a black box, but instrumented, just like you would any other IT system. Hutch, what do you think, especially about the adapt-and-support portion of this? You know, how are you approaching that conversation with innovation?


Yeah. So I think my biggest concern is the asymmetry that this is creating on the cybersecurity front between defenders and attackers. And of course, asymmetry has always existed: defenders have to block all of the doors, they have to defend an ever-increasing attack surface, whereas attackers only have to find one way in. So there was already asymmetry.


But because of the complexity of artificial intelligence, attackers can easily adopt it because if it fails 5% of the time, it doesn't matter. They're operating opportunistically anyways, but defenders have to make sure that it's consistently reliable, and that creates a barrier for successful adoption, for defense purposes, or even for just operational purposes in your business. And so I think it's further exacerbating that asymmetry that existed in cybersecurity.


And to answer your question about how we adapt to that: I think the most important thing is, once again, speaking of that three-tier model that Scott spoke about, of course you've got your table stakes, you've got your differentiators, but oftentimes we're not leaving room for innovation. Innovation is absolutely critical to this. We have to be figuring out how we can make the best use of future technology, and we have to start adapting to that technology, figuring out more effective ways to make it reliable, make it useful, and start embracing it.


Because as long as the threat actors are embracing this technology and we're not, they are going to move faster than the defenders can. Ron, you want to bring us home? Okay. My biggest job is trying to keep my clients out of court. And so, what do general counsels and such typically recommend in these areas?


Use the AI in your own operations and don't have it outside the company, don't have the implications necessarily outside the company. Oftentimes that means the AI is working in parallel with the humans until the humans can get replaced. And ultimately, that's what's going to happen. But, you know, from a broad standpoint, I don't think it's going to be nearly as big a problem.


The oligarchs really run the country, right? Ultimately, they'll like AI if they can trust it. Did you say our country is run by oligarchs? Okay, just being clear. Yes. Do you want to name names, or are we going to talk about this later? The top 400. Yeah, the top 400.


That's what's considered. Yeah, they're the ones that fund the law. All right. I mean, I've done this, all right? I've gone to the Texas Legislature and such, and, you know, the first time I tried a bill, it was, you know, lobbying 101, and the first thing they asked was, is this going to help Texas residents?


I said, oh, yeah. They said, don't say that. As soon as the public gets some benefit, somebody, some company, is going to lose, and they're going to come after you. And you know, you literally go through and you find the opponents of the thing, you make a horse trade, fix the bill, and there you go. But yeah, that's how the sausage is made.


You know, bills don't originate from nowhere. Companies, right? Companies write bills. My company wrote bills. I wrote a bill. Right. And, you know, I worked it out with the AG, you know, the attorney general, and things like that. That's how it gets done, right? The public doesn't write a bill. It's true. Well, we're going to have a conspiracy meeting here, probably after the show.


So you're all gonna see it, but you don't know anything about it. I saw the PBS special, I'm Just a Bill, on Capitol Hill. And that's not how it works. You're going to be the first one at the meeting, I guarantee you. So. All right, let's move on here. We're wrapping up the session in just a few minutes.


Let's talk about, you know, tooling especially. I think there's probably a lot of opinions up here, but let's talk about tooling. What's relevant? Are AI tools getting enough data security built in? We actually touched on this in the last topic in several different ways. But, Clint, I want you to lead this off, please.


No time constraints, just no more than 85 seconds. What time does this end? In eight minutes and 18 seconds. So if we're talking, what context are we talking about? Tooling. Are we talking about LLMs' ability to use tools, or just in general? Actually, just focus on the security aspect of this question. Right.


So people are deploying LLMs in the cloud, and now on-prem as well. What's the security angle, especially, that cyber CISOs should be concerned about? Right. So in terms of, you know, security risk, you can look at the OWASP Top 10 to kind of see what these things are. You know, you have prompt injection and data poisoning, model poisoning, and all these things.


But ultimately the biggest risk is going to be really two main things. Number one, it's going to be your data, your private data, whatever that is: proprietary private company information, employees, customers, going in, being used as a prompt or a data insert to these large language models, which now may or may not, depending on what they say, be stored within their cloud environment.


It may be used for training, even though, you know, you can click the button that says nope, don't use my data for training, because that's trustworthy, I'm sure. But ultimately, any time you're sending your data to a cloud, you have that risk, right? The other risk is, and I hear a lot of people talking about this, well, and I'm not advocating for Chinese models, I'm just saying there's some fallacy in what people say about what models can and can't do.


So I'm using the DeepSeek model as an example, but, well, what if they train the large language model to hack my system? Okay. Is it possible? Sort of. And I'm not going to get into the technicality of why or why it isn't. However, if you're using a local-only model, okay, then there is much lower risk of actually having a model that's been trained to go and do things, to write code to hack your system and all of that.


So where is all this going? Where I'm going with this is that you have the ability to train out shenanigans, right? So if you have a local model, especially an open source model that you can fine-tune yourself or have a company fine-tune, then that model can be trained, fine-tuned if you will, to reduce that risk, to reduce those shenanigans that might happen.


Right. And so those are the two biggest risks: can it hack, and what's going to happen to my data? The mitigation for most of that is simple if you are using a model in the cloud. And by the way, for example, using OpenAI GPT models, you can fine-tune those and then put it into a private Azure cloud.


And now that is your model. That is not the model that the general public hits when they go using OpenAI's ChatGPT models. Right. So, back to the walled garden. Yeah. Yeah. Let's get Hutch in on this, because we've got to kind of wrap up the timetable here. Hutch, what do you think? Yeah, I think there's a lot of different ways this could be answered.


There are already a lot of different tool markets in this space. We've got everything from endpoint agents and browser plug-ins, we've got gateways, we've got model scanning tools. As mentioned, data security is becoming much more relevant. DSPM already existed, but there's a lot more interest now in the post-GenAI age. But I think the one that sticks out to me is the most fascinating.


And I think what's going to be the most relevant going forward, as we move into this agentic future, is the question of non-human identity. And I want to distinguish that from what we historically thought of as non-human identity, because in the past we had human identities, which had a lot of variability in the way that they behave.


They use web browsers, they use email, but they were relatively low-privileged. Then you would have your non-human identities, which were largely API keys and certificates, which were higher-privileged but just did the same thing over and over again. We're now seeing this strange convergence where non-human systems are operating in a non-deterministic fashion, where they are taking on the worst of both worlds.


They are assuming the high-variability risk of the end user, as well as the higher access to privilege and being able to do more in your environment. And so, we consistently talk about the zero trust idea of identity being the new perimeter, and I think now more than ever before that is going to apply to the AI


agentic space, because of the fact that we absolutely have to get a sense of how we're handling authentication and authorization, how we're creating those guardrails around how these systems are going to behave in our environment, how we establish observability and understand what they're doing in our environment. Especially when you move from not just agents, but now agents talking to agents, the challenges involved in all of that are going to be monumental.
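[Editor's note: a sketch of the deny-by-default, audited tool access described here for agent identities; all names are hypothetical, added for illustration.]

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)

@dataclass
class AgentIdentity:
    """A non-human identity with explicitly scoped tool permissions."""
    name: str
    allowed_tools: set = field(default_factory=set)

# Hypothetical tool registry; a real agent platform would have many more.
TOOLS = {"read_ticket": lambda ticket_id: f"ticket {ticket_id} (stub)"}

def call_tool(agent: AgentIdentity, tool: str, **kwargs):
    """Deny-by-default authorization plus an audit log of every call."""
    if tool not in agent.allowed_tools:
        logging.warning("DENIED  %s -> %s %s", agent.name, tool, kwargs)
        raise PermissionError(f"{agent.name} may not call {tool}")
    logging.info("ALLOWED %s -> %s %s", agent.name, tool, kwargs)
    return TOOLS[tool](**kwargs)

triage_bot = AgentIdentity("triage-bot", allowed_tools={"read_ticket"})
print(call_tool(triage_bot, "read_ticket", ticket_id=42))  # allowed and logged
# call_tool(triage_bot, "delete_user", user_id=7)  # raises PermissionError
```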


And I think that is the next frontier for security tooling in AI. I think identity is a great point there. Ron, really quick. All this tooling stuff, particularly given the decisions that have been handed down from the courts about intellectual property, particularly patents and copyrights: you really have to worry about who's the author of the code and whether or not the company can actually own the software.


And if you can't own the software, it's going to complicate tremendously any type of merger or acquisition. And for a lot of small companies who do a lot of pioneering work and want to get bought by the big ones, this is going to be a huge stumbling block. So you've got to be really careful about documenting what software you used, how it was used, what got done, and literally going through and figuring out, line by line, a lot of code: who did what, what you can claim, and therefore what you can value your company at.


So very important. It's a really, really important point that tag-teams with what Trey was saying as well. Trey, you want to bring us home on this? Yeah, I'm going to go back to Casey John Ellis's reference: AI is a tool, it's a target, and it's a threat. So when we think about what tools and what security concepts fit into there: as a tool, it's a productivity tool.


There are a lot of things we can do with it. It's going to be very efficient. It's going to create a lot of value for value capture. As security executives, we need to think about how to facilitate that, provide guardrails and monitoring around it. I think we kind of beat that one to death. Then as a target: if you're building AI, any neural networks, this technology in your environment,


you need to be thoughtful about the architecture, how you're testing it, who's testing it, what your tester biases are, and make sure you're partnering with folks, finding diverse perspectives for that assessment work and where you're sourcing it. Then finally, as a threat. So along that same line, when you think about AI as a threat, I'm not talking about Terminator and the rise of the machines.


I am speaking to this notion that, and I'll pick on China because it's fun, if you're pulling their open-weight model into your environment and running it, you're going to do your initial assessment, but it's now part of a supply chain. And so there are updates, and I don't know how you're testing that infrastructure. I don't know that we fully understand, as general security practitioners, as software engineers, how to test this the way that we would a web application or any other traditional software.


So I would bucket this into broad groups: AI is a tool, it's a target, and it's a threat. And I would tear apart my tooling and my defensive strategy around that. Good perspective. All right. Well, and that's it. A lot of good perspectives here on the panel. I would encourage all of you, please, to grab these gentlemen out in the hallway and chat them up, make friends.


There's a lot to talk about here. And the conspiracy theory meeting will be, what, 3:00 below the escalator? We'll see. All right. Thank you all so much.
