Presenter:
Transcript:
So my name is Tommy Todd. I am the CEO and co-founder of Grid Light. Prior to that, I served seven years as a CSO, so I've been in cybersecurity now for a little over 30 years.
Michael and I came up together through various different entities, so he's kind of my brother from another mother when it comes to this type of stuff. We're a little bit old hands at it. Today, what I plan to talk about is the hidden risks of big AI. From the last time I did a talk: first of all, was anybody here for the last talk I did?
You were here for the quantum encryption one. You saw the update on this: they've now shortened the timeline, and I think it's coming in the next 3 to 5 years. So right on target with what I was saying on that one, which pretty much validated some of the things I was saying. But what we're going to be talking about today might be a little bit different from your expectations.
So this is not really going to be much of a security-focused conversation. It's more holistic than that; it's about AI as an industry in general. I just want to set that expectation so everybody understands it's not going to be a technical talk in the sense of getting down into the weeds on AI. It's more about the risks, as I see it, of the consolidation of AI in the space and the limited choices we're going to be presented with in the next year or two as these things continue to coalesce.
So I just want to make sure that's clear up front. All right. So let's talk a little bit about what AI is building, and I think it's relevant because of what we just talked about. Everybody's worried about AI and how it's going to shape our future and what that looks like for all of us, whether you're coming into the industry, you're mid-career like us, or you're on the tail end of your career.
It's affecting all of us. And so when we think about this statement, it's important, because it's relevant for those of us that have been around a while to remember a day before the internet, and what the internet meant from a transformation perspective for everybody's career. So when you think about where we were with that and apply it to what we're doing with AI, AI is probably the most significant piece of technology we've seen in mankind's history.
But what happens when it doesn't belong to us, right? It belongs to a conglomerate of entities with enough resources to control what we do with AI in a big way. And when it doesn't belong to us, we lose control of it. And that's concerning. It should be concerning to all of us, because it's limiting our choices.
So let's talk about what AI is doing today. And again, this is probably obvious to you guys: we're seeing this stuff integrated more and more into our daily lives. There's the idea that AI can make decisions about things like your credit score or your loans. We saw with Oracle, there's been some conversation coming out that AI actually selected the individuals that got let go based off of certain criteria.
So things like: who's been with the company longest? What's their age? Are they coming up on vesting? What does their salary look like? So AI is already making decisions on our behalf that directly impact a lot of lives. And it's only going to get worse as we continue to see this integrated into more and more capabilities. So these are just a few examples of things that we're already seeing.
How many of you guys are actively looking for a job and frustrated by the fact that your resume won't ever reach a real person? It goes through an ATS filter; 99% of them do that. And so your opportunities to get in front of someone are limited now, which is a problem, because you have a lot to say.
You've probably got a lot of good skills that you want to bring to the table. You just need that one shot to talk to the right person.
So who owns the intelligence? And this is what's interesting: when you think about the big three players, you're looking at OpenAI, Anthropic, and Amazon, which is backing Anthropic. We also have others like Harvey and some of these other tools that are pretty popular in this space. They own 65% of the compute that you're using today, right?
They own that. They own that infrastructure. And so when that infrastructure is affected, it creates a disruptive model across the board. How many of you guys remember Claude going down a couple of times in the last 60 days? And it didn't just go down temporarily; it went down hard, almost to the point where it became untrustworthy. Right? The results we were getting back from these systems became crazy, hallucinating.
And it kind of erodes confidence that what I'm getting back is trustworthy enough for me to want to use it implicitly. We've seen world events lately where certain data centers have been impacted by the conflict in the Middle East, disrupting the compute capabilities of these technologies. So as world situations change, we're now at the mercy of a different kind of warfare than what we've seen in the past.
So when you think about the AI infrastructure that exists today, it's owned by fewer than ten organizations. Again, this all comes down to this idea of control. Who's giving you the information, and what are they allowing you to do with it? Right. When we think about AI startups, they're all running on the same platforms. I talk to a lot of co-founders right now, and when you ask them how they built their business: oh yeah, we're using Amazon Bedrock, or we're using Anthropic's commercially available models like Opus, or ChatGPT.
Okay, well, you're at the mercy of them. And we've seen recently where Anthropic actually deleted an entire company off the Claude platform without any warning. This came out today; I saw an article about it where they deleted 60 accounts. They deleted all the chat histories and all of the development that company had been doing against Claude, overnight.
Wiped the company out with no explanation. So they have control that can disrupt you. And if you're building your platform against these technologies, you're at the mercy of whatever they want to do with it. You'll see here in a second what I'm talking about. So again, this isn't a technology talk. This is about accountability and resilience with limited choices.
You're kind of at the mercy of the capabilities of these technologies with no other alternatives. And so from a business perspective, there's not a lot of confidence that this stuff isn't going to go down at some point. One of the things I do on the side, to continue to stay in the cybersecurity space, is serve as a fractional CSO, and the consulting shop that I work with has become heavily dependent on Claude for a lot of deliverable creation.
And when Claude went down for about four hours, that was half a day of work that the entire company couldn't deliver, and it dramatically affected their ability to be successful. People are depending on these technologies to be available and be around so that they can continue to do their work. When these systems go down, we no longer have a choice.
So how do we get here? What got us to this point?
Yeah, it doesn't want me to tell you this, right? Okay.
All right. So when we talk about AI dominance and what locks you in, it's three things: compute, data, and talent. Right. And we're seeing some of this happen recently. Being able to train the models that we use takes countless cycles that none of us have. It's kind of like Bitcoin mining: I don't have a rig to mine Bitcoin, and even if I did, it'd be so insignificant it wouldn't be worth it.
So we rely on these big entities to process these models. Again, what they put in the model is up to their discretion. We've seen model pollution. I don't know if you've used any of the lighter models like TinyLlama and things like that. They are pretty crazy, and it's funny to watch, but it's also scary to think that people are relying on these models.
Then think about the data that they've scraped. For those of you that have followed the history of Anthropic: when they got that $8 billion round, what was it, late last year? As soon as they secured that funding for $8 billion, it went right out the door, because they had to pay off all the authors that they ripped off globally in a class action lawsuit.
And that's not the end of it. Now they're having to pay out record producers and music makers, because they're scraping the internet to be able to provide that data for their models. And then the talent. We've already talked about this: there's a global shortage of AI researchers concentrated on this problem, on being able to understand the true implications of AI in our society.
And as a result, because of this handful of employers, there's a knowledge gap. These guys gobble people up. If you saw the founder of Claude Bot when it launched, you know it made a big hoopla in the first couple of days when it hit the scene, and within a week OpenAI had gobbled up the founder and basically taken them off the market.
So they see this talent and they want to close it down so that there's no competition. And what better way than to hire the guy that built the thing that went viral?
When we look at the acquisition strategies of a lot of these big techs, it's the same old story, right? Why build when you can buy? And for startups that want to be innovative and create something that is truly unique and favorable to society, well, the big competition won't let that happen. So they come at you with a big offer, and it's: either you take it, or we're going to out-innovate you.
You don't even stand a chance against the resources that are at play here. And again, all of this adds up to creating a kind of critical mass of lack of capability, lack of versatility, lack of resilience. Your choices are becoming more limited daily as this continues to move forward. We've seen VCs go nuts over the last couple of years by making big bets on Big Tech, only to fail out.
Claude still hasn't been profitable, and neither has ChatGPT; they're actually burning cash at a rate we've never seen before. But they won't tell you that. Everything looks hunky-dory on the surface, but if you look at the financials, they're making some adjustments, because their investors have told them: we need to see profitability. So you've probably noticed over the last, say, 2 or 3 weeks that your token provisioning has gotten less and less, and you're hitting message caps much faster than you ever have.
I talked to somebody earlier today and they said, I had three queries before I ran out. And they're not wrong. And you're paying money for this. You're getting less for the money you've put in, because they need to start being profitable. ChatGPT introduced ads to their product. Now, if you notice, you run a query, you get your result.
Oh, there's an ad for some other company. It's becoming adware, because they've got to maintain profitability. And VCs know this. As this strategy takes hold of investing in a startup based on its acquisition potential, they don't expect a startup to go long anymore. They expect you to come out, make a splash, and then immediately get gobbled up.
We saw it with Claude Bot, right? They didn't last a week before somebody gobbled them up. And so that's the strategy now with VCs. When we think about versatility and longevity and variety, that variety gets gobbled up too fast. It's a bad strategy, because it doesn't allow companies to survive long enough to make a difference, to really change the landscape for all of us.
So let's talk about this, because it connects to what I mentioned about Claude wiping out that company. If you've ever tried to build a company using APIs that connect to these commercial models, there are a lot of things in the terms of service that regulate how you get to use them to build your business, and if they don't like the terms, they will cut you off tomorrow.
So imagine spending two years of your life building a business that relies on these API calls, only to be told: sorry, we changed our terms of service; you owe us more money, or you're not going to be able to use the product anymore. You're at the mercy. And a lot of this has to do with the fact that we rely on these guys because they have built an infrastructure that we can't compete with.
Right. What's the alternative? Going out and accessing the latest and greatest models that we know are being trained? I can't afford to do that, or to build one on my own. I'm at the mercy of what these guys have. So if any of you guys do DevOps, or especially for the AI people here today that aren't necessarily cybersecurity, you've probably experienced some of this.
Have you looked through the terms of service when you're making API calls out to these different platforms? Have you truly inspected what rights they have over your intellectual property? If you haven't, you need to look, because you're at the mercy of what they want to do with the calls that you're making to their infrastructure.
And they say, you know, you'll be shut off with 30 days' notice. Well, did Claude prove that out today? Right. It's not true. They don't need to give you 30 days. They don't even need to give you an email. They just shut you down.
So here's the other problem we're seeing: governments are slow to catch up. Big business rules; we know this, right? The lobbying potential of all these entities is greater than any of us combined. And so as a result, governments have been slow to enact legislation to slow the tempo of this capability and to focus on these ideas: we need to stop these monopolies from happening.
We need to provide variety and choice to the American public. We're starting to see other countries, like China, where the regulations are more about access than anything else. They want to know what you're doing and how you're doing it, rather than what kind of market you're involved in. So some of these governments are, again, trying to pass legislation and frameworks to guide us through this thing.
One of the things that I thought was interesting: when Oracle did their massive layoff, an engineer came out and said he's pretty sure (or actually, I think it was a lady that said this first) that they used AI to do this decision making. If they laid anybody off here in the state of Texas, they actually violated a Texas law that went into effect January 1st, which states AI cannot be the sole reason why you lay people off.
It's discriminatory; there has to be other justification. So I'll be surprised if we don't see a class action lawsuit come up against Oracle, because they're not transparent about how they made these decisions. And if it's true that they used AI primarily for that, they're probably going to get sued. So at least that's on the books. But that's a state law.
And it's specific to the state you live in. If you don't live in Texas, you're not afforded that protection. So there's some of this that we're thinking about and looking through. As far as federal AI legislation is concerned, there hasn't been any, and there hasn't been any introduced over the last year. So I don't know how we're going to solve this at a regulatory level.
Yeah, this is pretty interesting. So when we think about the idea of too big to fail, why these entities that are not profitable will continue to stay in business, it's because of that too-big-to-fail motion. You guys remember some of these, like the banking industry in 2008, when the mortgage industry took a crap and there was a lot of unemployment as a result of that as well.
Social media, right? Getting in front of that, because you're talking about monopolies on some of the algorithmic aspects they're using to do the data harvesting. And cloud computing. This is the other thing: for those of you who've been around long enough, you remember the promise of cloud. Oh man, you move it all up there,
you get this great capability of compute, you don't have to worry about downtime, and you get the five nines and all that good stuff. We quickly learned it's not cheap to go to the cloud: when you're being charged per minute for compute time and you're running workloads all the time, it becomes way more expensive than if you'd just kept it on prem.
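The always-on math can be sketched in a back-of-the-envelope calculation. The prices below are hypothetical placeholders purely for illustration, not real cloud or hardware quotes; the point is that a 24/7 workload pays the per-hour rate forever, while on-prem hardware is a one-time spend:

```python
# Hypothetical numbers for illustration only; not real pricing.
cloud_rate_per_hour = 3.00          # always-on instance, $/hour
hours_per_month = 24 * 30
cloud_monthly = cloud_rate_per_hour * hours_per_month   # $2,160/month

onprem_server_cost = 20_000         # one-time hardware purchase
onprem_monthly_power = 300          # power + cooling, $/month

# Months until the on-prem box has paid for itself versus cloud.
months_to_break_even = onprem_server_cost / (cloud_monthly - onprem_monthly_power)

print(f"cloud: ${cloud_monthly:,.0f}/month, on-prem breaks even in "
      f"{months_to_break_even:.1f} months")
```

With these made-up numbers the on-prem box pays for itself in under a year; the real crossover depends entirely on your actual utilization and pricing.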
So the promise of cloud computing was a lie, in my opinion, and people are slowly starting to find that out. Now think about the models that are available to us. How many of you guys have actually built your own model for AI? I have. We have. You? So you have built your own dedicated models. Okay, so two out of an entire room.
How many of you guys use off-the-shelf models? How many of you guys log in to Hugging Face and grab open source models? Okay. Do you guys have hooks into Anthropic's or OpenAI's models through an API call? How about the new Bedrock with Amazon? Anybody using that? You're using Bedrock, okay.
So the point I'm trying to make here is that unless you're building your own, you're dependent on these models being available to you. Fortunately, Hugging Face makes it a little easier for these open source models to exist, and we're seeing benchmarks where those models are starting to compete pretty heavily with the commercial models. You can grab this Bonsai model; the most recent one, when I looked at it, is actually on par with a commercial model.
So they're getting better, which is great. It gives us more variety, at least on the model side.
So what does this mean from a risk perspective? And this is where we start to get into the holistic viewpoint, from a concern and security perspective. Think about the individual: what happens when you put all your eggs in one basket? And we know this. I say we know this, but you guys know that the concern is: what kind of data are you sharing with these systems?
And I think it was interesting that somebody mentioned the human aspect of this. I'll tell you a little bit of a story. I work with a client who has been focused on producing AI governance documentation for the better part of two months now, and I'm part of that story with her. I'm like, hey, let me help you build a framework around how your users are going to use AI.
We'll get the policy in place, we'll get the users trained, we'll make sure it gets enforced, all that good stuff. Awesome. So we went through this whole exercise, and then I'm on a call with the CIO and her VP of IT. They were going through a vulnerability management report, and she said, hey, I want you to see what my VP of IT did.
He took the report that you guys provided us, put it in Gemini, and it produced really cool output. And I could not believe my ears. I'm like, we just went through two months of building AI governance that said you're not supposed to do this, and your own people did it anyway. And it was convenience, right?
They didn't even think that they were violating their own policy, the one that I helped them build and that they were going to enforce on everyone else. And yet the VP of IT was the one who got caught. But when I pointed that out, she was like, oh. And her response was, well, we need to build an exception for my guy.
Yeah, that was her excuse: we're just going to make an exception for him. And I'm like, okay, I can only do so much, right? So again, when we think about the business and what the dependencies look like: when you're dependent on Claude to do business and Claude goes down, you have no alternative. You actually have a disruption in your revenue, because you can't produce the deliverables you're used to producing.
And then there's the society level; I've had talks on the misinformation piece before. As we look at what models are built upon, it's based on a collective knowledge. When you talk about internet scraping, we all know the internet isn't always true, right? "It's on the internet, it must be true." The reality is the internet was made by people, and it's made up of information that we've created, right or wrong.
So we talk about AI bias, and about us as a society putting too much faith in the outputs of these things. I think somebody mentioned earlier that accuracy is the key. So when we talk about what's going to happen with people's roles, they're going to be the ones looking at the output asking: is this correct? Is it accurate?
Now I will say this, and I'll ask this room to be honest with yourselves: when was the last time you got output from Claude that you questioned?
We've gotten lazy, right? We actually expect what we get from Claude to be fairly accurate. We copy it, we put it in a document, we send it to our clients. There you go. You want stuff on regulations? Here you go. Because I'm not going to spend the time to research whether what you gave me is true, right?
I don't have time for that, and probably none of us do. So we just implicitly trust that what we're seeing out of something like Claude is accurate enough to pass along. Now, the problem with that is: who do you hold accountable? As a CSO, if I provide guidance like that to my client and they get popped, they'll point back and say, well, you told us this, and yet that happened.
They can't sue Claude for that. They're going to come after me. So the human in the loop, as we say, comes down to accountability in my mind. And we've gotten so lazy with just assuming what we're getting out of something like Claude is accurate. It's eroding the confidence of what we're doing, and it's eroding our minds a little bit.
If I'm honest, we're getting lazier.
Here's the other thing that's concerning. You know, we hear AI called the black box. And it's interesting, when I talk to people about what we're doing at Grid Light, how much of the conversation, I'd say about 90% of it, is education and awareness. How does AI work? How does all this stuff happen? What's the difference between a vector DB and RAG versus LoRA versus all these things?
It's behind-the-scenes stuff, and there's not a lot of transparency. Try to ask Claude how it does what it does. You won't get an answer. It's not going to share that secret sauce with you. Right. So how do we know how these decisions are being made? We don't, because the transparency is not there, and there's nothing forcing them to be transparent about it.
So we ask: how is my data used? Right. What are you doing with it? Oh, well, we don't use it to train our models. Show me that you're not doing that. Am I just going to take your word for it? As a security professional, when was the last time you took anybody's word for anything? Show me the proof.
Right. Show me the evidence. And so this explainable AI, putting in simple terms how AI works, how it doesn't work, what it is and what it's not, is something the industry still has a problem with, because they don't want you to know how all this stuff works. Because if you did, and you saw behind the scenes, you'd probably freak out when you realized just how much of your stuff is being exposed, right?
As an individual, you have almost no legal recourse. So we talked about the Oracle example. The only real recourse that I'm aware of for the Oracle people is based off of that state law in Texas, and I don't know how many other states have an AI law like that. But imagine if a decision is made about denying you a loan, or about why your resume didn't make it to the right person.
You don't have the transparency to know how those decisions are made. They're just made, without any explanation. That, to me, is problematic, because when we don't know, how do we take legal action if we feel like we've been violated? If AI tells you you have a certain diagnosis from a medical perspective, and that's not true, and they take out the wrong organ, who do you sue?
You know what I mean? How did it make the decision that that was the organ that needed to be taken out? So there's a lot of questions we should be asking as a society to get ahead of this, and no one really is, in a way that's going to make this change. So that's kind of why I'm bringing awareness.
You know, I like spicy topics, so hopefully this is not scaring you; I'm just trying to bring awareness about what we're up against. As far as free AI, we've seen this happen before. Anytime you use ChatGPT or Claude on a free account, anything you share with it is considered public domain. I think it was a couple months ago: the CSA director, a guy who should know better, shared information with ChatGPT in a public way, and it got exposed.
The guy at the top should have known better, and even he made that mistake. And it's because data is the product. If you're going to take advantage of this capability, then we're going to use whatever you give us, right? And that includes any type of decks, any type of collateral, anything that's unique to the business you're running. They now have rights over it, and they can do whatever they want with it.
So think about health and financial queries. I think somebody mentioned the data governance piece: how do you control the kind of data that goes up? I've already given you an example where one guy, out of convenience, put what was ultimately sensitive operational information into Gemini just to make his life easier. So that information is now out there, and I don't know what's going to be done with it.
Who's going to take advantage of it? I know from my experience working with Claude, it's pretty easy to seed Claude with a bunch of information, so that if you look up the same topic, it'll tell you everything that I've been telling it, even though it's supposed to be in my tenant and I'm supposed to be carved off in private. It shows up elsewhere.
So I know that that privacy isn't real, right? So imagine what else is getting out there. Most people don't ever read the user agreement with any of this stuff. When was the last time you looked at the one for Claude and what they can do with your data? In fact, they just changed their privacy policy.
They actually reversed course on their transparency around privacy. How many of you guys read the updated copy? Probably nobody, right? So you've been using Claude like you have been all along, not knowing that the privacy policy changed the way they handle your data now. So I encourage you to go back and look at that, because they completely reversed course earlier this year, essentially saying it was not in their interest to continue to keep your data private.
Yeah.
I'll get through this, I apologize; I know it's quite a bit of data here. So think about what your business is typically built on. Again, for the DevOps folks in the room and people that are working with AI today from an application design perspective, and maybe even building a business, you're kind of tied into these models.
We've already talked about a lot of these, and once you're tied in, they have the discretion to cut you off. So if you've got something that you're dependent on, and all of a sudden they don't like the terms, they will cut you off. And these are some of the bigger names, right? You've got cloud infrastructure; we talked about Bedrock.
Somebody mentioned that they run over Bedrock; that's all embedded in AWS, right? They control that model, even though they claim it's kind of siloed off.
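One practical way to soften the lock-in described above is to keep business logic behind a thin adapter instead of calling a vendor SDK directly. Here's a minimal sketch of that pattern; `ChatProvider`, `EchoProvider`, and `summarize_report` are my own hypothetical names for illustration, not any vendor's actual API:

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Narrow interface: the only thing business logic may depend on."""

    def complete(self, prompt: str) -> str: ...


class EchoProvider:
    """Stand-in for a real vendor SDK (Bedrock, Anthropic, OpenAI, ...)."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


def summarize_report(provider: ChatProvider, report: str) -> str:
    # All vendor-specific code lives behind the adapter, so swapping
    # providers (or dropping to a self-hosted model) is a one-line change.
    return provider.complete(f"Summarize this report: {report}")


print(summarize_report(EchoProvider(), "Q3 vulnerability findings"))
```

The design choice here is simply dependency inversion: if a vendor changes its terms or cuts you off, you write one new adapter class instead of rewriting the application.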
As far as the competitive moat, this is something that I think is pretty interesting. We talked to somebody about this recently: the concern that CISOs of today have is threefold. One is, AI is being used by my users and I don't know about it. So I don't authorize Claude, but you're doing it anyway.
That's a shadow AI concern. The second concern is what's called tech creep: a technology stack that I had yesterday that had no AI capabilities all of a sudden has AI tomorrow, and I just have to deal with it. There's nothing I can do about it. And a lot of times this stuff shows up without prior knowledge.
So I'm constantly talking to CIOs and other CISOs that say, you know what? This stuff showed up all of a sudden, and I had a user in my environment report it to me. I should have known about it before they did, but I didn't get that chance, because it just showed up one day in their tenant. So there's this idea of this stuff being pushed out without authorization, and you really don't have a choice to turn any of it off.
I'm not a fan of the fact that Samsung with Google now has the Gemini stuff, and you can't disable it. It's there whether you like it or not. They have removed the choice for you to be able to turn that stuff off. So this is a concern when these big tech vendors say it's my way or the highway and you really don't have an option, what are you going to do?
Build it yourself? Good luck with that. And they know that. Then we talked about regulated industries. This is an interesting aspect, because this is what's slowing down AI adoption. If you're a regulated entity, like healthcare or finance, anything that's got some pretty rock-solid regulations, you can't fully embrace AI in any way, because that data is no longer in your control and you'd be violating all kinds of compliance requirements.
So this is kind of an interesting thing, where some of this is kind of a barrier just because these entities can't move forward with it. As far as alternative approaches: again, I'm not going to pitch Grid Light, but we've kind of solved this for them, and I'm happy to talk after the presentation about the things we're thinking about to solve a lot of these concerns.
Yeah, democratic erosion. We talk about where we're headed with this; I think I covered this in a previous talk. You know, AI is biased. Flat out, it is. And anybody that tells you it's not is lying to you. It is biased, and the information that's generated might not be accurate. So imagine shaping your viewpoint based off of some results you're getting out of Claude.
If you think Claude is objective and doesn't have a slant, you're wrong, because it does. Ask it what you want; it will actually slant the answer in a certain direction that may not be appropriate. But if you believe it, well, that's how this kind of misinformation becomes prevalent. We're seeing this with things like deepfakes, right?
A political representative being deepfaked in a way that makes it believable. And now you have to spend your time trying to figure out: do I need to debunk this? Is this really true? Is this something our president said, or some senator said? It's creating erosion in our confidence in our leaders, and there are no ethics tied to any of this, which is really what annoys me.
So here are a couple of examples of catastrophic failure. This is more about resiliency, and I've already said this a couple of times: the dependency on a technology that is fallible. Think about some of the things we've seen in the past, like Cloudflare being taken down; when that happened, it affected both GPT and Claude, and about six hours' worth of activity was gone.
Recently, Claude's Bahrain data center was hit in an Iranian strike, and part of their compute went down with it. That created a situation where we saw a lot of hallucinations, and this is what I mean by untrustworthy: some of the responses we were getting back weren't accurate anymore. Blatantly inaccurate. Again, one mistake, one misconfiguration, created a whole outage. Any of you who are M365 tenant owners have probably seen this happen a couple of times, like when the whole West Coast went down for the better part of a day because of one single misconfiguration.
Again, the dependency on these big vendors is creating a situation where it's just a matter of time before there's a massive disruption that can't be recovered from. We've seen airline operations grounded; in fact, I think American Airlines got hit with this recently. One little glitch caused a whole bunch of failures in their planning services and created a situation where they had to ground a bunch of aircraft. So again, when you put all your eggs in one basket, you create a problem from a single-point-of-failure perspective.
Yep. All right, so let's talk about what resilience looks like, and these are probably obvious at this point. I truly believe in diversification. If you're building a business around AI, or you depend on certain AI solutions, don't just take my word for it: diversify, and look at alternative approaches.
Have a fallback plan. I wish that consulting shop I work with had had a plan B to fall back on and keep working when Claude took a dump. So, diversify your solutions. Next, sovereignty over convenience. We talked a little bit about this with the guy who put the vulnerability report into Gemini.
Right. Having data sovereignty is crucial: knowing where your data is going and keeping it within your walls. It's an age-old concern; it doesn't only apply to AI, it's something we've wrestled with all along, but AI is making it worse. We also need better transparency about what AI is doing and how it's making its decisions. We need to demand this as a society, because when these decisions are made and we have no recourse to understand why, that's a problem, right?
Decisions will be made, and you're not going to have a way to challenge them. And then interoperability: having the ability to make multiple selections. Maybe I'm not tied to AWS, or maybe I want to use AWS and Claude and ChatGPT with additional models. Don't accept vendor lock-in by being limited to one vendor only.
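To make the fallback-plan idea concrete, here's a minimal Python sketch of my own (the provider names and functions are made up, not something from the talk): a thin abstraction that tries a primary model provider and falls over to alternatives when one is down, so one vendor outage doesn't stop your work.

```python
def ask_with_fallback(prompt, providers):
    """Try each provider in order; return (name, answer) from the first one
    that works.

    `providers` is a list of (name, callable) pairs. Each callable takes a
    prompt and returns a string, or raises on an outage/error.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # provider outage, timeout, etc.
            errors.append((name, exc))
    raise RuntimeError(f"All providers failed: {errors}")


# Demo with stand-in providers: the "primary" one is down.
def primary_down(prompt):
    raise ConnectionError("provider outage")


def backup_ok(prompt):
    return f"answer to: {prompt}"


name, answer = ask_with_fallback(
    "summarize Q3 risks",
    [("primary", primary_down), ("backup", backup_ok)],
)
print(name, answer)  # → backup answer to: summarize Q3 risks
```

In practice each callable would wrap a different vendor's API client, which is exactly the interoperability point: your application talks to the abstraction, not to any single vendor.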
Again, when we talk about regulations, I'll kick through this pretty quickly, I know we're running long here, but as a society you have more leverage than you realize. Obviously we all have access to our representatives. Beyond that, understand your rights, and as laws come online, know where you stand with them.
How many of you knew about the Texas law on AI discrimination that went into effect January 1st this year? So know your rights; understand where you are with all of this.
When we talk about businesses, I can skip through some of this since it repeats what I've already said: have model-agnostic capabilities, so that you have options in how you build your business. And we talked about open models. There's a really good site, Vellum AI, that benchmarks a bunch of models and their performance.
If you're interested in this space, I encourage you to take a look at their site. It will give you an idea of what open models are out there and how they compete with commercial models, so when you're deciding how you want to use AI, you can see which model works for you based on performance.
This one, to me, is important, but unfortunately it's painfully slow: the idea that we need regulations around AI. It's one of the weakest areas we have in technology right now, because people are still trying to understand the ramifications of doing nothing. So if you're an advocate, like I am, for certain regulations around our technology stack, get involved and start bringing awareness to the community.
I think the more people who stand up and question what we're doing with AI, the slower this will go. Maybe we can slow things down a little until we can get it done right, because without that, these guardrails aren't going to exist, and it's only going to get worse, right?
And then, what does a resilient ecosystem look like? It means making sure you have interoperability between different models, avoiding vendor lock-in, and considering on-premise AI. That last one is something a lot of people don't talk about because they're so used to cloud dependencies, but there are options, including Grid Light, that will let you run AI infrastructure on-premise.
So when it comes to data sovereignty, data privacy, data control, and transparency about what your AI is doing, the best way to get there is to bring it back down. Bring it on-prem, right? Control it. Put your own guardrails around it. Send a message to big AI.
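As one concrete illustration of on-prem AI (my example, not something from the talk): a local model runner like Ollama exposes an HTTP API on localhost, so prompts and responses never leave your machine. This sketch uses only the Python standard library; the actual call is commented out because it assumes a local server is already running.

```python
import json
from urllib import request

# Ollama's default local endpoint; nothing here leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask_local(model, prompt):
    """Send the prompt to the local model server and return its response text."""
    req = request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Requires a local server, e.g. after `ollama run llama3`:
# print(ask_local("llama3", "Summarize our AI governance policy."))
```

The point isn't this particular tool; it's that the guardrails (network egress rules, logging, retention) sit on infrastructure you control instead of in a vendor's terms of service.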
So what is the choice in front of us? This is a great question, because as I mentioned earlier in the presentation, think about it: if you're old enough to remember when the internet first kicked off, what would have happened back then if there were only three companies on the internet? How would you have felt? If those were the only three choices, and if you didn't use their stuff, you didn't use anything.
So, they broke up AT&T. We didn't quite get to that point here, but we did break up AT&T; yeah, it took that happening before we got real choice, right? Anyway, we had a lot of choices when the internet first came online. Anyone, and I mean anyone, could build a website and put it out there.
If you knew how to spell HTML, you could put something up and make money at it, right? I remember the glory days of the early internet, but over time it coalesced. Now we have Google, we have Amazon, we have Oracle, we have Microsoft, these big entities, and you have no other choice. What happened to Netscape? What happened to some of these other vendors that gave us a choice?
To me, we don't have enough variety. Imagine if this is what we had dealt with in '95; think how mad we would have been about that. And this is exactly what we're faced with with AI; this is the choice in front of us. So if you're part of an entity that's looking into AI, I would highly encourage you to take a look at your stack and figure out what your dependencies are.
When you start talking about risk management, think about what happens if that vendor goes down tomorrow. What happens if they shut us off? What kind of impact is that going to have? Look at your third-party suppliers, because one interesting thing is that a lot of this tech now ships with AI capabilities the vendor doesn't own.
So imagine getting a vulnerability management scanner with AI capabilities. What do you think it's talking to? Who do you think it's hooked into? And what if that upstream provider shuts them down? What's that going to do to you? It's a byproduct, right? It's a supply chain problem, because a lot of vendors are starting to use these capabilities, but they don't own them.
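One way to start that mapping (illustrative only; the tool and provider names below are hypothetical) is a simple inventory linking each third-party tool to the upstream AI provider it actually depends on, so you can see your blast radius if any one provider goes down:

```python
# Hypothetical third-party inventory: tool -> upstream AI provider it calls.
supply_chain = {
    "vuln-scanner": "ProviderA",
    "ticketing-bot": "ProviderA",
    "code-review-assistant": "ProviderB",
}


def blast_radius(provider, inventory):
    """Return the tools that stop working if this upstream provider goes down."""
    return sorted(tool for tool, dep in inventory.items() if dep == provider)


print(blast_radius("ProviderA", supply_chain))  # → ['ticketing-bot', 'vuln-scanner']
```

Even a spreadsheet version of this answers the question in the talk: who are my tools really hooked into, and what fails together?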
You guys are different because you've built your own model; you're independent of those dependencies. Yeah, but a lot of people aren't like that. Most vendors are doing N-plus-one on an existing stack. Right. About two months ago we wanted to bring in a new tool, and as we evaluated the technology we ran through exactly this: which models are you using?
Yeah. One of them was IBM's, and that's our biggest competitor in the market. Oh, man. Yeah. This data sovereignty provision is something a lot of people miss. When you bring on a vendor and they have AI capabilities, like a Gemini integration for instance, think about the data sovereignty controls they have.
Right. So that idea of the VP putting his report into Gemini: he basically released it into the wild, not realizing what had been agreed to when they signed up to use Gemini in the first place, because data sovereignty doesn't necessarily flow down from the main vendor you're working with. It's the vendor they're working with that might control a lot of that.
So fully understand and map that out, so you know where your risk is and what your blast radius is if there's a data breach or something gets exposed. Make sure you're looking at policies when it comes to AI in your own organization, and have AI governance; that's a big topic right now. In fact, when we talked about GRC earlier: in my opinion, the standard approach doesn't cover this.
A lot of people want to fold AI into their standard acceptable use policies. I think AI is significant enough that it needs its own standalone governance. Build a program around AI specifically, because it is so unique and it touches so many different things. And then advocacy: I'm a big advocacy guy. I'm one of those guys.
I like to, you know, be a bit of an agitator to make people aware. If you were at my last talk, the one on quantum encryption, it scared a few people, and I don't like to scare people. But hey, I'm just bringing awareness: this isn't going away, and it's only going to get worse if we do nothing.
And then finally, look at on-premise alternatives, something you can control. Bring it on-prem, right? The technology has advanced enough now that you don't need massive data-center compute power; you can run this stuff on a laptop. And again, Grid Light has answers here, but this isn't a pitch. If you're interested in how we're solving this, I'm happy to have a conversation, because we've thought a lot of this through. And that's it.
So if you guys want to connect, there's my contact information. Happy to continue the conversation. Do it. I know, I mean, I need to get the beard back.