
From DLP Security to Securing AI: Rebooting Your Data Security Strategy

Presenter:

Rick Holland

Transcript:

By his request, no introduction. That's actually a good introduction; I like that. Howdy, all. Thanks for hanging in here toward the end of the week. Yeah, I didn't want to go through an intro. I've got some joke slides for my intro, so we'll do that.


But I'm going to walk through data security today. When I do talks, I always like to try to understand the audience a little bit. How many folks are in security? Do we have any IT folks in the room? And what about students? Okay. How many folks have been responsible for data security in some aspect of their career?


I'm sorry. I have that experience as well; it's actually why I have DLP scars. I had the pleasure of rolling out the data security program at one of the University of Texas schools, UT Dallas, back in the day, and that was a little painful. So I'll dig into that a little more as we go along.


A little bit of background about me. One, I'm a U.S. Army veteran. I've been very threat focused in my career. I was threat intelligence, an oxymoron, for the Army. Then I was an incident responder at UT Dallas. I ran a cyber threat intelligence team at a startup where I was an early employee.


A company called Digital Shadows. So I ran a cyber threat intelligence team there; I've been very, very threat focused in my career. Then I worked at a managed detection and response provider, so I was threat focused there as well. I've also been a Forrester Research analyst. If you use Gartner instead of Forrester, don't hold that against me.


Often I'll tell people I left the dark side to go to the vendor side, and then they're like, not sure which side really is the dark side. They're both kind of dark, but I had a good run at Forrester. And also, I'm a Texan, of course, so I'm a barbecue guy. I was doing pork belly three ways here.


The one on the far right made it into pork belly burnt ends. The two on the left I did brisket style and cooked like a brisket, so black pepper there. So that's about me. The reason I was going to start off with the threat side of the house is that, like I say, for most of my career


I've been focused on threat actors and that sort of stuff. I haven't really been focused as much on the data side of the house, like where the actors are going. Well, let me say this: I wanted to be focused on the data the threat actors were going after, but it was always a struggle to understand who had access to what they hacked into.


What happened once the actor had access? It's always been a struggle for me, and I think for a lot of people as well. So I'm going to talk about data security, and I'm going to talk about AI security. When I do my part, even though I'm on the vendor side, I always call myself, it's my joke, an anti-vendor vendor.


Because I've been a practitioner, and I've sat through plenty of vendor marketing and stuff. So I'll talk a little bit about the day job, but I really wanted to take you on a journey of data security, how little it's evolved, I think, and where we are today. I also want to give some pro tips. My very last slide is just a link with all the references.


So you don't have to worry about slides; it's all in the deck, and you don't have to worry about trying to capture these references. I'm going to refer out to frameworks and different practical things you can do for your data security and AI security approach, and things to track as well. So I really wanted to leave you with some take-homes along those lines.


And that's what the "what to do about it" component is. So, "the 2000s called," and I really think that's the case. I've just recently moved back into a company that's focused on data security. As I said, I was doing data security at the University of Texas at Dallas.


That was pretty interesting, but I thought I would call back to some analogies here. Does anybody know what this is? Any idea what year this was? 2006 or 7. So this is the "Leave Britney Alone" guy. When I was grabbing this image, I actually saw what he looks like today.


Very different. All right, how many people had Myspace accounts? Facebook opened up for non-students in 2007, right? That's when I made my transition from Myspace to Facebook. But these dates I'm talking about, that was peak DLP. Anybody here been responsible for data loss prevention, or day-to-day data leakage prevention?


In their careers, anybody? Keep your hands up if it's been an enjoyable experience. Okay, usually the answer is no on that one, and it's still the case. Okay, how many people are familiar with this? Now, Rick Astley did not do this song at the heyday of DLP, but Rickrolling became a thing in 2006 or 7.


And 4chan is where it started. The reason I use this analogy is I think it's almost a shame, ridiculous even, that data security has not evolved since these things were the height of pop culture. Think about it: a space that's ripe for reinvention, a pain point people have, security teams going and asking, who owns the data?


The data team saying, "I don't know, you're IT, why don't you tell me?" It's like a Spider-Man meme of trying to figure out the data and who's responsible for it, much less how you're going to put controls in place to protect it. It's a lot like vulnerability management, actually, while I'm ranting, another area where we've not seen much change. I think we've seen more innovation in vulnerability management, but definitely not in the data security space.


You know, part of that is we were using regular expressions. I think every single graphic I have in here is from Claude or ChatGPT; I like cyberpunk themes, and I like the Cyberpunk game as well. So this is a regular expression. Who likes regular expressions? Okay, that's more than normal.


How well do they scale in organizations? Yeah, they're tough. I think regular expressions have been the foundation of data security for decades, and it's not been a very good foundation. When DLP was brand new, Gartner actually defined the term, I think in 2007. Vontu, I used to do some Vontu stuff in my career.


Symantec bought Vontu in 2007 as well. So this was kind of the heyday; lots of acquisitions were happening. RSA actually had a DLP product back then, Websense too; this was the peak, and they were all getting acquired. And I think on the left is how it started: we're going to have this vault, and DLP is going to protect us from all these things.


At the university I worked at, we were just trying to use regular expressions on Social Security numbers, because that's what most universities used back in the day for their student IDs. And that was a struggle. Even doing it on the endpoint only, that was a struggle. In reality today, when we think about data security, stuff's leaking out everywhere inadvertently: S3 buckets that are exposed.
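To make that concrete, here's a minimal Python sketch (the sample strings are made up) of the kind of SSN regex rule those early DLP programs ran on, and why it drowns you in false positives: anything with the right shape matches, whether it's a Social Security number, a part number, or an arbitrary reference code.

```python
import re

# The classic DLP-style SSN rule: three digits, two digits, four digits.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

samples = [
    "Student record: 452-91-0826",   # looks like a real SSN
    "Part number 123-45-6789",       # an inventory SKU with the same shape
    "Ref code 555-12-3456",          # arbitrary reference number
]

for text in samples:
    if SSN_PATTERN.search(text):
        print("FLAGGED:", text)  # all three get flagged; only one matters
```

All three strings trip the rule, which is exactly the scaling problem: the regex knows shape, not context.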


How long have we known that we need to configure S3 buckets appropriately, and we still struggle with it, right? So it's not been great. Now we're going into this new age. It's not new, really; digital transformation is happening in your organizations, and people have been moving to the cloud. That's not a new thing, but there are actually a lot of organizations, especially in the Houston area, that are still more on-prem focused and will continue to be, certainly for certain segments of their environment.


So we weren't really set up to try to defend today's network. How many companies do mergers and acquisitions? A few hands come up. That's a really big problem, because you take on the technical debt of every single company you inherit. Now you have multiple data classification schemas, if you even had one to start with, or one that was effective anyway.


It's really, really tough. It actually makes me glad that I'm not operational right now and don't have to deal with this. I just go in and help customers with it, and then I can step back, you know, moonwalk out of the scene. So, some of the challenges we have today; I've already talked about some of these.


The classification one is one I always like to highlight. To me, if you can't get classification right, and that is the foundation of your data security program, how are you going to have DLP controls that stop Rick from accidentally sending an email out? I'm trying to send it internally to someone whose name begins with an A, and then I send it to a customer or a partner externally by accident.


Right. If the data classification is not right, everything we do after that fails. It's kind of like, for those with a lot of experience in SOCs, right? You get a false positive, and now I'm going to go isolate Rick's host, reset Rick's credentials, terminate all of his session cookies, etc.


Right, you're building your whole response on a house of cards. Now we have AI as well. I've had a big focus on non-human identity, because in a lot of the intrusions I've worked or seen customers work, non-human identities are invariably involved. Back in the day we just called them service accounts, but now we call them non-human identities.


That's tokens; SSH keys are another example, a private SSH key. Now AI is another identity we have to worry about as well, and it's going YOLO all over our environments. And then if you look at the scale of data, that curve of zettabytes up there on the right: how many organizations actually have a good data governance program where you actually destroy data, retire data? I have an analogy.


I used it this morning as well: I look at my phone and I think about my iCloud, my pictures. I never go through and delete the pictures, some random picture I took in Houston this week to post on social media. It just stays there. I just up my iCloud monthly family plan, and my kids don't delete anything either.


And it just keeps growing and growing. It's kind of like your Splunk bill, if we've got any Splunk shops out there; I feel your pain on that one. Third party risk, and this is an important one as well. Third party risk has always been a challenge for us. I think about the Target breach, and folks will remember Fazio Mechanical being the initial access into Target back then.


This continues: Chrome extensions from third parties being used for a host to get compromised. Here is a recent one involving Salesforce. Anybody impacted by the Salesforce one? It actually wasn't from Salesforce; it was from Salesloft, who was using something called Drift. You know, you pop onto a website and that annoying little chat face pops up and says the most useless things, and it's supposed to be a really good way to engage.


It's called a Drift bot. At one of my old companies we tried to bring one in, and you're basically running third party code through that on your website. It's no bueno. A similar situation happened with Salesloft using Drift. The attackers gained access to a GitHub account, and then they were basically able to go to anybody that was running that code through Drift and pull their OAuth tokens; that's how they actually got access into the environments.


Think of what an OAuth token is like. You know, it's a sign that says "come on in." It's kind of like when you've seen AWS compromises through keys, the same type of thing, but here it's coming from a third party, and Salesforce wasn't responsible for it. We had customers scrambling to try to understand what access that account had.


What could they see? What was the scope it had? There were also people reporting that they were able to get to other parts of the organization from this as well. So third party risk and secrets management have been problems for us for a long time. And now with AI, we're going to have AI agents, or many people already do.
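The secrets side of this is at least scannable. Here's a minimal Python sketch that greps text for a few publicly documented secret shapes (AWS access key IDs, GitHub personal access tokens, PEM private key headers); the pattern list is illustrative, nothing like a complete rule set:

```python
import re

# A few well-known secret formats (illustrative, far from exhaustive):
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the names of any secret patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# e.g. a config line accidentally committed to a repo
# (AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID):
print(scan_text("aws_key = AKIAIOSFODNN7EXAMPLE"))  # -> ['aws_access_key']
```

A real program would wire something like this into repo and CI scanning, but the idea is the same: token shapes are greppable before an attacker greps them for you.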


They're in the mix there. So the complexity of this just gets harder and harder. It seems like the barrier to entry for attackers gets lower and lower, and meanwhile for us it gets higher and higher. We released a survey this week, actually, asking about artificial intelligence use within organizations, and here you can see 83% are currently using it.


Which I'm not surprised by; everybody's got it. I think a lot of people don't realize how much it's being used, as well. So we have a discovery problem here too, and I have links to it; it could actually be a useful thing. One of the things I always like to do when I'm trying to get budget for programs is take vendor survey data and put it next to my Magic Quadrant or my Forrester Wave to say, hey, you know, we're not the only ones that lack this visibility.


Here's what the broader market says. I have the links at the end for that survey as well. I did like to throw in a few jokes here too. I'll give you a second to read this one.


I mean, I do think it's pretty awesome. I use Anthropic for a lot of stuff, a lot. I've never been a developer, and it's really been a nice force multiplier for me, especially when I'm trying to troubleshoot things. So there's good and there's bad associated with this sort of stuff, for sure.


One thing I always like to do when I give these talks is highlight some of the recent news. I mean, the pace of change within AI is crazy. I used Google to, again, cyberpunk me there. And, you know, we just saw this week that OpenAI has launched the checkout agent, and a whole protocol for commerce around that.


I can just imagine a talk at DEF CON or Black Hat next year of somebody talking about how they've used and abused that particular protocol, which is out there and perhaps going to make its way into enterprises, procurement teams; that's probably a little further out. This is going to be more consumer based for now, but something to think about for yourself.


And then Meta just rolled out a new one; I mean, everyone's got one. And, you know, Sora 2 just got announced and is getting rolled out. The video stuff that's coming out is crazy. Think about how business email compromise and deepfakes and things like that could use that.


It is crazy. I actually ask Perplexity every day: give me a summary of the top AI stories, what's happening, just to keep up with the news of the week. Is anybody familiar with this OWASP Top 10? How many of you are familiar with OWASP in general? Really good resource; it's been around forever.


Web applications are kind of where I first learned it. But how many people were familiar or aware that they had the one on the left side? Okay, I've found that it's usually the minority of folks, probably like 40% of a room. So there are, what, two, four, six, eight, ten of these. Or I could just read at the top where it says ten, but I wanted to do the math on it.


Some of these are probably more appropriate concerns for an Anthropic or an OpenAI, that sort of thing. But I highlighted some that I thought were more relevant for an everyday defender out there, someone that's using commercial tools, maybe not someone that's managing models. I mean, I do know a lot of folks that are starting to build and roll their own versions as well.


So some of this might extend, but by and large, and I'd be curious whether people agree or not, this is probably, at least for right now, our biggest risk, besides some systemic one where China has poisoned models or the US has done that to an adversary. As far as day to day goes, I'm a security operations person,


I'm a security engineer. You know, I think sensitive information disclosure, and I have an example coming up, is a big one here. I mentioned the supply chain earlier with the Salesloft example; I wanted to bring it up here. Right, OWASP has identified supply chain vulnerabilities here, and I think this will be an area where attackers do get into the mix, given the complexity that's going to be there.


Like I mentioned, Chrome extensions being a third party risk. You think about packages that have been compromised and things along those lines. There's a lot of bang for the buck for threat actors going that route, so something to look at as well. I would definitely take this down: if you haven't looked at it, review it.


It can be helpful as you try to threat model the risk to LLMs in your environment. Anybody rolled out or piloting Copilot right now, on OneDrive or anything along those lines? I won't ask for any war stories, but I'll give you one: people not understanding the permissions of accounts that have access to SharePoint and OneDrive, and Copilot then crawling it and making it available for queries.


Just last week I was talking to a customer who had their C-suite salary data internally available to all employees, and people were reviewing it. Now, some of the C-suite salaries are public anyway; you know what they are because it's a public company, that sort of stuff. But others weren't. So you were able to just query the salary data, just like you'd ask ChatGPT, "I'm in Houston, what's a good restaurant to go to?"


And it just spits it right out. A lot of people are piloting, because of the E5 license; the allure of the E5 license just brings us all in. So Copilot is probably the main enterprise one that I anecdotally see out there.


Microsoft has published guidance, and I have the link to this as well, on how to harden your settings, to make sure you have least privilege, not all privilege, that sort of stuff. So I'd recommend that one to you. This next one kind of strikes me as the S3 buckets for AI, right?


We still see S3 buckets exposed today, and we've known S3 bucket exposure has been a problem for, I don't know, a decade now. So you can imagine this is going to continue too. Again, link in the show notes for this one. How many people feel good about shadow AI detection?


Okay, I want to talk to you. I still think we struggle with shadow IT, organizations with SaaS applications. And I know we've got a lot of network controls that can detect that, but the reality of defending modern enterprises that are complex is that you have limited resources. Everybody in this room that's a practitioner is fighting whatever fire is hottest right now.


You don't have as much time to be proactive. Hopefully, on a positive note, AI is going to be a force multiplier for defenders, and we can start to do some things we haven't been able to do in the past. But shadow-anything has been a problem for us. A couple of examples I have here: obviously people are using the consumer grade chatbots, and I'm going to talk about some low hanging fruit to deal with that.


One I came across: I joined a Zoom call with a friend last week, and they had checked their AI note taker into our Zoom call. It was a friend of mine; I didn't care. I was actually asking, oh, what is this? What are you using?


I'm like, has that gone through security? Has there been a risk assessment? Oh no, not at all. So people are popping these in. I mean, I've used some of these types of capabilities myself, and some doctors use them for note taking; I can see the doctor's notes, and I like it. But do you really want people to have conversations in your enterprise, perhaps around regulated data?



That then goes up to some third party cloud and puts you at risk for exposure you didn't know about. So this is one. Another one I have seen is the MagSafe version; a little pro tip for going into meetings with people is to look at the back of their phones. And of course you don't need the MagSafe one.


You can run apps for it as well, but that's trending up too. So that might just be something to think about if, say, you're having a non-NDA discussion with a service provider of some sort: understanding whether they're recording that conversation or not. I think there's great value in it for both organizations, but it's got to go through the right processes and things like that.


Actually, I use Notion, I use Grammarly, I've used Zapier. But I can only afford so many subscription services; I have Perplexity, I have Claude, I have OpenAI, way too many of these. Now we have a commercial version. I was thrilled when I started at my new job and saw that we had Notion.


The commercial version; I love Notion, all the things. But with these types of tools, employees are not trying to do evil stuff; they're trying to be more productive in their day to day jobs. A lot of times we want to blame employees and shame the users and stuff like that. The real question is, how can we adopt these types of tools in a way that lets our employees be more productive and get their jobs done?


Your teams can as well, but again, so that it's not sensitive stuff going up into these third party applications and services that are out there. Developers: I think CEOs have a love-hate relationship with developers, especially CEOs of technology companies that need the developers to actually ship the product you want to sell, but you want to do it in a secure way.


I'm sure people here have had the challenges with administrative rights for developers and the headaches that can cause, etc. I'm not trying to beat up on developers. Again, we need to find a way to help developers do their jobs and push code in the right way and in a secure manner. But, you know, are your developers just going and getting stuff off Hugging Face and running it locally in their own environments?


Do you have visibility into that? How can you set it up in a way that you have structure? So I want to transition into what to do about it, where I have some more pro tips and things like that. This sounds cheesy, but I think it's a good analogy in general.


So this is a cyberpunk renaissance portrait, basically. Gemini did this one for us. People have struggled with their data security programs for a long time. People have struggled to get money for their data security programs. But what has happened now is that AI is getting pushed down from the top.


I've talked to customers, CISOs, CIOs, who say leadership is telling them we need to use AI. In this case, leadership often has no clue; I don't want to paint with too broad a brush, but many people have no clue what that actually means. We're on boards, and we hear that other boards are using AI and doing stuff, so we need to do stuff, charge full speed ahead, that type of thing.


Smart IT, security, privacy, and data leaders are getting budget from that. I don't think you can look at AI in isolation without looking at data and identities, but they're basically using this AI moment. What is it, never let a good crisis go to waste? That analogy. It's like, hey, the business is interested in AI; how can I tie my projects into AI in a way that's going to let me get things through?


So if you're not thinking about that, especially those of you working on budgets for next year, which is something we're going through right now as well: if there's a broader AI transformation initiative or something like that, how can you plug into it? There's a lot more interest in data security than we've seen in the past, I don't know, 15 years, because of AI.


I have a joke, and some folks have heard this one; I recommend you play it next time a vendor comes in. I call it mean time to AI. And so, how long does it take? I had another joke, which was mean time to CEO apology post-breach; you just kind of track that one.


So, mean time to AI: how long does it take for a vendor to start talking about AI without talking about the outcome they're trying to deliver? Right, AI is a tool in the toolbox. Not all the things need AI. Some people think they do, and maybe we'll get to that point at some point. But when people come in, what problem are they trying to solve?


Is it a hammer in search of a nail type of thing? So yeah, do that mean time to AI thing; I authorize use, or reuse, of that joke. Make them squirm. And I do that to our sales teams too, if I hear a sales rep just jump in without talking about what they're trying to solve.


I'll just make that joke. Everybody laughs, and then the sales rep is like, okay, next time I'm not going to take that approach. So, I mentioned chief data officers. Let me ask this question: how many folks initially outright blocked, to the extent that they could, all AI in the environment and then backed into programs?


How many people have it open and really haven't established the governance and controls? It's usually a big mix. I never want to overestimate where people are on their maturity journey, because the job is hard: lots of competing priorities and things like that. But by and large, I've seen a lot of folks that are very immature in their governance and policies around this, and that historically have not been great at emerging technology.


And now we have probably the most emergent technology there is. So there are a lot of people involved. And as a CISO, I've always wanted to work better with my peers. I had to deal with GDPR, so I had to work with the DPO, the data privacy officer; it was really a great partnership, working across the organization.


We have an opportunity here, because AI touches everything and data touches everything, to work with all these roles in your organization. Now, you may not have all these roles; it may be two people, it may be the CIO and the CISO, and that's why I put "or equivalent responsibilities" at the bottom. But it takes a village here.


And so you work with all these stakeholders. It gives us an opportunity, maybe, to combine budget for things; there are a lot of options. Hopefully you can just plug into an existing governance model that you already have, with AI as a component of it. I have also seen a lot of organizations stand up a separate AI governance committee, which then plugs into their overall governance committee, too.


Just because it's such a big thing. A lot of it also depends on how, I don't know if that's the right word for it, vertical you are. Like, I'm in oil and gas, and this is how we're using AI to improve our processes. Or I'm in health care, and this is how we're using AI to improve patient outcomes.


That's one thing I love about talking to all kinds of customers, always finding things out. I was with an ag company, and we were talking about how they're using AI models now to figure out plantings and things like that. It's really, really cool. But when you have those kinds of business use cases, not cybersecurity, not IT, a lot of these folks will come into play.


So it gives you an opportunity to work with them, especially in situations where you may not historically have had the best relationship, especially with the CIO, because we're in security, and we dump all these findings on the CIO's team on a Friday afternoon before a long weekend. Get ahead of it. Now, there are a couple of things I wanted to point out practically.


I do not expect you to read that spreadsheet that's there; I have a link in the notes. But there are a couple of things I want to point out. The World Economic Forum actually put out a nice piece; this is very strategic, high level. Now, we may have some people in the Illuminati camp on the World Economic Forum;


I get all that. But this is a good piece, high level, and could be good for executives and things like that. The more practical, operational thing, and this isn't just governance, is what I did here with the Cloud Security Alliance. Again, I have the links to this. They have a whole maturity assessment, so you can take this spreadsheet.


I forget how many domains they have in it; I think it's twenty-something, something like that. And you can actually assess your maturity across their domains. What I did is pull out just the governance ones, so you don't have to do all of them; you can pick out the ones that are appropriate for your organization.


And it's getting updated as well, so I think this is a good one to track, and to track changes on. It'll help you build out policies: what is missing, what do we need to do from a governance perspective. I really like that one. It's practical, because some of the stuff that's out there, like NIST, has some very high level things.


But again, those are more for a Facebook or an Anthropic, the folks that are doing the big models and things like that, not the everyday cybersecurity or IT person having to deal with this stuff. So this is a good practical recommendation. Now, as far as what you should do: at my previous organization, we blocked it outright.


And then we started permitting the authorized things. So if you're in that state where you're about to do a rollout for Copilot, and you're going to pilot Copilot, these are the kinds of things you need: build out an AI asset inventory. Now, let me make a CMDB joke.


How many people feel confident in their CMDB, that they understand their non-AI assets, just your regular assets? Yeah, nobody ever raises their hand there, even though we do have some good vendors in that space. I'm not naming names; I have nothing to do with any of them. But I do like that space: going out and doing asset discovery and things like that.


Again, I think if you can't get your assets right, just like I said about classification, you're building on a house of cards. So one, engage with the lines of business. You build that coalition, you're talking to people: what are your plans for AI adoption? This should be on your security leaders' list.


And your IT leaders' list. You know, I'm also a big proponent of using existing stuff. It may sound weird coming from the vendor side, but before you buy anything new, leverage everything you have already invested in. I call it expense in depth, instead of defense in depth, where you just go buy more junk before you've maximized existing investments.


Right? So you've got things that can help with the inventory. You've got your EDR tools and your MDM tools, so you can see what's out on the devices, what's actually being used and installed. Did Rick install a Perplexity client locally on his Mac? You've got the browser extensions; you should be inventorying browser extensions anyway, just to make sure you don't have some dodgy third party extension capturing your credentials, which then end up on a dark web Russian-language site.


So you've got your browser extensions, you've got the AI artifacts from models that are running locally, and you've got the libraries as well, depending on how you're doing development. So there are things you can look at to do this inventory. Now, this could be a point in time.
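As a rough sketch of what that endpoint-side check could look like, here's a minimal example of flagging AI tooling in an EDR/MDM software or extension export. The keyword list is a hypothetical starting point, not anything from the talk; tune it for your environment.

```python
import re

# Hypothetical keyword list of AI tooling -- illustrative, not exhaustive.
AI_PATTERNS = re.compile(
    r"chatgpt|openai|copilot|claude|anthropic|perplexity|gemini|ollama|llama",
    re.IGNORECASE,
)

def flag_ai_artifacts(inventory):
    """Return items from an installed-software or browser-extension
    inventory whose names suggest AI tooling worth triaging."""
    return [item for item in inventory if AI_PATTERNS.search(item)]

if __name__ == "__main__":
    # Sample export: installed apps plus browser extensions from MDM/EDR.
    sample = ["Slack.app", "Perplexity.app", "uBlock Origin", "ChatGPT Sidebar"]
    print(flag_ai_artifacts(sample))  # flags Perplexity.app, ChatGPT Sidebar
```

The real work is in the export itself; the matching is trivial once you have the data pulled centrally.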


These are disparate tools, so it's actually probably a good use case for AI to help manage this discovery process. Building on further examples, you've got your Docker images, your Kubernetes clusters, and the network traffic. The network is probably the first place people go; for those that have done this, maybe number one you do endpoint.


Maybe you started with EDR. So here you've got CASBs, next-gen firewalls, your secure web gateways, your Zscalers, your DNS and proxy logs. You've got a lot of network avenues. So look, I wouldn't just use one of these; I would look at every single one you have available to you. You can do scans.
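To make the network angle concrete, here's a hedged sketch of mining proxy or gateway logs for AI-service traffic. The domain list and the space-delimited "timestamp user domain" log format are assumptions; adapt to whatever schema your gateway actually emits.

```python
# Hypothetical list of AI-service domains -- extend from your own
# category feeds or threat intel.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "www.perplexity.ai"}

def ai_service_users(log_lines):
    """Map each AI domain seen in the logs to the set of users who hit it."""
    hits = {}
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits.setdefault(parts[2], set()).add(parts[1])
    return hits

if __name__ == "__main__":
    logs = [
        "2025-06-01T09:00:00Z rick chat.openai.com 443",
        "2025-06-01T09:05:00Z dana claude.ai 443",
        "2025-06-01T09:06:00Z rick example.com 443",
    ]
    print(ai_service_users(logs))
```

Even this crude a rollup answers the shadow-AI question the lines of business usually can't: who is actually using what.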


GitHub, GitLab, Bitbucket: look in those repos as well. You can look at your CI/CD pipeline tools too and see whether there are machine learning or AI build and deploy steps that your developers have built in. And then, of course, you can look at your billing and see if you've got some GPU spikes somewhere as well.
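For the repo angle, a minimal sketch of scanning a requirements.txt-style manifest for ML/AI dependencies; the library list is illustrative, not a complete catalog, and real repos will also need Pipfiles, pyproject.toml, package.json, and so on.

```python
import re

# Illustrative ML/AI package names to flag -- extend for your stack.
ML_LIBS = {"torch", "tensorflow", "transformers", "openai", "anthropic", "langchain"}

def ml_dependencies(requirements_text):
    """Return ML/AI package names found in a requirements file's text."""
    found = set()
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Package name ends at the first version/extras/marker character.
        name = re.split(r"[=<>!~\[; ]", line, maxsplit=1)[0].lower()
        if name in ML_LIBS:
            found.add(name)
    return found

if __name__ == "__main__":
    print(ml_dependencies("torch==2.1\nrequests>=2\n# pinned\nopenai"))
```

Run it across cloned repos or via your code-hosting API and you have a crude but useful AI-in-development inventory.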


You can get that through your cloud providers, of course, but you can also just work with your accounts payable team and get it there. So I think you need to cast a wide net and use every tool at your disposal, because shadow AI is just going to continue, and it is a big challenge for folks.
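The GPU-spike idea can be sketched very simply: compare each day's spend to a trailing average. The window and threshold here are arbitrary assumptions, and a real billing export (cloud console or accounts payable) would need parsing first.

```python
def spend_spikes(daily_spend, window=7, factor=3.0):
    """Return indices of days whose spend exceeds factor x the
    trailing-window average -- a crude shadow-GPU detector."""
    spikes = []
    for i in range(window, len(daily_spend)):
        baseline = sum(daily_spend[i - window:i]) / window
        if baseline > 0 and daily_spend[i] > factor * baseline:
            spikes.append(i)
    return spikes

if __name__ == "__main__":
    # A week of normal spend, then one day of someone spinning up GPUs.
    print(spend_spikes([10, 10, 10, 10, 10, 10, 10, 100]))
```

Anything this flags is just a lead to chase with the business unit, not proof of shadow AI on its own.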


So once you've done all that (and this could be a whole talk in and of itself, and I've got, what, 13 minutes left, so I'll have to do a part two at some point in the future), you've got to do your regular risk assessment, just like we would on anything else.


Right? The company wants to switch ERP solutions; you're going to do a risk assessment. You need to do the same thing on the AI side of the house: likelihood, impact, just your fundamental risk management stuff. If that's not an area you're focused on, you can go talk to your risk management team. Your GRC folks are the ones that are there; work with them on it.


Check the risk register. Because I know every risk register I've ever been responsible for was always up to date. But it is someplace you should check, jokes aside. My risk register was up-to-date-ish for probably my top 30 most important things, depending on the size of your company; there was some stuff I just didn't have the resources to do at scale.
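That fundamental likelihood-times-impact arithmetic looks like this; the sample entries are made up for illustration, and real programs will weight and normalize differently.

```python
def rank_risks(risks):
    """risks: iterable of (name, likelihood 1-5, impact 1-5).
    Returns (name, score) pairs sorted highest risk first."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical AI risk register entries.
    register = [
        ("Shadow AI browser extensions", 4, 3),
        ("Sensitive data in public LLM prompts", 3, 5),
        ("Unapproved GPU spend", 2, 2),
    ]
    print(rank_risks(register))
```

Nothing novel there; the point is to run AI risks through the same machinery your GRC team already uses.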


I'm not up to speed on what's happening with AI in the GRC space, but maybe there are some novel ways to do this now. Talk to the lines of business again; you can see a theme here of working with our peer organizations outside of security, and it is really important. The one thing I don't like about these kinds of risk scores is that they're a point in time.


It's kind of like your pen test that you get once a year, and then everything changes the following week. That's why I do like the idea of the breach and attack simulation space, or some of the AI red teaming stuff that's coming out these days. But a risk score is a point in time. And if you go back to my example of how much change is coming, with that e-commerce agent from OpenAI, new video generation, it's moving so fast that a point in time goes stale. Just like deepfakes: what you could do with a deepfake a year ago versus what you can do with deepfakes


now is astounding. That's actually a really solid tip if you're trying to get time with your executive leadership: do a deepfake demo. I've done one for a customer where we did a deepfake of their chief risk officer that introduced the risk committee session I was then going to present at. And at the end of the talk, I had that deepfake say, hey, Rick, this is the best presenter we've ever heard.


We don't normally do this, but we want to actually pay Rick for this talk, it was so good, and give him a stipend. That can capture attention, so that's really good. And then you have to update your policies. But policies are policies, right? How do you know that your policies are actually effective and working as expected?


That's always a challenge for us, and we'll talk about some things that help on the vendor side in a little bit. But as far as things you could do now: you can do this kind of stuff right off the bat, your immediate lockdown, and then you start carving out exceptions, doing the whitelisting of the things that are out there.


So again, I broke it down into control areas and things I think you could start with that you already have. Like, what about Duo, or whatever your combination on the MFA and SSO side is, whether you're using Ping or something else: what can you do there? For me, in my day job, I've got my authorized ChatGPT that I launch from there.


Any other ChatGPT isn't going to work for us. Restrict who can create API keys. By the way, we should probably restrict who can cut API keys, just full stop.


Also, going back to working with customers on intrusions: non-human identities, API keys, you've got to have a good inventory, and you've got to be monitoring them. There's such a risk there just in general. In this use case it's more about trying to keep people from doing the shadow stuff, but you need to have good governance of API keys regardless.
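One concrete governance check is flagging keys that haven't been used recently. This is a hedged sketch with assumed field names; in practice you'd feed it your IdP's or cloud provider's credential report.

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys, now=None, max_idle_days=90):
    """keys: iterable of (key_id, owner, last_used) with tz-aware
    datetimes. Returns keys idle longer than max_idle_days -- candidates
    for revocation or at least a conversation with the owner."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [k for k in keys if k[2] < cutoff]

if __name__ == "__main__":
    keys = [
        ("key-ml-pipeline", "rick", datetime(2025, 1, 1, tzinfo=timezone.utc)),
        ("key-chatbot", "dana", datetime(2025, 5, 20, tzinfo=timezone.utc)),
    ]
    print(stale_keys(keys, now=datetime(2025, 6, 1, tzinfo=timezone.utc)))
```

The 90-day threshold is an arbitrary assumption; pick whatever your policy actually says.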


Also, step-up authentication. If someone is going to be authorized to do a particular activity, just like you'd have an IT administrator step up authentication, you can do the same thing for some of your use cases. Right off the bat, at the bottom, you can go into the network controls. I kind of alluded to this before.


You can just start blocking the domains, the sites people are accessing for this sort of stuff. And then you can use DLP to block sensitive data. But I think you have to be really surgical with DLP today, because blocking the wrong things due to bad classification and false positives can be a career-limiting move.
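To illustrate why surgical matters, here's a deliberately naive DLP-style check for prompts or uploads headed to AI services. These regexes are illustrative and will false-positive, which is exactly why you tune before you block.

```python
import re

# Naive, illustrative patterns -- a real DLP engine uses validation
# (e.g., Luhn checks) and context, not bare regexes like these.
SENSITIVE = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible SSN"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "possible card number"),
    (re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"), "private key material"),
]

def dlp_findings(text):
    """Return labels for sensitive-looking patterns found in text."""
    return [label for pattern, label in SENSITIVE if pattern.search(text)]

if __name__ == "__main__":
    print(dlp_findings("customer SSN is 123-45-6789"))
```

Start in alert-only mode on findings like these, measure the false-positive rate, and only then graduate to blocking.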


For security leaders, you've got to be careful there. On the endpoint side, there's a lot you can do. You can stop execution of the binaries associated with the different services. You can whitelist stuff to only run at certain times. DLP again: even though I make fun of DLP, there are some options there.


In the cloud, you can deny creation of the services. The one thing I want to say here: my thought process is that a lot of this denial is going to leave room for authorized users, right? This just isn't anybody who can YOLO it up and do this sort of stuff.


It's going to be the authorized developer for that business unit, whatever the case may be. So I don't want anyone to think I've got a department-of-no perspective at all. It's: we need to put some controls on this, but this is how we're going to do it effectively, and we're going to do it effectively through endpoint, cloud, network, and other controls that are out there.


If you've got your Wizs of the world, they can help you out with some of this. If you have API gateways, those could actually be a good source in your discovery as well, to look at what API calls are going out. And I see again, with E5 and Microsoft, some of the allure of Microsoft and their solutions there.


So API gateways could also be used in your inventory component. There are a couple of things folks may not be aware of that I wanted to point out as well, just stuff to track. Is anybody familiar with ENISA? Certainly, if you work for a European company, you might be; it's kind of like CISA, in a way, for Europe.


They actually do some really good stuff: threat intel research, threat reports, that sort of thing. Their guidance is a little higher level, but it'll give you some risks to think about. Again, I have the link to this at the end. And this is a good one to track; it came out in 2023.


So they've been thinking about this a little bit longer. Is anybody tracking what NIST is doing on the more practical side, like tying AI into the Cybersecurity Framework? They have a profile, so you can go to the profile page; I have the link for this as well. They actually just had a number of webinars on it last month.


This will go out to industry to seek feedback, but ultimately it's going to result in more practitioner-focused guidance on how to secure AI and align it to the different components of the Cybersecurity Framework. So this is a really good one to watch: maybe set up a Google Alert, or, I actually have Perplexity doing this, where it'll go out and check and give me a summary on Sunday morning of whether anything new has come out. So you can use some automation here.
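The "alert me when it changes" idea can be sketched as a simple content-hash check. This is an assumed design, not anything from the talk: the fetching itself is left out, so wire in your own HTTP client to pull the page you care about.

```python
import hashlib
import json
import pathlib

def content_changed(name, content, state_file="watch_state.json"):
    """Return True if `content` differs from what we saw last time for
    `name`, persisting digests in a small JSON state file."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    path = pathlib.Path(state_file)
    state = json.loads(path.read_text()) if path.exists() else {}
    changed = state.get(name) != digest
    state[name] = digest
    path.write_text(json.dumps(state))
    return changed
```

Run it weekly from cron against the pages you track and notify yourself only when it returns True; raw HTML churn will cause false alarms, so diffing extracted text is a reasonable refinement.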


Old school with Google Alerts, or AI as well. I like the stuff that NIST puts out. But think about where we're at today and all the challenges: the velocity of data exploding everywhere, AI and every new thing that's coming out, and agentic AI, a whole other area as well.


You've got full-on agentic SOC personas being built out and going to market. How are you going to do governance on that? What are these agents touching? I mean, everyone's going to have agents in all the things. The way we've done this hasn't worked for 15-plus years now; it's certainly not going to work in the new environment we have.


There are three ways I think we need to think about it, and they all go together. I was struggling late last night; I was trying to put data at the bottom because I think data is the foundation of it. I work for a data security vendor, of course, but I still think data is really, really key, because: what identities have access to the data?


AI is an identity as well, right, with non-human access. So I think we have to think about data, identity, and how the AI is accessing it, plus how the regular humans are accessing it, because we're still going to have breaches and intrusions that don't have anything to do with AI; it's just how they're getting in, that sort of thing.


So I think these are all things to think about, and not in isolation. I'm going to skip through these because I'm not super vendor-ish, but I put some in if you are looking at the space. What we do is help you, through AI security posture management, deal with these things in a more scalable way.


And we have data classification at the bottom of it. So I've got 14 things here if you're looking at the space and kicking the tires. A lot of folks will start with DSPM in this area; the DSPM capabilities really translate well to AI. So I'm not going to go through these in detail; I don't want to do that.


But they're in the slides, so you can see them there. Basically, I've got 14 things to consider as you're looking at the space. As we wrap up, the main thing I want to say, and people will be familiar with this statement, especially people that are shooters, hunters, that sort of thing:


I think one of the problems we've had with data and data security since, since MySpace times (right, that joke) is we've tried to go too big. Like, I tried to roll out full-on enterprise DLP to a university across every single channel, and it was hard enough just to get the endpoint piece to work, much less everything else.


So as you're looking at data security and AI security, you're also trying to prove value to the business. If they're going to give you money for AI initiatives, we'd better have success with it, because, kind of like the Bobs from Office Space, you're going to get the budget and then the Bobs are going to come in. Hopefully that reference still really works.


It's an old movie. They're going to come and say, "What would you say you do here?" If you got all this money from leadership for AI and then you can't prove value because you tried to boil the ocean, that's not going to be real good. So, to use the SharePoint example: you're going to be piloting different things inside your environment.


Why don't you pick one of those pilots? Let's get some success there. Let's show the wins: how we were able to limit access, how we were able to accelerate whatever business deliverable you were hoping to get out of that initiative. So start small, get some quick wins: think big, start small, and then grow over time.


I think that's actually good advice for any initiative we have: how can you have success? It's a tough balance, though. If you're a security leader, they say the average tenure is two years for a CISO, right? If you only have two years, you're going to want to move fast.


But if you're going to do that, you've got to move smart. I've kind of talked through this already; this is just more detail on how our platform actually helps with it: discovering the data, helping with governance, making sure the controls are actually working appropriately, doing ongoing monitoring, and then tying into the DLP side of the house.


We can talk more about that, or hit me up and I can get a demo set up with the right folks. One thing, because we are in Texas: we do have a data security and AI conference in Dallas, November 12th and 13th, and you can sign up for a streaming version; we may stream it this year.


It'll be my first time going to it as an employee; I went last year. We actually have a data security certification for the community, and we're piloting an AI certification right now, which, just to level set: in the previous talk, there was discussion about how to upskill yourself.


We're trying to help people understand AI better, just in general, at a macro level. So that's there. And this is my "one more thing" slide. Everything that I referenced, everything you were taking pictures of, those are the links to it. Obviously the links don't show up very well in the purple that's out there.


So you've got that. You can get the slides from HOU.SEC.CON, or else you can hit me up on LinkedIn; I'll be happy to share the slides there. So with that, I thank everyone for your time.


So our next speaker is running a little bit late, so if you want to take questions, we do have time for that. Does anyone have questions?


And it's okay if you don't know. Thank you. Nailed it.
