Presenter:
Transcript:
Good afternoon, HOU.SEC.CON. Welcome to the closing keynote. We have got a great speaker with some really interesting concepts to share. Hope you all have had a great day one. Don't forget we've got day two tomorrow on deck. It's going to be even bigger, badder — wait, no em-dashes in speech; it's going to be even bigger and badder, with a lot going on. So hopefully you've gotten plugged in, you've made some new friends, you've learned something new, you've taken some notes, and you've had a productive day.
You likely have heard of our closing keynote speaker as a frequent speaker, author, and media contributor. He hosts the RSnake Show podcast, publishes the RSnake Report, and is the author of AI's Best Friend. He is currently the CTO of Root Evidence and Managing Director at Grossman Ventures. Previously he was a co-founder and CTO of Bit Discovery and Vice President at one of my Houston favorites, WhiteHat Security.
Please join me in welcoming to the stage Robert "RSnake" Hansen.
Hello, everybody. Can you hear me? Excellent. Well, I have 60 slides to go through and 45 minutes to do it. So if you hear me breathe, just like, you know, give me that signal. Right. Okay. So I already gave my intro, so I'm skipping that one. Right. There's a lot of caveats to this very particular presentation, because there's a lot of things that I could have gotten wrong.
A lot of the data sources are not my data sources. I didn't create them, I have no say in how they were created, et cetera, et cetera. And I did not validate this information with MITRE, et cetera. So I just want to make sure you know that I highly recommend you do your own homework and actually do your own analysis.
Irrespective of anything you see here, take it with a big grain of salt. However, I did talk with two CVE board members, the head of EPSS, and the CEO of VulnCheck, and all of them have seen various chunks of all of this and the original data sources. I've never heard anyone say anything about where I'm wrong.
So we'll see. So Jeremiah Grossman absolutely hates it when I talk about this analogy, so forgive me, but I think it's the best way, without trying to be overly rude, to explain what I think about CVSS. CVSS, if you're not familiar with it, is basically a rating system where you rate things from 0 to 10, in increments of 0.1.
So it's 101 steps on a scale. And I said, okay, what if I could create an analogy for that? So I came up with something called the Common Fruit Scoring System. We all know what fruit is, I think. Right-ish. But it's a little complicated, because how would you define a ten on a fruit scale, and a zero on a fruit scale?
Like, what would that mean to you? So I tried to come up with attributes like edibility. You typically think of fruit in terms of edibility, nutrition, flavor, stability, and sweetness. But that doesn't really tell you whether something actually is a fruit, or whether you'd buy it. Like, around Halloween, I'm going to buy a pumpkin.
Right? Because that's what you buy at Halloween. I don't really care if it's sweet or edible or whatever; that's what I need. If I'm going to make a fruit salad, I'm not going to buy a tomato. But if I'm going to make a salad, I am going to buy a tomato. So it doesn't really matter.
From the adversary's perspective, these numbers don't matter. If you're buying fruit, this fruit scale doesn't matter; these are kind of arbitrary things we try to add on top of it. So if you take all of these numbers and add them together, you come up with a score. And similarly, CVSS has a set of numbers that you can add together.
For some reason. It's not really clear why you do that, but you put them together and that comes up with the score. So while that may not be the best analogy, I think it's a good one to keep in mind. So I started this whole deck with this very simple question in mind: why do we keep getting hacked?
Like, why do we still keep getting hacked? I've been in computer security for 30 years, and this number is getting worse, not better. We are getting worse at security somehow, and this graph is starting to come together and explain what's going on a little bit. We have so many new CVEs coming aboard — to the magnitude of something like 50,000 a year at this point.
Every time I check, there's another thousand. I go away for a couple of days and there's another thousand. I'll say it's 309,000, and then today it's 312,000. This data, by the way, comes from today. So that's getting worse over time. We have the same approximate amount of time to do things, but we have way more things to do, and we're spending a lot of time on a bunch of things that may or may not actually matter.
So one thing you're going to notice as we go through this deck is that there are different versions of CVSS. If you're not familiar with that: there's version two, version three, version 3.1, and version four. No reason given for why they did that, but whatever — different versions. So if you look at any individual vulnerability, you will see that on one scale it's one number, and on a different scale it's a different number.
And that level of inconsistency is just one small flavor of the kinds of terrible things that are happening within the CVSS world, where things are getting mis-numbered or changed or whatever. There are also inconsistencies in how things are named. So say you're trying to find all the vulnerabilities in an SBOM related to some company.
So, for instance, Red Hat. You might check for "Red Hat, Inc." — comma, period — because that's the company name, and that's the most likely thing to give you all the data, you'd think. Right? Maybe. But there's also Red Hat without the comma, the "Inc", and the period; there's Red Hat with brackets around it for some reason; Red Hat all capitalized; no space; all lowercase; camel case, I guess; or whatever.
Red Hat with a .com in it. And my personal favorite: Red Hat with a space at the end. And because there's no consistency in this, it makes it very difficult if I'm trying to figure out what I'm vulnerable to. You can actually do this search within the NVD, which is sort of the official thing the government runs.
If you search for "red hat" with a space, you get different results than "redhat" with no space. I think the first one was 18,100 and the second one was 11,000 when I made this screenshot, like six months ago or something. So you're going to see massive differences depending on what you're searching for.
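If you try this yourself, you end up writing a normalization step something like the sketch below. The variant list comes from the talk; the `normalize_vendor` helper is hypothetical, not an official NVD feature, and real vendor data has edge cases this doesn't cover.

```python
import re

def normalize_vendor(name: str) -> str:
    """Collapse the many spellings of one vendor into a single key.

    Lowercase, strip surrounding whitespace and brackets, drop legal
    suffixes like "Inc.", drop a trailing ".com", then remove all
    remaining punctuation and spaces.
    """
    s = name.strip().lower()
    s = s.strip("[]")                                      # "[red hat]" -> "red hat"
    s = re.sub(r",?\s*\b(inc|incorporated)\b\.?$", "", s)  # drop legal suffix
    s = re.sub(r"\.com$", "", s)                           # "red hat.com" -> "red hat"
    s = re.sub(r"[^a-z0-9]", "", s)                        # remove spaces/punctuation
    return s

# The spellings mentioned in the talk, all of which should mean one vendor.
variants = [
    "Red Hat, Inc.", "red hat", "[red hat]", "RED HAT",
    "redhat", "RedHat", "red hat.com", "red hat ",
]
print({normalize_vendor(v) for v in variants})  # a single key for all variants
```

With a key like this you can at least group records before counting, instead of trusting whatever string the submitter typed.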
So there's really no way to know this problem exists unless you've already looked at the data. And that's why I'm saying we really need to look at this data; it's pretty interesting, and bad. There are also weird things like this, where drive-by downloads in Internet Explorer will be marked as a network-based vulnerability.
Well, anything that's on a network could be considered a network vulnerability, really. This should be classified in a whole different category — something like drive-by. Any attack that requires user interaction really should have its own categorization, because that's a very different class of attack. But there's no such thing, at least in CVSS version three.
There's also a large number of vulnerabilities that aren't scored at all, which may be a bit of a shock to you. So NVD might score something, but it doesn't make its way all the way through the system. So I can't just ask the question — look at this data and tell me the score of this thing — if I'm pulling from MITRE.
From MITRE, about 60%, as of this morning, are not scored. That's, what, two-thirds of the vulnerabilities with no number at all. I mean, that's just crazy. And that doesn't include things that, for whatever reason, do not have a CVSS score, period, because they don't have a CVE. There are things like Python injection, which is something I came up with a while ago.
I submitted it to Red Hat, and they said it's not a CVE because it's expected behavior that people can write bad code. Okay, that's fine. So it doesn't get a CVE. So even if something is vulnerable, you may never get to see it in this corpus. SQL injection — a lot of the time, SQL injection will never appear there,
despite the fact that there might be a vulnerability, for whatever reason. There's one example I saw where a library that was vulnerable was in 40,000 different applications. Those 40,000 applications were not marked as vulnerable or given separate CVEs, so those don't count. And then a whole bunch of other stuff.
So this list of things we aren't scoring and don't know about in the CVE catalog is enormous. Another thing: there is a base score, which you would assume is the score. It's actually whoever wrote the last score, which is a little confusing. So if you have two different groups scoring something, the last one wins. But specifically, if CISA or one of the CNAs marks it, they're the ones who win.
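That override behavior might be sketched like this. The record shape here is hypothetical and simplified, not the real NVD JSON; the point is only that a CNA or ADP entry displaces NVD's number as "the" base score, so NVD's own score has to be pulled from a separate data set if you want to compare them.

```python
# Hypothetical score records for one CVE from multiple sources.
records = [
    {"source": "nvd", "base_score": 9.1},
    {"source": "cna", "base_score": 5.3},  # last writer, higher precedence
]

# Assumed ranking: CNA/ADP entries outrank NVD's own score.
PRIORITY = {"cna": 2, "adp": 2, "nvd": 1}

def effective_score(records):
    """Return the score a consumer actually sees: highest-priority
    source wins, with the latest entry breaking ties (last writer wins)."""
    idx, rec = max(
        enumerate(records),
        key=lambda kv: (PRIORITY[kv[1]["source"]], kv[0]),
    )
    return rec["base_score"]

print(effective_score(records))  # 5.3 -- NVD's 9.1 is no longer visible
```

Under this scheme, the only way to audit NVD against the CNAs is to keep both feeds and diff them yourself.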
So NVD might have had their score, and then it gets killed. So I can't go back and see how well they're doing. I can't look at NVD in the same data set; I have to pull in separate data sets to figure it out, which is kind of annoying. So this is a Venn diagram of the overlap between CISA and NVD.
You can see that CISA's set is a very small subset of the total number of scores NVD has done. There's a little bit they have done that NVD hasn't done, which is kind of interesting. But in the vast majority of cases, CISA is worse. And for the cases where they're worse — this is the no-score coverage for NVD.
It's 7.2%. I think the vast majority of these are things that have been disputed, or that they've just never gotten around to scoring, or that are brand new, or something like that. So this number should be relatively small, and it is. I think NVD is doing an okay job. Okay. This is an eye chart, but it's basically CVSS
scores from NVD versus whoever the CNA or ADP is. And for this to be a good graph, ideally everything should be along that center line of agreement. Because things are all over the place, it means they're actually disagreeing with one another. So NVD might say something is really critical — you see a whole bunch of things on the ten, way up at the top.
But there are also some that are way at the top and way over to the left, which means the CNA said, I don't think this is a very high score, while NVD is saying it's a ten out of ten, a crazy-high score. So they're not just slightly disagreeing about some of these.
They're way disagreeing about some of these. There's also a whole bunch that are marked as active or inactive in terms of exploitation, and these fluctuate over time, which is kind of odd. You'd think it would be basically 100%, or very high, but it's always around half a percent or so that actually have some level of exploitation.
This feeds into CISA's list, I believe, to some degree — the CISA KEV list. Anyway, about 10% have references to public exploit code. This is code that has been published in the wild, where someone could download it, run it, and actually attack somebody with it. You'd think this would be a perfect overlap with the vulnerabilities that are actually being used.
But it turns out the vast majority of exploit code is for stuff that doesn't matter. So the exploit code is sitting there, the adversary is looking at it, and they're like, yeah, I don't care about this vulnerability; it doesn't give me the access I want. Which is really telling. That means there's something going on in the brain of an adversary: hey, I want certain types of vulnerabilities.
Those vulnerabilities have to scale linearly, I have to be able to run them all at once, and they have to work everywhere, uniformly. And for whatever reason, the vast, vast majority of CVEs do not fit that. So — has everyone heard of KEV? Show of hands? Maybe 5% of you. 10%. Okay. Well, KEV is supposed to stand for Known Exploited Vulnerabilities, right?
And there are a whole bunch of other KEVs out there besides the one you might be thinking about. There's CISA's KEV, there's VulnCheck KEV, there's KEV Intel, which just recently went under, and I'm sure there are others. But importantly, CISA's KEV is the list of vulnerabilities that they say are actively in use. So "known exploited" means it has been exploited, right?
That's what it means. Does that sound like it makes sense to everybody? Okay. The problem is, it is not that. What it is, is the known exploited vulnerabilities for which there is an easy patch. You see the difference between those two things? That's night-and-day different, right? Because there's a huge number of vulnerabilities that they do know are being exploited, but they just don't have a good patch for.
So they have no way to incentivize people to go fix those things, because there's nothing for them to do. They're like, well, that sucks, but what do I do about it? Right? And so this is bad. I've actually talked to a couple of people about this, and what's actually happening is a failure of incentives internally.
There's a bunch of government contractors who run these things, and they are incentivized to close vulnerabilities within a certain window. If they don't, they don't get a bonus or whatever. So you don't want the window to be too far out, because then your exposure is huge; but you also don't want it too close in, because then they might go, oh, I'll never get there.
There are only, like, two days to fix it; I'm never going to get there, because either the patch does not exist or it's just too hard or whatever. So they say forget it, and they never get around to fixing it. So they're in this weird battle of trying to get the incentives into just the right sweet spot to get these things fixed.
And it's leaving us wide open on many of these vulnerabilities. Okay. So this is the KEV population versus the total population. These represent 1,400 out of — sorry, out of 312,000. Right. So that ends up being 0.46 percent of all vulnerabilities that have been put on this list, the known exploited vulnerabilities list. So the vast majority — 99.5-ish percent of vulnerabilities — are not known to be exploited, if you believe CISA.
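The arithmetic behind that slide, using the talk's rounded counts (the real figures move daily, so treat these as illustrative):

```python
# Rounded counts from the talk: ~1,400 CVEs on CISA's KEV list,
# out of ~312,000 CVEs total.
kev_count = 1_400
cve_total = 312_000

base_rate = kev_count / cve_total
print(f"{base_rate:.2%} of CVEs are known-exploited")  # roughly half a percent
print(f"{1 - base_rate:.1%} are not")                  # the "blue stuff"
```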
Right. So then what is all that blue stuff? What are we working on? Why are we spending so much time on that, if CISA is right? And so you're thinking, well, maybe CISA is not right. Okay. So this is the average if you took all of the vulnerabilities over the entire corpus of the CVE library.
This is what it looks like. And the weird thing — sorry, this is specifically for the CVEs — the weird thing is that it's been fairly static. It looks like it's always pretty high. I think that kind of makes sense, because certain vulnerabilities rank higher in CVSS terms because they are remotely exploitable, there is an exploit payload, etc. So they tend to tweak the knobs slightly upwards, though not all the way to ten necessarily.
Notice that it isn't ten across the board; it's, you know, seven or eight. But then I started looking at these other graphs, and this is what sent me down a crazy rabbit hole. The top is an example of all of the CVEs by their scores, from 0 to 10, and the bottom is all the KEVs from 0 to 10.
Right. Does anything stand out to you about these graphs at all? Does anything look weird to you? Yeah. Yeah.
The bottom — yes, the KEVs are skewed higher. I think that makes sense. Some of the KEVs are down at, like, 3 or 4 or whatever, which is kind of interesting, by the way. It means the bad guys don't give a crap about the highs and criticals; they'll use whatever they can use.
That's a good thing to notice. But the thing that stood out to me is that there isn't a bell curve. It's also not flat. And it's also not evenly distributed across the thing. I didn't know what I was expecting to find, but this was definitely not it. And the other thing that's super weird is there are huge spikes.
Massive spikes, and then it's down to almost nothing, and then huge spikes, and then down to nothing. That's very odd. If you spend enough time looking at a population of anything, you're supposed to see some sort of bell curve, or some sort of flattening, or some sort of randomness that ends up being normalized into a flat line, right?
But it's not happening. The more I run this, the longer I run this, the more it looks the same — it keeps creating these crazy spikes. Okay, so what the hell is going on? So you look at the density of each of these and actually break it down. This is slightly old data, from about six months ago.
You'll notice that certain scores, like CVSS 7.8, hold around 7% of all vulnerabilities at that single score. That's a huge chunk — that's nearly 10%, way up there, right? And then other scores, like the ones on the far side over on your right: there are none. There's literally none. And it's blowing my mind — what is happening here?
Am I messing up the data? What's going on? Well, no, there's something happening here. Something very weird happened, and I found this one vulnerability — this is what cracked it for me, when I sort of figured out what's going on. So, if you look at the bottom: the impact score is 6, and the exploitability score is 3.9.
So if you add those two numbers together, what does that equal? 9.9. But the base score is ten. So what's going on with that? Right — there's something weird happening here. I'm like, is there some weird typo? Is this a programmatic issue? What's happening here? My math brain was exploding. Okay, well, here's what's going on.
So they have this crazy algorithm. It isn't just addition. You'd think — like the Common Fruit Scoring System — it's just addition, right? You take a bunch of numbers, you add them together, and that comes up with the number. No, no, no. It's nothing like that. It's way more complicated. It's this much more complicated algorithm that has rounding functions.
It has a multiplier of 1.08. So it creates this weird math. And I found this one website — I'd be hard-pressed to find it again — where some guy did the math and actually allowed you to tweak all those numbers. And if you change it to any other number, it becomes less of a bell curve.
So someone, somewhere said: we want this thing to look kind of like a bell curve. So they created this math to make it do that crazy thing we just looked at. Isn't that weird? Does anyone find it very strange that we're using artificially created math on top of something that's ultimately subjective anyway — someone's feeling about how bad a vulnerability is?
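The "crazy algorithm" being described is the published CVSS v3.1 base-score equation. A minimal sketch of just its final step (the impact and exploitability sub-scores are computed upstream; this is not code from the talk) reproduces the 6.0 + 3.9 = 10.0 puzzle: for a scope-changed vulnerability the sum is multiplied by 1.08, capped at 10, then rounded up to one decimal.

```python
def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest one-decimal number >= x.
    Done in integer math, per the spec, to dodge float artifacts."""
    i = int(round(x * 100_000))
    return i / 100_000 if i % 10_000 == 0 else (i // 10_000 + 1) / 10.0

def base_score(impact: float, exploitability: float, scope_changed: bool) -> float:
    """Final step of the CVSS v3.1 base equation."""
    if impact <= 0:
        return 0.0
    s = impact + exploitability
    if scope_changed:
        s *= 1.08          # the multiplier the talk mentions
    return roundup(min(s, 10.0))

# The puzzle from the slide: impact 6.0, exploitability 3.9, base 10.0.
# With Scope: Changed, 1.08 * (6.0 + 3.9) = 10.69, which caps at 10.
print(base_score(6.0, 3.9, scope_changed=True))    # 10.0
print(base_score(6.0, 3.9, scope_changed=False))   # 9.9
```

So 6 plus 3.9 really can equal ten; the rounding and the 1.08 multiplier are doing the rest.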
Anyway, it's terrible. So why can't we get a CVSS 9.5 in CVSS version three? Well, the reason is that if you're trying to land in that very specific spot between 9.4 and 9.5, where there's a multiplier of 1.08, and you try to solve for that value, it turns out you just can't get there.
You just can't do it. There is no combination of inputs, no way to get to that number. So that's why we're seeing these massive dips, right? We're just being shoved onto certain numbers, having nothing at all to do with the actual real risk to us as a company. Makes sense? Okay. So if you look at the distribution across CVSS — you saw the other graph, where the orange is a little higher.
Well, it would make sense, if you were shoving stuff up a little bit with that round-up function and the 1.08 multiplier, that you might see an increase off the median. So instead of it being exactly in the middle, like you might expect from a perfect bell curve — not that that makes sense, but whatever — you'd expect it to come down a little bit.
Well, now it's shoved up about 17%. This explains some of that 17% — not all of it, but some of it. So if you do the math on the probabilities of all possible combinations for CVSS version three — this is a static graph, it does not change over time, because I'm actually generating every combination of the math, running every set of inputs together to see what all the combinations look like —
this is what it looks like. Someone made it look like this. A huge spike at zero, none of the low numbers reachable, and then a bell curve, right? And if you overlay that with the real graph, with real vulnerability data, you'll see weird spikes. You see this 7.8 over here, for instance. I believe that has to do with certain numbers being easier to get to than others when you're actually pulling the sliders on those boxes.
It's just easier to hit certain numbers than other numbers; some things are more common than other things. So this is CVSS version two. CVSS version two is a little worse: there are certain numbers in the middle you couldn't get to, and the bell curve is a little more stretched out. I don't know why they thought CVSS version three was better, but there it is.
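The exhaustive enumeration being described is straightforward to reproduce. This sketch (mine, not from the talk) uses the metric weights and formulas from the CVSS v3.1 specification; if I've transcribed a weight wrong the exact gap list shifts, but the overall shape — spikes and unreachable scores — does not.

```python
from itertools import product

# CVSS v3.1 metric weights, per the FIRST specification.
AV = [0.85, 0.62, 0.55, 0.2]        # Network, Adjacent, Local, Physical
AC = [0.77, 0.44]                   # Low, High
UI = [0.85, 0.62]                   # None, Required
CIA = [0.56, 0.22, 0.0]             # High, Low, None (for each of C, I, A)
PR = {False: [0.85, 0.62, 0.27],    # scope unchanged: None, Low, High
      True:  [0.85, 0.68, 0.5]}     # scope changed:   None, Low, High

def roundup(x):
    i = int(round(x * 100_000))
    return i / 100_000 if i % 10_000 == 0 else (i // 10_000 + 1) / 10.0

def base(av, ac, pr, ui, changed, c, i_, a):
    iss = 1 - (1 - c) * (1 - i_) * (1 - a)
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    expl = 8.22 * av * ac * pr * ui
    total = 1.08 * (impact + expl) if changed else impact + expl
    return roundup(min(total, 10.0))

reachable, n = set(), 0
for changed in (False, True):
    for av, ac, pr, ui, c, i_, a in product(AV, AC, PR[changed], UI, CIA, CIA, CIA):
        reachable.add(base(av, ac, pr, ui, changed, c, i_, a))
        n += 1

print(n)               # 2,592 metric combinations in total
print(len(reachable))  # far fewer than the 101 nominal scores
print(sorted(s / 10 for s in range(101) if s / 10 not in reachable))  # the gaps
```

Run it and the holes in the 0-to-10 scale fall straight out of the math, no vulnerability data required.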
But it got me thinking: okay, we're coming out with CVSS version four. It's now been out for a couple of years or whatever, and you're starting to see it populate; it's starting to come online. There's still a ton of CVSS version three out there, by the way — that's not going anywhere. I think there are a lot of existing tools,
and people just don't want to do the extra work on CVSS version four, which I'll talk about in a second. So, any guesses on CVSS version four? Do you think it's better, the same, or worse? I heard "worse." You, sir, are right. It's a mess. It's an absolute disaster. Look at this thing.
So these are all the probabilities of all the combinations. As you can see: huge spikes here, big gaps in the middle. You're very likely to get 7.0, so you're going to see a lot of CVSS 7.0s — I'm going to predict that right now.
And nothing down here. You're never going to see a CVSS score of three ever again if they're using CVSS version four. So I think it's hot garbage. I don't think this is real risk. I don't think this has anything to do with the adversary. This is just made-up math someone came up with.
Right. So this is the likelihood, the density, of landing on any given number for each of the different versions. CVSS version four is worse: there are fewer combinations of things you can get to. So yes, whoever said it was worse, you were right. It is indeed worse. But the amount of work you have to do to get to "worse" is higher.
There are way more combinations you have to compute. For me to run all the computational analysis for CVSS versions two and three takes way less than a second. Running all the combinations for CVSS version four takes something like 30 seconds. It's quite a bit more work, because there are way more variables that go into it, which means more work for the end user.
For what? For fewer possibilities. So just forget CVSS version four, as far as I'm concerned. All right. Another weird thing: we're seeing a greater middling of vulnerabilities over time. I have a better version of this graph, but I just want to show you: this middle section is growing, while the criticals and the lows are shrinking.
Right, everything's kind of going to that middle. And this is another version, the trend line of that graph. So again, there's something happening where more and more vulnerabilities turn out to be middle. Why? Did we just start finding kind of mediocre ones a lot? What happened as an industry that this is occurring, right?
Something is going on here. You'd think you'd be focused more on the criticals, if those mattered, or the highs or whatever, right? I'm not sure those things do matter, but let's say they did. But for no apparent reason, we're finding a whole bunch of CVSS sixes and sevens and stuff. Okay. So one quick point about the CVSS zeros.
I thought that was kind of interesting. There's a whole bunch of CVSS zeros. They weren't marked as not-an-issue, necessarily — some were still an issue — and they weren't pushed out of the system some other way. They're just zeros. So I looked at a bunch of them, and this one is probably my favorite.
So this was a remote AIX RDP vulnerability that NVD said was a CVSS value of 9.1, but when GitHub scored it, they scored it a zero. As far as I can tell, there's no patch for this, and no one has denied that it's a vulnerability. I think someone typo'd it; I think they meant to type ten.
So there are kind of weird artifacts in the data, on top of the fact that it's not trustworthy — it's all a human kind of mess. All right. So this is one of the very first graphs I created, actually. This is the KEV list tied in with all the criticals, the highest-severity CVEs possible.
Because what I wanted to ask is: if they're all critical, shouldn't that mean bad guys are using them all the time? Well, no. Instead of it being 0.5%, it's just 3.8%. It's barely better than it was in the entire population — only something like eight times better. So there's something weird about how we're describing criticality.
Criticality apparently has nothing to do with it. This is wildly worse than a flip of a coin: the chance of landing in that small 3.8% chunk is extremely small, which means criticality doesn't mean anything. We're not actually predicting anything with severity nine or above. It doesn't mean anything.
So this is the breakdown per year. There is nothing at all before 2002, which is kind of interesting, and this is where it's heaviest. So bad guys will use old ones; they don't really care. A lot of people think it's all about zero-days — whatever's coming out today. Nope. They go back down to 2002, 2004, 2005.
There's a bunch back there. So I don't think the adversaries are thinking about things the same way we are; they're just looking at a vulnerability and asking, will this work for the task I'm trying to do? So criticality doesn't matter and age doesn't matter — these have no bearing at all on whether they're likely to be used. Or, sorry, not as much bearing as we'd like.
Now, it is skewing more toward the more recent vulnerabilities, but I think that has more to do with the fact that people just arbitrarily update things. Like, oh, our old RSA needs to be replaced by the new RSA, so the old RSA vulnerability is going away — the old key goes away in terms of utility — but the new one's vulnerable.
And the new one's the new hotness, so you'll see some of this newness. But if you look at the M-Trends report — we pulled data from basically everybody who has data — the report said there were 12 new CVEs used last year. Twelve. That's one a month. That's not hundreds of thousands. We're talking about 50,000 new CVEs per year.
And the adversary only picked 12. Why those 12? Why specifically those 12, and why not the rest? I mean, there are so many to choose from. Well, there's something particular, something interesting about them that we're just not paying attention to. Okay. So this is the breakdown of the list. The one I found the most interesting was the lows: 0.5 percent of them are low severity.
So again, they don't care; they'll use whatever's useful. Now, one thing that's weird about the lows and zeros and whatever is that sometimes there are typos, sometimes there are misunderstandings. I found one vulnerability that was marked as critical, but it was a default credential — and the second you log in, it asks you to change it. It's like, well, yeah, of course it's default until you actually log in to the box.
Yeah, that's how literally everything works. So you'll see cases like this: a vulnerability that was marked as a medium by NVD and then changed to a low by the CNA, in this case Samsung. So I asked a bunch of questions about this. Some of it is just them changing a variable or something — well, it's actually not this, it's that, or whatever.
We did more investigation; I did some analysis. So there's this standard that NVD apparently uses when they hand off the ability to become a CNA, and it's basically your reputation: how good you are at predicting the likelihood of getting the same answer NVD gets. So NVD is God, and everybody else has to do what God says. But they don't check everything.
They only check occasionally. And if you get it right most of the time, you get to keep your status — you get the gold status and the platinum status, and you win. If you get it wrong, then, I guess, nothing. I guess they just don't like you or something, I don't know. Anyway, it turns out certain CNAs have figured out that they really don't want to have these criticals, because it just makes people upset, and no one really understands this stuff anyway.
And it doesn't matter — clearly the bad guys don't care. So they will find whatever answer NVD is most likely to produce from the text. NVD takes the text, looks at it, and says: from this text, it looks like it would be this level of severity.
So if the CNAs change the text as they're submitting it — and most of the time the CNAs are the ones submitting it — they change the text to match the eventual number they want, and NVD is much more likely to choose the number they want them to choose. And then they'll happen to get it "right."
Oh, magic, right? And certain companies have gotten really clever about this. They're actually using large language models to try to predict the exact outcome they want on the other side of this. So this score means nothing. Nothing. You see what I'm saying? Okay. So you have choices, though. Who are you going to trust?
Are you going to trust NVD, which historically overestimates scores? Or are you going to trust the CNAs, who underestimate them? Both of them have weird incentives; both of them are kind of untrustworthy, for various different reasons. So this is the number of unscored vulnerabilities and KEVs over time, not including NVD. I think this other graph does a better job of explaining it.
You'll see this big drop-off here. That's the CNAs coming online and actually becoming a thing, actually going back and looking at old vulnerabilities and deciding whether these things should have classifications or not. So they're doing a pretty good job. If you notice, it's going down dramatically, and most new vulnerabilities are getting scored.
As of six months ago, when I first pulled this data, we'd seen a decline of around 6% in vulnerabilities that were unscored and are now scored. So that's great — CNAs are actually getting around to scoring stuff. But how fast? So this is the publication delay. And I'm looking at this graph, and I'm like, what is wrong with this graph?
What did I do? What is happening? Why is this going way off, over 6,000 days? I honestly thought I'd just screwed it up at first. So I go back and look at the data, I blow it up to log scale, and sure enough, there is stuff sitting out there that was 16 years old before they got around to telling us about these vulnerabilities.
Sixteen years later. So I pull up an old one just out of curiosity. This one's from 13 years ago, 2012. It's a cross-site scripting in an endpoint.php parameter or whatever. It's like, who cares? Why did it take so long for them to release this? Is this even relevant now? I mean, maybe that's the strategy: just wait long enough and no one will care.
So you can't really trust them, and the content is out of date as well. In particular, the CISA KEV, we believe, is around 18 months out of date. I think there was an effort to speed this up a little bit, so that's good news. That's really good news. But 18 months is still a pretty big delay.
We'd like to see that happen much faster, especially if something is actively under attack. These are nation-states attacking our government; you'd think they'd want to get around to this a little faster. I think there are some reasons for that. Some of it is sort of clandestine: they know this is being used, and they kind of want to sit on it and watch the adversaries.
I don't know this for a fact, I'm just guessing. But maybe. All right. So this is VulnCheck KEV. In my opinion, and don't take this to the bank or anything, of the things that are currently publicly available, we believe VulnCheck KEV is the best. So if you're going to use any of these lists, you heard it from me.
Tell Patrick Doherty I said hello. I think this is the best out of all of them. I'm not saying it's perfect, but I have many reasons to believe they're the best. So if you look at their overlap compared to CISA KEV: CISA KEV finds, whatever, 1,300 to 1,400; VulnCheck finds a little over 4,000, 4,100, something like that.
So it's a slightly bigger corpus of vulnerabilities. They have different ways of pulling stuff in, and they don't really care about that 18-month window; they want recency. So it stands to reason their number would always be a little bigger, because they're a leading indicator compared to CISA KEV, which is delayed.
So this is VulnCheck KEV versus CVSS scores. If you take VulnCheck KEV and throw a score on each entry, this is what the breakdown is. If you notice, a huge chunk have no score at all. Again, bad guys don't care about scores. And the rest is a bunch of lows and mediums and whatever. So I think one of the major problems is what I call stoplight information security.
High, medium, low. Critical, non-critical. A, B, C, D. Red, yellow, green. It's all the same: these weird, confounding, constrained sets that actually have no real bearing on anything. Because what you do is you try to do math on it. If I say you have ten criticals, how many highs is that worth? Right?
You can't do that. It's not math. And the problem is we're trying to talk to executives as if we're still in elementary school and could trade grades with one another, which you also can't do. Like: teacher, you gave me three D's and an A. Can I convert one of these D's? How many A's can I get if I trade? That's not possible. It doesn't make any sense. And this is a similar thing: we're arbitrarily adding together things that aren't actually math. Just because you put a number on a thing doesn't make it math.
Like, if I say this fruit is a seven, what does that mean? What am I saying? I'll trade you three fours and get a twelve? What does that mean in the fruit scoring system? CVSS has this exact same problem. Similarly, if you take something whose full replacement cost is $1,000 and you call that a one out of ten, are you saying the worst thing in the entire company, the world-ending thing, is only $10,000? Because that's what the math says if you do a one-through-ten scale. Obviously that makes no sense. None of this is math. It's just people arbitrarily trying to make math out of non-math things. So, this is a very fun little website that watchTowr put up, where they found a CVSS, I'm sorry, a CVE rather, that had an executable attached to it that you could download to fix the vulnerability.
This was the patch for the vulnerability. It's a very old vulnerability, so they went back and looked at it, and it turned out the domain was available. They could just go register it and upload an executable, and now you're downloading this "patch," which is potentially their malware. Right? So I'm like, well, that's really funny.
I'm going to go take a look at it. So I look at this website. It's from 2013 and hasn't been updated; it says it was last revised in 2013. That's a long time ago. That's before I gave a crap about this. So I scroll down, and sure enough, this was last updated in 2013.
So clearly you cannot rely on these numbers to be accurate. If I actually need to know whether there are changes here, if this is somehow related to my job function where I have to track this, I cannot rely on their website for valid, accurate data. Okay. There's also this concept called "disputed." If you're not really paying attention, you might think a disputed vulnerability is really important to go fix.
It has changed pretty significantly over time. It used to be not such a big thing, and now it's happening a lot. There's also "rejected." This has also started to spike in recent years, since 2017-ish. And if you don't know to remove these types of entries, you're fixing a whole bunch of stuff that people don't even agree is a vulnerability.
You're wasting a bunch of time. And we're talking about thousands, even tens of thousands, of vulnerabilities, so it's not super uncommon. EPSS also gets this wrong. They keep throwing out numbers for things that no one even thinks are vulnerabilities anymore, as if there's some chance someone will exploit this.
No, there's no chance. Zero chance this will be exploited. This is CVSS versus EPSS. I showed this to the EPSS guys, and they were amused, because they had made literally the exact same graph, just in a different color. For this to make sense, you again should see all the vulns along that line of agreement.
There are almost no vulns on that line of agreement. We think EPSS cannot work. The reason is that the total population of vulnerabilities is 312,000, so the chance of getting any one correct is one divided by 312,000. If you have two vulnerabilities, multiply by two.
If you have hundreds, it's hundreds of times that; thousands, thousands of times. The chances of getting that exactly right are basically zero. So we do not think EPSS can statistically ever do what it's designed to do, at least not at the accuracy level required. And you need to do this correctly if you're going to fix the ones that matter, the ones that actually lead to loss.
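That back-of-the-envelope argument can be made concrete. A rough sketch: the 312,000 corpus size is from the talk, but the model here (treating a correct prioritization as blindly picking the exact exploited set) is a loose formalization for illustration, not the speaker's precise arithmetic.

```python
from math import comb

# With a corpus of ~312,000 CVEs (figure from the talk), the odds that
# a scoring system blindly singles out exactly the set of CVEs that
# gets exploited shrink combinatorially as that set grows. This random-
# subset model is an illustrative simplification.

CORPUS = 312_000

def odds_of_exact_hit(n_exploited: int) -> float:
    """Chance that a random pick of n_exploited CVEs out of the corpus
    is exactly the set that actually gets exploited."""
    return 1 / comb(CORPUS, n_exploited)

print(odds_of_exact_hit(1))   # 1 in 312,000
print(odds_of_exact_hit(2))   # roughly 1 in 48.7 billion
print(odds_of_exact_hit(10))  # effectively zero
```

Real scoring systems do better than blind guessing, of course; the point is only how fast the odds collapse as the target set grows.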
So this is the distribution of agreement between EPSS and CVSS, and it's worse than a coin flip; it's below 50%. There's no correlation, so one is wrong or both are wrong. I think both are wrong. This is VulnCheck's analysis compared to kind of everything else. Take the 312,000 vulnerabilities and the 57,000 that are highs and criticals, and ask: if I want to find all the actually-exploited things in this corpus, would I be better off flipping a coin, or better off fixing all highs and criticals? I actually thought I'd get this exactly backwards; I thought it would be better to flip a coin. It turns out it's actually three times better to just fix all highs and criticals. So that's great, right? CVSS has done something there. But what did it cost you to get there? That's the real question.
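A rough numerical sketch of that cost question. The corpus sizes (312,000 CVEs, 57,000 highs and criticals, ~4,100 in VulnCheck KEV) are from the talk; the overlap figure (how many highs/criticals are actually exploited) is an assumption chosen only to illustrate how a roughly three-fold precision gain can still leave most of the work wasted.

```python
# Back-of-the-envelope version of the coin-flip comparison.
# CORPUS, HIGH_CRIT, and EXPLOITED are figures from the talk;
# OVERLAP is an assumed value, not a published number.

CORPUS = 312_000     # total CVEs
HIGH_CRIT = 57_000   # CVSS highs + criticals
EXPLOITED = 4_100    # ~VulnCheck KEV size
OVERLAP = 2_166      # assumed: highs/criticals that are actually exploited

base_rate = EXPLOITED / CORPUS    # precision of fixing CVEs at random
precision = OVERLAP / HIGH_CRIT   # precision of fixing all highs/criticals
gain = precision / base_rate      # roughly 3x better than random
waste = 1 - precision             # fraction of fixes that were unnecessary

print(f"random baseline: {base_rate:.1%}")
print(f"fix-all-highs/criticals: {precision:.1%}")
print(f"gain over random: {gain:.1f}x, wasted work: {waste:.1%}")
```

With these assumed inputs, the precision gain comes out near 3x while roughly 96% of the remediation effort lands on vulnerabilities that were never exploited.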
It turns out it's 96.2% wasted work to get that three-times gain, so it doesn't make sense. Again, it's not a good trade. You're better off working with lists of things you believe actually lead to loss. Right. So this is from their own website, first.org. They say that CVSS is not a measure of risk.
You're thinking, no, it is a measure of risk. It isn't. They say it's not a measure of risk; it's something else. It's a measure of severity, whatever that means, but not risk. They believe it is well suited for people who need accuracy and consistency, but I think I've shown you many examples where it's neither of those things.
And they say it can be used as a factor in prioritization for remediation. I totally disagree. I do not think that's the case, because you end up doing 96.2% too much work. I think it is a bad way to prioritize things, but it's what we have. This is my love letter to you guys.
I know this was the best we had. We did our best. I feel really bad that this is what we came up with, but it's what we had at the time. We didn't have better data. Now we have better data, and we should not be using this as an industry. We've got to fully reject it. Seriously, it's very important.
We are literally losing millions and billions of dollars because of bad decision-making about prioritization, and it's coming from stuff like this. Okay, with my copious amount of time left here. So this next part comes from CISA. I'm not trying to beat up on them, but just so you know, this is a quote from them. When I gave a much earlier version of this talk, I had some data wrong.
That's why I say do your homework. Seriously. They said, well, you're kind of saying some stuff about CISA KEV that's not so great; we kind of want to couch that a little bit. I'm like, great, what do you want to say? And so they gave me this quote: "Based on the anonymized customer data we are observing, CISA's KEV catalog is a cornerstone of collective defense, helping security teams swiftly identify, prioritize, and address critical threats."
"This proactive sharing of risks fosters unity and keeps defenders ahead. Notably, many organizations respond promptly to newly listed KEV vulnerabilities and remediate faster when they are on the CISA KEV list." Great. Who gives a shit? That isn't the problem. The problem isn't that people use it. The problem is that it isn't a measure of risk.
There's a whole bunch of stuff that isn't put in that list. That's the problem. I'm not mad that they put the list out. I'm mad that we're using it in a totally incorrect way, because it's very poorly named. Very, very poorly named. It should be called what it is: commonly exploited vulnerabilities for which we have an easy patch. That is not the same thing, right?
Okay. So I think that in network security and application security, we are doomed if we use this for prioritization. This is not the way to go. There is a better way forward, and it has everything to do with what the adversaries are actually doing, not what we think and weirdly try to predict. Because what would you rather say?
Would you rather say, I got compromised by something that no one has ever seen, ever? Or, I got compromised by something that's getting everyone compromised, we're all getting compromised, but we just didn't get around to prioritizing and fixing it? Because that's what we're currently saying to people. I don't think that conversation goes over very well.
I think we need to switch to return on security investment. If I put a certain amount of dollars into a company as a marketer or a salesperson, I expect a certain return, right? But when we talk to the board, we're like, hey, I want $55,000 to fix some cross-site scripting, command injection, and SQL injection, a couple of CVEs, blah blah blah, to change our grade from a B-minus to a B-plus. The board is like, what is this kid doing?
Get him out of here. This is not an adult. Because the sales team and the marketing team are talking to them in dollars and cents. They're saying, hey, we did this and we got this return. We need that same talk track. We need to change this into a conversation where we're saying: here's a certain amount of money.
We need to talk to Mr. CFO, who is the real risk officer of the company, by the way, not us. We need to say to the CFO: hey, if I have $50,000, I think I can retire $9.4 million worth of expected loss based on something that is actively being exploited. And they're going to ask, okay, let me see your math. And they're going to look at the math, because they're math people, and they're going to go: oh, you got the fully loaded headcount cost slightly wrong; it's not $100, it's $110. Okay, we'll tweak the knobs. And then: okay, can I have the $60,000 to fix the $9.4 million worth of loss? Yes. That's an easy conversation.
And it gets you back in the boardroom in an intelligent way. What we've been doing before, with this fruit scoring system, is not a measure of risk. It's not the way forward. It doesn't actually represent the real risk profile. We need return on security investment: ROSI. All right, get that in your head.
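That CFO conversation reduces to simple arithmetic. A minimal sketch: the $60,000 spend and $9.4 million of retired expected loss are the talk's example figures, but the 47% probability and $20 million impact used to derive that $9.4 million are invented inputs for illustration.

```python
# Minimal ROSI (return on security investment) sketch. "Expected loss"
# here is annualized loss expectancy: the probability of the event
# times the cost if it happens. The probability and impact below are
# made-up inputs chosen to reproduce the talk's $9.4M example.

def expected_loss(probability: float, impact: float) -> float:
    """Annualized loss expectancy: chance of the event times its cost."""
    return probability * impact

def rosi(loss_retired: float, cost: float) -> float:
    """Net benefit per dollar of security spend."""
    return (loss_retired - cost) / cost

ale = expected_loss(probability=0.47, impact=20_000_000)
print(f"expected loss retired: ${ale:,.0f}")         # $9,400,000
print(f"ROSI on a $60k spend: {rosi(ale, 60_000):.0f}x")
```

The point of framing it this way is that the CFO can audit every input, tweak the knobs, and compare the return against any other spend in the company.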
We need ROSI. If we're not getting it, we haven't built the tools to do the actual job we need to do, which is to inform the real risk officer of the company where the risk is and how to fix it. All right. This data is all available on this website. If you go to cvedata.com, you can pull down all of these graphs.
Please use them. Put them in your decks. Explain to anybody who matters that this is not correct, that we should not be doing this. These tools are not built for us; we need to work from better data. And I hope you take this presentation, which, by the way, is on that website, so you can just grab it as-is.
There's also a dynamic version that updates daily, so grab that one. It's a little weird because it's in JavaScript, but whatever, you'll figure it out. Or just grab the graphs and download them. I want you guys to be armed with real security information from real adversaries, not this weird thing that we've created.
All right. I will forgive you, because I forgive myself, for making the decision to use CVSS in the past. But we've got to move on. And with that, that's my presentation.
I have one minute for questions, if you want. Yes. So I think that's the harder thing to prove. Could you speak a little louder? Yeah. So the question is whether that's a harder thing to prove with the KEV database: has there ever been a scenario in which something that's not in the database is being actively exploited, but for whatever reason it's just not in the database, given the 18-month lag, and where does that show up best? Yes. Well, you might see it in VulnCheck KEV, for instance, but you won't see it in CISA's KEV. So that's why I say, if you're looking for a KEV list, go check VulnCheck's.
Which is, by the way, free to download. So there's no reason not to.
Any other questions? All right. Well, thank you very much for your time. Appreciate it.