Presenter:
Transcript:
So: recovering from disaster. First of all, we're going to go through the basics, the disaster recovery elements. Then I'm going to present a scenario loosely based on what I went through a few years back, and then touch more specifically on what actually happened.
And then, since we start with the elements, I want to go through what happens when you're missing elements from a disaster recovery / business continuity plan. And then, of course, closing remarks, questions, dad jokes, whatever you all want.
So disaster recovery elements. Oh, and I like to throw in financial advice. Nothing protects you from identity theft quite like a 350 credit score. Trust me.
What is disaster recovery? The textbook definition, or I guess it depends on where you get it from, is an organization's method of regaining access and functionality to its IT infrastructure after events such as natural disasters, cyberattacks, and other types of business disruptions, such as the COVID-19 pandemic. Is that still a natural disaster? Yes, kind of — actually, that one was lab-generated.
So every solid disaster recovery plan has five critical elements. You have your disaster recovery team — basically the people that, you know, make this work. You have a full-on business risk evaluation. You have to identify your business-critical applications and assets. You have to have backups. And you have to test regularly, optimize, and improve.
Ah, a question. Yes, I would imagine so — yeah, identification of assets covers both the processes and, you know, your policies; I'm getting a bit ahead of myself there, but yeah, that's very much a part of it. So I'm going to break those down a little bit more here, starting with the team.
It's basically a group of specialists who are responsible for creating, implementing, and managing the disaster recovery plan. Each team member should have their own defined roles and responsibilities. Defining those is partly segregation of duties, partly who's going to be making decisions, so on and so forth. And a common misconception is, well, the senior leadership team is going to make all the decisions.
If they know what's best for them, they're going to sit down, shut up, and just stay in their seat. In the event of disaster, this team should know how to communicate with, number one, each other, then other employees, vendors, customers, the executive team, potentially a board of directors. And as I mentioned, I got a little bit ahead of myself.
Leaders, you need to stand aside and let the disaster recovery team do their job. Yeah. I do apologize if my text is a little bit small — just noting that beforehand. So, for risk evaluation, basically what you do is you assess potential hazards that put the organization at risk, both before and during a disaster.
This happens at multiple points along the path: you strategize what measures and resources are needed to resume business operations. Then, moving on to business-critical asset identification: you need to have documentation. You need to have a lot of it. You need to have it in places where you can access it both online and offline.
If you have everything in SharePoint and you lose access — like somebody asked earlier, how do you get to it if you can't log into SharePoint? So it's always critical to have a copy of your most recent disaster recovery plan offline.
It's also imperative to identify which systems, applications, data, and other resources you need. Those resources could be your vendors; they could be other support teams, third-party investigators, or anything like that that you need to come in and assist. You have to know how to get ahold of all of them.
Documentation should be very low-level, step by step — take me through it by the numbers as if I've never done this before. You're going to need that to be able to recover the data and the systems, especially because whoever's going through it probably inherited it from somebody else and may not know the systems. I've walked into an environment before where it's like, I don't know anything about what's going on here,
and walked right into a disaster. So it helps to know what needs to happen and when. The documentation also has to cover not just what's critical, what the assets are, but also what the dependencies are. Because if Active Directory or your Entra ID is critical, what is it dependent on?
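To make the dependency idea concrete, here's a rough sketch of how a documented dependency map can be turned into a recovery order. The asset names and dependencies are made up purely for illustration, not taken from any real plan.

```python
# Rough sketch: derive a recovery order from a documented dependency map.
# Asset names and dependencies below are illustrative, not from a real plan.
from graphlib import TopologicalSorter

# Each critical asset maps to the assets it depends on.
dependencies = {
    "payroll-app":      {"sql-cluster", "active-directory"},
    "sql-cluster":      {"storage-array"},
    "active-directory": {"dns", "pki"},   # auth needs name resolution and certificates
    "dns":              set(),
    "pki":              set(),
    "storage-array":    set(),
}

# static_order() yields every dependency before the systems that rely on it,
# which is exactly the order you want to bring things back up in.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(" -> ".join(recovery_order))
# e.g. dns -> pki -> storage-array -> active-directory -> sql-cluster -> payroll-app
```

The point isn't the code — it's that if the dependency map only lives in somebody's head, nobody can build that ordering at 5:30 on a Sunday morning.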
It could be dependent on SSL to log in, or on some other form of access, whatever else. So, moving into backups: an effective backup plan determines which systems and data need to be backed up, who should perform the backups, and how they're implemented. Most people have a backup solution in place.
But what a lot of backup plans do not account for is what happens when your backups are taken out. Do you have backups of your backups? Do you have cold storage? Things like that? There are plenty of factors that definitely need to be considered. And as I mentioned, your backup plan is going to have recovery point objectives and recovery time objectives.
The recovery point objective is basically: what is the frequency of backups? Are you doing a kind of rolling, hot-state backup, or do you back something up every night at midnight? If you're doing financial transactions worth millions of dollars a day, you're probably going to need something a little more real-time, instead of losing, say, 20 hours' worth of financial data and records and things like that.
Recovery time objective, basically: how much time can we survive if this thing goes down? How fast do we need to be back online to start conducting business and stop hemorrhaging money? These two directly impact the disaster recovery strategy and the amount of downtime the organization can handle.
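As a back-of-the-napkin sketch of how those two numbers drive the conversation — every figure here is hypothetical, just to show the shape of the math:

```python
# Back-of-the-napkin RPO/RTO math. Every number here is hypothetical.

backup_interval_hours = 24     # nightly backup at midnight -> worst-case RPO of ~24 hours
hours_since_last_backup = 20   # outage hits 20 hours after the last good backup
data_lost_hours = min(hours_since_last_backup, backup_interval_hours)

rto_hours = 8                  # how long the business says it can afford to be down
actual_downtime_hours = 36     # what an untested recovery might actually look like
revenue_per_hour = 50_000      # hypothetical hourly cost of being down

print(f"Data lost: roughly {data_lost_hours} hours of transactions")
print(f"RTO missed by {actual_downtime_hours - rto_hours} hours")
print(f"Downtime cost: roughly ${actual_downtime_hours * revenue_per_hour:,}")
```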
And testing and optimization. Most people will say do it every year; I'm more of an advocate of doing it every six months. Quarterly full DR tests are just an exercise in absolute futility — nobody has the patience or time for that. Tabletops can be quarterly, but disaster recovery testing should be at least twice a year. I know plenty of companies that have two hot sites, and every six months they do a hard cutover to simulate a disaster and see what happens.
What goes down? Did the firewall rules kick in, or did we forget to update the rules on this one? So on and so forth. But they test it so they can keep optimizing, making sure that the next time they cut over, the problems are fewer and farther between. It's also a good time to tune your tools — security tools need to know what's happening.
When they see logins coming in because you're using, say, an AWS west-coast region — the western region of the United States — for authentication, and then you cut over and all of a sudden everybody's authenticating from the East Coast, because that's where the logins for Azure or whatever are being redirected,
you get a whole bunch of alerts saying, hey, this is impossible travel, or hey, this user logged in from here and five minutes later they're logging in from over there. So it helps to tune the tools to be prepared for this stuff as well.
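Here's a loose sketch of the kind of tuning I mean — teaching an "impossible travel" style check about your known failover path before the cutover happens. The region names, thresholds, and logic are simplified placeholders, not any particular SIEM's rule syntax.

```python
# Simplified sketch of tuning an "impossible travel" check for a planned DR cutover.
# Regions, thresholds, and the detection logic are placeholders, not a real SIEM rule.
from dataclasses import dataclass

# Region hops we expect to see during a planned failover (suppress these).
EXPECTED_FAILOVER_HOPS = {("us-west", "us-east"), ("us-east", "us-west")}

@dataclass
class Login:
    user: str
    region: str
    minutes_since_previous_login: float

def is_impossible_travel(previous: Login, current: Login) -> bool:
    """Flag region hops that are too fast to be real travel,
    unless the hop matches a known DR failover path."""
    if previous.region == current.region:
        return False
    if (previous.region, current.region) in EXPECTED_FAILOVER_HOPS:
        return False  # expected during cutover, so don't page anyone
    # Crude stand-in for a proper distance/speed calculation.
    return current.minutes_since_previous_login < 60

prev = Login("alice", "us-west", 0.0)
curr = Login("alice", "us-east", 5.0)
print(is_impossible_travel(prev, curr))  # False: known failover path, alert suppressed
```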
So, financial advice, part two: borrow money from pessimists. They don't expect to get it back.
All right, quick show of hands: who has actually participated in a live disaster recovery?
For real. Two, three, four... real time. Five, six. Yeah, real time — not a simulation; like, okay, we're going out of business here. All right? That's about what I usually get: maybe 5 to 10% of the room. So I want to go ahead and present a little bit of a scenario. Just imagine you're a CISO — even if you don't know what it's like to sit in that role, just pretend for a minute — or a CIO; you're in charge of the whole cybersecurity program.
I'm sure every one of us knows how we would run a program if we were that high-ranking. But as you get higher in the management and leadership chain, your vision changes a little bit. So, just for the sake of throwing out a unique perspective:
You have an infrastructure of just over 30,000 endpoints, about 15 to 20,000 users. The other ten thousand or so are servers, VMs, Docker containers, things like that. You have a very small team of cyber professionals — about 12 people in all of cybersecurity. And by that I mean your engineers, your analysts, your management, your operations people.
I'm not talking about GRC. You've probably got 50 of those.
You contract three or four different managed service providers to manage most of your IT — your network stuff, your security infrastructure. The security tools and systems are basically awarded to whichever MSSP says, I can do this cheaper. Right. So it begins: you get a call Sunday morning, 5:30. You haven't had a chance to wake up and get ready for church yet.
Instead, you're getting a phone call from your SOC manager, who tells you that applications and services are failing across the enterprise. Databases are unreadable. A couple of users have already reported strange extensions on all their files, which they can no longer open. I don't know who's checking their files at 4:00 or 5:00 in the morning on a Sunday.
They're really dedicated to their jobs. Or they're potentially exfiltrating something.
And they tell you: it looks like a ransomware outbreak. I'm sure we all kind of figured that out by now. What do you do? First off, feel free to pitch in — what would you do?
Who would love to pull the plug on that? Pull the plug on everything, start the disaster recovery, and start talking to other leaders — which, as I remember, is one of the first five elements:
bringing in the disaster recovery team, right? Yeah. They're the guys you're calling when this type of stuff, you know, hits the fan. And then it's: oh, wait, we don't have a disaster recovery team.
I'm not kidding. This happens. So take a closer look. Let's go back through what I had already set up. You have an infrastructure of just over 30,000 endpoints, 15 to 20 K users. Right? We should have backups to recover from. Actually, no you don't. The ransomware took them out too. So what do we do now?
Really, the only thing you can do at this point — well, the FBI's not going to tell you one way or the other what you should do, but they're going to wink at you and say, pay the ransom. And that's pretty much what you do at that point. You have a very small team of cyber professionals — 12 or so — defending your company, and a team that small struggles to keep their heads above water just managing day-to-day, current security projects.
Half of them are probably involved in three, four, or five different projects: we're implementing this tool, we're taking that one out, we've got a SOX audit, we're going through a PCI-DSS audit on top of this. They're absolutely slammed. For a company that size, disaster recovery is a luxury they have never had.
They don't have the time, manpower or the budget to afford it. This pushes everything onto the backs of the MSPs. But hey, we've got four of them, right? 3 or 4 of them.
Now there's no rhyme or reason to how the distribution of responsibilities was assigned whatsoever. You could engage three different MSPs to make one simple change. Has anybody ever worked with multiple MSPs? Do they play well in the same sandbox? Nope. I'm not letting them touch my systems. I mean, it's just they can't have it. I'm not configuring this because I don't trust that they're going to send me the right configuration, or I don't trust their API key.
It might have too many permissions. Does that sound like an MSP to you? Yes — I worked for a company one time where the MSP wouldn't give me access to the tools that I was paying for. That's how bad they can be. And of course, when you have all of your infrastructure spread across three or four different MSPs, no one has the full picture; they don't know what's happened outside of their purview, their scope.
So really, they're not going to be much help in this case. But what they are going to do is say, we're the ones that should lead this disaster recovery effort — pay me. Oh yeah. And in fact, they can get downright cutthroat about it. They get nasty. So that's a little bit of my experience. Let me also give you a little bit more financial advice.
Take out a mortgage from a bank and you'll spend the next 30 years paying it back. Rob the same bank and you'll be out in ten, and you'll have enough to buy that house in cash.
So if I sum up what I saw when I actually went through my ransomware recovery effort in a single word: chaos.
Infighting. Fighting was happening across the organization, across the MSPs. Everybody pointing fingers, saying: not my fault, I told you so. Yeah, it gets bad. I literally sat in a conference room while we had people shouting at each other across the conference table, with executives, board members, and outside vendors and whatnot present.
There are constantly shifting priorities. This is the most critical thing, we need to get on this now. And an hour later it's, oh no, don't worry about that, forget I said that, let's do this instead. Because, again: no plan, no idea what to do. Something just gets stuck in their head —
oh, that sounds important — and they jump on it. They pounce. They do not think more than five seconds ahead of their face. Absolutely gaping mistakes being made during the recovery efforts. Resource depletion: when you've got a small team and everybody's working, and you bring in your MSPs to help out, it's still long days. It's hard.
Everybody's cycles get spent. People stop being able to focus. Who's ever worked more than a 16-hour day? Yeah. How do you feel at the end of that?
Like, yeah — brain-dead. That's the best way to sum it up. You've got vendors on site selling new products and services — hey, this company just got hit with ransomware, their cybersecurity budget's going through the roof right now. And they're in the war room, no less. They're not outside hanging out or anything; they are in the war room where we are actively trying to recover, and they're saying, hey, our products can help with this.
It's like, I don't have control of the budget — what are you talking to me for? I don't even work for this company, I'm just here to help. MSPs making significant changes without collaborating with each other or with the company they are supposedly trying to help recover. They started making changes, and all of a sudden it's like, what happened?
Why isn't this working? And — oh, we made that change like two hours ago. And of course, that change caused ten other problems that now we have to dig through. And, of course, burnout. Pretty much expect that after about day three. Security measures and best practices are sacrificed, thrown out the window, to minimize RTO.
Not that we even had a documented RTO, but they're trying to get that recovery time down. They're trying to get back to operations as quickly as possible without knowing quite exactly how to do that. So they're throwing best practices away, taking shortcuts, all that stuff. So, my experience. Conducting a situational analysis: I get alerted about Monday afternoon — day one for me.
This was about a day and a half, I guess — about 36 hours or so — after they found out, and I was working for one of the MSPs at the time. Don't hate me.
I found out about Monday afternoon, and I'm like, Corey, man, send me in on this. I want to be there, I want to help. Just send me out there — fly me out right now. I will buy a plane ticket right now. And he's like, all right, hold on there, killer.
CrowdStrike and the FBI were already on site conducting forensics — mind you, this is still day one. They were on Carbon Black, and CrowdStrike was already on site, throwing Carbon Black to the wolves. So, yeah, I immediately offered to fly on site — but this was during a pandemic. So here's where the fun came in.
I received the order on Wednesday at about 6 p.m.: hey, all right, we need you on site, go. Like, all right, yeah, I'll book a flight. Oh — wait, wait, wait, don't do that, you can't fly. Why the hell not? So, here's the thing: they won't let you in the data center if you have used any form of public transportation over the course of the last 14 days.
Yeah. You're kidding me, right? This is a prank. Oh, no — this was a special day.
So I get the order. About two and a half hours later, I kiss my wife and kids goodbye — I'd already packed a bag — and I start driving at about 8:30 p.m. I stop somewhere in Missouri or wherever for the night and get about four hours of sleep before I get called again.
So: hey, how soon can you be on site? And I'm like, well, I'm awake, I'm heading back out. I made it to Chicago a little bit after 6 p.m. on day four, after driving a thousand miles in, I would say, less than 24 hours. That's impressive.
So after I arrive on scene, I meet my liaison and I'm introduced to company leadership. They get to know who I am, what I'm capable of, what my skill sets are. I get the latest updates and plug in. I had worked 31 hours straight at that point. So you thought 16 hours was bad? Try 31.
By the time I finally got out of there, I was so tired that I could barely drive to the hotel without passing out.
So while I was there — after I'd gotten a little bit of rest and come back the next day, well, actually, even during that first 31-hour shift — there were things that don't get considered: your basic needs, logistics. Did anybody order food? I haven't eaten in 12 hours, man, I'm starving here.
And then after, like, day six, seven, eight, nine, ten, it's: seriously, Mediterranean food again? Do we have any other options here? I mean, it's just, this is going to be simple, we're going to do what we know works — they didn't put a lot of effort into thinking that out.
But as part of your disaster recovery and business continuity plans, you have to consider this stuff too. A lot of people don't think to incorporate that into their DR plans. The ransom ended up being paid around day six, when they realized there was no hope of recovering their files. Time management, prioritizing tasks, conference bridges, things like that.
So I did 16-hour days after that first 31-hour day, for two weeks straight — no day off. The days did start to get a little bit shorter about halfway through the second week. But there were plenty of times where I was sitting in a conference room with an earpiece in, listening to another conference bridge for operations and things like that.
So it was absolutely hectic at times, and trying to pay attention to both at the same time is a skill very few people get to learn. But if you ever get a chance to do it, it'll make you pretty valuable pretty quick. Lots of times with way too much to do at once: I have to do this,
I have to focus on that, I have to push the decryption packages to this batch of servers, so on and so forth. And then all of a sudden, once I've got all that stuff done, I'm like: all right, I got all this done, what do I need to do next?
Just stand by for a little bit. So I stand by for an hour and a half — I got a nap in, you know? And then it's, we still don't have anything for you to do yet, so just hold on standby, killer.
I got to learn a bunch of stuff on the fly and also chase the rabbit. It's not like chasing the dragon — no drugs involved. It's more like Alice in Wonderland: follow the rabbit down the rabbit hole and see where it takes you. For example, they'd say, all right:
once you log in to the server, you can run the decryption programs. Okay — I can't log in. Oh, we have to push a patch to it or something like that, or there's a back-end way to log in, because we don't have the local credentials for that machine, but there's a way to recover the password. And then after I push a patch or something and update it, all of a sudden I get some error message that I've never seen in my life before.
So — because I can't use corporate Wi-Fi, I have to use my mobile phone — how do I get past this error message? And then I go through the forums, and God knows how some people ever figured out how to get through some of these things with Microsoft, it's amazing, but you're diving through that rabbit hole.
Finally, I clear that error message. I try logging in and — boom — a completely different error message. It's just, oh, here we go again. So I've got to go down that hole, and when I get down there, it says: all right, this is what needs to happen. I can't do that from this interface. How do I do that without being able to do it
directly from the computer's interface? I mean, it got nasty. I've been through some rabbit holes before, but some of those took the cake. I learned all sorts of interesting stuff about Windows internals — little-known AD quirks that people never have to deal with until they've lost access to their AD infrastructure.
I learned new security tools along the way — it was actually the first time I got exposed to CrowdStrike. And Nexus switching: we had to re-architect the entire network, because it was a pretty open network at that point. So now we had to do micro-segmentation to make sure there's not some residual effect that's going to pounce on the Active Directory server we just recovered.
And that is now our critical AD server, so now we have to make sure nothing can get in except absolutely authorized traffic. We had to re-architect the, what do you call it, redundancies in the network, and I'd never done that with Nexus switching. I've got a network engineering background and a lot of Cisco experience, but I'd never actually played with Nexus before that.
So I was learning that along the way too. Constant legal considerations: at least once or twice a day, I had a call with my company's legal team saying, don't say this, don't say that — I know it's probably the truth, but you can't say that. You have to be very careful what you say in any electronic communications, even if it's internal to your own company, to your boss or to the other team members, because this is all going to be subpoenaed.
You can count yourself lucky if you have legal counsel that actually understands what's going on. We had legal counsel, but they weren't on site or anything like that; they were calling in from wherever they were located. But yeah — electronic communications: when you're in a situation like that and you're a vendor, if something goes wrong and they find you to be at fault, they are going to come after you to recover their money, and they're going to blame you for it.
So yeah, we had to walk on pins and needles. Even when I wanted to say, your architect is an absolute moron — and I wanted to — I couldn't say that outside of a closed room with only people I knew. We found security gaps everywhere: things that were never documented, things people didn't know about, things
people had put a pin in three years ago, and it's like, oh, oops, I forgot about that.
Oh, and we were all sharing the same local admin account and password for everything — it was a domain admin, and it was given out to everybody. How many of you think any of those logs were being monitored or going to a SIEM or anything like that? Nobody? Yeah, you'd be right. And what would it matter, when everyone's running around with the same admin account?
Yeah — so let's just document it while we're here so we can come back to it later. Like I said, 15 to 20,000 users, 30,000 devices. Oh, dollar-wise? They're a global insurance company. That's right — I got thrown under the bus by a vendor too, the other MSP. Oh, they love to point their fingers, and man, you make one mistake,
they'll never, ever let it go. I learned something from it, don't get me wrong — I'll never make the same mistake twice. But man, getting thrown under the bus in front of the CIO and everybody else on the executive team does not feel good. I learned PowerShell automation. Who likes PowerShell automation?
Two people? What is wrong with you? PowerShell is horrible. It's so wordy — it takes like ten lines to do one command. I also got to work with some absolutely, truly awesome people. Some of the people I met on the cybersecurity team I'd worked with a little bit before on projects there, but I never really got to know them until this.
All right, I'm sorry. But they were some of the salt-of-the-earth, most amazing people you'd ever meet. And still to this day, we call each other on a regular basis, just to shoot the breeze and see how we're doing. We made some great friends out of that, and it reflected great credit upon my team and my company.
We all did a fantastic job, I think, all things considered, because we had no DR plan, no DR team, nothing — none of those five elements.
We strengthened the customer relationship, obviously — a very large wave of very grateful emails from the client's leadership, from my own company's leadership and operations teams. I still have a folder with every one of those emails, so if anybody ever needs a reference to know exactly what I'm capable of and what I do in a crisis, I have that to show.
And I wish more people would let me bring that to job interviews. That should just be my resume — just unzip that folder and read those emails. And yeah, you definitely get to pad the resume with that kind of stuff, because like I said, five, maybe 10% of the people in here have actually participated in a live, real-world,
this-is-not-a-drill ransomware incident, or any type of major disaster recovery incident. The people that know how to do this are a very rare find, and if you find them, you hold on to them and you let them help you build and perfect your DR and business continuity plan. They are absolutely going to be critical to your organization someday. Take good care of them.
So now we get to touch on the impacts of the missing five elements. Last bit of financial advice, part four: saving for retirement. It's not necessary if you win the lottery. In fact, everything you were going to dump into your retirement, dump it into the lottery and increase your chances. Oh, sorry — so, don't do that. Michael, four minutes?
Okay, I'll make it quick. So, the disaster recovery team: if you're missing one, no one knows what to do. Different members of the senior leadership team are going to prioritize different things. The CFO wants to make sure they can collect revenue. The chief communications officer or public relations officer wants to get the website back up and let the customers know: this is what's happening,
you may not be able to make payments yet. Everybody has a different priority, and it's all critical for their job, but it's not necessarily critical for the company. In a lot of cases, infrastructure and ops teams are pulled in way too many different directions all at the same time, and the teams on the ground become ineffectively utilized.
As I said: lots of times where I had too much to do, and then times where I had nothing to do. Significant communication gaps. First of all, they didn't really know how we were going to communicate without being able to access Teams. So they bought Zoom licenses — but they had no Wi-Fi. So they very quickly created a makeshift Wi-Fi that was wide open to the internet but still gave them access to the company.
I'm not going to go into any more detail on that one. Vendors and customers become an afterthought in communication, and they're some of the most important people you have to communicate with. Risk evaluation: without this, it all has to happen as part of your disaster recovery effort, and it wastes hours of valuable time.
The official strategy at that point is just: get everything working. Without knowing what your business-critical assets are, it's easy to overlook critical applications and data that's not documented. Sometimes you have critical data that you don't realize is critical until somebody speaks up and points out its relevance, and all of a sudden you've got some C-level executive saying, oh no, we need to deal with this,
we need to get that handled. And all of a sudden everybody realizes: okay, yeah, that's kind of a big deal — why didn't we think of this yet? Even then, it's not easy to effectively communicate the criticality if it's not already documented. There were other people that probably had things that should have been dealt with sooner, but they didn't know how to communicate
why this is important. So: constantly shifting priorities and resources, critical systems sitting in unfinished recovery states because a resource got pulled to do something else. For backups: even if you have a backup plan, it may be incomplete, and chances are it's going to get thrown to the wolves anyway. But a lack of an RPO and an RTO makes it very difficult to prioritize recovery from backup.
You've got 30,000 endpoints, but which ones are hosting your critical databases, your critical financial applications, the web applications that collect payments from customers? Where are those, and what are the dependencies? You don't know how quickly you need to get things back up. You lose sight and you miss things.
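As a tiny sketch of what that triage looks like when RTOs actually exist — the system names, hosts, and hour values are invented purely for illustration:

```python
# Tiny sketch of recovery triage when RTOs are actually documented.
# System names, hosts, and hour values are invented for illustration.

assets = [
    {"name": "payments-web-app",    "rto_hours": 4,   "hosts": ["web01", "web02"]},
    {"name": "claims-database",     "rto_hours": 8,   "hosts": ["db01"]},
    {"name": "reporting-warehouse", "rto_hours": 72,  "hosts": ["dw01"]},
    {"name": "intranet-wiki",       "rto_hours": 168, "hosts": ["wiki01"]},
]

# The tightest RTO gets worked first; everything else waits its turn.
for asset in sorted(assets, key=lambda a: a["rto_hours"]):
    hosts = ", ".join(asset["hosts"])
    print(f"Recover {asset['name']} (RTO {asset['rto_hours']}h) on {hosts}")
```

It's four dictionary entries, but without documented RTOs and a host inventory, nobody can even write that list down in the middle of an incident.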
Different types of backups have different impacts on recovery. Cold backups are the least likely to be impacted, but it takes a much longer time to restore from them. Hot backups mean faster recovery, but they're more likely to be compromised in a malware or ransomware incident; if it's a natural disaster, you're probably going to be okay. Without a backup plan, expect to triple your downtime.
And that's just for starters — that's probably being really, really generous about it. It's probably going to be way more than triple: you want to be up in a week, it's going to take you a month. Testing and optimization: if you're not regularly doing this, then even if you have all of the other four elements, the lack of testing and optimization has a direct impact on RPO and RTO.
The plan will be significantly out of date. New critical systems that were brought in to replace something else are not identified, not documented, and have never been tested for failure. Decommissioned systems are still in the plans, so you go, oh, we've got to find this system here, and you spend four hours searching for it, only to realize:
that's right, we took that out two years ago. So: changes to business-critical assets are not documented. Communications are impacted — you may have Zoom or something like that specifically for this, and then it's, oh, we got rid of Zoom a year ago. Again, if it's not current, everything gets impacted. Outlier events are also generally not considered in a lot of testing and optimization plans — things like COVID-19.
So it helps to address things like that, things you wouldn't necessarily expect — like security tools not being optimized for forensics at that point in time. And in closing, my actual, real piece of financial advice: seriously, folks, invest in and test a disaster recovery solution. You can actually take that one to the bank. Questions? The question is whether they got their data back after paying the ransom, and how long it took to recover. They had an online presence within about two weeks; they were still unable to collect payments for about another week after that, but they were basically back to about 90% — fast-walking, slightly-jogging — operations within about four or four and a half weeks. So yes, they actually did get the decryption program. It's in the ransomware group's interest to actually give it to you once people know which group it is, because if they stiff one person, one company, that gets out, and it's, don't ever trust them —
they're not going to give you your files back. A couple of other things, just different statistics: Colonial Pipeline paid a $4.4 million ransom in 2021. JBS paid $11 million. Caesars Palace paid $15 million. CNA Financial paid $40 million. And just last year, an undisclosed Fortune 50 company paid $75 million in ransom, which is the highest to date.
And that doesn't count the costs from the loss of business operations. Alternatives? Cyber insurance: it's nice to have, and they might even pay. But if they find out that you've shirked your responsibilities and left an admin server without multifactor authentication, watch how quickly they decide: yeah, we're not going to pay you, you didn't do your due diligence in securing your environment.
Read the terms very closely, with your CISO, cyber, and legal teams in the same room, if you are going to go all in on cyber insurance. EDR, XDR, or whatever-DR you like: any seasoned hacker will find a way to bypass it if they really want to. And I love the "it won't happen to us" approach, because I don't know how many companies still do this,
but there are a lot of them out there. Oh, well.
Thank you.