
Neurohacked: How Stress, Fatigue, and Bias Sabotage Cybersecurity Decisions

Presenter:

Dr. Dustin Sachs

Transcript:

Good afternoon everyone.


Thank you so much for joining us this afternoon. We've got a couple of great sessions in track five for you here in ballroom C. We're very excited to be joined today by Dr. Dustin Sachs, who has prepared a presentation for you: Neurohacked: How Stress, Fatigue, and Bias Sabotage Cybersecurity Decisions. A little bit about Dr. Sachs: he's Chief Cybersecurity Technologist and Senior Director of Programs at Cyber Risk Collaborative.


He's also an adjunct professor at Lone Star College, with over 15 years leading cybersecurity programs in critical infrastructure, finance, and enterprise risk management. His expertise spans security program development, incident response, third-party risk, and vulnerability management. He holds a Doctor of Computer Science in Cybersecurity, an MBA in cybersecurity, and CISSP, CCISO, and AWS Cloud Practitioner certifications. Thank you so much for joining us today, Dr. Sachs.


Thank you.


So one of the things that has really interested me over the last couple of years is the way behavioral science impacts what we do in cybersecurity. We often talk about cybersecurity as a problem that can be solved with technology, yet at the same time we acknowledge that in many cases the end user, the human, is part of the problem.


And what I found a couple of years ago, what really stood out to me, was that there's a lot of talk about cyberpsychology, the way end users are thinking, but there's not a lot of focus on the way we as practitioners, the people in the trenches, look at ourselves and at the way our own behavioral science plays into it.


So I started looking originally at cognitive bias, specifically as it relates to third-party risk. But as I started to do that, as is typical, I went down a bunch of rabbit holes and found that it was a much bigger issue. And we don't talk about some of those mental health issues the way we should. So today we're going to talk about some of that.


And I want to caveat at the beginning: when we talk about bias here, I'm not talking about bias in the sense we normally think of the word. We're talking simply about favoring or disfavoring something disproportionately. It is not a negative thing. There are very many reasons why bias is a really great thing.


Most of us have a bias toward not dying, meaning that if you were standing face to face with a lion, your bias would be to get the heck out of Dodge before the lion decides to make you into lunch. Great bias to have. You learned as a kid: don't touch the hot stove.


You'll burn yourself, so you have a bias toward keeping your hand away from the stove. Good bias to have. The problem is when we don't understand it, when we're not aware of it, and we let it control our decision making. We so often talk about decisions in terms of "did I make the right decision?" But we don't spend nearly as much time, especially in cybersecurity, especially in situations of uncertainty, which is what all of cybersecurity is,


talking about how we're making the decisions we're making. That's what we're going to dive into today. So here's the problem: roughly 90% of breaches involve human error. We've all heard that statistic; it fluctuates between 85 and 95% in the Verizon Data Breach Investigations Report pretty much every year. And our answer is to say we're just going to give more security awareness training.


We're going to get users to stop doing the things they're doing, or to do the things they're not already doing. What we don't stop and think about is: why do humans make risky decisions even when they're trained? And it's pretty simple. We think they're making risky decisions. They think they're just making a decision that helps them do their job better.


Most of our employees are not acting maliciously, but many of the situations where a risky decision is being made involve something like stress, fatigue, or bias. Outside of bias, the term that has come out over the last couple of years for this is what's known as noise. And I'm going to simplify noise for everybody here, because everyone has had this experience.


There's a reason we're told not to go to the grocery store hungry. Every one of us has gone to the grocery store hungry at some point. We picked up the bag of Oreos when we'd told ourselves we weren't going to buy Oreos this time, or that pint of ice cream. We were hungry, and we let hunger take control of us and made a decision we otherwise wouldn't have made.


So what's the solution in the grocery store scenario? Well, we plan out our trip. We make a list. We organize the list based on where things are located in the store, by section: this is what I need from produce, this is what I need from dairy, this is what I need from meat.


And we say, I'm going to stick to this list. We set up a decision framework for ourselves and we stick to it. Why wouldn't we use that in other scenarios of risk?


So today I want to talk a little bit about stress and how stress degrades decision making. I'll talk about fatigue, specifically cognitive failure, and why the hero mentality we expect of incident responders can actually be doing our teams more harm. We'll talk about bias and, more importantly, bias blind spots, because all of us have blind spots we're not aware of. And once we make ourselves aware of them, there's a concept in physics known as the observer principle, which basically says that by being aware of something, by observing it, you're going to change it.


So if nothing else, I will tell you this: I've been studying this extensively for the last five years, and as much as I try to fight it, I make biased decisions. But I notice when I make one. I'm aware, I recognize it, and I go: I just made a decision using something other than what I thought I was going to use.


Am I okay with this? We'll talk about some real-world cybersecurity examples: where do we see this? And we'll talk about strategies. In my day job, I'm big on making sure people get practical tips, things they can walk out and do, and I promise you everything we're going to talk about you can do for free. And of course, we'll leave some room for Q&A.


And I encourage you to ask questions, please. So, stress. High-stakes cybersecurity incident response is a mandatory exercise in operating under pressure. When we are in pressure situations, our bodies release chemicals that put us in a heightened state. They increase blood pressure, they increase respiration, they make our hearts, our brains, and our organs work faster.


The longer you stay in that state, the more stress you're putting on your body. So there are physiological reasons why staying in this state of constant pressure and stress can be really problematic long term, and really problematic in the moment, because you're on a roller coaster of sorts. When you're at this point, a chemical called cortisol spikes in your brain.


It actually reduces your working memory. It becomes harder for you to transfer things that are happening from your short-term memory to your long-term memory. That, quite honestly, is why we tell incident responders they need to be taking notes: because if you've ever been in an incident, you may do something,



you may make a decision, you may have something happen, you move on to the next thing, and five minutes later you don't remember what you did beforehand. It's because your brain can't handle it. So what is the decision consequence? When you're in this state, you have tunnel vision. You can only focus on the immediate. I make a decision, I move to the next thing.


I forget that I made this decision, and I move on. Because I've got to stay so hyper-focused on the one thing, I might miss broader risks, broader impacts, broader consequences. We also tend, in this state, to be a little more risk-averse. We tend to be less willing to take chances or make bold decisions in situations where we might need to, which can cause us to ignore things and can push us into decision paralysis.


Because of this, we also often take reckless shortcuts, because we need to keep moving at the pace we're moving. Your blood is pumping at speed and you've got to keep up. The example of this is the SOC analyst actively investigating an attack who doesn't see the secondary indicator, because they are so focused on that initial indicator that they miss that the brute-force attack was really just hiding the fact that something bigger was going on.


Fatigue. How many of you have ever, in your role as a cybersecurity practitioner, gotten to a point in the day where you're just tired? Anybody not been in that position before? You don't count, Jack; you work for a software company, you don't actually do work anyway, so it's okay.


We all know vendors don't do anything. Come on. So fatigue is an interesting one, because oftentimes we expect our practitioners, our first responders, our incident responders, to work as long as is necessary to complete a task. And that's a slippery slope. The research shows that when you're tired, when you're fatigued, you tend to make


less reliable, less optimal decisions. The research on this: Daniel Kahneman, the Nobel Prize-winning psychologist, and a couple of his colleagues did a study. They gave judges a sentencing scenario with all of the information about the perpetrator of the crime stripped out, and they gave it at 10:00 in the morning and at three in the afternoon.


And what they found was that at 3:00 in the afternoon the sentences were much harsher, much more heavy-handed. They looked at this and realized that at ten in the morning the judge was wide awake, well prepared, had just had coffee, had just had breakfast,


and was making decisions based on what the law said. At 3:00 in the afternoon they were ready to go home, and they made the decisions they felt would be easiest to uphold. We see the same thing: as we start to get tired, as we start to lose some of our control because we are tired, we show diminished willpower.


We have less control over our actions. So what do we do? We start shortcutting: I'll just do this to get it done so I can go home. Or you've made so many decisions already that when the next one comes, you get to the "just click accept" point, and you go, you know what?


Fine, I'll just click accept. I'm tired. It's not that big a deal. And that can be a problem, a really big challenge, especially when you've got somebody who's been working for 12, 13, 14, 15 hours. I had a discussion about this yesterday. There's an organization out there called Cyber Minds that deals with mental health for cybersecurity practitioners.


And one of the things we were talking about yesterday was this: if I asked anybody in this room, are you comfortable with pilots, doctors, and critical first responders having time-out limits? A pilot can only fly for so many hours, then they have to take a break. Most reasonable people are going to go, yeah, that makes a lot of sense.


But we don't think that's acceptable, or we don't look at that same kind of process, when we're talking about incident responders in situations that are just as stressful, if not more so. This is why it's so important that we look at shift work, that we look at having A, B, and C teams. Because as time goes by, as you've made more and more decisions, you get to a point where you start making decisions without truly thinking about them, and you actually end up making more errors.


So there's a higher error rate at three in the morning than there is at three in the afternoon. The other problem is alert volume: across the average SOC, it's about 3 billion alerts a day. Lots of alerts coming in, lots of action that has to be taken. There's such a constant barrage of alerts that all that happens is you start rubber-stamping them, and you miss the one that was the key alert.


So you miss the lateral movement that occurred, because it occurred at the end of the day, or because you found the alert at the end of the day and you had been working for eight hours straight.
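To make the shift-work and A, B, C team idea from a moment ago concrete, here is a minimal Python sketch of a rotation with hard shift boundaries. The team names, start time, and eight-hour shifts are illustrative assumptions, not something prescribed in the talk.

```python
# A minimal sketch (illustrative assumptions, not a prescription): rotate
# A/B/C teams through fixed-length shifts so nobody works past a hard limit.
from datetime import datetime, timedelta
from itertools import cycle

def build_rotation(teams, start, shift_hours, num_shifts):
    """Yield (team, shift_start, shift_end) for a round-robin rotation."""
    begin = start
    for team in cycle(teams):
        if num_shifts == 0:
            return
        yield team, begin, begin + timedelta(hours=shift_hours)
        begin += timedelta(hours=shift_hours)
        num_shifts -= 1

# Two full days of coverage starting at the dreaded 2:30 a.m.
for team, start, end in build_rotation(["A", "B", "C"],
                                       datetime(2025, 1, 1, 2, 30),
                                       shift_hours=8, num_shifts=6):
    print(f"Team {team}: {start:%a %H:%M} -> {end:%a %H:%M}")
```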


What about cognitive bias? Cognitive bias is probably the big one, because it enters into every one of our decisions, and not because of something negative or nefarious. The statistic that's out there is that the average person makes 35,000 decisions a day. Now, when I say that number, you go, well, that seems a little high.


What I'm talking about is every decision that is made: every time your body automatically decides to take a breath, every time your body does something, any decision at all, smallest to largest. And that's why we have two systems of thinking, system one and system two, fast and slow. You need them to get through the number of decisions you have to make in a day, because if you took a minute for each of those decisions, it would take over three weeks of nonstop deliberating to get through a single day's worth.


So what we do is something that was identified in the 1950s by an economist named Herbert Simon, and it's called satisficing. We set a certain set of criteria, and we've all said this before: I made the decision, it was good enough. That good-enough decision means we set criteria and said, once I hit these criteria, I'm going to stop deliberating.
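To make satisficing concrete, here is a small Python sketch contrasting a good-enough search with an exhaustive one. The vendor list, scores, and threshold are hypothetical stand-ins, not data from the talk.

```python
# Satisficing: stop at the first option that clears the bar (System 1 style).
def satisfice(options, score, good_enough):
    for option in options:
        if score(option) >= good_enough:
            return option  # criteria met, stop deliberating
    return None

# Optimizing: evaluate every option and take the best (System 2 style).
def optimize(options, score):
    return max(options, key=score)

vendors = [{"name": "A", "score": 62},
           {"name": "B", "score": 74},
           {"name": "C", "score": 91}]
rate = lambda v: v["score"]

print(satisfice(vendors, rate, good_enough=70))  # stops at B, never sees C
print(optimize(vendors, rate))                   # checks all three, picks C
```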


By the way, if you're looking for a really great use case for AI: AI doesn't care about satisficing, and it can make those more exhaustive evaluations. So, cognitive bias. We create what are called heuristics, mental shortcuts. Here's the example. How many of you, when you go to pick up your child from school or go to the grocery store, go to the exact same store, take the exact same route, park in generally the same spot in the parking lot if you can, and go around the same time of day you normally go? Those habits, those things you've done to make your life easier and to minimize your decisions, are a good thing. However, you're relying on that heuristic, so when there's an accident you get upset: oh man, now I'm going to be late. The decision was made without trying to figure out the most efficient route to take at that time.


The decision was made based on a heuristic, and understanding that is important. Common biases that we see in cybersecurity: confirmation bias. "Well, it's probably a false positive," or, "this looks like it's X, so it's got to be X, so I'm only going to look for the things that tell me it is, in fact, X." We've all seen that one. Optimism bias:


"It won't happen to us. We're never going to get breached." Until you do, or until you find out that you were breached. The one that for me was the most interesting is this last one, authority bias. I did my doctoral research on third-party risk and the risk assessments that people below the CISO level make. And I was telling this story at lunch.


One of the things that was in every scenario: I gave three scenarios to my participants, and every one of them had a variation of the following sentence: "The vendor has PCI compliance and will provide a certificate upon request." When I wrote that, I thought, okay, maybe somebody will latch onto it, maybe they won't.


Nine out of ten participants said that that single piece of information, just that the vendor has the certification and can provide the certificate, was enough. Nothing about how many things they had to fix, how long it took them to get it, whether there were critical issues they had to fix, whether they failed the audit the first time.


That was enough for them to go: you know what, this is enough for me to lower the risk of this vendor. If you look at every breach that has happened, I would imagine every one of those companies, SolarWinds, you name it, had certifications, had a SOC 2 report, had all of those things. It's a control that fails when an incident occurs.


So having those certifications is great. But if you're relying too heavily on them, and you're thinking that a certification is somehow going to protect you and lower your risk, you may be fooling yourself. The case example is the alert that gets escalated because the analyst trusted an AI tool's flag on what turns out to be normal behavior.


You see the AI say "this is bad," and you believe it. I mean, how many of us have seen the impossible-travel alerts that come out of your EDR, where it turns out all it was was somebody getting on the VPN and changing their location? But because it happened at a time that didn't seem right, you went, this has got to be something,


and you ignored the obvious, or potentially obvious, answer. So let's talk about some real-world scenarios. We've got the case study of ransomware at 2:30 a.m. A ransomware incident comes in at 2:30 a.m. You get an urgent ransom note, which already puts you in a situation of stress. The note says you've got 72 hours, or 48 hours, or 24 hours. So the attacker has already activated your stress, which means cortisol is now running through your body.


You're fatigued because it's 2:30 in the morning, and for most people, even people used to working at night, 2:30 in the morning is a time our bodies physiologically don't handle. It's dark out, you're usually inside, so you've got either no windows or it's already dark, and you're working under unnatural light. It's just not a position that works well physiologically.


The other thing is that because it's ransomware, there often tends to be overconfidence: we've got backups, we can get to it, no problem. We'll deal with this in the morning. We're going to investigate, but we've got backups, we'll get to it as soon as we can. No big deal.


The problem is that that resulted in extended downtime and increased cost. Why? Because we were tired, we were stressed, and as a result we didn't move as quickly as we should have. So how do we mitigate this? Well, stress mitigation: in many of the situations we see, stress is the result of some level of uncertainty or lack of clarity. We get stressed because we're not sure what our next step should be.


This is why we have a predefined playbook. Why do we have an incident response plan? It's so that when an incident occurs, there's no stress of trying to figure out who needs to get involved. But do you have decision frameworks in place? Do you know what process your organization wants you to follow for "should we or should we not pay the ransom?"


Who needs to be involved? What information is going to be needed? Do you know what to do if it's a suspected exfiltration of data? The more you've got that defined, the more your decision process has already been laid out and tested, the less likely it is that you're going to have to make a decision under stress because you've already established the procedure.
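One way to keep that kind of framework laid out, and testable in tabletops, is to encode it as data. The sketch below is a hypothetical Python example; the decision, roles, fallbacks, and questions are assumptions for illustration, not a prescribed playbook.

```python
# Hypothetical sketch: a ransomware decision framework encoded as data so it
# can be exercised before an incident. All roles and questions are examples.
RANSOM_DECISION = {
    "decision": "Do we pay the ransom?",
    "required_roles": ["incident commander", "general counsel", "CISO"],
    "fallbacks": {"CISO": "deputy CISO", "general counsel": "outside counsel"},
    "information_needed": [
        "Is data exfiltration suspected?",
        "Are backups intact and restorable?",
        "Estimated downtime cost per hour",
        "Regulatory notification deadlines",
    ],
}

def resolve_roster(framework, reachable):
    """Fill each required role with whoever is actually reachable right now."""
    roster = []
    for role in framework["required_roles"]:
        if role in reachable:
            roster.append(role)
        else:
            roster.append(framework["fallbacks"].get(role, f"UNFILLED: {role}"))
    return roster

# The CISO is across the ocean; the framework already knows who steps in.
print(resolve_roster(RANSOM_DECISION, {"incident commander", "general counsel"}))
```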


It's also important, just as we practice our incident response plans with tabletop exercises, to practice your decision-making processes. How many organizations have you been at where, when they do a tabletop exercise, they do their best to schedule it when everyone from the executive team can be in the room, at a time that's convenient for everybody?


The problem is, incidents don't happen when it's convenient for everybody. Do you know what to do if your CISO is across the ocean? Do you know what to do if the CEO isn't available? Do you know what to do if the general counsel can't be reached? What's the process? Those are the situations where, when you're up against a clock and time is ticking away, you get extra stress.


How do we counter fatigue? To the extent that you can, having a team rotation structure is great. At a minimum, mandatory breaks: your people can spare five minutes to get up and walk away from the computer, to go get a glass of water, use the bathroom, whatever it may be. They will actually operate better


for having had that time to turn their brain off for a little bit. How do we reduce bias? Well, you have to do red teaming, or in other words, what would often be called a premortem of your decision: what if the decision we make is wrong? We don't often want to think about "what


if this was the wrong decision? What are we going to do?" But here's something anybody who plays chess will understand. If you're a regular chess player, you've red-teamed every match you're going to compete in: if I make this move and it turns out to be the wrong move, and this person does that, I'm going to do this. That is why, if you watch chess tournaments, players are able to move as quickly as they do: they've already got that decision framework together.


What's the easiest way to remove fatigue? Automate the repetitive stuff. We started to do that with SOAR. We started to say, you know what, phishing emails have certain markers, certain things we can look for to identify the chance that it's a phishing email, and we'll automate that.


There are so many decisions made in a day that are repetitive decisions following a specific process that the GRC team or the security team has laid out; all you need to do is automate those repetitive decisions. Every decision that you automate is one less decision your people have to make, and one more decision, the kind that is critical to the organization and requires human intervention, that they can focus on.
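As a toy illustration of automating one of those repetitive decisions, here is a hedged Python sketch of a phishing-indicator scorer. The indicator names, weights, and threshold are invented for the example; in practice they would come from your own detection engineering or GRC process.

```python
# Toy sketch of an automated, repeatable triage decision. The indicators and
# weights below are made-up assumptions, not a real SOAR playbook.
PHISHING_INDICATORS = {
    "sender_domain_mismatch": 0.4,  # display name doesn't match sending domain
    "urgent_language": 0.2,         # "act now", "account suspended", etc.
    "newly_registered_link": 0.3,   # link domain registered very recently
    "unexpected_attachment": 0.3,
}

def triage(email_flags, quarantine_at=0.5):
    """Score an email against known indicators and decide automatically."""
    score = sum(weight for flag, weight in PHISHING_INDICATORS.items()
                if email_flags.get(flag))
    verdict = "quarantine" if score >= quarantine_at else "deliver"
    return verdict, round(score, 2)

print(triage({"sender_domain_mismatch": True, "urgent_language": True}))
# ('quarantine', 0.6): one less repetitive decision for a tired analyst
```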


The other thing you can do to reduce bias, beyond "what if we're wrong?": checklists, and a devil's advocate. You should have somebody on the team whose job it is solely to say, what if we do this? It may be the most absurd thing in the world, but having that person bring it up and challenge things before you make the decision can be the most vital thing, because you'll go, you know what,


we didn't really think about that, or we don't have an answer for that, we don't know what we're going to do if that's the case. Embedding behavioral science into incident response: remember that incidents are typically perpetrated by people, and people act in specific ways. Even if it's an AI-generated attack, a human programmed it, a human with biases, a human with a specific way of doing things, a human bound by human rules.



So think about: does this seem logical? Is this something that could actually be done? But also understand that the process you're following, and the way you're approaching your team, matter. The last thing an incident response team wants to see is the incident commander stressed out. The last thing you want, as a member of an incident response team, is to be going, man, it's three in the morning,


we've got to go home. Understand that people feed off of each other, but also understand that little things can help. Simple things. You want to increase the morale of the team when you've got an incident going on? Buy pizza. It's little things like that, but it's understanding that if you're rewarding people, they're more likely to do things.


Think about it this way: are you more willing to help your next-door neighbor, who you can't stand, or your best friend you've known for 20 years, who's got every story on you and can embarrass the crap out of you? The other thing is building decision hygiene. This is really what I wanted to talk a lot about, because this is where we're talking about actually using a process, and reviewing the process, of how we make decisions: not what the decision was, not "did we make the right decision about whether or not to pay the ransom,"


but how are we going to make that decision when the time comes? Encourage reflective pauses. I worked with a client recently, helping them with their behavioral science, and I said to them, here's what we're going to do. You've got a problem where people are clicking on links that turn out to be malicious.


It hasn't harmed the organization yet, but we know it has the potential to. So here's what we're going to do: the minute they click that link, a pop-up box is going to appear for 30 seconds. They can click the link again and actually go to it after those 30 seconds. But I want them to stop for 30 seconds.


And what happened was that in almost a third of the cases, those 30 seconds were long enough for them to go, you know what, while I'm sitting here, let me hover over the link, let me think of all the things I've been told to look at. And they started to look at the links.


And what we found was that the reporting rate of malicious links went way up, and the number of links clicked went way down. All we were doing was asking them to take a reflective pause.
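For illustration, here is a minimal Python sketch of that 30-second pause, assuming clicked links are rewritten to pass through a gate that can hold the user before redirecting. The function names and flow are hypothetical.

```python
# Hypothetical sketch of a reflective-pause interstitial for clicked links.
import time

def reflective_pause(url, still_wants_link, pause_seconds=30):
    """Hold the user for a pause, then let them reconsider the redirect."""
    print(f"Hold on. You are about to visit: {url}")
    time.sleep(pause_seconds)  # the pause itself is the intervention
    if still_wants_link(url):  # user hovered, re-read, still wants to go
        return f"redirect: {url}"
    return "reported: suspected phish"  # the pause turned a click into a report

# Example: a user who, given a moment, spots the look-alike domain.
print(reflective_pause("http://paypa1-login.example.com/verify",
                       still_wants_link=lambda u: "paypa1" not in u,
                       pause_seconds=1))  # shortened so the demo runs fast
```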


How many of you actually stop when you get the "are you sure you want to delete this file?" prompt? Every one of us at some point goes, wait, do I really want to delete that? You may not do it every time, but there are plenty of times when that pop-up asks, "are you sure you want to send this to the Recycle Bin?" and you go, wait a second, do I want to delete this one? Is this the right file? Am I about to delete the right one?


That reflective pause delays your intuition by seconds. Because here's the thing, going back to the neuroscience of it: every time you click that link, every time you take an action, especially online, you get a hit of dopamine in your brain. Dopamine is the happiness chemical, and we seek dopamine. But what we often don't do is stop our action long enough to let the dopamine wear off and ask,


was that really something I should have done? That's what a reflective pause can actually do. Again, nudges: that alert, that double-check. I think many of us have been saved by the Outlook "are you sure you want to delete that?" option. Also, and this is the hardest and probably most controversial thing: we have to stop evangelizing the hero mentality.


We use terms like "firefighting." We use terms like "in the trenches." We use these very militaristic terms to refer to what we're doing, and as a result we unknowingly create an atmosphere where we expect superhuman heroics to take place. Instead, we need to say: you are a human,


and I understand you're a human. It goes back to the scenario that, for those of you who were in the CISO Series recording earlier, they were talking about: which is the worse scenario? Would you rather spend $1 million on a tool and get not-great security, or not spend the million and burn your people out so they're churning every six months?


This is a scenario where it's very easy for all of us to go, you know what, I'd rather spend the million, because the human is so important. Let's make our workflows match that. Let's not make that just empty rhetoric. Let's stop expecting our people to put aside their humanity, their human limits, to do things. Because what we will find is that they will actually do a better job if they feel a sense of psychological safety, and if they feel their basic needs are being met.


For those of you who are familiar with it, this goes back to the basics of Maslow's hierarchy of needs in psychology: the safety and security level. If people don't feel safe and secure, they're not going to be able to do anything else for you. So what are some of the key takeaways? And then I want to open it up for questions.


Stress narrows our focus. It causes us to focus on the stressor and on removing that stressor. Fatigue erodes quality. That seems pretty simple, but it's something we've got to continually tell ourselves. Bias blinds judgment. I love the example I'll give for this, because we've all had this scenario: you've all gone to a restaurant or to a movie because a friend said the restaurant was amazing, the movie was great.


And you go see the movie, or you go to that restaurant, and you have a horrible experience, and you're like, what the hell was my friend thinking? Do they not know me? We've all done that. Why? Because we had a bias toward hearing what our friend was saying, taking that anecdotal evidence and using it. It's okay to do that, but don't expect that just because your friend said it was great,


you're going to have a great experience. Leaders have to design systems that account for human limits. We have to do that. We owe it to our teams. We owe it to ourselves. We as leaders expect it from our own leaders, but we're not willing to give it to our subordinates. That seems kind of broken.


Reframe cybersecurity as behavioral risk management. I brought this up at lunch as well. We've all heard "people, process, technology." Whether it's true or not, the thing somebody pointed out to me really rang true: what's the very first word in that? It's not first just for alliteration, because it sounds good. It's people.


And there's a reason people are at the forefront of it. This is cybersecurity: what we're asking people to do, what we're expecting, what good security practices are, they're about people. Yes, technology can help, but it's about people. Build a culture that anticipates human error rather than blames it. One of the greatest examples of this: Reddit had a breach last year. Somebody on their team clicked a link, entered credentials, realized it didn't seem right, and admitted to it immediately.


Reddit was able to stop the breach within about an hour. They got online and started sharing what they could about it, alerting people, and as a result they came out looking a lot better than Experian or Equifax or any of those that had PR issues, because they created a culture where it's okay to say, you know what,


I made a mistake. So would you rather your people not tell you that they've had a breach or an incident, so it goes on for longer and gets worse? Or would you rather they be comfortable enough to say: I did this, I made a mistake, I'm trying to fix it,


help me fix it, let's respond to it together? I'll open it up if there are any questions. We have about six minutes left, but I'm open for any questions.


Okay. Well, if there are no questions, I'll be here afterwards if you want to come speak for a little bit. Oh, I think we might have one question right here. Yes, please. One moment. Here you go. "What other stress recommendations do you recommend companies implement? Anything fun, like maybe doing jumping jacks?"


Honestly, there are a couple of things. First of all, do anything you can to counteract it, because remember, stress, fatigue, all of these things are real chemical reactions happening in your body. So do whatever you can to lower your cortisol levels: relax, lean back, even just, hey, close your eyes for five minutes and meditate.


Mindfulness meditation works for a reason: it slows your breathing. So maybe not necessarily get up and do jumping jacks, but close your eyes for five minutes, turn on your favorite music, whatever it is. Do something to counteract the chemical reaction that is happening in your body, and give yourself the time to decompress,


literally to decompress; it doesn't take long. Now, I have something up on the screen. This actually came out today, and it is now available on Amazon. I wrote a book, taking a lot of the research I've been doing over the last couple of years and some new things, called Behavioral Insights in Cybersecurity.


It is focused on us as cyber practitioners. There's so much out there about end users and why we think end users do stupid things. This is about the blind spots we have that we don't think about, and how we as humans view technology, in a way that will help make sure you're applying the right strategies in the right order and with the right goal in mind.


I know that for cybersecurity practitioners, QR codes are quite interesting. I promise you, this one will not rickroll you; it really does go to Amazon. It's been a really interesting opportunity. The book is currently the number one new release in Security Architecture and Design on Amazon, and I'm really proud of that. And again, learn what you can, because, coming back to the observer principle, it truly is an interesting thing: once you're aware of it, you may not be able to stop it, but you'll start to notice it, and you'll realize how it changes your outlook in a positive way.


If you want to follow me on LinkedIn, or learn more about me on my website, please feel free as well. Again, no rickrolls, I promise. As much as I would love to keep going, I really do want to be respectful of everybody's time. So with that, thank you so much. We've got about two minutes before you have to move to the next sessions.


Thank you all for being here today. Thank you for listening to this, which is maybe not the most technical or typical presentation you'll see at a cybersecurity conference, but it's an important topic. The more we look at it, the more we understand the human side of this, the better and more effective our strategies will be, the less friction you will create, the more adoption you'll have of your strategies, and the fewer tools you'll have to replace because they were ineffective. So thank you very much for being here.


Thank you so much, Dr. Sachs. We appreciate your time and everything that you shared with us today.

