Presenter:
Transcript:
So Karen Worstell is a rare kind of leader. She has worked in boardrooms. She has mentored people for years. She's been a CISO for global brands. She's been a chaplain for people navigating crisis. That's a big part of why I'm excited about this talk. I'm not going to steal her thunder, though. She's a master coach for high-achieving professionals.
She is the person you call when you're trying to figure out what you're going to do next — to get some of her expertise and see what you want your next steps to be. She's just got all of those great viewpoints. Really excited for this talk. She is the founder of W Risk Group, the powerhouse behind Defensible Edge, and the architect of Leadership Ascent Labs, a portfolio of programs that help professionals go from invisible to influential.
Does that sound cool? You like that? Thank you. Her work sits at the intersection of digital risk, human dignity, and "what were they thinking?" — which, if you've read any breach report lately, you know is a rich field of study. So please welcome a woman who has advised the C-suite, testified to regulators, mentored women in tech, and still has time to ponder all these things.
She's going to be talking to us about leading with empathy. So thank you, Karen. Thank you so much.
Well, thank you so much. Let's see — oh, we do have a slide up. Okay, now I can see. Well, thank you for being here. Good afternoon. It is an honor to be here as the closing keynote for this conference, and to be back in the great state of Texas, even though I only lived here for four years.
I have deep ancestral roots in Texas. My great-great-grandfather was reportedly a cowhand in the waning days of the Chisholm Trail. And I've always loved the huge horizon in this state, and its people. And I love Tex-Mex. I'm so spoiled after living here and eating the food here, I can't really eat it anywhere else.
I will say that I was really entertained when we first got here by the billboards. As we drove down I-45, they said: Don't Mess with Texas. Drive Friendly. God's Got You — Lakewood. And 1-800-DNA-TEST, Who's the Father? — all in a series. So, it's great to be back. What I would like to do today is tee up this conversation with a little bit of background about how I came up with the way I now look at the AI landscape.
So if you'll indulge me, I'll dive into a little of that background. I've spent most of my professional life in cybersecurity and risk, as Michael noted. And what that means is that I'm trained to look for what can go wrong. In fact, my primary question for years and years of my life was: what am I going to fix today?
But I would like to talk about something that's much more important. And that is what could go incredibly right, if we learn to lead with empathy.
Before we go further, I'd like to take a minute to align on what I mean by empathy, because it's been my experience that we often confuse empathy — the skill and the talent of it — with sympathy and kindness. Empathy isn't about feeling sorry for someone. That's sympathy. In fact, in the South we have a phrase that is the perfect epitome of what sympathy is. And that is: oh, bless your heart. It's another way of saying, sucks to be you.
Empathy is about having the skill to step into another person's perspective, to feel with them, not for them. It's seeing the world through their eyes, even if you disagree with their point of view.
You would be justified in thinking that empathy is in short supply today, with everything we see around us. But here's why it's so important. Brené Brown — a Houston native, and I'm a huge, huge fan of hers — says empathy fuels connection; sympathy drives disconnection. Empathy is a choice to get close, to be present and stay present with someone else's experience, even if it's messy or uncomfortable.
So in leadership, empathy isn't just soft. It's not just a soft skill; it's a strategic skill. It's the foundation of building systems, teams, and technologies that serve people, not just users. And in an AI-driven world where scale replaces context, that ability to understand — to really, really understand another person, to understand human experience — becomes more valuable than ever.
So this is our journey today. You can think of it a little bit like a movie with six parts. We're going to move from insight into action. I'm going to talk a little bit about our experience at AT&T Wireless, and then we're going to move through empathy leadership — with a little Indiana Jones. When I was at AT&T Wireless, we developed something very practical that I call outcome-based security.
It's deceptively simple, like a kitchen remodel. You start off thinking, how hard could it be? And six months in, you're eating over the sink with no countertops and two open registers.
Outcome-based security was born at AT&T Wireless, when we had to comply with Sarbanes-Oxley because AT&T Wireless had lost $350 million in one calendar quarter due to a major software failure. Was anybody here an AT&T Wireless customer in 2002?
Right. That was a real miserable time for all of us. I was the new CISO there at AT&T Wireless, and I got quickly introduced to our rather messy environment by what I called the Four Horsemen of the Apocalypse: SoBig, Fabian, Kandahar, and Odyssey. SoBig was a big, huge virus that hit us and caused a ton of mayhem, right before we got hit by Fabian, the largest hurricane to ever hit the little island of Bermuda — where we owned the only cell phone service, run from a tower on the tallest mountain on the island, out of a disused shipping container called Kandahar.
And then the big coup de grâce was a project called Odyssey, which was the replacement of the entire CRM system for AT&T Wireless. If you're familiar with wireless companies, the CRM system is the heartbeat of the organization. It's the thing that activates phones and modifies customer services. It's the customer service management system for everything. If the CRM system's not running, nobody's phone is working.
They decided to replace this on Halloween, right before the biggest sales season of the year. And it didn't work. It was a failure of a software deployment. The short version of the story is that by Easter the following year, we were still in the war room, trying to bring up our systems and activating phones by hand — as the second-largest wireless provider in the nation.
It was kind of miserable. But on top of all of this, because of the loss, we had to sell AT&T Wireless to Cingular. It was a very hastily put-together deal, because we were insolvent. We were on the verge of bankruptcy, and the AT&T Death Star brand was at stake. The terms and conditions put in front of us for the sale were that we had to pass a cybersecurity audit across the company with zero deficiencies, under the new rules put in place for Sarbanes-Oxley, in the first year of accelerated filing.
And if any of you lived through that — and I heard from some of you today who did — you know that there were no rules. I love the idea of interrogating the regulators, because it's like: spill it. I need to know what's going on. We need to know how to make this work. Well, no one could tell us. So what we ended up having to do was design from scratch the approach we were going to take to build what I call a defensible standard of care. I had to be able to know that I was going to pass the audit with zero deficiencies under Sarbanes-Oxley rules in ten months.
On top of that, because of the amount of money we had lost the prior year, our materiality threshold — you were talking about materiality earlier — was less than 1%.
Let's look at that. Our radical reality: we had to get radically real about where we were. Because if I overlooked something — if we as an organization overlooked any detail — we would not pass the audit and the deal would be off; we had no wiggle room, because of this low level of materiality. So it all hinged on our ability to get the cybersecurity thing straight.
Does that sound like a good time, everybody? It was crazy.
Ray Dalio talks about getting a grip — the term radical reality I use comes from Ray Dalio. It's: I don't want it sugarcoated, I don't want it overstated, I don't want it understated. I need to know exactly where I'm standing. I always say, if you're going to take a stand, go ahead and take a stand, but know where your feet are.
And that's what we needed to do. We hired something like 350 people in the course of ten months to help us get this job done.
And I would visit the teams — I'd walk around, management by walking around — and ask how it was going. One of the things I noticed was that everyone wanted to tell me what they thought I wanted to hear. They wanted me to feel good about what they were doing. I guess they wanted to help me have a better day, right?
And so what we would end up with was a very sugarcoated version of what was really happening. So we came up with a mantra, and that was: there is no bad problem here, except the one we're not talking about — because if we don't know about the problem, we can't fix it. And then we had to put together our risk profile.
So the stepwise process I go through for outcome-based security is: understand where we are, then put together a risk profile based on that reality and on what our target looks like. And we have to consider a lot of aspects beyond cybersecurity and technical risk, which are obviously part of it. In our case, financial risk was a huge piece of it, and operational risk, for example. And it's not a solo sport. You have to go out and socialize this with everyone on the leadership team. That was part of our success, I think, at the end of the day — we had all of these people involved with us.
As an aside, I want to share the process of building this risk profile, because I do this now for other companies — the same process I developed while we were at AT&T Wireless. And the thing that's been really eye-opening is that every single risk management standard out there — all the ISO standards — requires us to do a context statement.
Are you familiar with that? ISO 31000 requires it — ISO 27001 doesn't really get into it too much, but all of the related risk management standards do — and it requires you to go out and get a very clear picture of what your business context looks like. Why? Because you can't determine what the controls need to be unless you know what the context is.
This is one of the things, in my opinion, that makes a generic framework actually kind of a problem: it has no context, and a company that goes straight to building a risk register off of a generic framework is going to create a set of controls that it can't justify.
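To make that concrete, here's a minimal sketch of what a context statement might capture, written as a simple data structure. The fields, names, and example values are illustrative assumptions on my part, not something prescribed by ISO 31000 or any other standard:

```python
from dataclasses import dataclass, field

@dataclass
class ContextStatement:
    """Business context gathered before any controls are chosen.
    Illustrative fields only -- the required content comes from
    your risk management standard, e.g. ISO 31000."""
    organization: str
    business_drivers: list[str]        # how the organization makes money
    regulatory_obligations: list[str]  # laws and contracts that bind it
    materiality_threshold_pct: float   # how much error is tolerable
    risk_appetite: str                 # qualitative statement from leadership
    in_scope_systems: list[str] = field(default_factory=list)

# Hypothetical example, loosely modeled on the AT&T Wireless story:
context = ContextStatement(
    organization="ExampleWireless",
    business_drivers=["phone activation", "billing", "customer care"],
    regulatory_obligations=["Sarbanes-Oxley 404", "purchase agreement"],
    materiality_threshold_pct=1.0,
    risk_appetite="zero audit deficiencies",
    in_scope_systems=["CRM", "general ledger", "IAM"],
)
```

Only once something like that exists can you argue why a given control belongs in the risk register.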
So that context step is super important. It's really shocking to me how many companies don't do this. Creating the risk profile is the next step. And then you have to determine what the necessary outcomes are. In our case at AT&T Wireless, because of the purchase agreement we had with Cingular, it was SOX 404, and because our materiality threshold was minuscule, we ended up with an outcome statement that we engineered to. And that outcome statement was: all systems contributing to the general ledger must have authenticity, accuracy, availability, and integrity to within 1% of value. That doesn't sound like a traditional security statement, but it was a top-level outcome that we could engineer to.
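What does it mean to engineer to an outcome like that? Here's a minimal sketch of the idea: decompose the top-level outcome into conditions you can test per system. The check names, thresholds, and sample data are hypothetical assumptions for illustration, not the actual AT&T Wireless controls:

```python
MATERIALITY_PCT = 1.0  # ledger values must be accurate to within 1%

def within_materiality(reported: float, verified: float) -> bool:
    """Accuracy/integrity: the reported value matches an independently
    verified value to within the materiality threshold."""
    if verified == 0:
        return reported == 0
    return abs(reported - verified) / abs(verified) * 100 <= MATERIALITY_PCT

def failed_conditions(system: dict) -> list[str]:
    """Return the outcome conditions a ledger-feeding system fails."""
    failures = []
    if not system["authenticated_feeds_only"]:            # authenticity
        failures.append("authenticity")
    if not within_materiality(system["reported_total"],
                              system["verified_total"]):  # accuracy/integrity
        failures.append("accuracy/integrity")
    if system["availability_pct"] < 99.0:                 # availability (assumed SLA)
        failures.append("availability")
    return failures

# Hypothetical billing system: a 0.4% variance passes the 1% threshold.
billing = {"authenticated_feeds_only": True,
           "reported_total": 1_004_000.0,
           "verified_total": 1_000_000.0,
           "availability_pct": 99.9}
print(failed_conditions(billing) or "meets the outcome")
```

The point is that every requirement flows down from one statement the business already agreed to.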
Does that make sense? So, for example, part of our radical reality was that we had grown very quickly by way of acquisitions and mergers over the years, so we had 38 different identity and access management systems and nine systems of record. Is anybody in here an auditor? You should be fainting. We also had warez sites running in our data center that had somehow gone previously undetected. So we had a real mess to clean up in order to meet this outcome. We had to make sure our identity and access management systems were pristine. There couldn't be — what do the auditors call it — a general controls problem, where they go out and find that your identity and access management systems have bad accounts in them.
We had 60,000 accounts on one application in a company of only 3,000 people. So there was a ton of cleanup to do.
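At its core, that account cleanup is a reconciliation problem: compare every application's accounts against an authoritative system of record and flag the orphans. Here's a minimal sketch, with hypothetical account names:

```python
def find_orphan_accounts(app_accounts: set[str],
                         system_of_record: set[str]) -> set[str]:
    """Accounts that exist in an application but belong to no
    current identity in the system of record."""
    return app_accounts - system_of_record

hr_identities = {"asmith", "bjones", "cnguyen"}  # authoritative record
crm_accounts = {"asmith", "bjones", "cnguyen",
                "old_contractor", "test_admin", "svc_legacy"}

orphans = find_orphan_accounts(crm_accounts, hr_identities)
print(sorted(orphans))  # ['old_contractor', 'svc_legacy', 'test_admin']
# Each orphan then needs an owner, a justification, or deletion
# before the access control can be attested as clean.
```

Simple in principle; with 38 IAM systems and nine systems of record, brutal in practice.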
Okay. So once we know what the necessary outcomes are, and we know where we stand, then I ask one question — and this is where we're going to go with the rest of this presentation: what will it take for this to be true? I have an aspirational outcome. I'm not there yet. What will it take for this to be true? That's how we broke it down into the steps we needed to take. We had to clean up our identity and access management systems, the warez in the data center, all of these thousands and thousands of accounts. One percent materiality meant we had zero room for error, and the scope of our audit was going to cover the entire network — from the MTAs on the cell phone towers, through the billing processing system, to the entire back-office network — which meant that every single system attached to the AT&T Wireless network was in scope for Sarbanes-Oxley. We had to prove that our controls were continuously in place, through testing, monitoring, and validation. That became the "this is how we're going to do it." The short version — I do a whole other workshop on the how and the what of what we did there, so I'm not going to go into any more detail — is that it worked.
We were audited by all of the Big Four audit firms. We had zero deficiencies — not even a documentation deficiency. The EY auditor cried and said it was a thing of beauty. I think that might have been the highlight of my entire career. And ultimately, the sale did go through, and it saved the AT&T Wireless brand.
That approach was instrumental to our successful sale of AT&T Wireless to Cingular. But here's the part I don't usually talk about. I tried to implement the same thing at Microsoft, and it was a total flop. Total flop. There's probably a cautionary tale on a whiteboard somewhere in Redmond about this. It was my failure as a CISO, and it was legendary.
But the principle still holds. When people align behind a compelling outcome and believe in the vision, this whole idea of outcome-based security works. And I believe it also works if we talk about outcome-based AI. It's not just a framework; it's a real leap of faith toward a big, hairy, audacious goal.
So now I want to zoom out from all of that and start to talk about what the emotional physics of that kind of leap of faith looks like. Here's something I've observed, both in my role as a leader in cybersecurity and as a chaplain.
We all dream of a better world, a better life, a better job, a better body, a better future for our kids, a better, more humane approach to innovation. We imagine something brighter. And then we do something very strange.
We hit ourselves over the head with a sledgehammer of safety. What do I mean by that?
We say, nah, that's not possible. Who do I think I am to have a dream like this? It's too risky. And what if I fail?
Why do we do that?
Because it's really, really vulnerable to imagine greatness. It opens us up to hope, and hope exposes us. Nothing exposes us faster than hope. So we play it safe. Is this resonating with anybody?
We focus on the obstacles, and we rewrite the story so that we are off the hook.
We settle for what we've been handed. And if someone else steps up and goes for the big hairy audacious goal, we critique them. We stay in the cheap seats of the arena. Teddy Roosevelt warned us about this over a century ago — because it's still safer to be the critic in the cheap seats than to be face down in the dust, fighting for the dream.
So let's do an experiment together. Close your eyes for a moment and imagine this: a future where your organization's AI strategy is admired not just for its innovation, but for its humanity. Where your team wakes up excited to work on something because it's meaningful and because it matters. Where your systems aren't just faster and more efficient, but trustworthy and ethical.
So hold that vision in your mind's eye. And now ask yourself, what will it take for this to be true?
That question is the bridge from where we are today — from our radical reality — to the kind of future we want to have with AI. And it's where we're headed next.
So before we cross the bridge, let's know where we're starting today as a society — our radical reality. And the truth is, I think we're starting in a world that's really fractured. I hear that word a lot. We live in what the philosopher Martin Buber called an I–It world: a world where people are treated as a means to an end. Transactions over relationships, efficiency over empathy. Buber contrasted that with I–Thou relationships — genuine, mutual human connection. That was in 1923, more than 100 years ago. Fast forward to 1981, when the philosopher Alasdair MacIntyre wrote in After Virtue that the manager treats ends as given; his concern is with technique and effectiveness.
You know, that's what we did in cybersecurity.
We optimize for technique, for control, for frameworks and policies. And we rarely ask the question: is this humane? Is this good? We often don't design for people. I have a dear colleague here in Houston named Phil Bandy — I hope he doesn't mind me calling him out. He's a cultural anthropologist, and he worked with me on my team at Stanford Research Institute. We had an ethics policy that said we were not allowed to deceive anyone — not another employee, not a colleague, and not a customer. Sounds like a really good policy. But for a cybersecurity team that had to do social engineering testing, because everybody was doing social engineering, it didn't work. Phil came up with an approach that let us test our customers for their susceptibility to social engineering without using any deception. I'll never forget that realization about what it meant to do human-centered design: we could go out and do this testing and evaluation for our customers and never have to lie to anyone. That was huge to me. That's what I mean when I talk about doing cybersecurity — about thinking through the way we do things — in a way that is human-centered.
So over the years — even since Martin Buber, even since Alasdair MacIntyre — from the 1960s onward, we have continued the same drift. I grew up in the 60s; between that and cybersecurity, that's why my hair is so white. But throughout that period, we kept shrinking the "we" aspect of everything and elevating the "I" — elevating the me.
Until Sherry Turkle. Anybody here know Sherry Turkle's work at MIT? I highly recommend it.
She wrote Alone Together in 2011, and in it she says technology doesn't just do things for us; it does things to us, and it changes who we are. So now we connect more than ever digitally, and yet we are more alone than ever emotionally. And AI is not just on the horizon anymore. It's embedded in our systems, our creative tools, our risk models, our code base. AI is coming with power, with promise, and with a whole bunch of problems, and we've already talked about some of those today. We're on the edge of a metaphorical chasm, just like we were in the dotcom era when we talked about crossing the chasm.
We're on one side, which is our present, and the future is on the other. So the question is: what's in the chasm? We've already talked about some of it today. The hallucinations, the bias, surveillance, deepfakes — things that are going to erode trust — privacy invasions. And I want to talk about regulatory challenges. I loved your question at the end of your panel, Michael.
So let's talk a little bit about regulation and AI. Since January of 2025, over 700 — I think it's almost 800 now — state-level legislative proposals have been introduced. To put that in perspective, that's more than five times the number of enacted privacy laws on the books today. If you are responsible for privacy compliance, you know what kind of a nightmare that's going to be, right?
Why so many? Well, I'm from Colorado, and we're dealing with this in Colorado right now, with the Senate bill that was passed last year and signed into law. Hopefully we're able to fix it before it gets fully enacted next year. I'm going to say something provocative now, so you can call me out on it later.
But I believe legislators have a desire to make a mark by legislating first.
And the apparent ignorance among our lawmakers and regulators when it comes to AI — and to the things they're putting into law, which is enforceable — is staggering. Staggering.
We are innovating at lightning speed and regulating with fog lights, at the speed of a toddler on espresso. This is a very familiar pattern, and I've heard it mentioned already today. Back in 2000, I was at Stanford Research Institute. You might have heard of Gene Schultz — Gene Schultz was the one who hired me there.
We did research on cybersecurity, and we ran a program called the International Information Integrity Institute. We also did a lot of consulting for Fortune 50 companies. We were known as the heartbeat of Silicon Valley. We had innovation — you've heard of Doug Engelbart and the mouse, the internet, Siri, EMERALD. EMERALD was the first real research being done in the area of network intrusion detection, along with BotHunter. We were doing tons of cybersecurity research. So we held the Internet Security Summit in 2000, and we had the Secretary General of Interpol on the stage, along with Bank of America executives and SRI's
president. We had government advisors and Fortune 50 CISOs. And their message was — and Dave alluded to this earlier today in his talk — if we collectively don't get internet security right, it's going to be a mess. That was in 2000.
We knew then that we had to collectively balance the promise of new technology with guardrails, and we needed to do it right. Fast forward to today: cybercrime is the third-largest economy in the world. So we clearly didn't get it right. The forces of self-interest won. And we cannot afford to do that again. It's going to take leadership from people like you in this room to make that happen.
This is the holy grail of human-centered technology innovation, and it runs straight into ambivalent self-interest. This is where it gets really tricky, because we're not just facing ineffective, even harmful, regulation. We're navigating something far more complex: ambivalent self-interest. Here's what I mean by that.
It's when the same stakeholders — whether it's a company, a regulator, or a department — want two conflicting things at the same time. Tech companies want the freedom to innovate and the credibility that comes from being seen as ethical. They want public trust and proprietary dominance. They want to avoid regulation and be protected from the consequences of bad actors.
Even inside organizations, you see this when product wants speed, legal wants safety, risk wants proof, and the board wants quarterly results and zero headlines. My boss at Microsoft wanted zero headlines. He said my performance goal was: no hacks, no leaks. That's a whole other story. That's ambivalence. And it's not malicious. It's just real.
But here's the problem: you cannot build human-centered technology if the human element remains negotiable. It's not just technical. It's existential. So indulge me for a second while I take you to a scene you might remember from a movie, Indiana Jones and the Last Crusade.
You are Indiana Jones. You've made it past the blades, the riddles, and the ancient traps. Now you stand at the edge of a chasm. And before you: fog. No visible path. Just air.
Your organization's future hangs in the balance. You've been tasked with building an AI strategy that is innovative, resilient, and trusted. And everything in you — the frameworks, the training, the risk instinct — says wait. Wait until the time is right. Wait until the bridge is visible. Don't move. Stay safe.
But something deeper calls you. It's not what you can measure, but it's what you believe. You take a breath. You close your eyes. You lift your foot and step.
Not into chaos, but onto something solid. A bridge that only reveals itself to those brave enough to believe in empathy. It's not made of algorithms. It's made of moral imagination, responsibility, human dignity. And every step forward reveals more of the way. That, my friends, is what this moment demands.
It's not a product roadmap. It's a gigantic leap of faith.
So here are five forms of empathy-centered leadership. Empathy as moral imagination: how will this system impact others — a family in Mumbai, a teacher in Iowa, a refugee in Greece? Empathy as listening: creating structures to hear from those who are unlike ourselves — not just collecting data about them, but actually listening to them. Empathy as covenant: seeing users not as data points, but as full human beings. Everyone is someone's child, someone's partner, someone who matters. Empathy as responsibility: focusing not on what we can build, but on what we should build. What serves, not just what sells.
And finally, empathy as ethical limits. Having the courage to say we could do that, but we shouldn't. That's real leadership.
So what kind of leader takes that step? Not just a technologist, and not just a policymaker. It's a leader who sees empathy as a strategic advantage. Because empathy, as I said, isn't soft. It's how we translate risk into reality and innovation into impact. In his keynote this morning, Scott quoted Alan Kay: the best way to predict the future is to invent it.
So what's the outcome we desire? A world where every technological innovation begins with the question: how will this honor human dignity?
Rabbi Jonathan Sacks, of blessed memory, warned us of the drift from "We" to "I." In his final book, which I highly recommend — it's called Morality: Restoring the Common Good in Divided Times — he wrote that a society is strong when we care for the vulnerable; it becomes weak when we only care for ourselves. Human dignity must be the cornerstone of AI.
We're not trying to blend everyone into beige. We want technology that honors our uniqueness, our color, our complexity, and our character.
Now, a word of caution.
Empathy is powerful, but like any power, it can be misapplied. The psychologist Gad Saad calls it suicidal empathy: when our emotional instinct is manipulated or misdirected, and we focus on what feels good instead of what does good. Some examples of what that looks like: prioritizing AI ethics theater over meaningful safeguards. Designing hyper-protective systems that limit human agency.
Mistaking AI for something sentient and worthy of moral status.
Empathy must be wise, grounded, and directed at human flourishing. Otherwise, we build systems that feel good but do harm. So what will it take for the future we want to be true? Let's get specific. It will take technologists who write empathy into every line of code. Business models that reward human-centered design. Regulatory frameworks that guide without suffocating. Education that builds technical fluency and moral imagination. Diverse teams that reflect the lives they aim to serve. And finally, consumers and citizens who demand tech that respects their dignity, and who vote with their wallets. And above all, it will take leaders like you —
who can stand at the edge of that chasm and choose to lead with empathy, not fear.
So let me leave you with this. Empathy is not a compliance requirement. It's not a soft skill. It's the invisible bridge between innovation and impact. And every time you ask, "what will it take for this to be true?" and deeply listen for the answer, you are building that bridge.
I'll leave you with one question. When you go back to your teams, your boards, and your companies, will you be the one who asks: how will this honor human dignity? Will you step forward even if the path isn't fully visible? And will you lead, not because it's safe, but because it's right?
Because this isn't just a talk. This is your permission slip to leave the cheap seats and step into the arena.
Begin building bridges and lead with empathy. Thank you.