
AI and Deepfakes: The New Cyber Weapon

Presenter:

Dr. Joseph Ponnoly

Transcript:

Hey, everybody. Everybody ready for the after-lunch sessions? Everybody's got their blood sugar up after having some food. Okay. I hope you got to see some of the keynote. I'm Brian de Pollo, and I'm going to be introducing our speaker today, Dr. Joseph Ponnoly. He is a management consultant and researcher specializing in cybersecurity, IT, and data analytics, and he has been an advisor to multinational companies for over 20 years.


He's a former investigator with India's Central Bureau of Investigation and Interpol, where he led high-profile cybersecurity and forensic cases. He's an author and academic, holds a DBA in data analytics among multiple advanced degrees, is the author of A Gateway to the Quantum Age, and is a contributor to AI and other publications. A well-certified industry leader.


He's an ISACA board member with various certifications, including CISSP, CISA, and so on. So with that, I think this is going to be a great session. I will hand it over to Dr. Ponnoly. Thank you very much.


Good afternoon, everyone. Welcome to this session.


Can anybody identify this lady?


That's actress Sydney Sweeney.


But is it a real picture?


It's fake. It's not real. It's AI-generated. It's a fake, but not a deepfake. So I'll explain what deepfakes are. Is there anybody who is not aware of what deepfakes are? And is there anybody who has not been a victim of deepfakes?


In fact, whether we are aware of it or not, we are all victims of deepfake disinformation. The misinformation and disinformation that we see online and in print media, the half-baked truths: we are all victims of that, and we don't react to it. So we are victims of deepfakes. And deepfakes are not confined to just deepfake videos or audios.


Even deep fake text. Text that is generated by AI that is deep fake. I got interested in this topic. Deep fakes. When two of my friends became victims of deep fake cyber pornography and they committed suicide because of sextortion. Then I realized how serious this problem is, and he started researching on this. So let's have a better idea of what deep fakes are.


Next slide.


Let's play a game. I want to show you two videos. One is real. One is altered by AI. Can you spot the deepfake? Prime Minister Justin Trudeau recommended one of these books. Which one? "The book that I got excited about reading is called This Can't Be Happening at Macdonald Hall." "It's called How the Prime Minister Stole Freedom."


Yeah, it was the last one. The Prime Minister never said that. Okay, round two. "I hope to finish this talk show one day." Will the fake Morgan Freeman please stand up? "I am not Morgan Freeman, and what you see is not real." How about these videos? Florida Governor Ron DeSantis: did the presidential hopeful say this?


"I'm the only one that could possibly compete with Donald Trump for this." "I took some very bad advice. I did some very bad things." The last one is fake, made to make you think that DeSantis was dropping out of the race. Look at the mouth, out of sync with the voice. "I never should have challenged President Trump." That's one giveaway of a deepfake.


For now, from Hollywood to Washington to Ottawa, deepfakes are confusing reality with AI. Fraudsters can take a three-second recording of your voice. "I've watched one of me and my company. I said, when the hell did I say that?" Spend enough time scrolling, like me, and you start questioning everything on your screen. "How do I know that you're not a deepfake?"


That's the right question, isn't it? Hany Farid specializes in digital forensics out of the University of California, Berkeley. "If you're trying to create a ten-second clip of the Prime Minister saying something inappropriate, that'll take me two minutes to do, and very little money and very little effort and very little skill." Okay, so a video of a politician swearing or dancing won't set off national security alarms.


"President Trump is a total and complete..." But as AI technology gets better and more believable, Farid worries about populations primed for manipulation. "What concerns me a great deal in this country in particular is how partisan we are. And when you have that kind of deep, deep partisanship, that outright hatred of the other side, not just disagreement, the disinformation campaigns are much more effective, because they will take hold. Everybody's already there."


Here's something you might have seen before on your social media feed on artificial intelligence: a deepfake of The National's Ian Hanomansing that seems to be selling cryptocurrency. "More than 765..." It's a scam. He never recorded that; it's made with AI. Okay, but what if, during the next election, you see multiple videos that look and sound like a journalist you trust, telling you the date of the election has changed?


Would you believe it? "I think that's really scary when you think about it. What if, 12 hours before Election Day, it goes supernova viral? It doesn't matter if you correct the record 12 hours later; the damage has been done." Next, it's that threat to democracy that rattles this Conservative MP. "You know, I have over a decade's worth of speeches that are on the internet, in writing, in videos."


"It'd be very easy for somebody to put together a deepfake video of me." Worried that Parliament isn't moving fast enough, Michelle Rempel Garner helped set up a nonpartisan working group to tackle AI. "We haven't even dealt with telephone spoofing as a country, right? Like, we really haven't dealt with it."


Okay, that gives you some inkling of what deepfakes are, and how they can be used for elections, disinformation, and so on. So we'll see how deepfakes are created, how they impact us in every sphere, how they're used for scams and frauds, and how you detect them. What is the risk? How do you mitigate the risk?


And then, what is the remedy? What are the technological and legal remedies against deepfakes? And how do you promote trust? Because ultimately deepfakes lead to an erosion of trust everywhere. We do not trust online media, and we cannot trust even our own identity. There is a threat to our online identity, our digital identity. So how do we restore the trust?


So these are some of the topics that I want to cover today. Next.


So, deepfakes. Next one.


So deepfakes are manipulated video, audio, or text. You can see some of these deepfake images here. They are all generated using neural technology, that is, deep learning technologies. And we will see the technologies behind deepfakes. Next one.


So let's go through the risk landscape, the threat landscape, and the areas which are impacted by deepfakes. Next.


So fake news threatens democracy and our own digital identity. Deepfake identities are created, even for job interviews. There have been cases where people used virtual personas, digitally created by AI, to attend interviews and even get employed at US corporations, and then they started exfiltrating data. Actually, in one case it was a spy from North Korea who got employed by a US corporation.


But fortunately, after one week of employment, he was caught by security. So this disinformation need not be confined to the political sphere. Now we are seeing military uses of deepfakes and how they can impact warfare, whether the Palestinian conflict or the Ukraine war or any warfare. We heard the keynote speaker speaking about the emerging importance of Taiwan, or China and Russia.


So deepfakes are used extensively and will be used extensively in military warfare.


We are seeing increasing incidences of cyber fraud. We will see how senior executives of several corporations have been duped by deepfake videos and audios of CEOs and senior executives, and how that led to financial frauds, fraudulent transfers running to hundreds of millions of dollars. Sextortion, blackmailing, revenge porn: that is another area where deepfakes are being used.


So deepfakes are extending across every sphere of human life, and that's where trust, digital trust, is eroding. We will see how ISACA's Digital Trust Ecosystem Framework is trying to restore digital trust in every digital transaction, whether it is a financial transaction or otherwise, and how we have to shift the focus from just risk and resilience to digital trust.


Next. So these are some representative cases that have appeared recently. WPP is one of the largest advertising agencies in Europe, indeed across the world, and their CEO became the victim of a deepfake scam. There was an attempted fraud using a deepfake of the CEO, and that led to a transfer of nearly $230 million.


The senior executive was actually duped because he believed it was the CEO. It was not; it was a deepfake audio of the CEO asking the senior executive to transfer the funds. In the next case, an energy firm, again the fraudsters impersonated the CEO and tricked the subsidiary into transferring funds of $243,000. Then there was the Ferrari CEO deepfake.


That was an attempt which failed. There again, the cyber fraudsters tried to fake the voice of the CEO, but the senior executive was prudent enough, clever enough, to ask a question. He had doubts about the caller's claim, and he asked the fraudster about the new book that the CEO had asked all senior executives to read.


So he asked: last week you told us to read this particular book; I've forgotten the title, can you repeat it? And the fraudster was not able to. He was fumbling and could not repeat the name of the book. That led to further suspicion, and ultimately the executive cut off the conversation.


So that was a failed attempt at using a deepfake voice for a fraudulent scheme. Then deepfakes have been used in elections for manipulating and influencing public opinion. They were used in the Slovak election, even in the 2016 elections, and in the US 2020 election there was an attempt by Iranian actors using deepfake disinformation campaigns.


And in a Maryland school, there was a racist deepfake: deepfake audio that tried to spread racist propaganda. What had happened was that the principal of the school had a case against the athletic director, and the athletic director then created a deepfake voice recording of the principal, attributing racist messages to him. Ultimately it was proved to be false, a deepfake, and the athletic director was arrested. Then the Taylor Swift deepfake pornography: in 2024, a number of deepfake pornographic images of Taylor Swift circulated on the internet and in the online underworld.


And think about the trauma that creates for any individual. As I mentioned about my friends, they committed suicide because of the shame, because of the exposure. Taylor Swift challenged it; she had a legal team. And then there are political deepfakes in any number of cases. So these are representative of the extent of deepfakes, how they are extending across the spectrum of society.


They are a threat to our digital identity and digital trust. Next.


So let's come back to sextortion, blackmailing, deepfakes, and cyber pornography. There is a documentary, Another Body, a 2023 documentary, where a university student got a number of weird messages from her friends and also from strangers, and she was wondering what they were.


Then she was sitting with her boyfriend, and suddenly pornographic images of her were flashing on the screen, and she was never involved in that. That was indicative of how deepfake videos can be used for sextortion, for cyber pornography, without our consent, and how another body can replace our own body.


So Another Body is a short documentary, and it clearly conveys the psychological trauma that victims can feel; ultimately, they can even commit suicide. Many celebrities have been targeted by sextortion, blackmailing, and cyber porn. Some of the famous cases are listed here: Gal Gadot, Scarlett Johansson, the journalist Rana Ayyub, and Taylor Swift, of course.


And there have been a number of incidents involving school students in South Korea and even in the US: boys creating cyber-porn deepfakes of their own classmates and blackmailing them. Deepfake content has appeared on various messaging applications like Telegram, and there are software tools which can create nude photographs of people at the push of a button.


Deepfake child pornography has been extensive, and those children are not even aware that their bodies have been used for it. Revenge porn is another area. As for the legal response, the Trump administration came up with the Take It Down Act in May 2025. It is still not a law; it is yet to be passed. It requires that online platforms must remove reported content within 48 hours.


In San Francisco, also, there is a lawsuit, and California actually has a law prohibiting deepfakes, but that is more focused on deepfakes used in elections. For cyber porn, that law is yet to be passed. So this is the major application area where deepfakes are used to humiliate individuals and to damage their reputations.



It could be any one of us. Any one of us could be victims. Next.


So deepfake frauds and scams are increasing. I mentioned this typical case where deepfake audio was used in the UK. The company was a subsidiary of a German company, and the CEO of the UK company received a phone call from the CEO of the German company asking for an immediate transfer of $243,000 to a supplier. He transferred the funds because the call came from the CEO, and later it was found that it was actually a deepfake audio of the CEO.


The German CEO never called the UK CEO. So these are some of the potential deepfake frauds, and you have to think, as senior executives of major corporations, or even as managers, about how to protect your firms from these sorts of scams. The latest deepfake fraud, which is now extensive across many organizations, is deepfake interviews: people applying for jobs whose profiles or resumes conform to the job requirements almost 100 percent.


That's because deepfake AI is used to generate the resume. Then you call them for an interview, a virtual persona of the applicant appears, responds to your questions, and you appoint him or her. As I mentioned, this happened with an applicant from North Korea who was employed by a US firm.


And after one week, it was found that the person who actually attended the interview was a virtual persona of someone who did not meet the requirements, and the resume itself was fake. This shows the extent of the scams and frauds that are taking place. So we have a number of cases where people have been scammed.


Digital arrest scams, fake judicial proceedings, and so on: the scams are spreading because of the use of deepfake videos and audios, and you find that it doesn't require any sort of expertise to create a deepfake video or audio. Even a script kiddie can create one; it's child's play. Among schoolchildren it is now widespread, and they are using it for extortion or for humiliating their classmates.


Next one.


Deepfakes for disinformation. Disinformation is malicious information planted for the purpose of influencing public opinion, and it is being weaponized by nation states for political propaganda. They are used during elections, but their weaponization by nation states is really alarming. In Ukraine during the 2022 war, there was a deepfake video of Zelensky saying that he was surrendering.


Of course, it was detected and disowned by the Ukrainian government immediately, and the harm was reduced or averted. Even in the 2016 election, disinformation was widespread; Macedonia had disinformation factories, and that disinformation was widely circulated across US online media. In the Slovak elections, again, in 2023, AI-generated audio of politicians discussing vote rigging impacted the trust the electorate had in the candidates, and that affected the prospects of some of the candidates.


In China, also, there have been a lot of PRC, People's Republic of China, AI-generated audio campaigns with disinformation. In 2022, in the elections, again Iran used disinformation campaigns. So it shows that disinformation campaigns can be weaponized not only during elections to subvert democratic processes, but even to create conflicts between nations and to impact the relationships between nations.


So disinformation audios and videos, fake audios, can be weaponized, and they are being used by nation states, adversarial nation states like Russia, China, North Korea, and Iran. They are our adversaries, and that is a major threat we have to guard against. Next. So fake news: what should we do about it?


Fake news is everywhere. That's why I said we are all victims of disinformation, every one of us. What we have to do is guard against it: use our critical thinking and check whether what we see or hear is real or fake. That requires critical thinking, and it requires awareness of these pitfalls and of how deepfakes can be used by cyber criminals and nation states against individuals.


As we have mentioned next.


Steven Brill, a reputed journalist, came up with this book recently, The Death of Truth. So are we seeing the death of truth? What is real, and what is truth? With the advent of AI, are we seeing the death of reliable information and truth? He mentions his own experience as a journalist. Leading into the Ukraine war, a few months prior to the war, Russia had started a disinformation campaign claiming there were bioweapon labs in Ukraine which were run by the US.


It ran for a few months prior to the invasion of Ukraine by Russia. So Russia was preparing, with disinformation, a justification for invading Ukraine, and that was directed against Ukraine and the US. As a journalist, he had evidence of this and he published it, but nobody cared about it. And later on, Russia's GRU, as he describes in the book, planned disinformation campaigns on US soil against him, painting him as an opponent of free speech. He became the target of an anti-free-speech campaign, and there was a case against him; he had to suffer because of that. He describes all of that in this book. It is, he argues, the age of the death of truth.


Next.


Another area which should be of concern to us is our own digital identity. We require our digital identity in cyberspace to prove that we are who we are. We have all these user IDs and passwords, multi-factor authentication, and so on. Now even our digital identity is cloned. Then how do we establish that we are who we are?


Impersonation of real individuals: somebody else can take a job impersonating us. Somebody else can take out a bank loan in our name. Somebody else can commit a crime and plant our identity as the perpetrator, and the police will knock on our door and try to arrest us for a crime we did not commit.


So our own digital identity is at stake here. Forged government IDs and passports can easily be created now with the use of deepfake audios, videos, and photographs, along with synthetic identities. Even identities of children: a child's Social Security number can be combined with the photo of an adult, and the name can be changed.


Or the name can be the same and the address changed, and they can apply for a loan or, as I mentioned, for a passport, and exist as a citizen, or whatever. So digital identity is an area where identity is threatened because of the use of deepfakes. Identity theft is another area. And again, the biometric authentication that we are using, the facial recognition systems that we now see extensively in airports, used by Homeland Security...


The facial recognition system itself can be threatened. It can be fooled by deepfake videos; somebody else can use it. There have been instances where, in Teams meetings, that is, video meetings, somebody else appears on your behalf and tries to manipulate or influence the meeting. So digital identity is about gaining unauthorized access to financial and sensitive systems, and about social engineering.


Phishing attacks earlier used text; now phishing attacks can use audios or videos created by AI. There are defenses which banks must now try to implement relating to authentication. Even multi-factor authentication is at stake, so in high-risk areas, multi-factor authentication must be supplemented by additional authentication mechanisms.


So you must not depend just on video or audio authentication. There is also liveness detection, establishing whether the person is really there, live, in what is called the liveness test. These are some of the new techniques to be implemented, particularly by financial institutions like banks, to defend against identity theft and impersonation.
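The liveness idea can be sketched in code. This is a minimal, illustrative Python sketch, not any bank's actual system: the verifier issues a random challenge that a pre-recorded or pre-rendered deepfake could not know in advance, and checks that the live party repeats it. The word list and function names are hypothetical; a real system would also score the audio itself for liveness, not just the words.

```python
import hmac
import secrets

# Hypothetical word pool for the spoken challenge (illustrative only)
WORDS = ["orchid", "granite", "velvet", "compass", "lantern", "meadow"]

def issue_challenge() -> str:
    """Issue a random, unpredictable phrase the caller must repeat live.
    A replayed recording or pre-rendered deepfake cannot anticipate it."""
    return " ".join(secrets.choice(WORDS) for _ in range(3))

def verify_response(challenge: str, spoken_text: str) -> bool:
    """Naive check: the live party repeated the exact challenge phrase.
    compare_digest avoids timing side channels in the comparison."""
    return hmac.compare_digest(challenge.strip().lower(),
                               spoken_text.strip().lower())

challenge = issue_challenge()
print(verify_response(challenge, challenge))             # genuine live repeat
print(verify_response(challenge, "transfer the funds"))  # scripted deepfake fails
```

The same challenge-response pattern is what saved Ferrari in the case above: the executive's question about the book was, in effect, an ad-hoc liveness challenge the fraudster could not answer.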


The biggest problem is: if your identity is stolen, how do you restore it? That is a major problem. Somebody else has taken over your identity; somebody else has taken out a loan in your name, a home loan. Restoring your identity is a legal process, and it may take a long time. The law has not kept pace with the sophistication of the techniques being employed by criminals.


Next.


So, as I mentioned, deepfakes are now a threat even to facial recognition systems because of spoofing attacks. And with AI there are many other types of attacks, like model poisoning, data injection, prompt injection, and so on, and adversarial machine learning. All of these bring legal issues and reputational risk as well.


And ultimately it is the loss of trust in the biometric systems themselves, whether facial recognition, fingerprinting, and so on, because anybody can now spoof them with the advent of AI and deepfakes. Next.


So how are these deepfakes created? Next. It is an AI technology, and a number of AI technologies are behind the creation of deepfakes. The name itself is deep learning plus fake: faking using deep learning. Deep learning is the neural network branch of machine learning, and deep learning and neural networks are the technologies behind it, particularly what are called GANs, generative adversarial networks.


So there are two neural networks acting simultaneously. One is creating a fake imitation of the real one. The other is a discriminator, trying to see how this fake video or audio differs from the real one. Then the generator tries to fill in the gaps, so that the fake becomes as close to the real one as possible.


And ultimately, you cannot distinguish between the fake one and the real one. That is the generative adversarial network technology mostly used for deepfakes. Deep learning also uses convolutional neural networks and recurrent neural networks; those technologies are behind this as well. And natural language processing, particularly for audio, is used to simulate, to clone, the voice of a person.


Natural language processing is being used, and for training, for the creation of these, they have to pre-train the model with thousands of actual audios and videos of the particular person. That is why it is so difficult, and it requires a lot of computing power. The software tools help you create a deepfake instantaneously, or within a few minutes.


But the technology behind that requires a lot of computing power and a lot of pre-training data. That's why, if you want to protect yourself from deepfakes of your videos, you should reduce your digital footprint, your online footprint. Don't post your videos and audios on Facebook, WhatsApp, Telegram, and so on. Reduce your digital footprint; then they will not have enough data to train the models to create a replica of your videos.
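The generator-versus-discriminator loop described above can be sketched as a toy. This is a deliberately tiny, illustrative Python model, not a real GAN: real and fake "media" are single numbers, and both players use simple running updates instead of gradient descent. The classes, learning rates, and thresholds are all invented for illustration; what it shows is only the adversarial structure, where the discriminator learns what real data looks like and the generator learns to pass its test.

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # stands in for the distribution of genuine media features

def real_sample() -> float:
    """A 'real' data point, e.g. a feature measured from genuine footage."""
    return random.gauss(REAL_MEAN, 0.1)

class Discriminator:
    """Learns where real data lives; flags anything far from it as fake."""
    def __init__(self) -> None:
        self.estimate = 0.0
    def is_real(self, x: float) -> bool:
        return abs(x - self.estimate) < 0.5
    def train_on_real(self, x: float) -> None:
        self.estimate += 0.1 * (x - self.estimate)  # drift toward real data

class Generator:
    """Adjusts its output so its fakes pass the current discriminator."""
    def __init__(self) -> None:
        self.param = 0.0
    def fake_sample(self) -> float:
        return self.param + random.gauss(0.0, 0.1)
    def train_against(self, d: Discriminator) -> None:
        self.param += 0.1 * (d.estimate - self.param)  # chase what D accepts

d, g = Discriminator(), Generator()
for _ in range(200):          # the adversarial loop: train D, then G, in turns
    d.train_on_real(real_sample())
    g.train_against(d)

# After training, the generator's fakes sit where the discriminator
# expects real data, so they pass as real.
print(round(g.param, 1), round(d.estimate, 1))
```

In a real GAN, both players are deep networks trained by backpropagation on thousands of images or audio clips of the target person, which is why the training data and compute requirements are so high.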


Next.


So there are a lot of tools which are easily available: Photoshop, and tools like Midjourney, Stable Diffusion, OpenAI's DALL·E, Flux, Bing Image Creator, and the GANs I mentioned. Next one. Some of these have become notorious, and some are Chinese creations; they make it very easy, and one in particular has been used for deepfake pornography.


It has been removed from some of the social media platforms, and there are cases against it. But, as we will see, the legal protection for victims is still not adequate. And tools like DeepFaceLab are continuously being used for the creation of deepfake audios and videos; it becomes very easy for anybody, even without knowledge of the technology, to create these deepfake videos and audios. Next. So how do you detect these deepfakes? Deepfake detection technology is keeping up, but it is not sufficient. Is this image of a soup kitchen real or fake? It's obviously fake: you can see the words under it are not clear, which is a giveaway.


It is indicative of how deepfakes are created. The tools used for detection can go into the pixels of each of the images and then identify what is genuine: they learn the features of real images versus fake images. So there are a lot of tools and techniques.


Next. Microsoft has its Video Authenticator. Then there are tools like Deepware Scanner, and Intel has FakeCatcher. Facebook itself ran the Deepfake Detection Challenge, asking people to detect deepfakes. And there is a growing field of what is called explainable AI, along with adversarial testing, or stress testing.


That means continuously testing facial recognition systems against adversarial samples. So deepfake detection technologies are catching up with the deepfake technologies themselves. Again, to be clear: deepfake technology itself is not something bad. It originally started with the use of deep neural networks for entertainment in the film industry and for education.
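The adversarial stress-testing idea can be illustrated in miniature. This is a hypothetical Python sketch, not any vendor's product: a naive "face matcher" accepts an embedding if it is close to the enrolled reference, and the stress test measures how often small random perturbations let a clear impostor through. The embeddings, threshold, and perturbation scheme are all invented for illustration; real adversarial testing uses gradient-based attacks on real models.

```python
import random

random.seed(1)

def naive_matcher(embedding: list, reference: list) -> bool:
    """Toy face matcher: accept if the embedding is near the enrolled one."""
    dist = sum((a - b) ** 2 for a, b in zip(embedding, reference)) ** 0.5
    return dist < 1.0

def perturb(embedding: list, strength: float) -> list:
    """Adversarial-style probe: small random nudges to the input."""
    return [x + random.uniform(-strength, strength) for x in embedding]

reference = [0.2, 0.4, 0.6, 0.8]   # the enrolled, genuine face
impostor  = [1.5, 1.7, 1.4, 1.9]   # clearly not the enrolled face

# Stress test: how many perturbed impostor samples slip past the matcher?
breaches = sum(naive_matcher(perturb(impostor, 0.9), reference)
               for _ in range(1000))
print(f"{breaches} of 1000 perturbed impostor samples were accepted")
```

A nonzero breach count is exactly the kind of finding stress testing is meant to surface before a fraudster finds it first.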


It was used for normal purposes: creating comics and so on, or video games, and even for education, to show Einstein teaching your class, so you have a cloned image of Einstein. It was not malicious in itself, but in the hands of malicious actors, threat actors, it has been used for criminal purposes. That is why it is of concern.


So whenever there is a deepfake case, forensic scientists have to step in and use various forensic technologies to detect, analyze, and identify the synthetic media created by AI, to prove that it is actually a deepfake and not genuine. There have also been cases where people involved in crimes have claimed that genuine evidence was a deepfake; in such cases it has to be proved that the video disputed as fake is actually genuine, that the person is the culprit, and that it is not a deepfake.


Various forensic tools are available, such as Sentinel and so on. And there have been a number of legal cases where deepfakes have been challenged, and cases where genuine videos have been challenged as deepfakes. Courts have ruled in various cases in different ways, and those legal cases are just now coming up as these are being challenged.


The problem with sextortion and so on is that people don't want to file cases, because of the shame, because of the guilt, or because of the adverse exposure that they may face. Next.


So what are the defenses? How do we protect ourselves against deepfakes? Next. There are a number of technology solutions. There is what is called the C2PA standard: Adobe, Microsoft, Google, and so on have come up with this provenance standard. Whenever a video is created, it must conform to the standard, showing the origin, providing the metadata, and certifying the genuineness and integrity of the digital content.


That means cryptographically signing it, creating a hash value for it. That standard has been created by the C2PA consortium, and it is now almost mandatory. The European Union AI Act requires that any video created by AI must be labeled as AI-generated, so labeling is a must in Europe under the EU AI Act.


The other is digital watermarking. For any video or audio, or even text, that is created for a particular purpose, digital watermarking is used to certify that it is genuine. That way we can distinguish fake video, audio, or text from the genuine ones. Blockchain technology is also used for proving the integrity of digital content, media content, particularly for art forms.


A lot of paintings, or even films in the film industry, copyrighted works or intellectual property, need to be protected; for that, blockchain technology is being used. And for detecting deepfake media, we have seen that a lot of forensic tools are there. Next.
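The hash-and-sign idea behind content provenance can be sketched briefly. This is an illustrative Python sketch only: real C2PA manifests use public-key certificates and a structured manifest format, whereas here an HMAC with a made-up key stands in for the signature. The key and byte strings are hypothetical; the point is that any tampering with the content changes the hash and breaks the signature.

```python
import hashlib
import hmac

# Hypothetical publisher key; C2PA actually uses X.509 certificate signatures
SIGNING_KEY = b"publisher-private-key"

def sign_media(media: bytes) -> str:
    """Bind content to its origin: hash the media bytes, then sign the hash.
    Any later pixel-level edit changes the hash and invalidates the signature."""
    digest = hashlib.sha256(media).hexdigest()
    return hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify_media(media: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_media(media), signature)

original = b"\x89PNG...frame bytes of the genuine video"
sig = sign_media(original)

print(verify_media(original, sig))                 # True: provenance intact
print(verify_media(original + b"tampered", sig))   # False: content was altered
```

A blockchain-based scheme applies the same principle, anchoring each content hash in an append-only ledger so that the provenance record itself cannot be quietly rewritten.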


On the legal front line of defense: the European Union has the EU AI Act and the Digital Services Act, which require, as I mentioned, that digital content must be labeled, and the producers have to provide the mark of genuineness. The United States, of course, is against regulation, and it is self-regulation that is promoted now, because regulation is considered an enemy of innovation, which it need not be.


But that is how our administration views it. So we do not have a federal law for cyber pornography; the Take It Down Act has been introduced, but it is yet to be enacted into law. There is an anti-deepfake law in California, but that is mostly targeted against political use in elections. The major bottleneck against regulation is Section 230 of the Communications Decency Act of 1996, which supports freedom of speech and the First Amendment online.


So any content on the internet cannot easily be challenged, because that would be challenging freedom of speech. But there should be restrictions on freedom of speech, particularly when people are harmed and when loss of life results; certainly Section 230 needs to be amended, but that is a future goal. Even in China, there are restrictions on deepfakes and synthetic media, and across the world there is now an awareness of the damage that deepfake videos, audios, text, and misinformation can cause to individuals and society.


So the UN has stepped in with a UN strategy on hate speech, and the UN Global Digital Compact is another effort by the UN to fill this gap in addressing the spread of cyber crimes relating to deepfakes. Next. So, digital trust: ultimately it all comes down to the erosion of digital trust. ISACA has come up with a digital trust framework focused on the pillars of the digital world: people, process, and technology, which is what we usually focus on, but also organization and culture, which is very important. Next one.


That is what we are focusing on. But organization and culture, that is very important. Next one. So I suggest come up with digital trust ecosystem framework based on this. These four pillars of the digital world and digital transactions considering the various trust factors. And that is helping us to promote digital trust in cyberspace. Next one.


Creating digital trust is also emphasized by Yuval Harari in his book Nexus, and by Kissinger, who came up with the book Genesis. I advise you to read these; they explain the future of information technology and how the information society can be impacted by disinformation and deepfakes. So trust is important, and the future of digital identity also depends on curtailing or restricting deepfakes.


Next. So, in summary, the threat landscape: we have seen that deepfakes and disinformation are real threats to democracy and can erode digital trust; our identity, honor, and reputation are at stake. Deepfakes enable cyber frauds; they are a new cyber weapon used by cyber criminals and nation states, promoting crime and war, and they can be an existential threat to human society.


There are technological and legal safeguards, but truth and trust may be dead, and must be resurrected. So the real safeguard is each of us: our human intelligence, and how alert we are to misinformation, disinformation, and the fraudulent activities of cyber criminals and nation states. We have to protect our digital identity and our honor, and we have to promote digital trust. And together we can create a trustworthy world.


Thank you.

