Three essential steps for organizations to safeguard against deepfakes


Our identities face an unprecedented threat. While AI has the potential to be a force for good, in the hands of nefarious actors it can amplify existing dangers. Among these threats are deepfakes: synthetic media used to impersonate real individuals. Over the past year, these fraudulent impersonations have surged, targeting individuals across various platforms. As deepfakes become more convincing, cybercriminals are finding new ways to exploit them, posing serious risks to personal and organizational security.

While deepfakes have been circulating online since 2017, their impact has recently escalated. Initially used to impersonate celebrities and public figures, deepfakes have now become more personal, targeting senior executives across nearly every industry—from retail to healthcare. A notable case involved a finance employee who was deceived into transferring an astonishing £20 million to fraudsters who used a video deepfake to impersonate the company's chief financial officer.

Exacerbating the issue is a lack of awareness among the general public. A recent survey by Ofcom revealed that less than half of UK residents are familiar with deepfakes, increasing the likelihood of these attacks succeeding. Equally concerning is that, according to KPMG, 80% of business leaders believe deepfakes pose a significant risk to their operations, yet only 29% have implemented measures to counteract them.

The first step in addressing the deepfake challenge to cybersecurity is raising awareness and adopting proactive strategies to combat the threat. But where should organizations begin? Let's delve deeper into three steps organizations can take to avoid being caught out by deepfakes.

A Dual Approach: The Importance of Passive and Active Identity Verification

To effectively counter deepfakes, organizations must adopt a multifaceted approach to identity management and verification. While biometric authentication methods such as fingerprint or facial recognition are robust, no single mode of authentication is enough to protect against today's sophisticated cybercriminals. Multiple layers of authentication are necessary to safeguard against these threats without compromising the user experience.

This is where passive authentication, particularly passive identity threat detection, becomes crucial. Operating alongside active authentication methods—such as user-initiated verifications—passive identity threat detection works behind the scenes, primarily focusing on identifying potential risks. This technology can activate alternative verification methods, such as a push notification to confirm location or device usage when suspicious login attempts or behavior are detected. Rather than overwhelming users with additional authentication steps, passive identity threat detection alerts both the user and the organization to potential fraudulent activity, preventing it before it escalates.

The concept of implicit trust—where we naturally trust what we see and hear—is diminishing as deepfakes increasingly compromise identity verification. In today’s “trust nothing, verify everything” era, explicit trust measures, such as sending a text message, push notification, or other credential checks outside the usual communication channels, have become essential. While not necessary for every interaction, these additional verifications are crucial when dealing with sensitive actions like transferring money or clicking on potentially malicious links, ensuring authenticity in a world where appearances can deceive.
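A common explicit-trust mechanism is a short-lived one-time code delivered over a separate channel. Below is a minimal sketch of an HMAC-based time-step code in the spirit of RFC 6238 (TOTP), using only Python's standard library; the shared secret shown is purely illustrative and would be provisioned securely in practice.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Derive a short-lived code from a shared secret and the current time."""
    counter = int(timestamp // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as in RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret: bytes, submitted: str, timestamp: float) -> bool:
    """Accept the code for the current step or the previous one (clock drift)."""
    return any(
        hmac.compare_digest(totp(secret, timestamp - drift), submitted)
        for drift in (0, 30)
    )


secret = b"shared-out-of-band-secret"  # hypothetical; never hard-code in production
code = totp(secret, time.time())
print(verify(secret, code, time.time()))  # True
```

Because the code is derived from a secret exchanged in advance and expires within seconds, a deepfaked voice or video on the call cannot produce it, which is exactly the property out-of-band checks rely on.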

Deepfakes are often used to socially engineer victims, exploiting channels like voice, images, and video over unauthenticated platforms. For instance, an employee might receive a Zoom call from someone impersonating their CEO, asking to reset a password or make an urgent payment. For years, employers have encouraged us to trust our colleagues; the rise of deepfakes now challenges that very fabric of workplace culture.

Leveraging AI for Good: Using Emerging Technologies to Combat Deepfakes

Society is at a critical juncture where AI tools can be used for good and evil, with human identity caught in the middle of this technological tug-of-war. As trust erodes and our identities are increasingly at risk, it is imperative that we stay vigilant and proactive in the fight against deepfakes.

AI, while contributing to the deepfake problem, also offers solutions to mitigate it. To reduce the prevalence of deepfakes, organizations must harness emerging technologies designed to detect these fraudulent media. These include image insertion detection, which identifies if an image was manually or falsely added to a communication, and audio detection tools that determine if an audio file was synthetically generated. As AI technology continues to evolve, we can expect the development of even more sophisticated deepfake prevention methods. However, in the meantime, organizations must leverage existing technologies to stay ahead—just as cybercriminals do with AI on the other side of this battle.
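In practice, detectors like these are usually combined rather than trusted individually. The sketch below shows one plausible pattern: aggregating per-channel scores and escalating when the strongest signal crosses a threshold. The detector functions here are placeholders standing in for real image-insertion and synthetic-audio detection services, and the thresholds are illustrative assumptions.

```python
# Illustrative aggregation of deepfake-detection signals.
# The detector functions are placeholders for real detection services.

def image_insertion_score(media: dict) -> float:
    """Placeholder: probability an image was manually or falsely inserted."""
    return media.get("image_score", 0.0)


def synthetic_audio_score(media: dict) -> float:
    """Placeholder: probability the audio track was synthetically generated."""
    return media.get("audio_score", 0.0)


def classify(media: dict, block_at: float = 0.8, review_at: float = 0.5) -> str:
    """Escalate on the strongest single signal across channels."""
    worst = max(image_insertion_score(media), synthetic_audio_score(media))
    if worst >= block_at:
        return "block"          # e.g. quarantine the message, alert security
    if worst >= review_at:
        return "human_review"   # route to an analyst before delivery
    return "deliver"


print(classify({"audio_score": 0.9}))                      # block
print(classify({"image_score": 0.6, "audio_score": 0.2}))  # human_review
```

Taking the maximum rather than the average reflects a cautious design choice: a single strong indicator of synthetic media is enough to warrant intervention, even if other channels look clean.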

As with any cybersecurity threat, the best protection comes from being one step ahead. The more prepared organizations are for potential deepfake attacks, the better they can protect against future threats. By adopting a multifaceted approach to identity verification and remaining aware of the tactics employed by cybercriminals, we can safeguard our identities and maintain trust in a digital world.


This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro