Everything You Need to Know About Deepfakes
From entertainment to disinformation to fraud, deepfakes—synthetic media where a person is superimposed onto a video or image using machine learning or artificial intelligence (AI)—are becoming increasingly common and increasingly realistic.
As the technology continues to advance, deepfake content has a growing presence across almost all industries and verticals. In the entertainment world, deepfake technology has been used to recreate Anthony Bourdain’s voice in a posthumous documentary. More concerningly, in politics, an AI-generated video of Hillary Clinton endorsing Republican Ron DeSantis for president went viral in 2022.
In some cases, deepfakes can be used for financial fraud and information theft—and have the potential to cause tremendous damage to a company’s finances and reputation. For instance, in 2022, a Hong Kong bank lost $35 million to a “deep voice” attack in which fraudsters cloned a director’s voice to request transfers. And, earlier this year, an employee with access to company financials at an international firm paid fraudsters $25 million after falling victim to a deepfake video call with an individual they believed to be the company’s chief financial officer.
Types of deepfake attacks
Organizations already have a long list of cyber threats to worry about, from insiders to malware, but it’s increasingly critical that they keep deepfakes on their radar as well. As generative AI grows more powerful and accessible, it lets people create more convincing deepfakes “at a fraction of the cost,” driving a 3,000% spike in deepfake phishing in 2023 alone. With malicious uses of synthetic media becoming more common, organizations must prepare employees to identify and mitigate related threats.
Deepfake attacks are typically carried out in one of two ways:
- Face-swapping: With these attacks, bad actors superimpose the victim’s face on themselves or another person to synthetically produce a photo or video that resembles the victim. This can often be done by simply gathering footage of the target from publicly available sources, from social media to corporate websites.
- Real-time conversations: By using AI to mimic a trusted individual’s voice, bad actors can enable real-time conversations in which they ask for money or make other demands. Sometimes, attackers will first spam call an individual to collect voice metrics that allow AI to replicate their speech.
In both cases, attackers tend to impersonate organizational leaders and stress both urgency and secrecy regarding their demands. In one such attack, bad actors allegedly used AI to create a deepfake of an executive’s voice and successfully ordered a direct report to wire them hundreds of thousands of dollars.
The convincing nature of recent deepfakes, combined with the success rate of related breaches, underscores the importance of educating individuals and organizations on how to spot them and prevent further malicious use.
Preventing deepfake attacks
As deepfake attacks grow more prominent, organizations need to take proactive steps to prevent them. In a recent report, the Department of Homeland Security emphasized the need for “technological innovation, education, and regulation” regarding deepfakes. Already, the manipulation of media files is a criminal offense in ten states, while a federal law allowing victims of deepfakes to sue their creators was recently introduced in the Senate. Until regulation fully catches up to the threat, organizations should guard against deepfakes by:
- Establishing procedures for conducting callbacks to verify any requestor’s integrity before sending money or sensitive information.
- Requiring multiple people to authorize any financial transfer.
- Establishing code words between approvers to verify everyone’s identities before any transfer takes place.
- Watermarking all digital media to distinguish real videos from fake ones.
- Educating employees on how to spot the signs of a deepfake, such as inconsistencies in the audio or video.
Additionally, encourage employees to trust their gut. If a call or request seems suspicious, it’s always best to err on the side of caution.
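To make the layered controls above concrete, here is a minimal sketch of how a transfer-release check combining a callback, dual authorization, and a shared code word might look. All names, thresholds, and the code-word hashing scheme are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
import hmac

# Hypothetical policy: a transfer is released only after (1) a callback to a
# known-good number has verified the requestor, (2) at least two distinct
# approvers have signed off, and (3) the spoken code word matches. The
# threshold and secret below are illustrative.
REQUIRED_APPROVALS = 2


def code_word_matches(supplied: str, expected_hash: str, secret: bytes) -> bool:
    """Compare a spoken code word against a stored HMAC rather than plaintext."""
    digest = hmac.new(
        secret, supplied.strip().lower().encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(digest, expected_hash)


def transfer_allowed(
    callback_verified: bool,
    approvals: list[str],
    supplied_word: str,
    expected_hash: str,
    secret: bytes,
) -> bool:
    if not callback_verified:
        return False  # no callback to a trusted number: stop here
    if len(set(approvals)) < REQUIRED_APPROVALS:
        return False  # a single approver is never enough
    return code_word_matches(supplied_word, expected_hash, secret)
```

The point of the sketch is that each control fails closed: any single compromised channel (a cloned voice, one duped approver) is not sufficient on its own to release funds.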
The bottom line
Despite efforts to prevent them, deepfakes are likely here to stay, and they have the potential to cause very real damage to a company. The time is now for organizations to bolster their technical and non-technical controls to reduce the likelihood of successful attacks.