Deepfakes Pose Business Risks—Here's What to Know (2024)

Every day, companies rely on photos, videos, audio, and other media to shape their brand, enable decisions by business leaders, and carry out key functions while protecting their networks, communications, and sensitive data. But what if, hidden among all the authentic media, there are deepfakes—highly realistic synthetic media made with artificial intelligence (AI)? Now more than ever, it’s clear that deepfakes pose business risks. In fact, misinformation/disinformation ranks as the most severe near-term global risk in the World Economic Forum’s 2024 Global Risks Report. Like government leaders, commercial chief information security officers, executives, and boards want to understand and mitigate deepfake risks.

It’s easy to imagine how criminals might use deepfakes to undermine a brand, impersonate leaders and financial officers, and compromise vital data and systems. Threat actors are already creating deepfake images, audio, and video content with lifelike facsimiles of real people. Celebrities, the public, and businesses are being targeted. Fake imagery is being used to cause reputational harm, exact revenge, and carry out fraud. There’s a low barrier to entry into this malicious activity because the tools needed to create deepfakes are widely available and accessible.

Generative AI (GenAI), which has immense positive potential, is being abused to create deepfakes, often through generative adversarial networks (GANs). GenAI refers to the ability of machines to create new content, such as text, code, images, and music, that resembles what humans can create. In parallel with the deepfake problem, there is a growing risk that large language models (LLMs) will be used to craft very convincing, native-language text for phishing schemes, false backstories, and information manipulation and interference operations. What’s more, threat actors are combining this language with deepfakes to manufacture potent lies at scale.

It’s easy for adversaries to find useful material to inform impersonations. Thanks to the multitude of social media sites and personal content readily available online, a skilled threat actor can quickly research their target, develop a deepfake, and deploy the deepfake for malicious purposes. Executives, senior IT staff, and call center management are particularly attractive targets for such schemes because of the high potential to monetize the impersonation.

There is no technological silver bullet to counter the risks posed by deepfakes. Deepfake detection is still an active research challenge and will continue to increase in complexity as the quality of media generation rapidly advances. Leading techniques typically take one of a handful of approaches:

  • Deep Learning – Training a model to distinguish between real and fake content by learning the underlying patterns in the data (a minimal sketch follows this list).
  • Artifact Detection – Using specific “tells” or artifacts within the content to identify key differences (e.g., close examination of subjects’ eyes and mouths or monitoring the blood flow or pulse by detecting small movements and color changes in the video content).
  • Pairwise Analysis – Directly comparing two pieces of content to judge which is more likely to be fake, the idea being that a relative, ranked rating may be more reliable than a non-contextual prediction (see the ranking sketch below).
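
As a concrete illustration of the deep learning approach, the sketch below trains a small convolutional classifier to separate real frames from generated ones. The architecture, input format, and training step are illustrative assumptions, not a production detector.

```python
# Minimal sketch of the deep-learning approach: a small CNN trained to
# separate real frames from generated ones. Model size and input
# handling are assumptions made for illustration only.
import torch
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit: > 0 means "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DeepfakeClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One training step over a batch of (frame, label) pairs, where label
# is 1.0 for generated media and 0.0 for authentic media.
def train_step(frames, labels):
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```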

Within these areas, several studies report deepfake detection accuracy in the 90%-100% range, but there are also limitations to consider.
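
For the pairwise approach, one way to realize the ranked-rating idea is a margin ranking objective over a shared scorer. The linear scorer and the 64x64 RGB input size below are assumptions for illustration, not a reference implementation.

```python
# Sketch of the pairwise idea: score two items with a shared model and
# train so the fake one ranks above the real one by a margin, rather
# than predicting an absolute real/fake label.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
rank_loss = nn.MarginRankingLoss(margin=1.0)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

def pairwise_step(fake_frames, real_frames):
    s_fake = scorer(fake_frames).squeeze(1)  # higher = more likely fake
    s_real = scorer(real_frames).squeeze(1)
    # target = 1 asks the loss to push s_fake above s_real by the margin
    loss = rank_loss(s_fake, s_real, torch.ones_like(s_fake))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```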

A major limitation of deepfake detection techniques is their generalizability. While training a model to perform well on a closed set of known media generation techniques is tractable, training a model that performs well on previously unseen (or not-yet-invented) generation techniques is much more challenging. AI-based approaches look for patterns or tease out small differences that let them model a clear separation between classes. However, performance degrades quickly when the model cannot determine where to look for these differences or when the differences are spread across several areas.
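
This generalization gap can be demonstrated with a toy experiment: fit a detector on media from one generator, then test it on an unseen one. The Gaussian "fingerprint" features below are synthetic stand-ins, chosen purely to make the failure mode visible.

```python
# Toy demonstration of the generalization gap: a detector fit against
# one generator's statistical fingerprint often misses an unseen
# generator whose fingerprint differs. Features here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def features(shift, n=500):
    # Each (hypothetical) generator leaves a slightly different
    # statistical fingerprint, modeled here as a mean shift.
    return rng.normal(loc=shift, scale=1.0, size=(n, 32))

real = features(0.0)
gen_a = features(0.5)    # generator seen during training
gen_b = features(-0.3)   # unseen generator, different fingerprint

X_train = np.vstack([real, gen_a])
y_train = np.array([0] * len(real) + [1] * len(gen_a))
clf = LogisticRegression().fit(X_train, y_train)

# In-distribution accuracy is high; the unseen generator is largely missed.
print("seen generator:  ", clf.score(gen_a, np.ones(len(gen_a))))
print("unseen generator:", clf.score(gen_b, np.ones(len(gen_b))))
```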

Another challenge is creating techniques that withstand reverse-engineering attacks. If threat actors can identify the specific features that lead a detector to classify an image, voice sample, or video as fake, they may be able to manipulate those features in future deepfakes, tricking detection models into misclassifying them and bypassing detection systems. A successful model must also tolerate large variations in sample quality.
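
A minimal sketch of such an evasion attack, assuming the adversary can query the detector's gradients (a white-box setting): a small FGSM-style perturbation is often enough to lower a detector's "fake" score. The linear detector below is a stand-in, not any specific product's model.

```python
# Sketch of a gradient-based evasion: a small FGSM-style step against
# the detector's gradient reduces the "fake" score while keeping the
# perturbation visually negligible.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

def evade(frame, eps=0.03):
    x = frame.clone().requires_grad_(True)
    fake_score = detector(x).sum()   # higher = more confidently "fake"
    fake_score.backward()
    # Step against the gradient to reduce the score, staying in [0, 1].
    return (x - eps * x.grad.sign()).clamp(0, 1).detach()

frame = torch.rand(1, 3, 64, 64)
print("score before:", detector(frame).item())
print("score after: ", detector(evade(frame)).item())
```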

As the field continues to advance, new promising approaches must be weighed against the current technological landscape as well as the use case in question. There may be techniques that sufficiently address a given requirement. However, care must be taken to evaluate the changing threat landscape and the overall risk continuously.

To fight AI with AI, detection needs to become more targeted and refined. While there is no turnkey AI-based defense against deepfake threats, organizations can mitigate the risks by building a robust, security-centered culture:

  1. Educate staff about the risk of deepfakes, the potential for damage, and tips for spotting them. Personnel can use this understanding to identify where an image or video may be distorted or appear fake: hollow eyes, odd shadows, malformed hands, garbled words on signs in the background, and other blurred features can stand out to a trained eye. Also, track tips and research on countering voice-cloning risks.
  2. Increase protection against deepfake threats with robust authentication and verification, fraud detection, highly tuned phishing detection tools, and a defense-in-depth posture with multiple layers of defense that can withstand the compromise of a single control. Prioritize shoring up existing cybersecurity controls and tools, ensuring they are well-tuned and detecting threats as needed. Also, apply frameworks like DISARM to characterize, discuss, and hunt disinformation threats.
  3. Review recent U.S. cybersecurity guidance on deepfake threats. It discusses using technologies to detect deepfakes and show media provenance as well as applying authentication techniques and/or certain standards to protect the public data of key individuals. The latter includes planning and rehearsing, reporting, and sharing experiences, training personnel, using cross-industry partnerships, and understanding what companies are doing to preserve the provenance of online content.