Deepfakes: What are they, and why are they dangerous?

Published in: SC Lawyer Magazine

By now, almost everyone has seen synthetic video reproductions of political figures (including Donald Trump, Barack Obama and Boris Johnson), business leaders, or celebrities. Deepfakes, a portmanteau of “deep learning” and “fake,” are synthetic images, video, audio, or text that appear to be real. The technology can be used to make people appear to say or do things they never said or did, or to substitute one person for another in existing videos. Over the last few years, deepfake technology has become increasingly sophisticated and, at the same time, more widely accessible. In fact, there are any number of apps a person can download for free or at low cost to create deepfakes.

Many deepfakes today are created by leveraging artificial intelligence techniques such as machine learning and deep learning. One of the most significant recent advances has come from the use of generative adversarial networks (“GANs”), which consist of two competing neural network models called the “generator” and the “discriminator.” The generator produces fake samples of data – such as images or audio of a particular person – while the discriminator tries to distinguish the fakes from real samples. Feedback from the discriminator enables the generator to look or sound more realistic, and the process repeats. After a certain point, not even a trained eye, or ear, can detect the fake.
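
The adversarial training loop is easier to see in code. The following toy sketch in Python, using the PyTorch library, pits a generator against a discriminator on simple numeric data rather than images or audio. It is a minimal illustration of the dynamic described above, not actual deepfake software, and all names and values in it are illustrative.

    # Toy GAN: the generator learns to mimic a "real" data distribution
    # (here, a simple Gaussian) by trying to fool the discriminator.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(
        nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
    )
    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(2000):
        # Discriminator step: label real samples 1 and generator output 0.
        real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: mean 3
        fake = generator(torch.randn(64, 8)).detach()  # generator held fixed
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
                  + loss_fn(discriminator(fake), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: adjust weights so fakes score as "real" (label 1).
        # This is the discriminator feedback that makes fakes more realistic.
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # After training, generated samples cluster near the real mean of 3.
    print("Mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())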

Deepfakes’ potential for future harm

Not that long ago, the ability to disseminate video, photographs or audio to large numbers of people was limited. Times have clearly changed. And while media manipulation is not a new phenomenon, modern technology and social media platforms allow convincing deepfakes to be disseminated rapidly and easily to a global audience in a matter of minutes. There are countless legitimate uses for deepfakes today, in art, entertainment and education, for example. But deepfakes are also frequently used for harassment, intimidation and extortion against individuals and businesses. Indeed, many malicious deepfakes have been used for nonconsensual pornography, primarily targeting women.

Deepfakes will likely migrate far beyond the pornography context in the next few years, with great potential for harm. At the beginning of 2021, the Federal Bureau of Investigation (FBI) identified deepfake technology as an emerging threat and warned that malicious actors “almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”1 In addition to the threats identified by the FBI, others worry deepfakes could undermine public trust through misinformation campaigns, influence political elections, jeopardize national security, interfere with the stock market, facilitate corporate espionage, and so forth. A few of these risks are examined below.

  1. Cybersecurity Threats

    Scenario: Susan, Jack’s boss and the CEO of the company where he works, calls on Friday afternoon.

    Susan says: “Jack, hi, this is Susan. I’m at the airport getting ready to take off for Bobby’s lacrosse tournament in New York this weekend. Listen, Globex just called and needs us to wire $107,000 for the next shipment. You’ll be receiving an email in just a few minutes. Can you please wire the funds?”

    Jack responds: “Sure, yes, no problem. I can take care of that.” Susan: “Great, thanks. Hope you and Cary have a great weekend. See you next week.” Jack opens the email, clicks the link, and wires the funds without a second thought.

    Only one problem: the caller was not Susan. The attacker found a presentation Susan gave to a professional organization on YouTube and used it to create synthetic audio to impersonate the tone, inflection and idiosyncrasies of Susan’s voice in less than 10 minutes. A quick Google search identified an article talking about her family and mentioning her son Bobby. Bobby had a public Instagram account where he shared videos of his lacrosse skills, and on that Friday afternoon, posted a picture of himself at the airport with the caption: “Headed to NY for Regionals with the fam.” From any number of publicly available materials, the attacker discovered that Globex is a supplier for Susan’s company. And what about Jack? His bio as the CFO is listed on the company website, where he mentions he enjoys hiking with his wife, Cary, and their two children. The attacker was able to acquire all this information in less than 30 minutes.

    The use of synthetic content to carry out cyberattacks against organizations is referred to as Business Identity Compromise (BIC). Criminals are investing in deepfake technology, which has the potential to alter the cyber threat landscape. BIC will involve the use of content generation and manipulation methods to create synthetic corporate identities or a sophisticated replication of a current employee. The FBI warns that “[t]his emerging attack vector will likely have very significant financial and reputational impacts to victim businesses and organizations.”2

    To create a deepfake, threat actors search for videos, speeches and social media posts on publicly accessible websites to gather the information they need. They may conduct months of surveillance on a victim before launching the attack. These deepfake ploys are essentially advanced forms of phishing, but considerably more difficult to detect.

    Criminals have already used, or attempted to use, synthetic audio to commit illicit activities, including blackmail and social engineering. A cybersecurity company reported earlier this year that it had worked with 17 companies that lost an average of $175,000 apiece to deepfake voice scams.3 In one of the first known successful financial scams involving audio deepfakes, in 2019 the CEO of an energy company believed he was on the phone with his boss, the chief executive of the company’s German parent.4 The caller asked the CEO to send €220,000 (approximately $243,000) to a Hungarian vendor. The caller, however, was not his superior. According to reports, the CEO was duped by a deepfake audio file that simulated his boss’s German accent.

    As another example, a lawyer received a deepfake audio call from his distressed “son,” who claimed he had just injured a pregnant woman in an accident and needed a $9,000 bail bond.5 The lawyer said the caller’s voice was identical to his son’s, with the same cadence and word choice. Before any money changed hands, the lawyer was able to reach his son and determine the call was a hoax. The Federal Trade Commission (FTC) has reported that numerous consumers, including elderly people and employees, have fallen victim to synthetic audio scams.6 To highlight the potential harms associated with this technology, the FTC hosted a public workshop in 2020 on voice cloning technologies and the ethical considerations surrounding cloned voices.

  2. Attacks on Critical Medical Infrastructure

    Even more startling than deepfake audio: deepfake medical images. In the study “CT-GAN: Malicious Tampering of 3D Medical Imagery Using Deep Learning,” Israeli researchers demonstrated how simple it is to alter MRI and CT scan images using deepfakes.7 The research team infiltrated a hospital’s information technology system via a covert penetration test and then attacked the radiology network, using CT-GANs to inject fake tumors and remove real cancers. Radiologists who later examined the images did not detect the alterations.

    This study is especially worrisome given that patients are routinely diagnosed and treated on the basis of medical scans alone. As a result, an attacker who gains access to a patient’s MRI or CT scans could potentially influence the outcome of the patient’s diagnosis. “An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market,” the researchers observed in their paper.

    Apart from manipulating evidence of malignant tumors, bad actors could employ GAN attacks to alter evidence of heart problems, blood clots or infections. The research team identified a variety of motivations for these sorts of medical attacks, including falsifying research evidence, insurance fraud, corporate sabotage, job theft, terrorism, assassination, and even murder. Several factors, including outdated information technology systems and a lack of encryption, make hospitals, and particularly radiology departments, vulnerable to such attacks, according to the study.

  3. Escalating Disinformation Campaigns

    In addition to audio, video and images, undetectable deepfake text may be used to influence the public, generate fear and doubt, and undermine public trust. Generative Pre-trained Transformer 3, or GPT-3, is the latest iteration of the language-processing machine learning models from OpenAI, an enterprise funded in part by Elon Musk. GPT-3 is an autoregressive language model: trained on a dataset of roughly a trillion words from the web, it generates new, human-like text by predicting one word at a time from the words that came before. GPT-3 could facilitate mass quantities of deepfake text content online.
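
    To see how little effort such generation requires, consider the following minimal Python sketch. It uses GPT-2, a smaller, publicly released predecessor of GPT-3, through the open-source Hugging Face transformers library; the prompt is hypothetical, and the point is simply that a few lines of code can produce fluent synthetic text at scale.

        # Autoregressive text generation with GPT-2 (a smaller, publicly
        # available predecessor of GPT-3) via the Hugging Face "transformers"
        # library. Illustrative only; the prompt is a hypothetical example.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")
        prompt = "Officials confirmed today that"
        # The model extends the prompt one token at a time, sampling each
        # next word from probabilities learned from web text.
        outputs = generator(prompt, max_length=60,
                            num_return_sequences=3, do_sample=True)
        for result in outputs:
            print(result["generated_text"], "\n---")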

    Earlier this year, a group of Georgetown University researchers set out to discover whether GPT-3 could be used to produce content for misinformation operations.8 The researchers concluded that GPT-3 can generate moderate-quality or better misinformation in a highly scalable way. They also found that GPT-3 can generate short social media messages designed to sway public opinion on matters of international diplomacy. Testing greater volumes of synthetic text, a college student ran his own experiment with GPT-3, creating an entirely fake blog under a fictitious author’s name.9 According to reports, only a few individuals questioned whether the blog was generated by artificial intelligence, and one of the blog posts even reached the top of the Hacker News most-popular page.

    GPT-3 will inevitably have a significantly greater impact on the internet than conventional internet bots. Many threat actors will be motivated to employ deepfake text to distort information and sway opinions. Using deepfake text, criminals can pursue a wide range of destructive objectives: defaming corporations and attacking their products, disseminating misleading information about stock prices or interest rates, or submitting fictitious comments on proposed regulations to exert influence over the regulatory process. The creation of phony social media profiles from deepfake pictures and text will help facilitate large-scale disinformation campaigns.

  4. Evidentiary Impact in Litigation

    Deepfake evidentiary issues are likely to surface in litigation in the coming years. The proliferation of deepfakes may cause anyone to doubt the veracity of evidence. That deepfakes can be both convincing and difficult to identify raises concerns about how this technology could compromise the court’s truth-seeking function. At least one instance of potential abuse has already occurred. In a recent child custody battle in the United Kingdom, one of the parties produced an audio recording in which the other parent appeared to make dangerous and violent threats.10 Fortunately, through forensic analysis, the accused parent’s lawyer was able to establish that the incriminating audio had been manipulated and was not real.

    As a means of safeguarding the legal process against deepfake dangers, judges may be compelled to take on a more rigorous gatekeeping role in their courtrooms. Attorneys will need to be vigilant in ensuring that digital evidence can be verified in accordance with evidentiary rules. A client’s digital evidence should be scrutinized for odd characteristics such as unnatural eye movement, shadow inconsistencies or absent shadows, strange background noise, and so forth. When analyzing digital evidence, it is also important to look for inconsistencies in the metadata, as illustrated in the sketch below. Parties may be required to obtain additional corroborating evidence to demonstrate the authenticity of digital evidence. Forensic specialists may also be called upon to provide proof of evidence tampering or to assist in determining the validity of electronic evidence.
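
    As a simple illustration of a metadata check, the following Python sketch reads the EXIF metadata embedded in an image file using the open-source Pillow library. The file name is hypothetical, and a check like this is only a screening step; it is no substitute for professional forensic analysis.

        # First-pass metadata review of a photo exhibit using Pillow
        # (pip install Pillow). The file name is a hypothetical example.
        from PIL import Image
        from PIL.ExifTags import TAGS

        image = Image.open("exhibit_photo.jpg")  # hypothetical evidence file
        exif = image.getexif()
        if len(exif) == 0:
            print("No EXIF metadata found -- it may have been stripped "
                  "or the file regenerated.")
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
            print(f"{name}: {value}")
        # Fields such as DateTime, Make, Model, and Software should be
        # compared against the file's claimed provenance; mismatches
        # warrant follow-up by a forensic specialist.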

    The mere possibility that digital evidence may be fabricated could also erode trust in the justice system. Parties will invariably attempt to portray real evidence as fabricated, a tactic that will grow more credible as deepfakes become more prevalent. This phenomenon is referred to as the “liar’s dividend,” because it permits a guilty party to gain an advantage in an environment where it is difficult to distinguish what is real from what is not.11

What is being done to address harms associated with deepfakes?

Trying to hold deepfake perpetrators responsible presents a slew of challenges. For starters, prosecutors or litigants may be unable to identify the culprits behind deepfakes. Furthermore, deepfake producers, like many cyber offenders, may operate outside the United States, putting them beyond the reach of legal authority. Moreover, regulation of synthetic media is subject to First Amendment challenges; deepfakes used for parody or satire, for example, may be protected speech. Our country’s legal framework is not well equipped to respond to these technological developments.

Although deepfakes have the potential for wide-ranging and dangerous implications for our society, they remain largely unregulated in the United States. In recent years, deepfake legislation has been introduced at the state and federal levels in response to targeted deepfake activities. However, only a few states have passed legislation governing deepfakes, and the laws they have enacted are narrow in scope.

The ability to attribute fake speech or behavior to a political candidate is a powerful and dangerous new tool in the arsenal of those seeking to confuse voters through misinformation campaigns. Both domestic and international actors have a strong incentive to deploy deepfake tools to interfere with elections. Texas became the first state to prohibit the creation and dissemination of deepfake videos with the intent of harming candidates for public office or interfering with electoral processes. Under Texas Senate Bill 751, a deepfake video is a video “created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.”12 Publishing a deepfake video within 30 days of an election with the “intent to injure a candidate or influence the result of an election” is a Class A misdemeanor, punishable by up to a year in jail and a fine of $4,000.13 California has also enacted legislation to regulate deepfake content intended to influence an election. AB 730 makes it unlawful to distribute “materially deceptive audio or visual media” with actual malice and intent to injure an election candidate or deceive voters within 60 days of an election.14

Legislation has also targeted pornographic deepfakes. Virginia was the first state to make deepfake pornography a criminal offense. The distribution of nonconsensual and “falsely created” nude images and videos is a Class 1 misdemeanor in Virginia, punishable by up to a year in jail and a $2,500 fine.15 In addition, California’s AB 602 is intended to prevent the unauthorized use of one’s image in pornographic material.16 The law allows victims to bring an action for injunctive relief and damages. It remains to be seen whether other states, as well as Congress, will adopt legislation targeting deepfakes.

Deepfakes may also run afoul of existing defamation, extortion, copyright, and data privacy laws. However, in the absence of additional legislative remedies, it will primarily fall to the private sector, companies like Facebook and Twitter, to detect and prohibit deepfakes. Numerous industry and academic initiatives are underway to create technologies capable of detecting deepfakes. For example, Facebook and Michigan State University recently announced a reverse-engineering approach for detecting deepfakes.17 Microsoft introduced its Video Authenticator tool in 2020, which provides a confidence score indicating whether media content has been artificially manipulated.18 Additionally, the National Defense Authorization Act for Fiscal Year 2020 (“NDAA”) established a “Deepfakes Prize Competition” to promote deepfake detection research, development and commercialization.19

How can you protect yourself or your business from deepfakes?

Unfortunately, the adage “see something, say something” may not be adequate for identifying increasingly realistic deepfakes. However, there are several measures individuals can take to mitigate the risks associated with deepfake activities, including the following:

  • Educate your organization’s personnel on the scope and dangers of deepfakes, as well as how to identify them;
  • Be alert when consuming content online and do not assume the content is real based on the existence of images, video, audio, or text;
  • Never disclose personal or sensitive information about the organization to anyone online without first confirming the person’s identity through a reliable second source or independent sources;
  • Encrypt devices and systems, and protect accounts with multi-factor authentication (see the short sketch following this list);
  • Tighten payment authorization processes, for example by requiring payment requests to be submitted via business email accounts or requiring multi-person approval for larger payments;
  • Invest in detection tools that will screen communications for suspected use of deepfake technology;
  • Verify that insurance policies cover damages suffered as a result of deepfake fraud; and
  • Consider who will have access to your personal pictures and videos and use caution when accepting new social media contacts.
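
To make the multi-factor authentication recommendation concrete, the sketch below shows one common building block, the time-based one-time password (TOTP), using the open-source pyotp library in Python. This is a minimal illustration of the mechanism, not a deployment guide; real systems deliver the code through an authenticator app and store the shared secret securely.

    # Time-based one-time passwords (TOTP) with pyotp (pip install pyotp).
    import pyotp

    # Enrollment: a random base32 secret is generated once per user and
    # shared with the user's authenticator app (often via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("Current code:", totp.now())  # what the authenticator app displays

    # Login: the server recomputes the code from the shared secret and the
    # current time, and accepts only a matching, unexpired code.
    submitted = totp.now()  # stand-in for the code the user types in
    print("Accepted:", totp.verify(submitted))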

1 Private Industry Notification, Federal Bureau of Investigation, Cyber Division, Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations (March 10, 2021), https://www.ic3.gov/Media/News/2021/210310-2.pdf.

2 Id.

3 Jennifer Alsever, Beware: Phone scammers are using this new sci-fi tool to fleece victims, Fortune (May 4, 2021), https://fortune.com/2021/05/04/voice-cloning-fraud-ai-deepfakes-phone-scams/.

4 Greg Noone, Listen carefully: The growing threat of audio deepfake scams, Tech Monitor (Feb. 4, 2021), https://techmonitor.ai/techonology/cybersecurity/growing-threat-audio-deepfake-scams.

5 Id.

6 You Don’t Say: An FTC Workshop on Voice Cloning Technologies (Jan. 28, 2020), https://www.ftc.gov/news-events/events-calendar/you-dont-say-ftc-workshop-voice-cloning-technologies.

7 Yisroel Mirsky et al., CT-GAN: Malicious Tampering of 3D Medical Imagery Using Deep Learning, USENIX Security Symposium (June 6, 2019), http://bit.ly/2WzJzNH.

8 Ben Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation, Center for Security and Emerging Technology (May 2021), https://cset.georgetown.edu/publication/truth-lies-and-automation/.

9 Karen Hao, A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it., MIT Technology Review (Aug. 14, 2020), https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/.

10 Matt Reynolds, Courts and lawyers struggle with growing prevalence of deepfakes, ABA Journal (June 9, 2020), https://www.abajournal.com/web/article/courts-and-lawyers-struggle-with-growing-prevalence-of-deepfakes.

11 Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1785-86 (2019).

12 Tex. SB 751.

13 Id.

14 Calif. AB-730.

15 Va. Code Ann. § 18.2-386.2.

16 Calif. AB-602.

17 Xi Yin & Tal Hassner, Reverse engineering generative models from a single deepfake image, Facebook AI (June 16, 2021), https://ai.facebook.com/blog/reverse-engineering-generative-model-from-a-single-deepfake-image/.

18 Tom Burt & Eric Horvitz, New Steps to Combat Disinformation, Microsoft on the Issues (Sept. 1, 2020), https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/.

19 Pub.L. 116–92. NDAA also requires a report on the foreign weaponization of deepfakes and Congressional notification of significant deepfake activities directed at elections in the United States.

Rachael Lewis Anna

Rachael Lewis Anna is a Member of Wyche’s Litigation Team. Her practice focuses on complex business litigation, including representing clients in antitrust, trade secret, unfair trade practice, False Claims Act, healthcare, contractual, and professional malpractice disputes.