Following his trip to Ukraine in late February, President Biden delivered a presidential address in which he announced he was reinstating the draft to deal with “Putin’s illegal occupation of Kyiv and the impending Chinese blockade of Taiwan,” which “created a two-front national security crisis that requires more troops than the voluntary military can supply” (Newsweek). The video of the president circulated on social media, garnering outrage and concern over a possible draft.
Except this never happened. There was no presidential address. Biden didn’t reinstate the draft (nor would he be able to without authorization from Congress). The footage of Biden’s speech is a deepfake, a realistic-looking video, image, or audio of a person that’s been digitally altered. It was created as a “sneak preview” of a “future” where Biden reactivates the Selective Service Act but was edited without clarification and reshared, spreading misinformation to millions.
• Learn how to detect the difference between synthetic and authentic media.
• Sign this petition to help shut down image-based sexual abuse websites.
What are deepfakes?
A deepfake is digitally generated media produced by a form of machine-learning AI called deep learning, which takes real images, video, or audio of people and creates a fabricated version that mimics the real thing (Wired).
The AI-generated version will look, sound, and/or have mannerisms like the person, making it appear like they’re saying or doing something they aren’t. Deepfakes can be fabricated entirely, like these pictures of Trump resisting arrest, or be a manipulation of an actual video with altered audio, like this video of Biden making transphobic comments or Barack Obama calling Donald Trump a “complete dipshit.”
They range from harmless and comical, like this Tom Cruise TikTok account, which uses an impersonator and face-mapping effects to show the actor hanging with other celebrities and dancing, to insidious and malicious, like an ad where actress Emma Watson seems to initiate a sex act.
How are they made?
Accessing and using deepfake technology is extremely easy since many face-swapping apps are available on Google Play or the Apple App Store. Deepfake generators are also available online, though the quality varies.
The user simply uploads real audio, images, or video of a specific person; the program learns patterns in the subject’s speech, appearance, and movement and uses them to duplicate the person’s likeness in manipulated content. The result is someone else’s face and/or voice imposed on the original video, or the subject’s words altered. Audio clones can also be created. Since celebrities and public figures have hours of photos and videos readily available online, they are often the subjects of deepfakes, though private citizens have also been targeted.
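At a high level, many face-swap deepfakes rely on an autoencoder with a shared encoder and one decoder per person: the encoder compresses any face into a common latent code, and each decoder learns to rebuild only its own subject. The swap happens when one person’s latent code is fed through the other person’s decoder. The sketch below shows only that data flow, in plain NumPy with random, untrained weights; all dimensions and names are illustrative and not drawn from any real deepfake tool:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # Random, untrained weights -- purely to illustrate the data flow.
    return rng.standard_normal((in_dim, out_dim)) * 0.01

# A shared encoder compresses any 64x64 face into a common latent code.
W_enc = layer(64 * 64, 128)

# One decoder per person: each learns to rebuild only that person's face.
W_dec_a = layer(128, 64 * 64)
W_dec_b = layer(128, 64 * 64)

def encode(face):
    return np.tanh(face.reshape(-1) @ W_enc)

def decode(code, W_dec):
    return (code @ W_dec).reshape(64, 64)

# The swap: encode a frame of person A, then decode it with B's decoder,
# so B's appearance is rendered with A's pose and expression.
frame_of_a = rng.standard_normal((64, 64))
latent = encode(frame_of_a)
swapped = decode(latent, W_dec_b)

print(swapped.shape)  # (64, 64)
```

In a real system the encoder and decoders are deep convolutional networks trained for hours on thousands of frames; the point here is only the shared-encoder, split-decoder structure that makes the face swap possible.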
How are they used?
The number of deepfakes is rapidly rising and their quality is improving, making it harder to distinguish reality from fiction.
Deepfakes have been used to commit bank fraud, renew interest in a 2003 cold case of a murdered teen, replace actors’ faces in movies, insert actors in archival footage, spread political misinformation, and create fake nonconsensual pornography—mainly against women.
A 2019 report found that 96% of deepfake content online was nonconsensual porn and almost exclusively featured women (EuroNews). One of the first widely public introductions to deepfakes came in 2017, when a Reddit user created and posted digitally altered sexually explicit videos of female celebrities like Gal Gadot and Taylor Swift online (Vice). Private citizens are also targeted. In 2020, more than 100,000 fake nudes of nonconsenting women were generated using deepfake software (The Verge). They are often made to embarrass, defame, degrade, or harass victims.
Despite this, discussions on deepfakes focus mainly on the technology’s potential to alter national politics. The political landscape and public perception are susceptible to the deception of deepfakes.
“These realistic yet misleading depictions will be capable of distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office. Beyond domestic politics, deepfake technologies pose a threat to America’s diplomacy and national security.”
– Legal scholars Bobby Chesney and Danielle Citron via Brookings.
What is being done? Are there legal ramifications?
There is no comprehensive law against the creation of deepfakes. New York, Virginia, Georgia, and California are the only states with legislation addressing deepfake porn (Cyber Civil Rights Initiative). Texas and California passed laws that ban deepfakes designed to influence an election and harm a candidate’s reputation.
Legal frameworks like copyright, defamation, and the right of publicity can be used to fight deepfakes, yet the removal of content risks violating fair use laws and the First Amendment (Forbes).
Deepfake detection software is a crucial countermeasure, but such technology has yet to catch up with the rapid production of altered or fabricated content. The onus falls on social networks like Reddit, Facebook, TikTok, and Twitter to stop harmful and misleading media. Despite their rules prohibiting deepfake porn, misinformation, and disinformation, such content is still found and shared on these platforms.
How can we detect a deepfake?
Some image detection tools can help verify the authenticity of media, but learning how to distinguish deepfakes is essential. While deepfakes are progressively getting harder to detect (MIT Media Lab), spotting fakes requires you to go beyond a quick first glance and look at the fine details:
• Do the words and sounds from the video match the visuals?
• Is there a lag in the audio, or is the lip-synching off?
• Does the skin look too smooth or too wrinkly, almost unnatural?
• Does the person blink enough, or way too much?
• Are facial expressions (specifically the eyebrows) and movements exaggerated?
• Do the people, including those in the background, look blurry or have warped faces? Do their hands look unnatural?
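The blink question has even been turned into an automated heuristic: early deepfakes blinked far less often than real people, because training photos rarely show closed eyes. Below is a hedged sketch of that idea, assuming per-frame eye-openness scores (an “eye aspect ratio,” or EAR) have already been extracted by a facial-landmark detector; the function names and thresholds are illustrative:

```python
def count_blinks(ear_series, threshold=0.2):
    """Count blinks as runs of frames where the eye aspect ratio (EAR)
    drops below the threshold. In a real pipeline the EAR values would
    come from a facial-landmark detector; here they are just numbers."""
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < threshold and not eyes_closed:
            blinks += 1          # eye just closed: start of a blink
            eyes_closed = True
        elif ear >= threshold:
            eyes_closed = False  # eye reopened
    return blinks

def blink_rate_suspicious(ear_series, fps=30, min_blinks_per_min=5):
    """Flag footage whose subject blinks far less than a typical person
    (roughly 15-20 blinks per minute). The cutoff here is illustrative."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# Synthetic 10-second clip at 30 fps: eyes open (EAR ~0.3) with two blinks.
clip = [0.3] * 300
clip[50:53] = [0.1, 0.05, 0.1]
clip[200:203] = [0.1, 0.05, 0.1]

print(count_blinks(clip))          # 2
print(blink_rate_suspicious(clip)) # False (12 blinks/min is normal)
```

Newer deepfakes have largely learned to blink normally, which is why a checklist of multiple cues, rather than any single test, remains the practical advice.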
Applying media literacy methods can also help (The ARD). Ask yourself: who is sharing this, why, and can I trust them? What do they gain from sharing it? Do other reliable sources back this up? Who else is talking about this?
Reverse image searching can also help you find the source of an image, whether and where it has been shared, and similar photos (Business Insider). Learn how to reverse image search here.
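Under the hood, reverse image search engines typically match perceptual hashes: compact fingerprints that stay nearly identical when an image is only lightly edited, so near-duplicates can be found quickly. A minimal sketch of one such fingerprint, the “average hash,” in NumPy; the 8x8 grid size and the synthetic images are illustrative:

```python
import numpy as np

def average_hash(img):
    """Perceptual 'average hash' of a grayscale image array: downsample
    to an 8x8 grid, then set each bit by whether that cell is brighter
    than the overall mean. Similar images yield hashes with a small
    Hamming distance, the basic idea behind reverse-image matching."""
    h, w = img.shape
    small = img.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))  # 8x8 blocks
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    # Number of differing bits between two hashes.
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(1)
original = rng.random((64, 64))
tweaked = original + rng.normal(0, 0.01, (64, 64))  # near-duplicate
unrelated = rng.random((64, 64))

print(hamming(average_hash(original), average_hash(tweaked)))    # small
print(hamming(average_hash(original), average_hash(unrelated)))  # large
```

Real search engines use more robust variants (and index billions of hashes), but the principle is the same: a lightly edited repost of a photo still hashes close to the original, which is how a reverse image search can surface where a picture first appeared.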
Why does this matter?
In an age of misinformation, fake news, and conspiracies, the distortion of reality affects us all, whether on a political, social, or personal level. Deepfakes have harmed real people and tarnished their reputations and lives. They make it easier not only to disseminate misinformation but also to contest the authenticity of credible photos and other media, sowing doubt in our senses and eroding a shared public reality of fact.
“As a consequence of this, even truth will not be believed. The man in front of the tank at Tiananmen Square moved the world. Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of … believing what you see is under attack, that is a huge problem.”
– Nasir Memon, professor of computer science and engineering at New York University (Brookings).
• Deepfakes blur the lines between reality and fiction.
• They have largely been used to create digitally altered sexually explicit content against women.
• The current political landscape and news media distrust make the rise in deepfakes dangerous.