John Villasenor, Brookings Institution, Monday, June 3, 2019
What happens when you mix easy access to increasingly sophisticated technology for producing deepfake videos, a high-stakes election, and a social media ecosystem built on maximizing views, likes, and shares? America is about to find out.
As I explained in a TechTank post in February 2019, “deepfakes are videos that have been constructed to make a person appear to say or do something that they never said or did.” With continued advances in artificial intelligence-based techniques for performing detailed frame-by-frame editing, it’s easier than ever to create highly convincing depictions of events that never actually occurred. But you don’t need AI to produce an altered video. The late May release of a video modified to make House Speaker Nancy Pelosi appear to slur her words underscored how even rudimentary digital content manipulations can be highly effective at creating an alternate reality.
With the 2020 election campaign heating up, deepfakes are likely to be part of the landscape, spurring discussions regarding their role in elections and what can be done to minimize their impact. Here is some information that can be useful in informing those discussions:
Deepfakes Can Influence Voters
Under the right set of circumstances, deepfakes can be very influential. They don’t even have to be particularly good to potentially swing the outcome of an election. As with so much in elections, deepfakes are a numbers game. Tampering in all but the most sophisticated deepfakes can be identified quickly, but not everyone who views them will get that message.
More fundamentally, not everyone wants to get that message. As can occur with other forms of online misinformation, deepfakes will be designed to amplify voter misconceptions, fears, and suspicions, making what might seem outlandish and improbable to some people appear plausible and credible to others. To influence an election, a deepfake doesn’t need to convince everyone who sees it. It just needs to undermine the targeted candidate’s credibility among enough voters to make a difference.
Why It’s Different This Time Around
Deepfakes weren’t an issue during the 2016 election campaign. But the technology to produce them has advanced rapidly in the past few years. It’s also much more widely available. As a result, there’s a powerful new tool in the toolbox for people who might contemplate using digital misinformation techniques to attempt to influence an election.
Deepfakes can be made by anyone with a computer, internet access, an interest in influencing an election, and a lack of concern with the associated ethical and legal ramifications. There are multiple playbooks that can be used.
People acting alone can produce deepfakes, launch them into the wild via social media, and hope that they become viral. Another possibility is that a nation-state intent on influencing a U.S. election could engage in a carefully planned months-long offensive, producing multiple high-quality deepfakes and strategically releasing them in a manner aimed at maximizing their impact. Yet another possibility is that members of a campaign staff could encourage the creation of deepfakes targeting their opponent, being careful to do so in a manner that preserves plausible deniability for their own candidate if any links between the campaign and deepfake creators are ever exposed.
Why Deepfakes Are So Concerning
In politics, deepfakes are the inevitable next step in the attack on truth. Historically, misinformation in politics involved saying or writing something false about someone else. With deepfakes, attackers cause their targets to become agents of their own subversion. The dissemination of a lie and its apparent confirmation become one and the same.
In addition, deepfakes weaponize information in a way that takes maximum advantage of the dynamics of a social media ecosystem that prizes traffic above nearly all else. While deepfakes require some investment of time and work to create, like other digital content they can be easily distributed via social media to an audience that—with the right combination of planning, timing, and luck—could reach into the millions.
The Challenges Facing Social Media Companies
As the unwitting distribution networks for deepfakes, social media companies are going to need to up their game in addressing them. One important step is to develop and deploy better technology for detecting deepfakes and tracking their propagation. In addition, social media companies are going to need to formulate and implement clearer policies regarding when they will remove (or decline to remove) deepfakes from their sites.
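To make the propagation-tracking idea above concrete, here is a minimal, purely illustrative sketch. It assumes a simplified model in which a platform records who forwarded a flagged video to whom as a directed "share graph"; all account names and the graph itself are hypothetical, and real platform infrastructure would be far more complex.

```python
from collections import deque

def propagation_reach(share_graph, seed):
    """Breadth-first traversal of a share graph: given a mapping of
    accounts to the accounts they forwarded a flagged video to, return
    how many accounts it reached and the maximum number of sharing
    "hops" from the original upload."""
    seen = {seed}
    depth = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbor in share_graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                depth[neighbor] = depth[node] + 1
                queue.append(neighbor)
    return len(seen), max(depth.values())

# Hypothetical share graph: account -> accounts it forwarded the video to.
shares = {
    "uploader": ["a", "b"],
    "a": ["c", "d"],
    "b": ["d", "e"],
    "e": ["f"],
}
reach, hops = propagation_reach(shares, "uploader")
# reach == 7 accounts, hops == 3
```

Even this toy version shows why tracking matters for policy: knowing how far and how fast a deepfake has already spread can inform whether removal, labeling, or down-ranking is the proportionate response.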
The policy challenge facing social media companies is much more complex than it might initially appear. A blanket policy targeting all false or deceptive digitally altered content would be overly broad, unworkable, and undesirable, as not all such content is problematic. Suppose that a bad piano player posts a video doctored to portray himself or herself as a good piano player. It would make little sense to suggest that it should be the business of a social media company to detect the deception and remove the video.
What about a policy targeting only deepfakes? That would raise the question of why deepfakes were handled differently from other altered and/or false content. And not all deepfakes are nefarious: an art museum, for example, has used the technology to bring Salvador Dalí “back to life” for visitors.
A social media company might contemplate a policy of removing only malicious deepfakes. But that would require defining “malicious.” If a deepfake is used in a manner that is clearly a parody, would a “no malicious deepfakes” policy give the person portrayed grounds to demand its removal from a social media site? What about targeting only deepfakes related to an election? That would raise yet more definitional challenges. For example, was the Pelosi video—which wasn’t even technically a deepfake—related to an election?
In short, it won’t be easy for social media companies to develop deepfake policies that are practical, narrowly tailored, and consistent with how they address other forms of false or altered content.