The Coming Automation of Propaganda

Commentary by Frank Adkins and Shawn Hibbard
War on the Rocks, August 6, 2019
https://warontherocks.com/2019/08/the-coming-automation-of-propaganda/

If you want a vision of the future, imagine a thousand bots screaming from a human face – forever (apologies to George Orwell). As U.S. policymakers remain indecisive over how to prevent a repeat of the 2016 election interference, the threat looms ever more ominously on the horizon. The public has unfortunately settled on the term “bots” to describe the social media manipulation activities of foreign actors, invoking an image of neat rows of metal automatons hunched over keyboards, when in reality live humans are methodically at work. While the 2016 election mythologized the power of these influence-actors, such work is slow, costly, and labor-intensive. Humans must manually create and manage accounts, hand-write posts and comments, and spend countless hours reading content online to signal-boost particular narratives. However, recent advances in artificial intelligence (AI) may soon enable the automation of much of this work, massively amplifying the disruptive potential of online influence operations.

This emerging threat draws its power from vulnerabilities in our society: an unaware public, an underprepared legal system, and social media companies not sufficiently concerned with their exploitability by malign actors. Addressing these vulnerabilities requires immediate attention from lawmakers to inform the public, address legal blind spots, and hold social media companies to account.

Characterizing the Threat

What the American public has called AI, for lack of a better term, is better thought of as a cluster of emerging technologies capable of constructing convincing false realities. In line with the terms policymakers use, we will refer to the falsified media (pictures, audio, and video) these technologies generate as “deepfakes,” though we also suggest a new term, “machine persona,” to refer to AI that mimics the behavior of live users in the service of driving narratives.

Improvements in AI bots, up to this point, have mostly manifested in relatively harmless areas like customer service. But these thus far modest improvements build upon breakthroughs in speech recognition and generation that are nothing short of profound.

OpenAI, a research organization co-founded by Elon Musk, made headlines this year for GPT-2, a text-generation language model the organization deemed “too dangerous to release.” This framing was perhaps an exaggeration, but OpenAI’s work was impressive nonetheless. Researchers trained the model on 40GB of seed text drawn from links aggregated across the Internet, a compute-intensive process that nonetheless produced a model lightweight enough for a regular desktop to run. OpenAI released only a toned-down version of the model to the public, but the samples it published from the full version were remarkable. Though OpenAI admits it took a few tries to get a good sample, given the first line of Orwell’s 1984, “It was a bright cold day in April, and the clocks were striking thirteen,” GPT-2 eventually produced a coherent opening to a near-future novel set in Seattle. An article it generated from an opening line about the discovery of unicorns in the Andes wouldn’t look at all out of place on a pop-science website. That is, apart from the subject. The “fake news” applications require little imagination. One study explored this exact scenario, showing that GPT-2 was able to generate foreign policy news that subjects rated, on average, only marginally less credible than the New York Times seed text.
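
To make the workflow concrete, the following is a minimal sketch of prompt-driven generation using the publicly released small version of GPT-2 through the open-source Hugging Face “transformers” library. The model choice, prompt handling, and sampling parameters here are illustrative assumptions on our part, not a reconstruction of OpenAI’s own setup.

```python
# Minimal sketch: generate a continuation of a prompt with the small,
# publicly released GPT-2 model via the Hugging Face "transformers" library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("It was a bright cold day in April, "
          "and the clocks were striking thirteen.")
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k sampling keeps the output coherent while still varying between runs.
outputs = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each run of a script like this yields a different continuation in seconds on ordinary hardware, which is precisely what makes large-scale, low-cost content generation plausible.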

These developments aren’t mere science projects either, but beneficiaries of market forces. Companies have used natural language processing (NLP) and generalized text generation to automate a growing share of the customer service and information technology workforce, cutting labor costs and freeing skilled labor from menial tasks. Advances in text generation have greatly benefited journalism in particular, driving media companies to invest in generating ever more believable content. However, NLP is just one facet of the AI revolution.

Advancements in image recognition and generation can now produce faces that are almost entirely indistinguishable from those of real humans. Intelligence organizations have already used this technology to solicit unwitting contacts through social media. The same underlying technologies have also led to a recent spike in deepfake videos, now letting anyone with at-home software blend real footage almost seamlessly with generated content. And you don’t have to take our word for it: just ask former president Barack Obama.

AI technology also has less flashy, but no less substantial, applications in influencing what users see online. Social media platforms work by identifying trending content and boosting it into the feeds of other users. While the specifics vary from platform to platform, these trend algorithms tend to be a function of ‘likes,’ ‘retweets,’ or ‘upvotes’ over time, and they weight early interaction most strongly. This means that a small, concentrated burst of interaction at the birth of new content is often all that is necessary to send it trending, pushing it into the feeds of thousands of legitimate users.
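
To illustrate why early interaction matters so much, consider a toy ranking function of the kind described above. This is not any platform’s actual algorithm; it is a simplified assumption that captures the common pattern of dividing interaction by a growing power of a post’s age.

```python
from datetime import datetime, timezone

def toy_trending_score(votes: int, posted_at: datetime) -> float:
    """Toy ranking score: interaction divided by a power of the post's age.

    Because the denominator grows quickly with age, votes received in the
    first hour or two dominate the score -- exactly the window a coordinated
    burst of fake accounts would target.
    posted_at must be a timezone-aware UTC datetime.
    """
    age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600.0
    return votes / (age_hours + 2.0) ** 1.8

# Under this toy formula, 200 votes on a one-hour-old post (score ~27.7)
# outrank 2,000 votes on a twelve-hour-old post (score ~17.3).
```

The exact exponent and offset are arbitrary choices here, but any formula with this shape rewards whoever can concentrate interaction in a post’s first minutes of life.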

Understanding the Impact

To appreciate the threat at the intersection of deepfakes and machine personas, consider your daily diet of online information. You probably know enough to avoid following small, suspicious accounts on Twitter or browsing links to sites of which you’ve never heard. You probably don’t accept Facebook friend requests from people you’ve never met and generally stay out of the seedier parts of Reddit. However, the Internet is an ecosystem built for virality. A disruption somewhere can have impacts almost anywhere.

Imagine an influence-actor posts a deepfake video of the NYPD beating a young minority man to death in an alley. The alley in the background is real. The faces of the police doing the beating are real. The face of the man beaten to death is real, taken from a list of missing persons. The video need only be dropped in a forum somewhere for the Internet to do the rest. Machine personas can then set about controlling the dialogue, goading opposition, reinforcing extremists, and generally shaping the conversation in the most confrontational direction possible. In popular forums such as Reddit, they automatically identify and signal-boost comments about the incident that threaten violence against the police and government. These same machine personas target users claiming the footage is deepfaked with downvotes and accusations of supporting a cover-up. The omnipresent machine personas fake a public consensus and make dissenters feel they are an unwelcome minority. Posts about the incident reach the front page of Reddit, where real users pick up and spread the “news” across Facebook and Twitter, reaching an audience of millions in just a few hours.

As the NYPD struggles to evaluate the video and debunk it, an operative assuming the identity of a concerned NYPD officer sends a deepfake audio file to a major U.S. news publication. The file captures the supervisor of the framed officers engaging in a racial epithet-laden rant about the alleged cover-up. The deceived news organization vouches for the file’s credibility, lending its authority to the outrage. Machine personas automatically identify and signal-boost tweets and comments advocating for protest marches. In the following days, a video surfaces on 4chan of immigrants kidnapping and sexually assaulting a young girl; nobody depicted in the video actually exists, so no one can come forward to undermine its authenticity. Machine personas then begin advocating on 4chan and 8chan for acts of revenge against the planned protesters, whom they label politically responsible for advocating pro-immigration policies.

Whether a malicious group would engage in such an overtly provocative act or merely patiently stoke the same resentments is debatable. The danger is that these technologies exist now. Though they may only be prototypes, it is ill-advised to bet against technological progress. In a world where the above scenario is possible, curating your social media contacts is insufficient to insulate yourself from the effects of malign actors. If you are American, you may also have imagined Russia behind this fictional attack, but we ask you to think more broadly. While the resources of a state actor made possible the interference in the 2016 U.S. presidential campaign, AI technologies could put this power into the hands of minor state or even non-state actors. In fact, nothing in this vignette is beyond the reach of a talented lone wolf with no intelligence footprint whatsoever.

Countering the Effects

There are no easy solutions to the informational challenges AI presents. Each challenge warrants a deeper discussion than we can deliver here, and many of these challenges carry consequences that will require considerable reflection. Rather than propose solutions in a vacuum, we frame the conversation in terms of the vulnerabilities that any solution would need to address.

The first vulnerability is a lack of public awareness of, and skepticism toward, the content users view online. U.S. legislators and Silicon Valley should make a concerted effort to bring public attention to AI-enabled disinformation. The June 13 congressional hearing on AI-enabled influence operations was an important first step, and it is encouraging to hear bipartisan consensus on the threat. However, awareness has limitations. The public is fortunate that deepfakes and text generators still have a somewhat identifiable “off” quality to them. Yet the technology is unlikely to plateau here. Technical approaches to identifying machine-generated content also exist (one is sketched below). However, there is no inherent quality of “realness” to a real image that more sophisticated software couldn’t eventually recreate. In a world where malign actors can generate pixel-by-pixel accurate content in the comfort of their own basements, distinguishing fake content from a grainy cell phone video could conceivably become impossible. Everyone should already have an untrusting eye turned toward what they see online, but we ourselves can’t claim to always abide by this virtue. Still, the public should be continuously confronted with the ease with which machine personas will soon be manufacturing provocative and disgusting content. To mitigate this vulnerability, our first impulse when we see members of a different political persuasion engaging in outrageous behavior should not be to share it, but to question its veracity.
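
As a sense of what those technical approaches look like today, one family of detectors scores how statistically predictable a passage is to a language model, since machine-generated text tends to sit in the model’s own high-probability zone. The sketch below uses the open-source GPT-2 model and the Hugging Face “transformers” library purely for illustration; the threshold is an arbitrary assumption, and this is not a validated detector.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token surprise of the text under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

passage = "Officials confirmed the incident and promised a full investigation."
# Illustrative, untuned threshold: unusually low perplexity is one weak
# signal that a passage may be machine-generated and worth human review.
if perplexity(passage) < 20.0:
    print("flag for human review")
```

The catch, as noted above, is that detection of this kind is an arms race: a generator tuned to produce less predictable text erodes the very signal the detector relies on.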

The second vulnerability is a legal system with numerous blind spots that lawmakers should close. Reddit is the third-most-popular social media site on the Internet, surpassing Facebook among American Internet users. It is also shockingly vulnerable, requiring only an e-mail address to register an account. Consequently, anyone could theoretically register an unlimited number of accounts and, being careful not to stand out to system administrators, effectively control conversations on whatever topics they want. This is no hypothetical: you can pay for this service right now. (Please don’t.) It’s hard to imagine a real-life analogy, but would Americans defend the right of a local felon to march on city hall with a thousand androids masquerading as fellow citizens? This cuts to the heart of an ongoing legal debate on what exactly social media is and how regulators should treat it. However, to accept the status quo is to accept that such behavior is no more serious than a terms-of-service violation. An unsettled status that regards a social media platform as no more than the footprint of its parent company cannot capture the scope and impact that users who abuse these platforms have on American society.

The third vulnerability is online anonymity. While we absolutely do not advocate for de-anonymizing the Internet, it is now so influential over American society that legislators should not leave regulation to social media platforms alone. Congress should put more pressure on social media companies to ensure their users are, at a minimum, who they say they are. That said, it is still important to remember that anonymity is both a bug and a feature, and not something regulators should crush out of hand. The Internet’s capacity to act as a platform for dissidents is not something lawmakers should root out, though any attempts to do so would likely fail anyway. Still, wherever a microphone appears before a crowd online, there should be no question that malicious actors will seek to place themselves before it. When they can do so with anonymity, tracing the origins of deepfaked media and rooting them out becomes a nearly impossible task.

There is a middle ground between anarchy and government-issued Facebook accounts. That middle ground likely involves a far better vetting process for account creation at major sites. A modified pseudonymity system is one possibility, whereby a third party cryptographically verifies that an individual can hold an account, then ties the account to that identity without disclosing the name of the holder. This is not without its faults, both technical and otherwise, not least of which is the question of who the third party should be. Though the government is one clear answer, for a public so enamored with conspiracy theories, Americans shouldn’t expect federally managed Internet identity tracking to be popular with any demographic except federal officials. Platforms also come and go, meaning that companies and regulators would have to continuously renegotiate such a solution. And the Internet is international, forcing sites to navigate the myriad requirements various states impose on them. However, no solution needs to be 100 percent effective, nor could it be in the face of well-resourced state actors. It need only make reaching critical mass in public spaces prohibitively expensive.
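
To illustrate the pseudonymity idea described above, here is a minimal sketch of one possible credential flow using the open-source Python “cryptography” package: a hypothetical third-party verifier checks identity documents out of band, derives a stable pseudonym from the real identity with a keyed hash, and signs it; platforms verify the signature without ever learning the name. Every name and parameter here is an illustrative assumption, not a proposed standard.

```python
import hmac, hashlib, os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Long-term secrets of the hypothetical third-party verifier.
VERIFIER_SECRET = os.urandom(32)             # keys the identity -> pseudonym mapping
verifier_key = Ed25519PrivateKey.generate()  # signs credentials
verifier_pub = verifier_key.public_key()     # published to platforms

def issue_credential(real_identity: str) -> tuple[bytes, bytes]:
    """Run by the verifier after checking identity documents out of band."""
    # Keyed hash: the same person always maps to the same pseudonym, so one
    # identity cannot mint unlimited accounts, yet the platform cannot
    # recover the name without the verifier's secret.
    pseudonym = hmac.new(VERIFIER_SECRET, real_identity.encode(), hashlib.sha256).digest()
    return pseudonym, verifier_key.sign(pseudonym)

def platform_accepts(pseudonym: bytes, signature: bytes) -> bool:
    """Run by a social media platform at account creation."""
    try:
        verifier_pub.verify(signature, pseudonym)
        return True
    except InvalidSignature:
        return False

pseudonym, sig = issue_credential("Jane Q. Citizen")  # hypothetical user
assert platform_accepts(pseudonym, sig)
```

Even this toy version surfaces the governance question the paragraph raises: whoever holds the verifier’s secret can link pseudonyms back to people, which is exactly why the choice of third party matters so much.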

Every solution will be painful. Consequently, we don’t expect that regulators will take any significant steps in the directions outlined above until the effects of machine personas become undeniable. It is the responsibility of both the U.S. government and Silicon Valley to ensure that the American public is aware of this threat so that policymakers have the necessary political capital to take action. The public should also be prepared for the possibility that malign actors will put their thumbs on the scale to the benefit of one political entity over another, and it should speak with a united voice in rejecting such anti-democratic interference. The Internet may already be past the era of speculation about the problem and into the age of persistent machine interference. To protect both it and democracy, the American public needs to begin these conversations in earnest.

Capt. Frank Adkins and Capt. Shawn Hibbard are both active duty Air Force cyber officers and graduates of the U.S. Air Force Academy. They’ve worked at Cyber Command in various positions as operators, red teamers, and planners on the leading edge of the U.S. cyber mission. Capt. Adkins received his Master’s in computer science from Northeastern University with a focus in cyber vulnerability assessment, and Capt. Hibbard received his in strategic intelligence from the National Intelligence University, studying the strategic implications of next-generation supercomputing technology. The views expressed are those of the authors and do not necessarily reflect the official policy or position of the U.S. Air Force, Cyber Command, or the U.S. government.

Image: Daniel Carlbom and Johnny Lindner, adapted by War on the Rocks