The Center for Security and Emerging Technology (CSET) at Georgetown’s Walsh School of Foreign Service released a two-part series last month on “AI and the Future of Disinformation Campaigns,” examining how advances in AI could be exploited to automate and enhance disinformation operations. CSET is a research organization focused on studying the security impacts of emerging technologies, supporting academic work in security and technology studies, and delivering nonpartisan analysis to the policy community.
In Part 1: The RICHDATA Framework, the authors introduce the RICHDATA framework, a disinformation “kill chain” that describes the stages and techniques human operators use to build disinformation campaigns. The framework is represented as a pyramid of successive stages:
- Reconnaissance: operators survey the information environment and study the audience they are trying to manipulate.
- Infrastructure: operators build messengers, believable personas, social media accounts, and groups to carry their narratives.
- Content: operators create content, from posts and long-reads to photos, memes, and videos, to ensure their messages seed, root, and grow.
- Deployment: the content is deployed into the stream of the internet.
- Amplification: bots, platform algorithms, and social-engineering techniques amplify these units of disinformation to spread the campaign’s narratives.
- Sustained engagement: operators engage unwitting users through trolling, the disinformation equivalent of hand-to-hand combat.
- Actualization: in the final stage, the operation is actualized by changing the minds of unwitting targets or even mobilizing them to action to sow chaos.
In Part 2: A Threat Model, the authors build on the RICHDATA framework to describe how “AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.” The report examines how computing power and machine learning (ML) algorithms excel at harnessing data and finding patterns that are difficult for humans to observe, and it offers several recommendations for policymakers and industry alike to mitigate the effects and counter the trend:
- Develop technical mitigations to inhibit and detect ML-powered disinformation campaigns. These include limiting access to user data; increasing transparency through interoperable standards for detection, forensics, and digital provenance of synthetic media; and labeling chatbots as such (a provenance-labeling sketch follows this list).
- Develop an early warning system for disinformation campaigns. Expand cooperation and intelligence sharing between the federal government, industry partners, state and local governments, and like-minded democratic nations to develop a common operational picture and detect the use of novel ML-enabled techniques, enabling rapid response.
- Build a networked collective defense across platforms. All platforms should establish policies and processes to discover, disrupt, and report on disinformation campaigns, and Congress should remove impediments to threat information sharing while enabling counter-disinformation research.
- Examine and deter the use of services that enable disinformation campaigns. Congress should examine the use of machine learning-enabled content generation tools and build norms to discourage their use by candidates for public office.
- Integrate threat modeling and red-teaming processes to guard against abuse. Platforms and AI researchers should adapt cybersecurity best practices to disinformation operations, adopt them into the early stages of product design, and test potential mitigations prior to their release.
- Build and apply ethical principles for the publication of AI research that can fuel disinformation campaigns. The AI research community should develop a publication risk framework to guard against the misuse of their research and recommend mitigations.
- Establish a process for the media to report on disinformation without amplifying it.
- Reform recommender algorithms that have empowered current campaigns. Platforms should increase transparency and give vetted researchers the access needed to audit how recommendation algorithms make decisions and how threat actors can manipulate them (a toy audit-logging sketch also follows this list). They should also invest in solutions that counter the information-bubble effect that contributes to polarization.
- Raise awareness and build public resilience against ML-enabled disinformation. The U.S. government, social media platforms, state and local governments, and civil society should develop school and adult education programs and arm frequently targeted communities with tools to discern ML-enabled disinformation techniques.
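To make the digital-provenance recommendation a little more concrete, the sketch below shows, in Python, one way a platform might attach a signed provenance record to generated media and later verify it. It is a minimal illustration only: the signing key, the field names, and the `label_synthetic_media`/`verify_label` helpers are assumptions made for the example, and real provenance standards such as C2PA carry far more detail than a single signed hash.

```python
# Illustrative sketch only: a toy provenance label for synthetic media.
# The key, field names, and label format are assumptions for this example;
# real standards (e.g., C2PA) are far richer.
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-real-key"  # hypothetical platform key


def label_synthetic_media(media_bytes: bytes, generator: str) -> dict:
    """Attach a signed provenance record to a piece of generated media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"sha256": digest, "generator": generator, "synthetic": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches its record and the signature is intact."""
    expected = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != expected.get("sha256"):
        return False
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record.get("signature", ""))


if __name__ == "__main__":
    fake_image = b"...bytes of a generated image..."
    label = label_synthetic_media(fake_image, generator="example-model-v1")
    print(verify_label(fake_image, label))          # True
    print(verify_label(b"tampered bytes", label))   # False
```

Even this toy version shows the basic idea behind the recommendation: downstream platforms and forensics tools can check whether a piece of media still matches its provenance record, and interoperable standards would let that check work across services.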
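The recommender-transparency recommendation can be illustrated the same way. The sketch below shows a trivially simple scoring function that records each feature’s contribution to a ranking decision in an audit log that vetted researchers could inspect; the feature names, weights, and the `rank_with_audit_log` helper are hypothetical stand-ins for the far more complex models real platforms use.

```python
# Illustrative sketch only: a toy recommender that records why each item
# was ranked where it was, so an auditor can inspect the decisions.
# Feature names and weights are assumptions made for the example.
from dataclasses import dataclass, field


@dataclass
class Item:
    item_id: str
    topical_match: float        # 0..1, similarity to the user's interests
    engagement_rate: float      # 0..1, historical click/share rate
    source_reliability: float   # 0..1, platform's trust score for the source


@dataclass
class Explanation:
    item_id: str
    score: float
    contributions: dict = field(default_factory=dict)


WEIGHTS = {"topical_match": 0.5, "engagement_rate": 0.3, "source_reliability": 0.2}


def rank_with_audit_log(items: list[Item]) -> tuple[list[str], list[Explanation]]:
    """Rank items and emit a per-item breakdown of how the score was built."""
    explanations = []
    for it in items:
        contributions = {name: WEIGHTS[name] * getattr(it, name) for name in WEIGHTS}
        explanations.append(
            Explanation(it.item_id, sum(contributions.values()), contributions)
        )
    explanations.sort(key=lambda e: e.score, reverse=True)
    return [e.item_id for e in explanations], explanations


if __name__ == "__main__":
    feed = [
        Item("post-a", topical_match=0.9, engagement_rate=0.8, source_reliability=0.2),
        Item("post-b", topical_match=0.6, engagement_rate=0.4, source_reliability=0.9),
    ]
    ranking, log = rank_with_audit_log(feed)
    print(ranking)
    for entry in log:
        print(entry.item_id, round(entry.score, 2), entry.contributions)
```

In this toy setting the audit log makes visible that high engagement outweighs low source reliability in the ranking, which is exactly the kind of pattern the report suggests vetted researchers should be able to find and question.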