Countering Cognitive Warfare in the Digital Age: A Comprehensive Strategy for Safeguarding Democracy against Disinformation Campaigns on the TikTok Social Media Platform
Authors:
Shane Morris, David Gurzick, Ph.D., Sean Guillory, Ph.D., Glenn Borsky
Abstract
In the contemporary digital age, the battle for cognitive supremacy has extended beyond traditional arenas to the ubiquitous domain of social media platforms. Among these platforms, TikTok has emerged as an unexpected yet potent vector for state-sponsored disinformation campaigns. This study scrutinizes the deployment of Large Language Models (LLMs) by the Russian intelligence entity, the GRU, to propagate misinformation aimed at destabilizing Western democratic norms and undermining the geopolitical standing of the United States and its allies. Through a methodical approach involving web scraping tools and the strategic use of change data capture technologies, coupled with the deployment of Retrieval Augmented Generation (RAG) models, the GRU has executed a campaign of unprecedented sophistication. This campaign is not merely an attempt to misinform the public but a calculated strategy to erode public trust in essential institutions, from government and media to the electoral process, thereby fracturing societal cohesion.
The urgency of addressing this threat cannot be overstated. The GRU’s tactics signify a shift towards cognitive warfare, exploiting the viral nature of social media to achieve a scale of psychological impact previously unattainable with traditional propaganda methods. The use of LLMs allows for the generation of contextually relevant, persuasive, and tailored disinformation at a pace that outstrips the ability of human moderators and existing automated systems to effectively counteract. This not only amplifies the potential reach and impact of disinformation but also significantly complicates the detection and mitigation of such campaigns.
Moreover, the GRU’s focus on TikTok, a platform with a vast and predominantly young user base (per May–Sept. 2023 data, a third of U.S. adults (33%) say they use the video-based platform, up 12 percentage points from 21% in 2021, and 62% of those aged 18 to 29 say they use TikTok (Gottfried and Anderson, 2024)), highlights a strategic investment in long-term cognitive influence. The platform’s algorithmic predispositions, notably the “Monolith” advertising algorithm, are exploited to ensure the wide dissemination of disinformation, leveraging the inherent weaknesses in TikTok’s design as a media network lacking robust community and messaging tools. While TikTok has the largest user base among platforms with this system design, the GRU could exploit similarly designed social platforms with comparable weaknesses.
The implications of inaction are profound. The GRU’s operational success on TikTok provides a blueprint for other adversarial actors, threatening not only the integrity of democratic discourse but also the security of the industrial base essential for national defense and the support for allies like Ukraine. The window for effective countermeasures is narrowing as the techniques and technologies employed become more refined and embedded within the digital ecosystem.
This paper contends that the immediate development of counterstrategies, including the creation of an open-source dashboard by the US Department of Defense (DoD), is imperative. Such initiatives would enable transparent research into both influence operations and misinformation campaigns, facilitating the identification and removal of inauthentic accounts. By enhancing public awareness and providing tools for the civilian sector to recognize and resist misinformation, democratic societies can begin to fortify themselves against the insidious threat of cognitive warfare. The urgency of this endeavor cannot be overstated; the defense of the informational commons is a critical front for preserving democracy and national security in the digital age.
Introduction
In the current digital era, the delineation between social media platforms and traditional media outlets (e.g., broadcast television, radio, print) has become increasingly blurred. TikTok, ostensibly a social media network, has transcended its initial branding to emerge as a formidable rival to broadcast and streaming media in terms of user engagement and consumption time; the share of U.S. adults who say they regularly get news from the platform has more than quadrupled, from 3% in 2020 to 14% in 2023 (Matsa, 2023). This evolution is not merely a testament to the platform’s viral appeal but signals a cultural shift in how the public consumes, processes, and values information. This paper endeavors to dissect the implications of TikTok’s metamorphosis from a social network to a dominant media entity, particularly in the context of its exploitation by the Russian intelligence agency, the GRU, for the dissemination of misinformation.
TikTok’s ascent to a media powerhouse is characterized by its unprecedented user engagement metrics. Unlike traditional social networks that primarily facilitate interpersonal communications and content sharing among user-defined networks, TikTok relies on a sophisticated algorithmic engine, notably its “Monolith” advertising algorithm, to curate and push content to users based on inferred preferences (Boeker and Urman, 2022). This model of content delivery, detached from the confines of social connections, enables TikTok to function more as a broadcaster than a platform for social interaction. The result is a user experience that is highly addictive and engrossing, with individuals spending significant portions of their daily screen time immersed in TikTok’s endless content streams (Ionescu and Licu, 2023).
The implications of this shift are profound. As TikTok becomes a primary source of information for vast segments of the population, especially among younger demographics, its influence over public discourse and opinion formation rivals that of traditional media outlets. This evolution has not gone unnoticed by state actors seeking to manipulate public perception and sow discord. The GRU’s operational focus on TikTok is emblematic of a strategic recalibration towards platforms that command substantial user engagement and can amplify disinformation at scale. By embedding misinformation within the platform’s content ecosystem, the GRU leverages both the addictive consumption patterns of TikTok users and the platform’s content-pushing algorithm to disseminate narratives designed to undermine trust in democratic institutions, manipulate public sentiment on geopolitical issues, and erode societal cohesion.
The significance of TikTok’s role in the contemporary media landscape cannot be overstated. As a platform, it represents the forefront of a new wave of digital consumption, where algorithmic curation supersedes social networking as the primary driver of content engagement. This shift poses unique challenges for countering misinformation, as the mechanisms of dissemination are deeply intertwined with the platform’s core functionality. Understanding TikTok’s transformation from a social media brand to a de facto media entity is crucial for developing effective strategies to mitigate the impact of state-sponsored disinformation campaigns. The GRU’s exploitation of TikTok underscores the urgency of addressing these challenges, as the stakes encompass not only the integrity of democratic discourse but the very fabric of societal trust and cohesion.
Methodology
The methodology employed by the GRU in leveraging TikTok for misinformation campaigns involves a complex integration of technologies designed to automate and scale the dissemination of disinformation. At the heart of this operation are bot or “sock puppet” accounts, which are automated entities that mimic real users’ activities. These bots are not rudimentary scripts but are powered by advanced Large Language Models (LLMs) orchestrated through the LangChain framework and Retrieval Augmented Generation (RAG). This section elaborates on the operational mechanics of these bots, highlighting the strategic use of comment responses as prompts and the resultant unprecedented volume of comments that far exceeds human capabilities.
Operational Mechanics of Bot Accounts
Bot accounts on TikTok are programmed to identify posts and creators with significant followings or engagement levels, ensuring that inserted comments reach a wide audience. The operation begins with web scraping tools like Beautiful Soup and Selenium, which are employed to harvest HTML and XML data from TikTok’s web version. This data includes engagement and content relevance metrics, enabling bots to target their efforts effectively.
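The harvesting step described above can be made concrete with a small, self-contained sketch. The markup, element names, and engagement figures below are hypothetical stand-ins for scraped page data (a real pipeline would run Beautiful Soup or Selenium against live HTML); to stay dependency-free, the sketch parses well-formed markup with Python’s standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical, well-formed markup standing in for a scraped page;
# real pipelines would parse live HTML with Beautiful Soup or Selenium.
markup = """<feed>
  <post id="a1"><likes>18400</likes><comments>2310</comments></post>
  <post id="b2"><likes>920</likes><comments>41</comments></post>
</feed>"""

def extract_engagement(xml_text):
    """Pull per-post engagement metrics out of raw markup."""
    root = ET.fromstring(xml_text)
    return [
        {"id": p.get("id"),
         "likes": int(p.findtext("likes")),
         "comments": int(p.findtext("comments"))}
        for p in root.findall("post")
    ]

# Rank posts by engagement so the highest-visibility posts surface first.
ranked = sorted(extract_engagement(markup), key=lambda p: p["likes"], reverse=True)
print(ranked[0]["id"])  # a1
```

The ranking step is what turns raw scraped data into a target list: posts with the largest audiences are prioritized.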
Once potential targets are identified, the bots rely on change data capture pipelines, surfaced through query services such as AWS Athena, to detect new comments and engagements in near real time. This monitoring allows the bots to insert comments that are contextually relevant and timed to maximize visibility and impact.
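The core of change data capture is reacting only to deltas between successive snapshots rather than reprocessing everything. A minimal sketch of that pattern (the comment identifiers are invented; a production system would consume a managed stream or query service rather than compare sets by hand):

```python
def diff_comments(previous_ids, current_ids):
    """Return comment IDs present now but absent from the prior snapshot:
    only these deltas need to trigger any downstream action."""
    return sorted(set(current_ids) - set(previous_ids))

# Two hypothetical snapshots of a post's comment section, taken moments apart.
snapshot_t0 = {"c1", "c2", "c3"}
snapshot_t1 = {"c1", "c2", "c3", "c4", "c5"}

new = diff_comments(snapshot_t0, snapshot_t1)
print(new)  # ['c4', 'c5']
```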
Integration of LLMs with LangChain and RAG
The bots’ capability to generate persuasive and contextually appropriate comments is powered by an integration of LLMs with LangChain and RAG. LangChain allows for the sequential processing of language tasks, enabling bots to understand the context of a post or a thread of comments before generating a response. This understanding is critical for creating comments that are not only relevant but also deeply engaging and likely to entice further interaction (Gurzick, 2009, Boot et al., 2009). While text-centric comments can effectively stimulate reflection and response (Baker et al., 2009), the emerging AI advancements in video production can similarly be strategically designed to enhance authenticity (Gurzick et al., 2009a).
RAG further augments this capability by combining information retrieval with LLMs’ generative prowess. In this framework, when a comment or post is identified as a target, it serves as a prompt for the RAG model. The “retrieval” component of RAG searches a vast dataset of disinformation narratives, argumentative constructs, and factually incorrect information. The “generation” component then customizes comments based on this retrieved data, ensuring that the misinformation is not only adjusted to the current dialogue but also discreetly interwoven within seemingly genuine discussion. This method of tailoring and strategically positioning content, informed by context and intended impacts, has been shown to be markedly effective in promoting desired behavior (Gurzick et al., 2009b). What previously required significant effort and time has now been automated and condensed into milliseconds.
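The “retrieval” half of this pipeline can be illustrated with a deliberately minimal similarity search. Production systems use dense vector embeddings rather than word counts, and the corpus snippets below are neutral placeholders, but the mechanic is the same: select the stored narrative closest to the live prompt, then hand it to the generator as grounding context.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus):
    """Return the stored document most similar to the query: the
    'retrieval' step of RAG; the result would then be supplied to an
    LLM as context for the 'generation' step."""
    q = Counter(query.lower().split())
    return max(corpus, key=lambda d: cosine(q, Counter(d.lower().split())))

# Neutral placeholder corpus standing in for a narrative database.
corpus = [
    "talking points about election security procedures",
    "talking points about military aid budgets",
    "talking points about media credibility",
]
prompt = "a comment thread debating military aid"
print(retrieve(prompt, corpus))  # talking points about military aid budgets
```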
Unprecedented Volume and Impact
The combination of these technologies enables the GRU’s bot accounts to operate at a scale and with a level of sophistication that far surpasses human capabilities. Unlike human operators, who are limited by physical constraints and cognitive processing speeds, bots can generate thousands of comments across multiple posts and conversations simultaneously. This unprecedented volume of comments ensures that disinformation can infiltrate a wide array of discussions, significantly increasing the likelihood of its acceptance and propagation among real users.
Furthermore, the strategic use of comment responses as prompts allows for a dynamic and adaptive approach to disinformation. Each interaction provides new data that can be used to refine and target subsequent comments, creating a feedback loop that continually enhances the efficacy of the misinformation campaign.
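This volume asymmetry cuts both ways: superhuman posting rates are also one of the simplest signals defenders can detect. A sketch of a naive rate heuristic follows; the per-minute ceiling is an assumed, illustrative threshold, not an empirically validated one:

```python
from datetime import datetime, timedelta

def comment_rate(timestamps):
    """Comments per minute over the observed window."""
    if len(timestamps) < 2:
        return 0.0
    span_minutes = (max(timestamps) - min(timestamps)).total_seconds() / 60
    return len(timestamps) / span_minutes if span_minutes else float("inf")

HUMAN_CEILING = 4.0  # assumed plausible upper bound on human commenting, per minute

t0 = datetime(2024, 3, 1, 12, 0, 0)
bot_like = [t0 + timedelta(seconds=2 * i) for i in range(60)]    # 60 comments in ~2 min
human_like = [t0 + timedelta(minutes=3 * i) for i in range(10)]  # 10 comments in 27 min

print(comment_rate(bot_like) > HUMAN_CEILING)    # True
print(comment_rate(human_like) > HUMAN_CEILING)  # False
```

A single threshold is easily evaded by rate-limited bots; the point is that volume, the bots’ chief advantage, is also their most observable trait.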
The methodology employed by the GRU on TikTok represents a significant escalation in the sophistication of cognitive warfare tactics. By harnessing the power of LLMs with LangChain and RAG, these bots are not only capable of generating disinformation at an unprecedented scale but also of adapting to and exploiting the nuances of human discourse and algorithms, making them a formidable tool in the arsenal of state-sponsored misinformation efforts.
Analysis
The GRU’s strategic use of TikTok through advanced bot operations and the deployment of Large Language Models (LLMs) integrated with LangChain and Retrieval Augmented Generation (RAG) models represents a nuanced evolution in the landscape of cognitive warfare. This section delves deeper into the implications of such operations, analyzing the impact on public discourse, the erosion of trust in democratic institutions, and the broader geopolitical ramifications.
Impact on Public Discourse
The infiltration of TikTok’s content ecosystem by GRU-operated bots has profound implications for public discourse. By generating and disseminating misinformation at an unprecedented scale, these operations exploit the platform’s algorithmic predispositions towards content that receives higher engagement, which often amplifies divisive narratives. The use of comments as a primary vector for spreading misinformation leverages the social proof heuristic, wherein users perceive comments with significant engagement as credible or worthy of trust (Silva, 2022, Naeem, 2021). This perception is manipulated to normalize disinformation, gradually altering public discourse. The strategic insertion of misinformation into highly engaged discussions not only ensures visibility but also fosters an environment where divisive and falsified narratives can flourish, polarizing communities and undermining the fabric of constructive social dialogue.
Erosion of Trust in Democratic Institutions
A critical target of the GRU’s misinformation campaigns is the trust in democratic institutions and processes. By crafting narratives that question the integrity of electoral systems, the efficacy of governmental bodies, and the conventional media’s credibility, these operations aim to sow seeds of doubt among the populace. The adaptive nature of LLMs, enhanced by RAG models, allows for the generation of highly persuasive and context-specific misinformation that resonates with existing societal grievances or anxieties. This erosion of trust is not incidental but a deliberate attempt to weaken democratic resilience, making societies more susceptible to external influences, emotional contagion, and manipulation.
Geopolitical Ramifications
On a broader scale, the GRU’s operations on TikTok extend beyond the immediate domestic social and political consequences to encompass significant international geopolitical ramifications. By undermining public support for Ukraine and casting doubt on the commitments of NATO and FVEY countries, Russia advances its strategic interests with minimal direct confrontation. The manipulation of public opinion regarding the allocation of resources—such as military aid to Ukraine—weakens collective defense initiatives and erodes the unity of international alliances. Furthermore, the dissemination of misinformation targeting the defense industrial base highlights a sophisticated approach to destabilizing adversaries by attacking the economic and technological pillars of military capability.
Algorithmic Exploitation and the Monolith Advertising Algorithm
The GRU’s success in leveraging TikTok for cognitive warfare is intricately linked to its exploitation of the platform’s “Monolith” advertising algorithm. This algorithm, designed to maximize user engagement and time spent on the platform, creates an affordance (Norman, 1990) that facilitates the spread of misinformation by prioritizing content that generates strong reactions, regardless of its veracity. The lack of robust content verification mechanisms, combined with the algorithm’s susceptibility to manipulation, underscores the vulnerabilities inherent in TikTok’s content distribution model. This exploitation reveals a critical oversight in the design of social media algorithms, where the emphasis on engagement metrics overshadows the imperative for information integrity.
Our comprehensive analysis of the GRU’s misinformation campaigns on TikTok reveals a multifaceted strategy aimed at destabilizing democratic societies, eroding trust in institutions, and advancing Russia’s geopolitical objectives. The operation’s sophistication, underscored by the use of advanced LLMs and algorithmic manipulation, represents a significant escalation in the realm of cognitive warfare. Addressing this threat requires a concerted effort encompassing technological solutions, strategic counter-narratives, and international cooperation to safeguard the integrity of public discourse and preserve free thought and democratic resilience against external manipulations.
Discussion
In addressing the sophisticated use of TikTok by the GRU for disseminating misinformation, the role of the DoD, in collaboration with other intelligence agencies, becomes paramount. However, the traditional paradigms of intelligence operations and countermeasures may not suffice in the digital realm where public perception and trust are continuously at stake. This section advocates for a strategy of radical transparency, involving the public in understanding and defending against misinformation, thus fostering a more resilient democratic society.
Collaboration Across Intelligence Agencies
The intricacies of modern misinformation campaigns necessitate a collaborative approach among intelligence agencies. The GRU’s operations on TikTok, characterized by their technological sophistication and psychological astuteness, require counteractions that are equally advanced and nuanced. This involves not just the DoD but also the National Security Agency (NSA), Central Intelligence Agency (CIA) and other entities within the intelligence community. By pooling resources, expertise, and data, these agencies can develop a more comprehensive understanding of the threat landscape and devise more effective countermeasures. Collaboration can extend to international partners, reflecting the global nature of the challenge and the need for a concerted effort to safeguard democratic values.
Strengthening Trust through Radical Transparency
The cornerstone of this proposed strategy is radical transparency. In the context of countering misinformation, this means providing the public with access to data and methodologies used in identifying and neutralizing misinformation campaigns. Instead of a paternalistic “trust us” approach, the message should be “trust us because you can see the data and methodology for yourself.” This transparency serves multiple purposes:
- Demystifying Intelligence Operations: By making the processes of identifying and countering misinformation open to public scrutiny, intelligence agencies can demystify their operations, dispelling myths and misconceptions that fuel conspiracy theories.
- Building Public Confidence: Transparency in the methodologies employed for safeguarding public discourse reinforces confidence in democratic institutions. When the public has direct access to the evidence of foreign interference and understands the efforts made to counter it, trust in the system is strengthened.
- Empowering the Public: Educating the public about the nature of misinformation and the tactics used by adversaries empowers individuals to critically evaluate the information they encounter. This informed skepticism is a potent defense against misinformation.
- Complicating Adversarial Strategies: When the methodologies and data underpinning counter-misinformation efforts are transparent, it becomes more challenging for adversaries to devise effective counterstrategies. Openness about the detection and mitigation processes forces adversaries to constantly adapt, draining their resources and diminishing the effectiveness of their campaigns.
Implementation Considerations
Implementing radical transparency requires careful consideration of security and privacy concerns. While the overarching goal is openness, it is crucial to balance this with the need to protect sources, methods, and the privacy of individuals. This might involve anonymizing data or providing access to data and methodologies through controlled environments that protect sensitive information.
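One anonymization technique compatible with this balance is keyed (salted) hashing of account identifiers: pseudonyms stay stable within a data release, so researchers can follow one account across records, yet cannot be reversed to real handles without the key. A minimal sketch (the salt value and truncation length are illustrative assumptions):

```python
import hashlib
import hmac

# Hypothetical secret held by the publishing agency and rotated per release.
SECRET_SALT = b"rotate-me-each-release"

def pseudonymize(account_id):
    """Keyed hash of an account identifier: deterministic within a
    release, but not reversible without the salt."""
    digest = hmac.new(SECRET_SALT, account_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

a = pseudonymize("user_12345")
b = pseudonymize("user_12345")  # same input, same pseudonym
c = pseudonymize("user_67890")  # different input, different pseudonym
print(a == b, a == c)  # True False
```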
Moreover, transparency initiatives should be accompanied by public education efforts. Understanding complex data and methodologies requires a certain level of digital literacy. Educational programs aimed at enhancing the public’s ability to critically assess information can maximize the benefits of transparency.
In the face of sophisticated threats to public discourse and democratic institutions, the response must be innovative and inclusive. Collaboration among intelligence agencies, underpinned by a commitment to radical transparency, offers a path forward. By inviting public scrutiny and participation, democratic societies can not only counter the immediate threats posed by misinformation but also build a foundation of trust and resilience that safeguards against future challenges. This approach does not merely aim to protect democratic institutions but to strengthen them through active engagement and the empowerment of the public.
Recommendations
As the United States approaches a critical election cycle in November 2024, the urgency for immediate action to counteract the sophisticated misinformation campaigns orchestrated by the GRU and similar adversarial entities cannot be overstated. The mission of the DoD, alongside other intelligence agencies, to defend democracy, extends beyond the realms of traditional kinetic warfare into the increasingly pivotal arena of cognitive warfare. In this context, defending democracy necessitates a proactive and innovative response to the challenges posed by the digital dissemination of misinformation.
Immediate Action for the 2024 Election Cycle
The proximity of the 2024 election cycle underscores the necessity for swift and decisive measures to safeguard the integrity of the democratic process. Misinformation campaigns, particularly those aimed at undermining election security, eroding trust in democratic institutions, and polarizing the electorate, pose a significant threat to the foundation of democracy. Immediate action is required to effectively identify, counter, and neutralize these campaigns.
Cognitive Warfare as a Defense Priority
The defense of democracy in the age of digital information must prioritize the battle against misinformation. The analogy of traditional warfare is apt in illustrating the current threat landscape: our adversaries are dropping bombs on our population. However, unlike the munitions of kinetic warfare, these bombs are composed of lies, propaganda, and manipulated narratives. The targets of these attacks are not factories or military installations but the very fabric of our society, reaching into our smartphones, computers, and media outlets. This insidious form of warfare seeks not to destroy physical infrastructure but to erode the trust, cohesion, and values that underpin democratic society.
Radical Transparency and Public Engagement
To counter this threat, the recommendations for the DoD and intelligence agencies include:
- Development of an Open-Source Dashboard: The creation of an open-source dashboard that provides real-time insights into misinformation campaigns, including their origins, targets, and tactics. This tool should be designed to offer the public and researchers transparent access to data and analysis, empowering them to understand and recognize misinformation efforts.
- Enhanced Collaboration with Social Media Platforms: Engaging with social media companies, including TikTok, to share intelligence and strategies for the identification and removal of inauthentic accounts and misinformation content. This collaboration should aim to improve the platforms’ algorithms to resist manipulation by adversarial actors.
- Public Education Initiatives: Launching comprehensive public education campaigns to enhance digital literacy and critical thinking among the electorate. These initiatives should focus on equipping citizens with the skills to critically evaluate information, understand the tactics used by misinformation agents, and foster a resilient information ecosystem.
- Legislative and Policy Measures: Advocating for and supporting legislative and policy measures that enhance the transparency, accountability, and responsibility of social media platforms in combating misinformation. This includes the exploration of regulatory frameworks that balance the need for free speech with the imperative to protect the democratic discourse from foreign interference.
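As a sketch of the simplest thing the proposed dashboard’s backend might do, the snippet below aggregates hypothetical flagged-content records by campaign for display; the field names and figures are invented for illustration:

```python
from collections import defaultdict

# Invented flagged-content records of the kind a dashboard might ingest.
records = [
    {"campaign": "election-integrity", "platform": "tiktok", "accounts": 12},
    {"campaign": "aid-to-ukraine", "platform": "tiktok", "accounts": 7},
    {"campaign": "election-integrity", "platform": "tiktok", "accounts": 5},
]

def summarize(rows):
    """Total flagged accounts per campaign, for a dashboard's top-line view."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["campaign"]] += row["accounts"]
    return dict(totals)

print(summarize(records))  # {'election-integrity': 17, 'aid-to-ukraine': 7}
```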
Conclusion
Throughout its storied history, America has stood as a beacon of resilience and unity in the face of adversity. From the fires of the Revolutionary War and the War of 1812, where our cities were engulfed in flames, to the divisive turmoil of the Civil War, America has demonstrated an unwavering commitment to its principles and the defense of its sovereignty. The Civil War, in particular, tested the fabric of our nation, yet it ultimately showed us that we are stronger together, united under a single cause. This unity and strength propelled us to support our European allies in the monumental conflicts of World War I and World War II, facing down tyranny to secure freedom not just for ourselves but for the world.
The attack on Pearl Harbor marked a pivotal moment in our history, one where the choice was stark, and the stakes were survival. We chose to fight, to rally against a clear and present danger, proving once again that when America is challenged, we rise to the occasion with courage and determination.
Today, we face a new kind of warfare, one that does not confront us on traditional battlefields but in the cyber domain, targeting the very cognition of our society. This cognitive warfare, waged with lies, misinformation, and propaganda, seeks not to destroy our infrastructure but to undermine our trust, our unity, and our democratic values. Our adversaries are committed to making our nation weaker, exploiting the vulnerabilities of the digital age to sow discord and chaos.
Yet, just as we have in the past, we must rise to meet this challenge. Cognitive warfare must be treated with the same seriousness and urgency as kinetic warfare. The battlefront may have changed, but the essence of what is at stake remains the same: our freedom, our democracy, and our way of life. We must act immediately, marshaling all resources at our disposal, from the DoD to intelligence agencies, from technological innovations to the spirit of the American people.
As we stand on the brink of the 2024 election cycle, the need for action has never been more critical. The defense of our democracy is not only about protecting against physical threats but also about safeguarding our information space, our public discourse, and the integrity of our democratic processes.
Let this moment in history be remembered not as a time when we faltered in the face of a new kind of enemy but as a time when we adapted, innovated, and united to defend what we hold dear. As in times prior, we are aware of this threat and we have both the resources and aptitude to respond. Let us draw inspiration from our past, from the resilience we have shown and the battles we have won, to face this new era of warfare with resolve and determination. Together, as a nation, we have overcome every challenge posed to us. Together, we will defend our democracy against cognitive warfare, ensuring that the beacon of freedom and unity that is America continues to shine brightly for generations to come.
References
BAKER, L., SONNENSCHEIN, S., SULLIVAN, C., BOOT, L. & GURZICK, D. Engaging adolescents in discussions about their education through an Internet-based multimedia community. Society for Research in Child Development (SRCD), 2009 Denver, CO.
BOEKER, M. & URMAN, A. An empirical investigation of personalization factors on TikTok. Proceedings of the ACM web conference 2022, 2022. 2298-2309.
BOOT, L., BAKER, L., SONNENSCHEIN, S., GURZICK, D. & SULLIVAN, C. 2009. The Fieldtrip Project. International Journal of Ubiquitous Learning, 1, 79-88.
GOTTFRIED, J. & ANDERSON, M. 2024. Americans’ Social Media Use. Pew Research Center.
GURZICK, D. 2009. Designing deeply engaging online communities for adolescents. Ph.D. Doctoral Dissertation, UMBC.
GURZICK, D., LUTTERS, W. G. & BOOT, L. Preserving an authentic voice: Balancing the amateur and the professional in teen online video production. ACM Conference on Supporting Groupwork (GROUP), 2009a Sanibel Island, FL. ACM.
GURZICK, D., WHITE, K. F. & LUTTERS, W. G. A view from Mount Olympus: The impact of activity tracking tools on the character and practice of moderation. ACM Conference on Supporting Groupwork (GROUP), 2009b Sanibel Island, FL. ACM, 361-370.
IONESCU, C. G. & LICU, M. 2023. Are TikTok Algorithms Influencing Users’ Self-Perceived Identities and Personal Values? A Mini Review. Social Sciences, 12, 465.
MATSA, K. E. 2023. More Americans are getting news on TikTok, in contrast with most other social media sites [Online]. Pew Research Center. Available: https://pewrsr.ch/49Er7sE [Accessed].
NAEEM, M. 2021. The role of social media to generate social proof as engaged society for stockpiling behaviour of customers during Covid-19 pandemic. Qualitative Market Research: An International Journal, 24, 281-301.
NORMAN, D. A. 1990. The design of everyday things, New York, Doubleday.
SILVA, M. 2022. Addressing cyber deception and abuse from a human factors perspective. University of Florida.