We need a Center for Cognitive Security. As a former Defense Advanced Research Projects Agency (DARPA) program manager, I created and managed a $50 million program, Social Media in Strategic Communication, that resulted in more than 200 publications, produced the first major attempt to build a social media gaming platform and laboratory test bed at scale, and helped lay the groundwork for the creation of a science of social media. Here is my draft of a proposal for such a center, including an example of the type of long-term research agenda I believe is required. I believe there is now a solid foundation of research and practical experience from around the world on which such a center can build. The need is acute. Please send your comments to rand.waltzman@information-professionals.org.
Proposal for a Center for Cognitive Security (COGSEC)
by Rand Waltzman
The massive explosion of behavioral data made available by the advent of social media has empowered researchers to make significant advances in our understanding of the dynamics of large groups online. However, as this field of research expands, opportunities multiply to use this understanding to forge powerful new techniques to shape the behavior and beliefs of people globally. These techniques can be tested and refined through the data-rich online spaces of platforms like Twitter, Facebook, and, looking to the social multimedia future, Snapchat.
These techniques may be put to ethical or unethical ends. One might imagine using these techniques to encourage positive norms and stamp out pernicious misinformation online. Alternatively, one might use these technologies to spread disinformation or to break apart existing social bonds and erode trust.
Cognitive Security (COGSEC) is a new field that focuses on this evolving frontier and suggests that in the future researchers, governments, social platforms, and private actors will be engaged in a continual arms race to influence — and protect from influence — large groups of users online. Although COGSEC emerges from social engineering and discussions of social deception in the computer security space, it differs in a number of important respects. First, whereas the focus in computer security is on the influence of a few individuals, COGSEC focuses on the exploitation of cognitive biases in large public groups. Second, while computer security focuses on deception as a means of compromising computer systems, COGSEC focuses on social influence as an end unto itself. Finally, COGSEC emphasizes formality and quantitative measurement, distinct from the more qualitative discussions of social engineering in computer security.
The US Department of Defense Dictionary of Military Terms defines the Information Environment (IE) as “the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information.” The IE consists of a wide variety of complex, interacting, and interconnected components, ranging from individual people, to groups of individuals at multiple scales of organization, to physical systems such as the power grid and medical facilities. The decisions and actions these components take, both individually and collectively, simultaneously shape and are shaped by the IE in which we live. The nature of interaction within the IE is rapidly evolving, and old models are becoming irrelevant faster than we can develop new ones. The result is uncertainty that leaves us exposed to dangerous influences without proper defenses. The goal of the Center for Cognitive Security is to create and apply the tools needed to discover and maintain fundamental models of our ever-changing IE and to defend us in that environment, both as individuals and collectively. The Center will bring together experts working in areas such as cognitive science, computer science, engineering, social science, security, marketing, political campaigning, public policy, and psychology to develop both a theoretical and an applied engineering methodology for managing the full spectrum of information environment security issues.
The IE can be broadly characterized along both technical and psychosocial dimensions. The solutions we seek require a holistic and multidisciplinary view of the entire space. IE security today (often referred to as cybersecurity) is primarily concerned with defense of the purely technical features of the IE: for example, defense against denial-of-service attacks, botnets, massive thefts of intellectual property, and other attacks that typically take advantage of security vulnerabilities at low levels of code, such as the operating system or firmware. This view is too narrow. For example, little attention has been paid to defending against incidents like the Associated Press Twitter hack of April 2013, in which a group hijacked the Associated Press Twitter account to put out a message reading “Two explosions in the White House and Barack Obama is injured.” The result of this message, carrying the weight of the Associated Press behind it, was a drop and recovery of roughly $136 billion in equity market value over a period of about five minutes. This attack exploited both technical (hijacking the account) and psychosocial (understanding how the markets would react) features of the IE. An attack that exploited purely psychosocial features was a September 2013 incident in India designed to fan the flames of Hindu-Muslim violence: a gruesome video of two men being beaten to death was posted to the Internet with a caption claiming it depicted two Hindu men being killed by a Muslim mob. It took 13,000 Indian troops to put down the resulting violence. It turned out that while the video did show two men being beaten to death, they were not the men the caption claimed, and the incident had not even taken place in India. This attack required no technical skill whatsoever; it required only the psychosocial understanding to choose the right place and the right time to post the video to achieve the desired effect.
The Center’s goal of creating a holistic approach to COGSEC will require bringing together expertise ranging from the purely technical to the psychosocial. While much time is spent discussing the technical security of computer networks, less well examined is the extent to which persistent computer networks also create threats to the security of social systems. Operations through the Internet and New Media (INM) can be used to influence, misinform, manipulate, and erode trust among targeted communities and within the public at large. Alternatively, they can be used to ensure greater trust among members, reduce susceptibility to false information (cognitive vulnerability), and enhance awareness of malicious third parties trying to exert influence. In short, INM enable the large-scale “hacking” of group behavior, for good or ill, and one of the goals of the Center is the development of defenses against such social hacking. In the computer security space, “social engineering” has typically been the venue for discussing the hacking of social interaction. However, its scope has for the most part been limited to a narrow field of action: one-on-one conversations that extract sensitive information from gullible members of a target organization. The work of the Center will update and expand this limited concept to meet the modern realities of influence.
The creation and controlled dissemination of information and disinformation for purposes of social and political manipulation, and as an integral part of warfare, have been well known since ancient times. However, INM have resulted in massive changes in the time, space, and cost scales of information flows. Diffusion of information is now practically instantaneous across the entire globe. This has produced qualitatively new phenomena in the landscape of influence and persuasion in three major ways. First, the ability to influence is now democratized, in that any individual or group has the potential to communicate with and influence large numbers of others online in a way that would have been prohibitively expensive in the pre-Internet era. Second, influence is now significantly more quantifiable, in that data from INM can be used to measure the response of crowds to influence efforts and the impact of those operations on the structure of the social graph. Finally, influence is far more concealable, in that users may be influenced by information provided to them by anonymous strangers, or even by the simple design of an interface. A complete understanding of these phenomena will be needed to develop appropriate defenses against malicious actors, and achieving it requires the type of interdisciplinary approach advocated by the Center.
Here are some examples of research issues the Center will address, all of which require a truly interdisciplinary approach:
• Develop algorithms for real-time detection and tracking of memes at scale (a minimal illustrative sketch follows this list).
• Develop specialized algorithms to recognize purposeful or deceptive messaging and misinformation, persuasion campaigns, and influence operations across social media.
• Develop scalable, efficient, and accurate social malware detection algorithms.
• Integrate algorithms for meme detection and tracking with algorithms for detecting deception, persuasion, and influence operations.
• Develop high-fidelity diffusion models for messages, narratives, and information across social media (see the second sketch following this list).
• Demonstrate methods for countering adversary influence operations using techniques of semi-automated narrative creation based on predictive social dynamics models.
• Develop algorithms for sentiment analysis of content on emerging social multimedia platforms.
• Design and incorporate social environmental models of context and its dynamics into algorithms that recognize purposeful or deceptive messaging and misinformation, persuasion campaigns, and information operations across social media.
• Integrate image understanding and speech/natural language technologies to extend previously developed text-centric algorithms to social multimedia platforms.
• Design algorithms for real-time trans-media (social media, print media, television, etc.) detection and tracking of memes and other information flows.
• Extend methods for countering adversary influence operations using semi-automated narrative creation based on predictive social dynamics models to social multimedia platforms.
• Develop techniques to track and analyze the dynamics and interactions of multiple competing narratives.
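To make the first agenda item concrete, here is a minimal sketch of one common approach to detecting bursting memes in a message stream: count candidate terms (here, hashtags) in fixed time windows and flag any term whose current count jumps well above its recent baseline. The BurstDetector class, the history length, and the z-score threshold are illustrative assumptions, not specifications from this proposal.

```python
# Minimal sketch of windowed burst detection for candidate memes (hashtags).
# All names, window sizes, and thresholds are illustrative assumptions.
from collections import Counter, defaultdict, deque


class BurstDetector:
    """Flag terms whose count in the latest time window exceeds
    mean + z * stddev of that term's recent history."""

    def __init__(self, history_windows=12, z=3.0):
        self.history = defaultdict(lambda: deque(maxlen=history_windows))
        self.z = z

    def update(self, window_counts):
        """Feed the term counts for one window; return bursting terms."""
        bursting = []
        for term in set(window_counts) | set(self.history):
            count = window_counts.get(term, 0)
            past = self.history[term]
            if len(past) >= 3:  # require a minimal baseline first
                mean = sum(past) / len(past)
                std = (sum((c - mean) ** 2 for c in past) / len(past)) ** 0.5
                # Floor the deviation at 1.0 so flat baselines still work.
                if count > mean + self.z * max(std, 1.0):
                    bursting.append(term)
            past.append(count)
        return bursting


# Toy usage: several quiet windows, then a sudden spike in one hashtag.
detector = BurstDetector()
for _ in range(5):
    detector.update(Counter({"#news": 5, "#sports": 4, "#hoax": 1}))
spike = Counter({"#news": 6, "#sports": 4, "#hoax": 40})
print(detector.update(spike))  # -> ['#hoax']
```

In a deployed system the counts would arrive from a streaming pipeline and the per-term statistics would be sharded across machines, but the window-plus-baseline pattern stays the same.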
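For the diffusion-modeling item, the independent cascade model is one standard starting point: each newly activated node gets a single chance to activate each of its inactive neighbors with probability p. The toy follower graph, seed set, and value of p below are made up for illustration; the proposal does not prescribe this particular model.

```python
# Minimal sketch of the independent cascade diffusion model over a toy
# follower graph. The graph, seed set, and probability p are illustrative.
import random


def independent_cascade(graph, seeds, p, rng):
    """Simulate one cascade; return the set of nodes the message reached."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in graph.get(node, ()):
                # Each newly activated node gets exactly one chance
                # to activate each still-inactive neighbor.
                if neighbor not in active and rng.random() < p:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return active


# Edges point from a user to the users who see that user's posts.
graph = {
    "a": ["b", "c"],
    "b": ["c", "d"],
    "c": ["d", "e"],
    "d": ["e"],
}

# Estimate the expected reach of seeding a message at node "a".
runs = 1000
total = sum(
    len(independent_cascade(graph, {"a"}, p=0.3, rng=random.Random(i)))
    for i in range(runs)
)
print(f"average reach from 'a': {total / runs:.2f} nodes")
```

Averaging over many simulated cascades estimates a message's expected reach from a given seed set, which is precisely the quantity an influence operation would try to maximize and a counter-influence effort would try to suppress.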