More Cowbell Unlimited recently released a technical paper which presents a holistic Information Warfare Defense (IW-D) Standard. The Standard may be adopted by any country or organization. The technical paper includes a novel IW Attack and Defense Methodology along with plausible human, machine, machine-human, machine-machine, and emerging IW attack scenarios and mitigation strategies in the Appendix.
Comments on the technical paper are welcome.
The standard is available for free on the More Cowbell Unlimited website.
The information age is a glorious dawning which promises hope, abundance, and solutions to vexing challenges. It has also turned every smart device into an information warfare (IW) attack vector delivery vehicle and the entire planetary ecosystem into a cognitive battlespace. The United States and its Allies are under IW attack at this very moment. According to domain experts, peer adversary and non-state offensive cyber domain capabilities are likely to exceed the United States’ ability to defend critical infrastructure (Cyber-enabled Information Operations 2017). An information arms race is underway, and “peacetime is the decisive phase of operations” (Cyber Endeavour Conference with non-attributional Chatham House Rules 2019). IW domain scholars observe that technologies already exist for scaling IW attacks manually; the next stage of IW technology development is to automate them (Paul and Matthews 2016; Waltzman 2017). IW should be treated as a chronic disease with no cure: it cannot be eliminated, but it can be managed (Waltzman 2019).
IW and the Weaponized AI Landscape
In February 2019, The White House issued an Executive Order on Maintaining American Leadership in Artificial Intelligence (AI). The Order’s mandate is far-reaching. Among many other things, it directs Federal agencies to ensure technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation (Executive Order on Maintaining American Leadership in Artificial Intelligence 2019).
The information age presents vivid new security challenges related to information weaponry. For instance, Russia sowed confusion and distrust during the 2016 United States Presidential election quite effectively with high-volume, multi-channel (Paul and Matthews 2016), micro-targeted information and disinformation campaigns (Mueller 2019). Russia’s highly analytical IW technique combines models of adversary decision-making processes with information attack vectors designed to exploit process weaknesses, meticulously introducing into human or machine processes data that incline the adversary toward taking an action that favors the attacker (Chotikul 1986; Thomas 2004; Bicknell and Krebs 2019).
There are many ways to harness AI in an offensive capacity to wage warfare. For example, derived models of adversaries’ societies and political landscapes may be probed for weaknesses, suggesting information vectors which exploit those weaknesses (Bicknell and Krebs 2019). Convincing text “spambots” could unleash hard-to-stop torrents of realistic fake text, judged by one developer as “too dangerous to release” into the public domain (Whittaker 2019). IBM’s Project Debater has shown considerable progress in enabling machine intelligences to persuasively debate humans (IBM Research AI – Project Debater 2018). Unethical micro-targeting, in which AI is used in conjunction with advertising micro-targeting to transmit otherwise contradictory marketing messages to unsuspecting recipients, was heavily covered in the media (Bicknell and Krebs 2019; Cambridge Analytica Scandal Raises New Ethical Questions About Microtargeting 2018; Watson 2017). Concern is growing about the potential use of AI “DeepFake” technology (Worldwide Threat Assessment 2019) as a more advanced form of traditional propaganda video manipulation, as seen in a recent viral fake video of United States House Speaker Nancy Pelosi (Wait, is that video real? 2019). Hybrid trolling operations target the credibility and stability of governments as well as public support for them, which may then justify the waging of a conventional war campaign, as observed with Russia’s annexation of the Crimean Peninsula in 2014 (Internet Trolling as a hybrid warfare tool: the case of Latvia 2017).
Technological advancements are a double-edged sword. AI is a tool which promises great things for humanity, such as reducing poverty and allowing creativity to flourish; however, there is a dark side which we believe must be a focal point of national security. Unsurprisingly, hunger for dominance and money are present in this discussion, too. Feeding large troves of private information into an AI to create a “World Brain” is plausible and provides a vehicle to project power in various ways (Google and the World Brain 2013; Pomeroy and Wells 2017). One way to monetize this information and project power from it is through advertisements. Another way, perhaps one we are already seeing, is through IW. As the world becomes more reliant upon information, IW enhanced with weaponized AI is a major threat.
In 1996, the Defense Science Board published what may be one of the most comprehensive IW-D touchstones available, recommending over 50 actions designed to better prepare the Department of Defense (DoD) for this form of warfare (Report of the Defense Science Board Task Force on Information Warfare-Defense 1996). Since the 2016 election, Congressional and Defense leaders have pushed to strengthen our strategic IW capabilities and create a Chief Information Warfare Officer within DoD (National Defense Authorization Act 2018; SASC Wants New Chief Information Warfare Officer with Authority Over Space 2017; Wanted: Chief Information Warfare Officer 2018).
The FOCAL IW-D Standard
The vision for the FOCAL IW-D Standard is to be adopted extensively in order to preserve freedom and individual liberty. Corporations, Federal government agencies, state and local governments, grassroots efforts, and entire nations may use the Standard to develop a whole-of-society effort to manage the IW threat. Public and private critical infrastructure participants are especially encouraged to adopt the Standard, or at least to take the growing IW threat seriously and adopt formal measures to counter it.
“FOCAL” is an adjective which means “relating to the center or main point of interest.” We believe this is highly appropriate. IW should be a national security focal point. AI advances will fuel IW capabilities which are difficult to fathom. This standard will help the nation and organizations develop IW-D mindsets and competencies.
The standard is divided into five interlocking tenets. Together, these tenets help organizations understand IW, shift culture, train the workforce, methodically identify vulnerabilities, prepare for attack, recover from attack, and contribute to the IW-D community as vested stakeholders.
- Framework: Organizations, societies, and nations developing an IW-D program are accountable according to principles of freedom and individual liberty. IW-D programs should be designed in a way that respects the rule of law, human rights, democratic values and diversity. A comprehensive framework is transparent and should include appropriate safeguards. Moreover, this standard does not exist in a vacuum. Rather, continual auditing and integration with appropriate personnel security, physical security, cyber security, and process improvement frameworks is vital for program success.
- Operations: No two organizations are the same. An effective IW-D program consists of a strong analysis component which is customized for the organization, its competitive environment, and strategic goals. This includes red teaming with lessons learned, data driven vulnerability detection and cataloging, and antifragility analyses which might uncover single points of failure.
- Communication & Crisis Response: Organizations must communicate to the world and to their workforces that they are serious about IW-D. Communication campaigns are critical for marshaling grassroots efforts, as well. Brand reputation and management are critical to maintaining a growing bottom line. IW-D crisis response plans help organizations communicate and reduce risk before, during, and after suspected IW attacks.
- Administration and Training: Simply knowing that IW is possible, combined with relatively simple vigilance practices such as “Just Doesn’t Look Right” (JDLR), goes a long way toward an effective IW-D program. This requires general training as well as role-specific training. Training should be highly engaging and demand effort from the entire organization.
- Leadership, Culture, Community: Leadership from the top drives cultural shifts and is absolutely essential for effective IW-D. Engaged executives, who lead by example and set the tone within organizations, communicate to the entire workforce how devastating IW can be to the organization and the nation. IW is an active and creative space; leadership engagement within thought communities also demonstrates commitment to IW-D. This tenet encourages everyone to develop an IW-D mindset.
We use the Standard during client engagements to provide data science services related to IW-D.
Bicknell, John W, and Werner G Krebs. 2019. “Process Mining: The Missing Capability in Information Warfare.” ResearchGate. https://www.researchgate.net/publication/331744765_Process_Mining_The_Missing_Piece_in_Information_Warfare (May 10, 2019).
“Cambridge Analytica Scandal Raises New Ethical Questions About Microtargeting.” 2018. NPR.org. https://www.npr.org/2018/03/22/596180048/cambridge-analytica-scandal-raises-new-ethical-questions-about-microtargeting (June 2, 2019).
Chotikul, Diane. 1986. The Soviet Theory of Reflexive Control In… Monterey, California: Naval Postgraduate School. http://nsarchive.gwu.edu/dc.html?doc=3901091-Diane-Chotikul-The-Soviet-Theory-of-Reflexive (February 19, 2019).
Cyber Endeavour Conference with non-attributional Chatham House Rules. 2019.
Cyber-Enabled Information Operations. 2017. (SASC) Washington DC. https://www.armed-services.senate.gov/hearings/17-04-27-cyber-enabled-information-operations (June 12, 2019).
“Executive Order on Maintaining American Leadership in Artificial Intelligence.” 2019. The White House. https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ (May 15, 2019).
Google and the World Brain. 2013. https://www.amazon.com/Google-World-Brain-Brendan-Price/dp/B07K5LMFL9 (June 8, 2019).
“IBM Research AI – Project Debater.” 2018. IBM Research AI Project Debater. https://www.research.ibm.com/artificial-intelligence/project-debater// (June 2, 2019).
Internet Trolling as a Hybrid Warfare Tool: The Case of Latvia. 2017. NATO Strategic Communications Center of Excellence. https://www.stratcomcoe.org/internet-trolling-hybrid-warfare-tool-case-latvia-0 (June 12, 2019).
Paul, Christopher, and Miriam Matthews. 2016. “The Russian ‘Firehose of Falsehood’ Propaganda Model.” https://www.rand.org/pubs/perspectives/PE198.html (June 11, 2019).
Pomeroy, Barry, and H. G. Wells. 2017. H.G. Wells’ World Brain: Annotated with an Introduction by Barry Pomeroy, PhD. Bear’s Carvery.
“Report of the Defense Science Board Task Force on Information Warfare-Defense.” 1996. https://www.hsdl.org/?abstract&did= (June 1, 2019).
“SASC Wants New Chief Information Warfare Officer With Authority Over Space.” 2017. https://spacepolicyonline.com/news/sasc-wants-new-chief-information-warfare-officer-with-authority-over-space/ (May 14, 2019).
“National Defense Authorization Act for Fiscal Year 2018, H.R.2810, 115th Congress (2017-2018).” 2018. https://www.congress.gov/bill/115th-congress/house-bill/2810/text/eas (May 16, 2019).
Thomas, Timothy. 2004. “Russia’s Reflexive Control Theory and the Military.” The Journal of Slavic Military Studies 17(2): 237–56.
“Wait, Is That Video Real? The Race against Deepfakes and Dangers of Manipulated Recordings.” 2019. USA TODAY. https://www.usatoday.com/story/tech/2019/05/13/deepfakes-why-your-instagram-photos-video-could-be-vulnerable/3344536002/ (June 2, 2019).
Waltzman, Rand. 2017. “SASC Testimony: The Weaponization of Information.” https://www.rand.org/pubs/testimonies/CT473.html (June 9, 2019).
———. 2019. “Proposal for a Center for Cognitive Security.”
“Wanted: Chief Information Warfare Officer.” 2018. SIGNAL Magazine. https://www.afcea.org/content/wanted-chief-information-warfare-officer (May 14, 2019).
Watson, Sara. 2017. “Perspective | Russia’s Facebook Ads Show How Internet Microtargeting Can Be Weaponized.” Washington Post. https://www.washingtonpost.com/news/posteverything/wp/2017/10/12/russias-facebook-ads-show-how-internet-microtargeting-can-be-weaponized/ (February 22, 2019).
Whittaker, Zack. 2019. “OpenAI Built a Text Generator so Good, It’s Considered Too Dangerous to Release.” TechCrunch. http://social.techcrunch.com/2019/02/17/openai-text-generator-dangerous/ (May 29, 2019).
“Worldwide Threat Assessment of the Intelligence Community.” 2019. https://www.dni.gov/files/ODNI/documents/2019-ATA-SFR---SSCI.pdf (June 2, 2019).