Searching for AI Best Practice in DoD: The Great Camp Divide and Lessons from a Commercial Setting

By Paul Lieber and Janis Butkevics

During Phoenix Challenge 23-2 (hosted by the Georgia Tech Research Institute in Atlanta, GA), a prominent and common theme across its panels, keynote speeches, and six working groups was the application of artificial intelligence (AI) to cognitive security. Everyone present recognized the very real impact of both AI-generated content and AI-driven process automation on national security, as well as the specific need for the US government (USG) to formally posture itself so that this impact, deliberately considered in policy, can also shape strategy and tactical execution.

A sister topic discussed at Phoenix Challenge 23-2 was AI’s most suitable role, if any, in disinformation and misinformation activities and countermeasures. At present, some US and partner-nation defense leaders default to adopting automated technologies without sufficient counsel or informed consideration of whether those capabilities will address most, or indeed any, truth and manipulation detection requirements.

Beyond Phoenix Challenge 23-2, and while AI remains a hot topic of discussion and thought pieces, use of and trust in AI lag far behind the hype it currently generates. Because the technical skill set required both to build complex AI systems and to interpret the decisions they produce remains rare, the technology can appear ominously amorphous and is thus viewed by some as dangerous. In contrast to this hesitant and resistant camp, a second camp entirely dismisses the risks posed by rampant AI decision making or rushes to implement unvetted capabilities. Ask 100 random people for their perceptions of ChatGPT and both dichotomously opposed camps will surface; a May 2023 Pew Research Center study similarly found quite varied opinions on the utility of that capability and on the appropriate application of AI in general.

For influence activities, this camp divide can become quite heated, as AI application may significantly impact both the content and the contextual understanding needed to conduct them. While AI can boost analyst productivity by auto-summarizing a day’s worth of foreign news into a contextually rich paragraph or quickly extracting conversation trends for a key target audience, AI recommendations and decisions are never perfect without human-led guidance (e.g., subsequent analysis, model training, and refinement).
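To make the trend-extraction half of that workflow concrete, a minimal sketch follows. The press snippets, stopword list, and function names are invented for illustration and do not represent any fielded DoD or vendor capability; the machine output is deliberately framed as input to an analyst rather than as a finished judgment.

```python
from collections import Counter
import re

# Invented, illustrative snippets standing in for translated foreign-press items.
snippets = [
    "Port authority announces new customs checks on grain shipments",
    "Opposition rally protests customs dispute over grain exports",
    "Grain exporters warn customs delays could raise food prices",
]

# Tiny stopword list for the example; a real pipeline would use a far fuller one.
STOPWORDS = {"on", "over", "could", "new"}

def extract_trends(texts, top_n=5):
    """Count recurring content words across snippets as a crude trend signal."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

# The machine proposes candidate trends; a human analyst still reviews them
# before they inform any assessment or action.
for term, count in extract_trends(snippets):
    print(f"'{term}' appears {count} times -- flagged for analyst review")
```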

Regardless of camp, in 2023 the US Department of Defense (DoD) recognized a very real need to address and update policy on human-machine teaming and systems, now requiring a human in the loop when AI or machine learning (ML) is inserted into select DoD processes and activities (DoD Directive 3000.09). This is a dramatic shift from 2020 comments made by Gen. Terrence J. O’Shaughnessy, former commander of United States Northern Command (NORTHCOM) and the North American Aerospace Defense Command (NORAD), who argued for a human more removed from AI-driven activities: on the loop as oversight, but not embedded in decision-making processes. For the US and its partners, the current human-in-the-loop requirement serves a secondary, unstated goal of formulating limits and penalties for organizations pondering over-application (deliberate or otherwise) of AI to circumvent authorities on attribution requirements, avoid chief of mission consent, or connect with target audiences beyond legal means.

Recognizing reality and knowing how to navigate it are two entirely different things. Most would-be AI best practices for cognitive security problem sets are being built on the fly, in seeming response to near-peer competitors employing such technologies. Specifically, Russian and Chinese application of AI to automated actors, messaging, and the fusion of physical and information data in pursuit of regional gains had, at time of publication, introduced an array of new challenges and related second- and third-order risks requiring strategic and tactical response. Moreover, their extensive use of AI also highlighted the very real likelihood of these technologies being leveraged to serve as would-be humans and cause considerable chaos for vulnerable target audiences on the receiving end of influence information or related actions.

While AI-connected questions may seem new and important for DoD, they are certainly not when considered in a commercial context. For decades, commercial content and marketing/advertising providers have sought ways to automate understanding of, and reach to, their most salient audiences, and likewise to improve the assessment criteria and outcomes that explain those contact actions. Surprisingly, DoD and others within the USG employ many of these providers for an array of consultation, message creation, and audience understanding activities but rarely, if ever, adopt the best practices behind them.

In contrast to DoD, competition in the commercial advertising and marketing spaces expedited the integration of automated systems and AI capabilities, as traditional human-only methods couldn’t compete with the scale of data, channels, and analyses at the speed required for effective decision making in those spaces. In today’s media environment, optimizing the media mix and finding the right messaging requires a human-machine team. Working in unison, this team couples data mining with rapid, digital A/B testing to locate the right combination of message, medium, and style to increase interaction with target audiences. Thanks to R&D investments by data-centric companies such as Google, AI can now help humans rapidly perform large-scale analyses likely considered impossible back in 2020.
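A minimal sketch of the A/B-testing step in that loop appears below, assuming hypothetical engagement counts for two message variants; it applies a standard two-proportion z-test using only the Python standard library, and every figure is invented.

```python
from statistics import NormalDist

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: did variant B engage better than variant A?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = (pooled * (1 - pooled) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: is B better than A?
    return p_a, p_b, z, p_value

# Hypothetical engagement counts for two message variants.
p_a, p_b, z, p = ab_test(clicks_a=120, views_a=4000, clicks_b=165, views_b=4100)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  one-sided p = {p:.4f}")
```

In practice a team runs many such comparisons continuously across message, medium, and style, with humans deciding which differences are meaningful enough to act on.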

This is not to insist, however, that the use of AI and automation heralds a Skynet scenario in which machines nefariously aspire to replace humans. Rather, the technology can serve as a force multiplier that empowers human experts to greatly increase their own impact. For marketing and advertising, the most common application of AI is in programmatic buying of digital and physical advertising across applications, websites, and connected devices. The recent explosion in generative AI has also presented new creative marketing and advertising possibilities by auto-generating alternative messaging and designs. As always, even generative processes are nested within more traditional systems in which humans direct the AI, validate its outputs, and provide guidelines for any automated actions.
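The sketch below illustrates that human-machine pattern in miniature: a stand-in generator proposes message variants, an automated check applies a toy guideline list, and a human gives final approval. The generator, banned-terms set, and messages are all invented assumptions, not a real generative model or any organization’s actual policy.

```python
# Illustrative only: a template function stands in for a generative model,
# and a tiny banned-terms set stands in for real policy guidelines.
BANNED_TERMS = {"guaranteed", "secret"}

def generate_variants(base_message):
    """Stand-in for a generative model proposing alternative phrasings."""
    return [
        base_message,
        base_message.replace("Learn more", "See the facts"),
        base_message + " Details at the official site.",
    ]

def passes_guidelines(text):
    """Automated pre-screen; never the final decision."""
    return not any(term in text.lower() for term in BANNED_TERMS)

def human_review(candidates):
    """Final approval stays with a person (stubbed here as a console prompt)."""
    approved = []
    for text in candidates:
        if input(f"Approve? [y/N] {text!r} ").strip().lower() == "y":
            approved.append(text)
    return approved

drafts = [v for v in generate_variants("Flood warnings issued. Learn more.") if passes_guidelines(v)]
final = human_review(drafts)
print(f"{len(final)} of {len(drafts)} machine-drafted variants approved by a human")
```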

Also, commercial applications of AI never transfer one-to-one to DoD due to DoD OPSEC, legal requirements, messaging objectives, and authorities. Utilizing any commercial AI best practice or capability will therefore always require some translation to suit a DoD space.

As an example, our company is partnered with a global marketing and advertising firm with a long pedigree in servicing a robust and diverse portfolio of USG and partner-nation stakeholders, including DoD. This communication partner’s method of audience assessment employs a process of comprehensive data consideration and analysis that relies heavily on human-machine teaming to derive meaningful and proactive conclusions for its customers. The method utilizes a commercial programmatic media planning capability with AI-assisted analysis to build customizable user journeys for target audiences and the subsegments within them. There can be hundreds or thousands of custom user journeys based on contextual and programmatic signals, all meticulously incorporating any agency’s security and policy guidelines. Put in perspective, this method and partner are but one of several AI-informed capabilities we provide to DoD customers, and the company currently services every combatant command with a cognitive security-related information requirement.
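Without disclosing anything about the partner’s actual capability, the sketch below shows the general shape of signal-driven journey selection: invented subsegments and an engagement signal map to a next touchpoint, with anything unrecognized routed back to a human planner. Every segment, signal, and step name here is a made-up assumption.

```python
from dataclasses import dataclass

@dataclass
class AudienceSignal:
    segment: str     # invented subsegment label, e.g. "coastal_youth"
    engaged: bool    # did this subsegment interact with the previous message?

# Invented mapping of (subsegment, engagement) to a next touchpoint.
JOURNEYS = {
    ("coastal_youth", True): "short-form video follow-up",
    ("coastal_youth", False): "switch channel: messaging app",
    ("rural_smallholders", True): "radio segment with call-in number",
    ("rural_smallholders", False): "local-language print insert",
}

def next_step(signal: AudienceSignal) -> str:
    """Pick the next touchpoint for a subsegment; default back to a human planner."""
    return JOURNEYS.get((signal.segment, signal.engaged), "route to human planner")

print(next_step(AudienceSignal("coastal_youth", engaged=False)))  # -> switch channel: messaging app
```

A production planning system would involve far more signals and agency-specific guardrails; the point is only that the branching logic remains inspectable and human-configured.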

This is not to insist that any specific approach is the best practice, nor that one can be copied and pasted to address a DoD need. Nor will prominent adversaries’ preferences for employing AI remain stagnant long enough to ever allow any singular standard against them. Arguably most important is the recognition that applying AI in DoD environments cannot sustain a mentality of two camps: fearing or dismissing its utility entirely, or defaulting to any single or hybrid commercial standard.

Thus, all explorations into DoD AI strategy and execution must be thoughtfully considered, with best practices from commercial and other lab-like environments at minimum observed and noted. Without these steps, there are simply no good means of calculating risk and reward for particular pathways involving AI adoption and process insertion. In tandem, training and education requirements should be inserted within the Services and related academies to better explain how AI can and should be ethically considered in US DoD cognitive security efforts. Technical ignorance is no longer an option.

 

AUTHOR BIOGRAPHIES

Paul Lieber, Ph.D. is Peraton’s Chief Data Scientist for its Cyber Mission Sector and Associate Research Faculty at University of Maryland’s Applied Research Lab in Intelligence and Security.

Janis Butkevics is a Senior AI Subject Matter Expert at Peraton.