Editor’s Note: A ‘much mentioned’ but rarely discussed aspect of online discourse is happening before our eyes: the use of fake personas and AI algorithms to produce content with the intent merely to sow discord. Today it’s the Russians and Chinese, and tomorrow…almost anyone? These tools don’t require supercomputing or some special internet access; they are available now. I was recently involved in a seminar in which invitees were asked to participate in various scenarios designed to test US and allied responses in a situation that eventually led to conflict. The participants were asked to apply their specific expertise across a broad range of cognitive and technical skill sets, yet none had a coherent response to the existence of AI-generated memes. The West is under attack now by those with the means to use these tools effectively. Hopefully our responses are robust and we’re just not reading about it…
“In other words, we found a sprawling web of nonexistent authors turning Russian-government talking points into thousands of opinion pieces and placing them in sympathetic Western publications, with crowds of fake people discussing the same themes on Twitter. Not all of these personas or stories were hits—in fact, very few of the ISMC’s articles achieved mass reach—but in the strange world of online manipulation, popularity isn’t the only goal. If fake op-eds circulate widely and change American minds about Syria or the upcoming election, that’s a success. If a proliferation of fake comments convinces the public that a majority feels some particular way about a hot topic, that’s a success. But even merely creating cynicism or confusion—about what is real and who is saying what—is a form of success too.
To quote GPT-3, the dilemma is this:
…In a future where machines are increasingly creating our content, we’ll have to figure out how to trust.”
Read it all here.