By Cade Metz and Scott Blumenthal, The New York Times, June 7, 2019
In 2017, an online disinformation campaign spread against the “White Helmets,” claiming that the group of aid volunteers was serving as an arm of Western governments to sow unrest in Syria.
This false information was convincing. But the Russian organization behind the campaign ultimately gave itself away because it repeated the same text across many different fake news sites.
Now, researchers at the world’s top artificial intelligence labs are honing technology that can mimic how humans write, which could potentially help disinformation campaigns go undetected by generating huge amounts of subtly different messages.
One of the statements below is an example from the disinformation campaign. A.I. technology created the other. Guess which one is A.I.:
- The White Helmets alleged involvement in organ, child trafficking and staged events in Syria.
- The White Helmets secretly videotaped the execution of a man and his 3 year old daughter in Aleppo, Syria. (This is the A.I.-generated statement.)
Tech giants like Facebook and governments around the world are struggling to deal with disinformation, from misleading posts about vaccines to incitement of sectarian violence. As artificial intelligence becomes more powerful, experts worry that disinformation generated by A.I. could make an already complex problem bigger and even more difficult to solve.
In recent months, two prominent labs — OpenAI in San Francisco and the Allen Institute for Artificial Intelligence in Seattle — have built particularly powerful examples of this technology. Both have warned that it could become increasingly dangerous.
Alec Radford, a researcher at OpenAI, argued that this technology could help governments, companies and other organizations spread disinformation far more efficiently: Rather than hire human workers to write and distribute propaganda, these organizations could lean on machines to compose believable and varied content at tremendous scale.
A fake Facebook post seen by millions could, in effect, be tailored to political leanings with a simple tweak.
“The level of information pollution that could happen with systems like this a few years from now could just get bizarre,” Mr. Radford said.
This type of technology learns about the vagaries of language by analyzing vast amounts of text written by humans, including thousands of self-published books, Wikipedia articles and other internet content. After “training” on all this data, it can examine a short string of text and guess what comes next.
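The core idea in the paragraph above, predicting what comes next from patterns in training text, can be illustrated with a toy sketch. The snippet below is a simple word-level bigram counter, a deliberately crude stand-in for the neural networks these labs use; the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the vast human-written text
# that real systems train on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word tends to follow each word.
# (Real systems learn from far longer contexts with neural networks.)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Guess the most likely word to come after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # "on" -- the only word seen after "sat"
```

Scaling this idea up, from counting word pairs to modeling long passages with billions of learned parameters, is what lets modern systems continue a short prompt with fluent, varied text.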
We wanted to see what kind of text each of the labs’ systems would generate with a simple sentence as a starting point. How would the results change if we changed the subject of the sentence and the assertion being made?
OpenAI and the Allen Institute made prototypes of their tools available to us to experiment with. We fed four different prompts into each system five times.
What we got back was far from flawless: The results ranged from nonsensical to moderately believable, but it’s easy to imagine that the systems will quickly improve.
Researchers have already shown that machines can generate images and sounds that are indistinguishable from the real thing, which could accelerate the creation of false and misleading information. Last month, researchers at a Canadian company, Dessa, built a system that learned to imitate the voice of the podcaster Joe Rogan by analyzing audio from his old podcasts. It was a shockingly accurate imitation.
Now, something similar is happening with text. OpenAI and the Allen Institute, along with Google, lead an effort to build systems that can completely understand the natural way people write and talk. These systems are a long way from that goal, but they are rapidly improving.
“There is a real threat from unchecked text-generation systems, especially as the technology continues to mature,” said Delip Rao, vice president of research at the San Francisco start-up A.I. Foundation, who specializes in identifying false information online.
OpenAI argues the threat is imminent. When the lab's researchers unveiled their tool this year, they theatrically said it was too dangerous to be released into the real world. The move was met with more than a little eye-rolling among other researchers. The Allen Institute sees things differently. Yejin Choi, one of the researchers on the project, said tools like the ones the two labs created must be released so other researchers can learn to identify the text they generate. The Allen Institute plans to release its false news generator for this reason.
Among those making the same argument are engineers at Facebook who are trying to identify and suppress online disinformation, including Manohar Paluri, a director on the company’s applied A.I. team.
“If you have the generative model, you have the ability to fight it,” he said.