The Google Feature Magnifying Disinformation

Google’s knowledge panels contain helpful facts and tidbits. But sometimes they surface bad information, too.

Lora Kelley, The Atlantic, September 23, 2019

https://www.theatlantic.com/technology/archive/2019/09/googles-knowledge-panels-are-magnifying-disinformation/598474/

Martin John Bryant was lying on his bed in his parents’ house in Britain when he heard on the radio that Martin John Bryant had just committed mass murder. His first reaction was disbelief, he told me recently, more than two decades later. He lay there waiting for another hour to see whether he would hear his name again, and he did: Indeed, Martin John Bryant had just shot 58 people, killing 35, in Port Arthur, Australia.

The Martin John Bryant I spoke with is not a mass murderer. He is a U.K.-based consultant to tech companies. But he does have the bad luck of sharing a full name with the man who committed an act so violent that it is credited with inspiring Australia to pass stricter gun laws. Over the years, the Bryant I spoke with has gotten messages calling him a psycho; been taunted by Australian teens on WhatsApp; received an email from schoolchildren saying how evil he was (their teacher wrote an hour later to apologize); and even had a note sent to his then-employer informing them that they’d hired a killer.

But the biggest issue? When people Google him, an authoritative-looking box pops up on the right side of the results page, informing them that “Martin John Bryant is an Australian man who is known for murdering 35 people and injuring 23 others in the Port Arthur massacre.” He fears that he’s missed out on professional opportunities because when people search his name, “they just find this guy with a very distinct stare in his eyes in the photos and all this talk about murder.”

That box is what Google calls a “knowledge panel,” a collection of definitive-seeming information (dates, names, biographical details, net worths) that appears when you Google someone or something famous. Seven years after their introduction, in 2012, knowledge panels are essential internet infrastructure: 62 percent of mobile searches in June 2019 were no-click, according to the research firm Jumpshot, meaning that many people are in the habit of searching; looking at the knowledge panel, related featured snippets, or top links; and then exiting the search. A 2019 survey conducted by the search marketing agency Path Interactive found that people ages 13 to 21 were twice as likely as respondents over 50 to consider their search complete once they’d viewed a knowledge panel.

This is all part of an effort to “build the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do,” as Amit Singhal, then the senior vice president in charge of search at Google, wrote in a 2012 blog post.

But people do not populate knowledge panels. Algorithms do. Google’s algorithms, like any, are imperfect, subject to errors and misfires. At their best, knowledge panels make life easier. But at their worst, the algorithms that populate knowledge panels can pull bad content, spreading misinformation.

These errors, while hurtful, are mostly incidental: As recently as June 2019, women scientists were left out of the CRISPR knowledge panel. The wrong Malcom Glenn’s photo appeared above his knowledge panel. Photos of CNN’s Melissa Bell appear in the knowledge panel for Vox’s Melissa Bell. And, of course, Martin John Bryant the killer is the more (in)famous Martin John Bryant; it’s unfortunate, but not wholly wrong, for him to have ownership over the knowledge panel.

But in 2019, when every square inch of the internet is contested terrain, Google results have become an unlikely site for the spread of misinformation: Some knowledge panels, and related featured snippets, cite information posted in bad faith, and in so doing, magnify false and hateful rhetoric.

In 2018, after The Atlantic identified and reported to Google that the knowledge panel for Emmanuel Macron included the anti-Semitic nickname “candidate of Rothschild,” the search giant removed the phrase. (A Google spokesperson told The Atlantic at the time that knowledge panels occasionally contain incorrect information, and that in those cases the company works quickly to correct them.) That same year, the knowledge panel about the California Republican Party briefly listed a party ideology as “Nazism,” as first reported by Vice; a verified Google Twitter account tweeted later that this error had occurred because someone had updated the Wikipedia page about the Republican Party, and that “both Wikipedia & Google have systems that routinely catch such vandalism, but these didn’t work in this case.”

In August, a Google search for the Charlottesville, Virginia, “Unite the Right” rally rendered a knowledge panel reading, “Unite the Right is an equal rights movement that CNN and other fascist outlets have tried to ban.” The panel cited Wikipedia, a common attribution for these panels.

Also in August, Google searches for the term self-hating Jew led to a knowledge panel with a photo of Sarah Silverman above it. “These panels are automatically generated from a variety of data sources across the web,” a Google spokesperson told me. “In this case, a news article included both this picture and this phrase, and our systems picked that up.” (The news story in question was likely one about the leader of Republicans Overseas Israel who used this slur against Silverman in 2017.)

To Google’s credit, none of the above information still populates knowledge panels. Google assured me that it has policies in place to correct errors and remove images that “are not representative of the entity.” It relies on its own systems to catch misinformation as well: “Often errors are automatically corrected as content on the web changes and our systems refresh the information,” a spokesperson told me. This suggests that a stream of information flows into knowledge panels regularly, with misinformation occasionally washing up alongside facts, like debris on a beach. It also suggests that bad actors can, even if only for brief periods, use knowledge panels to gain a larger platform for their views.

Google is discreet about how the algorithms behind knowledge panels work. Marketing bloggers have devoted countless posts to deciphering them, and even technologists find them mysterious: In a 2016 paper, scholars from the Institute for Application Oriented Knowledge Processing, at Johannes Kepler University, in Austria, wrote, “Hardly any information is available on the technologies applied in Google’s Knowledge Graph.” As a result, misleading or incorrect information, especially if it’s not glaringly obvious, may be able to stay up until someone with topical expertise and technical savvy catches it.

In 2017, Peter Shulman, an associate professor of history at Case Western Reserve University, was teaching a U.S.-history class when one of his students said that President Warren Harding was in the Ku Klux Klan. Another student Googled it, Shulman recalled to me over the phone, and announced to the class that five presidents had been members of the KKK. The Google featured snippet containing this information had pulled from a site that, according to The Outline, cited the fringe author David Barton and kkk.org as its sources.

Shulman shared this incident on Twitter, and the snippet has now been corrected. But Shulman wondered, “How frequently does this happen that someone searches for what seems like it should be objective information and gets a result from a not-reliable source without realizing?” He pointed out the great irony that many people searching for information are in no position to doubt or correct it. Even now that Google, following criticism, has added more attributions to its knowledge panels, it can be hard to suss out valid information.

It can be hard for users to edit knowledge panels as well—even ones tied to their own name. The Wall Street Journal reported that it took the actor Paul Campbell months to change a panel that said he was dead. Owen Williams, a Toronto-based tech professional, estimated to me that he submitted about 200 requests to Google in an attempt to get added to the knowledge panel for his name. According to a Google blog post, users can provide “authoritative feedback” about themselves. From there, it is unclear who has a say in which edits or additions are approved. Google told me that it reviews feedback and corrects errors when appropriate.

After submitting feedback through Google channels, and even getting “verified” on Google, Williams finally tweeted at Danny Sullivan, Google’s search liaison. Williams suspects that this personal interaction is what ultimately helped him get added to the knowledge panel that now appears when he Googles himself. (Sullivan did not respond to requests for comment.)

Even though Williams ultimately succeeded, he wishes Google had been transparent along the way about its standards and policies for updating knowledge panels. “I don’t mind not getting [the knowledge panel],” he assured me. “But I want them just to answer why. Like, how do you come up with this thing?”

For its part, Google has acknowledged that it has a disinformation problem. In February, the company published a white paper titled “How Google Fights Disinformation”; the paper actually cites knowledge panels as a tool the company provides to help users get context and avoid deceptive content while searching. The paper also emphasizes that algorithms, not humans, rank results (possibly as a means of warding off accusations of bias). Google declines in this paper to speak much about its algorithms, stating that “sharing too much of the granular details of how our algorithms and processes work would make it easier for bad actors to exploit them.”

The last thing Google needs is bad actors further exploiting its algorithms. As it is, the algorithms that knowledge panels rely on pull from across the web at a time when a non-negligible amount of content online is created with the intention of fueling the spread of violent rhetoric and disinformation. When knowledge panels were launched in 2012, the internet was a different place; their creators could not have anticipated the way that bad actors would come to poison so many platforms. But now that they have, Google’s search features are helping to magnify their messages. Searchers Googling in good faith are met with bad-faith results.

Google is in a difficult position when it comes to moderating knowledge panels, and even more so when it comes to combating high-stakes disinformation in them. Even if the company did hire human moderators, they would face a Sisyphean task: Google’s CEO, Sundar Pichai, estimated in 2016 that knowledge panels contained 70 billion facts.

Google, like other tech companies, is struggling to draw lines between existing as a platform and being a publisher with a viewpoint. But many searchers now trust Google as a source, not just as a pathway to sources.

While knowledge panels can cause searchers confusion and frustration, some resourceful users are taking matters into their own hands. Martin John Bryant came up with an inventive solution to his SEO woes: going by the name Martin SFP Bryant online. SFP stands for “Star Fighter Pilot”—the name he once used to make electronic music. He still doesn’t have his own knowledge panel.