Disinformation’s Huge Inaction Problem

Jamie Condliffe, May 31, 2019, New York Times
https://www.nytimes.com/2019/05/31/technology/facebook-disinformation-nancy-pelosi.html

Almost everybody wants something done about disinformation. So why does it seem that nothing is changing?

Recently, a fake video of Speaker Nancy Pelosi hit the web. It had been slowed down and pitch-adjusted so that she appeared drunk or ill. Despite being fake, it became a totem for many Republicans looking to celebrate their dislike of the politician.

Facebook and Twitter left the video up; YouTube took it down. Ms. Pelosi accused Facebook of “lying to the public” by not removing the video. Facebook said it had to balance “encouraging free expression and promoting a safe and authentic community.”

There were so many takes, from support for Facebook to the proclamation of an “existential threat to American democracy.” Polarized reactions are understandable: Though virtually everyone dislikes some disinformation, there’s no single correct fix.

It’s also why little that’s effective is being done to solve the broader problem.

Facebook’s response is predictable. In an op-ed for The Washington Post in March, Mark Zuckerberg, the company’s chief executive, asked to have less control over speech. “Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum,” he wrote. Essentially: Tell Facebook what to block, and it’ll block it.

A problem: Despite a newfound taste for tech regulation, lawmakers worry about running afoul of the First Amendment if they limit content. (In fact, the White House is looking in the other direction: It wants people to report incidents believed to be political censorship on social media.)

So who steps up?

Companies aren’t constrained by the First Amendment, and can block content if they want. That’s what YouTube did with the Pelosi video, explaining: “Spam, scams and other deceptive practices that take advantage of the YouTube community aren’t allowed on YouTube.”

What if this is the new normal and social networks must choose their own red lines, then let the market decide? Maybe there’s an uber-libertarian site at one extreme, an inoffensively bland one at the other, and everything else sits in between. Users, vote with your eyeballs!

There are so many reasons this wouldn’t work. The biggest: Nobody likes bland content. Nothing would change.

So instead of tech heavyweights and regulators doing nothing, maybe all of them should do … something?

Rasmus Nielsen, a professor of political communication at Oxford University who studies misinformation, said Facebook could do more when misinformation was spotted without taking material down: act faster, give better warnings around the material, inform users who have shared it, and be transparent about how it makes decisions.

And Cass Sunstein, a legal scholar writing for Bloomberg Opinion, drafted a proposal for content-blocking regulation, based on libel law, that he thinks may not infringe the First Amendment. It would ban videos that showed “people in a false and negative light and so are injurious to their reputations — unless reasonable observers would be able to tell that those videos are satires or parodies, or are not real.”

Neither idea is perfect: Both are embryonic, untested proposals. But that may be better than two sides staring blankly at each other, each waiting for the other to act.

G.D.P.R. is one year old!

That went fast.

On May 25, Europe’s General Data Protection Regulation turned one year old. What has it achieved?

By the numbers, according to statistics from the European Data Protection Board, there have been:

• 281,088 total cases based on the regulation, of which 37 percent are still active.

• €56 million, or about $62 million, in fines, of which €50 million came from a single fine against Google.

In Ireland, where most of the major tech companies have European head offices for tax reasons, 19 statutory investigations have been started — 11 of which focus on Facebook, WhatsApp and Instagram.

The regulation has been cited as inspiration for other data rules globally, including in Japan, Brazil, India and China. And many American lawmakers have cited it as a model on which an American regulation could be based.

But there have also been unintended consequences: The G.D.P.R.’s right of access has put reams of information in the hands of the wrong people, for instance, and its right to be forgotten has stopped people from fully exploring the histories of miscreants.

“What’s not so clear yet is whether G.D.P.R. has had an effect on privacy and on corporate data practices,” said Omer Tene, vice president and chief knowledge officer at the International Association of Privacy Professionals. “Has the underlying business model of the internet changed? Is consumer privacy better? I think those questions are very much still open.”

• A TikTok phone could be coming. The video app’s owner is reportedly planning to develop its own smartphone. It didn’t work for Facebook, but, hey, who knows?

• MacKenzie Bezos pledged her fortune to charity. Her pledge, backed by a roughly 4 percent stake in Amazon worth $36 billion acquired in her divorce from Jeff Bezos, highlighted how little Mr. Bezos gives to charity.

• The music industry has a metadata headache. By some estimates, poor labeling of streamed tracks means billions of dollars haven’t been paid to artists.

• Twitter can’t decide if it should bar white supremacists. It has reportedly asked researchers whether it should try to change their worldview instead.

• Google has funded research into cold fusion, the questionable energy-creation process that recreates, at room temperature, the reactions at the heart of stars. So far, no dice.

• Tech firms are taking facial recognition to the Middle East. IBM is marketing biometric surveillance systems in the region, as are the Chinese giants Hikvision and Huawei.

• Your kids think you’re addicted to your phone. But families are having fewer arguments about screen time, according to a new report.

Facebook’s ‘many open questions’

This year, Mr. Zuckerberg has pitched his vision for a private Facebook. In theory, that looks like more sharing in private groups, ephemeral content and encryption by default on messaging.

As I’ve said, one thing is missing from the pitch: a business model. This past week, though, we got a glimpse into Facebook’s thinking in a letter from its vice president of United States public policy, Kevin Martin, to Senator Josh Hawley, a Republican from Missouri and an outspoken critic of Facebook who raised questions about its privacy push.

Facebook said it planned to collect less data, keep it for less time and hide message content from itself. But there were some telling responses. Would it make inferences about encrypted messages from metadata? There are “many open questions” there. Share metadata around the company? “Outstanding questions.” How about its use of transaction data? “Information about transactions can be used for personalization,” Mr. Martin responded.

Facebook will squeeze as much insight as it can out of whatever data it gets, so its future business model may not differ too much from its current one. Or as Mr. Hawley said: “I thought they’d swear off the creepier possibilities I raised. But instead, they doubled down.”