A Question of Social Credibility

On November 1, 2017, the US Senate Select Committee on Intelligence held an open hearing titled “Social Media Influence in the 2016 U.S. Elections.” Following the hearing, Sen. Dianne Feinstein (D-Calif.) was quoted in the Washington Post as saying, “I went home last night with profound disappointment.” Well, I too am profoundly disappointed with the hearing, but perhaps for somewhat different reasons.

First, questions were put almost exclusively in terms of influence by foreign governments – specifically the Russian government – even though neither the word “foreign” nor the word “government” appears in the title of the hearing. Are we to conclude that influence campaigns – including fake accounts, bots, and general disinformation campaigns – initiated by and run for American-based interests, with no connection whatsoever to the Russians or any foreign government, are OK? Such campaigns can relate to elections as well as, for example, US corporate “public relations” efforts that are extensions of lobbying. These efforts are not necessarily performed for the public good, and may well work to the great detriment of the US public. Did the Committee members mean to imply that all such activities are acceptable as long as they have nothing to do with foreign government interests?

Second, the Committee members failed to address another key issue: who exactly is going to judge what is fake and deceptive? Does the Committee really plan to leave it to the platforms themselves, or does it propose to set up an independent organization – either within government or outside it – to render judgment?

Finally, I did not hear a single member of the Committee ask how they or anybody else will know or verify that the platforms are actually taking the remedial or preventative actions they claim to be taking. It is easy for the platforms to make all kinds of claims about how they are on top of the problem, but how does anybody outside the company know what they are really doing when there is no transparency into their actual activities? They can easily say that they are on the case and fully cooperating with the government. They can easily answer all of the questions as posed by the Committee. But at present there is, in fact, no way to verify the answers.

If I were on the Committee, here are two questions I would have asked the witnesses:

1. You claim that you are on the case and that you are making great efforts to address the issues that have been raised. But how do we really know? Are you willing to have an independent group conduct an extensive audit of how you apply and enforce what you claim to be your policies? This would mean granting the group access to all data, procedures, operations, and employees – much the way extensive financial audits are conducted. Simply supplying some of the data to a third party solves only part of the problem – it is what you do with the data that counts. Analogous to a financial audit, the result would be a statement that you are actually and effectively doing what you say you are doing, just as the result of a financial audit is a statement that the financial statements provided by the company are a true and accurate representation of the state of the company. There would be no judgment as to whether your policies are good or bad – those discussions should be kept completely separate. The audit could be conducted by a group of international experts convened, for example, by the National Academy of Sciences and sponsored as a study for Congress.

An example of where I expect Facebook to fail such an audit is the application of its censorship policies. See my previous post: https://information-professionals.org/unintended-consequences/

2. To develop a real understanding of the exact nature of your platforms’ societal impact, would you be willing to cooperate with an extensive study by allowing those conducting it full access to your data and operations (including detailed analysis of which algorithms you use for what purposes)? This can be accomplished while protecting your intellectual property through proper non-disclosure agreements on the part of the investigators. Such a study could also be conducted by the National Academies and sponsored by Congress.

So yes, these are businesses, and yes, they may be under no legal obligation to open their doors to an independent investigative organization. But if they really want to be socially responsible, as they claim, only this type of investigation will give them credibility.

Well, that is what I would do for starters. As interesting and important as they are, the Russian efforts represent only part of the problem, and they have received too much of the focus. It is time to take a broader and more fundamental view of the massive-scale pollution of the information environment – from domestic as well as international sources – that poses an existential threat to our democratic system.

2 Replies to “A Question of Social Credibility”

  1. Good point – Quis custodiet ipsos custodes? (Who will guard the guards themselves?) I too listened intently to the hearings and thought about how important attribution is in the election system – as in any other information system where verification of the source can expose illegitimate influence activities. It is too bad that Congress has rolled back campaign finance rules that required all advocacy entities to disclose their donor identities. Only by understanding who is behind an ad can we have cognitive security.

    The good news is that, at least in FB’s case, anyone – not just a small group of auditors – will be able to validate who is behind the ads they’re seeing. That doesn’t mean you don’t need the auditors – you still need someone who is on the job, raising awareness. Unfortunately, when the government has tried to help its citizens resist corporate interests, it has not always been successful, because corporate and civic interests don’t always align. It might not be possible to get the right kind of legislation passed to take an active approach to policing the social media space. So I suppose we have to take Zuck at his word when he promises, “We believe that when you see an ad, you should know who ran it and what other ads they’re running — which is why we show you the Page name for any ads that run in your feed. To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well — including ads that aren’t targeted to you directly. We hope that this will establish a new standard for our industry in ad transparency.”

    1. Max – thanks for the spot-on comment. See also Dr. Waltzman’s previous post on “Unintended Consequences.” One can hope that some significant percentage of readers in the future will critically examine the source of anything posted on a ‘pass-through’ social media site. Alas, I’ve always been an optimist. – MLW
