Industry Responses to the Malicious Use of Social Media

NATO’s Strategic Communications Centre of Excellence put forth several reports this month. Researchers from Oxford University worked on one of them, Industry Responses to the Malicious Use of Social Media.


2016 was a defining moment for social media platforms. The ongoing shock relating to election interference, computational propaganda, and the Cambridge Analytica scandal, combined with deeper concerns about the viability of the business model for established news media, all conspired to undermine the confidence of citizens and of public authorities in social media platforms. Initially, major social media companies fell back on traditional postures—minimizing the impact by quoting statistics about the number of accounts involved—but our inventory of industry responses identifies and tracks changing attitudes.

Since November 2016, there has been a raft of self-regulatory responses by all three of the platforms examined in this paper. A key area for intervention is enforcement of existing terms and policies, as well as taking steps towards increased collaboration with other actors, including news media, election committees and campaigns, fact-checkers, and civil society organisations. However, we found little evidence of major changes to the underlying user policy documents. This may change as pressure to regulate platforms continues to mount following formal government inquiries into Cambridge Analytica, the spread of ‘fake news’, and evidence of foreign interference.

There may be trouble ahead, as Google, Twitter, and Facebook appear to be taking conflicting stances on their responsibility for content. As government regulation appears inevitable, the platforms have formulated numerous solutions to combat the malicious use of social media. Yet, despite more than 20 months of inquiries and bad press, there is little evidence of significant changes to the companies’ terms and policies, which grant extensive powers over users’ content, data, and behavior. Thus far, most of the self-regulatory responses have been reactive, responding to media-cycle concerns around Cambridge Analytica or foreign interference in elections. The platforms themselves have not taken any meaningful steps to get ahead of the problem and address the underlying structures that incentivize the malicious use of social media, whether for economic gain or political influence. For meaningful progress to be made, and trust to be restored, the relationship between platforms and people needs to be rebalanced, and platforms need to work proactively alongside government and citizenry as responsible actors.
