Government Responses to Malicious Use of Social Media

NATO’s Center of Excellence for Strategic Communication put forth several reports this month. Researchers from Oxford University worked on one of them, Government Responses to the Malicious Use of Social Media.


There is no simple blueprint for tackling the multiple challenges presented by the malicious use of social media. In the current, highly politicized environment driving legal and regulatory interventions, many proposed countermeasures remain fragmentary, heavy-handed, and ill-equipped to deal with the problem. Government regulations thus far have focused mainly on regulating speech online, ranging from the redefinition of what constitutes harmful content to measures that require platforms to take a more authoritative role in taking down information with limited government oversight. However, harmful content is only a symptom of a much broader problem underlying the current information ecosystem. Measures that attempt to redefine harmful content or place the burden on social media platforms fail to address deeper systemic challenges, and could result in a number of unintended consequences that stifle freedom of speech online and restrict citizen liberties.

As content restrictions and controls become mainstream, authoritarian regimes have begun to appropriate them in an attempt to tighten their grip on national information flows. Several authoritarian governments have introduced legislation designed to regulate social media pages as media publishers, to fine or imprison users for sharing or spreading certain kinds of information, and to enforce even broader definitions of harmful content that require government control. As democratic governments continue to develop content controls to address the malicious use of social media in an increasingly securitized environment, authoritarian governments are using this moment to legitimize suppression and interference in the digital sphere.

In the future, we encourage policymakers to shift away from crude measures to control and criminalize content and to focus instead on issues surrounding algorithmic transparency, digital advertising, and data privacy. Thus far, countermeasures have not addressed algorithmic transparency and platform accountability: a core issue is the unwillingness of social media platforms to engage in constructive dialogue as technology becomes more complex. Because the platforms behind these algorithms and artificial intelligence systems have been protective of their innovations and reluctant to share open-access data for research, the technologies are black-boxed to such an extent that sustainable public scrutiny, oversight, and regulation demand the cooperation of the platforms. Some governments have put forward transparency requirements for political advertisements online, such as the Honest Ads Act in the United States. While some platforms have begun to self-regulate, their self-prescribed remedies often fall short of providing effective countermeasures and enforcement mechanisms.
