The changing landscape of disinformation and cybersecurity threats: A recap from Verify 2019

Monica Ruiz, May 2nd 2019, William + Flora Hewlett Foundation
https://hewlett.org/the-changing-landscape-of-disinformation-and-cybersecurity-threats-a-recap-from-verify-2019/

In 2007, cyber threats weren’t even mentioned in the intelligence community’s list of worldwide threats to U.S. national security; this year, cyber is the top threat—for the seventh year in a row. But while the scope and scale of the threat are growing, affecting the U.S. on political, economic, and military fronts, decisionmakers in government and the private sector are still looking for ways to collaborate and share information, a so-called “whole-of-society” approach.

“Cybersecurity is the single greatest transformation challenge the [FBI] has faced in my lifetime,” said former FBI Director James Comey. “We failed to do an adequate job of pushing the information flow across the semi-permeable barrier across the government and private sector. We’re nowhere near where we need to be.”

That was one of the key themes of this year’s Verify conference, a Hewlett Foundation event that brings together top national security officials, tech industry leaders, academic and civil society experts, and journalists to discuss thorny cyber issues. Topics ranged from how emerging technologies like 5G will shape everything from the future of cyber operations to global competition among nations, to how social media platforms operating in countries around the world should balance privacy, security, and other demands in the ongoing battle against disinformation.

Here are some key takeaways from the event:

Cyber threats come from a range of actors – not just cyber criminals, hacktivists, or insider threats, but also nation-state adversaries and competitors seeking political, economic, and military advantage through cyber means. Former Homeland Security Advisor Lisa Monaco said that “if you look at the threat actors as basically aligned in a drag race—nation-states, non-state actors, hacktivists, criminal groups—my sense is that the nation-states have far and away set themselves apart,” adding that they are “increasingly using cyber means as a tool of geopolitical one-upsmanship.” Examples such as Russia’s interference in the 2016 U.S. presidential elections attest to that point and show that operational sophistication and tradecraft have improved, as captured in a recent ranking that quantifies how long it takes an intruder to begin moving laterally to other systems in a targeted network. This increased sophistication has prompted companies to set up small intelligence agencies that track and stop foreign intelligence agencies in order to protect individuals on their platforms, Monaco said. In some cases, these companies have become the most effective responders to such threats.

Tech companies are increasingly filling roles governments can’t or won’t. Asked about the 2020 elections, Matt Masterson, the Department of Homeland Security’s senior adviser on election cybersecurity, said domestic disinformation was the “hardest challenge” for government to tackle because of First Amendment protections. Indeed, former Facebook Chief Security Officer Alex Stamos said technology companies now act in a “quasi-governmental manner,” filling the void left by slow-moving government policies and attempting to regulate speech and activity through content moderation on their platforms. But Stamos warned that this approach can be problematic because the private sector has less accountability than government entities. This is especially concerning when talking about rules that affect billions of people, many of whom live in non-free countries. As a start, he called on tech companies to be innovative and embrace transparency about content moderation, more clearly explaining their decision-making processes to help set norms and build legitimacy.

Clear global norms around cybersecurity are needed. Former Obama White House Deputy National Security Advisor Avril Haines explained both the difficulties of, and the slow progress toward, creating legal and normative frameworks for cyberspace, drawing comparisons to how long such frameworks took to develop in other domains (e.g., the UN Convention on the Law of the Sea (UNCLOS), the agreement that governs maritime matters, took hundreds of years to develop). Haines emphasized that building this kind of normative framework faces many challenges specific to cyber, and highlighted actions with other stakeholders that can help promote norms, including the 2015 G20 communiqué language on commercial espionage and cybersecurity, the UN Group of Governmental Experts on Information Security, and the private sector-led Tech Accord.

Microsoft President Brad Smith also touched on the complexities of norm-building. He explained why the company is backing the Paris Call for Trust and Security in Cyberspace and—acknowledging the role that the private sector plays in cybersecurity—stressed the importance of a multinational, multi-stakeholder approach. Smith also explained the hard revenue choices technology companies face as emerging technologies evolve. If certain technologies—such as facial recognition—were made available to authoritarian governments, it could impact basic human rights, including “all rights of people to assemble to express their points of view.” He added that “the only way to prevent a race to the bottom [by companies] is to have a regulatory floor.”

Technology companies, policymakers and the public need to grapple with the tradeoffs between privacy, safety, and security. Stamos, the former Facebook executive, offered a presentation on the challenges that platforms face in balancing different values held by their users, and knowing which ones to optimize for. “You cannot both say that platforms are responsible for the content on it and knowing who their users are, and then also [say that they] need to provide perfect privacy,” Stamos said. “You can’t moderate content unless you see it, and you can’t find bad guys unless you’re collecting data about them.” Europe’s General Data Protection Regulation (GDPR) and WhatsApp’s end-to-end encryption are both privacy-enhancing, but there’s a cost around security and safety, Stamos said. Experts at the event noted that society’s competing demands for privacy and transparency are hard to satisfy, and that optimizing for the highest levels of privacy has made it challenging to get access to social media data to enable independent research that can help evaluate the impact of social media and digital disinformation on elections.

Civil society is a target of cybersecurity threats — and doesn’t have the same protections or resources as large companies or governments. Ron Deibert of University of Toronto’s Citizen Lab described the group’s research into how authoritarian governments—Saudi Arabia, for example—and criminal groups in Mexico purchased commercial spyware to surveil and attack identified targets, including anti-corruption investigators, advocacy groups, scientists and researchers, and journalists. Deibert explained that commercial surveillance technology—designed to infect and remotely monitor mobile phones—has spread globally and is being abused by its clients, resulting in widespread proliferation and harm.

What’s the next frontier for disinformation operations? That’s a question that was asked repeatedly during the conference. As NPR’s Tim Mak tweeted, “If 2015/2016 Russian disinfo happened because of failure of imagination, what can we imagine happens next?” Suzanne Spaulding, a Department of Homeland Security undersecretary in charge of cyber and infrastructure protection in the Obama administration, said that it’s critical to think more broadly than elections and look to other democratic institutions that may be targeted by foreign influence operations, such as the judiciary. Stanford’s Nate Persily said to watch sophisticated WhatsApp campaigns, including what may be happening in other elections such as the one taking place between mid-April and mid-May in India—but these are inherently hard to track, due to their encryption. Researcher Renee DiResta stressed that we should look to the most innovative digital marketers to see what tactics they’re using, because those are the same tactics that foreign influence operations will follow. More recently, following the release of the Mueller report, DiResta has commented on how Russia’s Internet Research Agency (IRA) focused on the infiltration of movements and activation of Americans—e.g., through repeated outreach on Messenger—but went far beyond the actions of a social media marketer: “The IRA … leveraged techniques used by intelligence pros to target Americans, develop trust, get [people] to take action. When we think about how disinformation will spread in 2020, this kind of engagement with real, aligned Americans will be a big part of it.”

And lastly, the news media are not just neutral observers in the disinformation story. BuzzFeed media editor Craig Silverman explained how disinformation campaigns both feed, and feed off of, polarization that already exists in the public, and how journalists themselves are targeted by bad actors and exploited as channels for amplification. He encouraged reporters to be strategic about what they report, to expose origins and tactics in order to build resilience in their audiences, and to contextualize reporting on leaks. Stamos called on journalists to have a plan for how to handle hacked or stolen information in 2020. And multiple experts made a plea for the news media to explain the complexity of the issues and avoid bumper-sticker narratives of simple conflict. Hopefully Verify 2019 brought us closer to that goal.