LIVING LABORATORY

There will be election manipulation in 2020. Scientists can’t stop it, but they can study it

How to protect against election interference from Russia?
Image: Reuters/Denis Sinyakov

There’s no doubt Russian influence operations spread misinformation in the 2016 election. Plenty of questions remain, though: Did Russian efforts successfully change election results? How can we know if election interference happens again? And, perhaps most importantly, how can we identify it early enough to stop it?

Faced with the prospect of ongoing election interference, two MIT professors have published a paper in Science calling for researchers to study election interference, and for social media companies to release the data that makes research possible.

The article, by MIT management professor Sinan Aral and marketing professor Dean Eckles, lays out the four steps necessary to study the impact of election interference: (1) track exposure to manipulation; (2) combine that exposure data with data on voting behavior; (3) assess the effects of manipulative messaging on behavior; (4) calculate the aggregate impact of changes in behavior on the election result. 
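To make the arithmetic of those four steps concrete, here is a minimal sketch in Python. It is not from the paper: the data structure, field names, and the naive exposed-versus-unexposed comparison are illustrative assumptions, standing in for the validated voter files and careful causal adjustments a real study would require.

```python
# A minimal, hypothetical sketch of steps 2-4: exposure data already linked
# to voting records (step 2), a naive estimate of the persuasion effect
# (step 3), and that effect scaled up to total votes shifted (step 4).
# All fields and figures are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class VoterRecord:
    exposed: bool            # step 1: saw manipulative messaging?
    voted_for_target: bool   # step 2: linked voting outcome

def persuasion_effect(records: list[VoterRecord]) -> float:
    """Step 3: difference in vote share between exposed and unexposed
    voters. A real analysis would have to adjust for confounding;
    this shows only the arithmetic skeleton."""
    def share(group: list[VoterRecord]) -> float:
        return sum(r.voted_for_target for r in group) / len(group)
    exposed = [r for r in records if r.exposed]
    unexposed = [r for r in records if not r.exposed]
    return share(exposed) - share(unexposed)

def votes_shifted(effect: float, n_exposed: int) -> float:
    """Step 4: scale the per-voter effect to total votes shifted,
    to be compared against the victory margin in a given state."""
    return effect * n_exposed
```

The point of step 4 is proportionality: a persuasion effect of half a percentage point among a million exposed voters is roughly 5,000 votes, which decides nothing in a landslide but everything in a state carried by a few thousand.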

Researchers are currently working on all four steps. But they’re hobbled by problems of access. It’s difficult to connect exposure with voting behavior, for example, because researchers have to rely on self-reports to know whether and how people vote. A solution, Aral and Eckles hope, lies in the data that a variety of platforms already collect from voters’ smartphones.

To study the effects of misinformation on voting behavior more accurately, they write, researchers could track message recipients, physically, all the way to the polls, an obvious conflict with privacy regulations. But this location data is already used, unbeknownst to most smartphone owners, to carry out political campaigns, says Eckles. Perhaps citizens can be convinced to forgo some privacy to protect themselves and their democratic systems against greater invasions.

Tracking exposure is also no mean feat. How do you know whether someone really read, or even saw, a message, as opposed to wandering out of the room with their laptop on? And it’s difficult to compare the impact of exposure across a wide variety of platforms. It took from 1996 to 2008 for Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, to figure out a way to reliably compare messages shared on radio, cable, and broadcast television. It’s similarly complicated to get clear data comparing the effect of seeing a message on Facebook versus Twitter, or any other platform.

Even if researchers can overcome these hurdles, that doesn’t mean we’ll understand the dynamics of manipulation fast enough to stop it. “You have to wait until voting happens to know if voting choices are affected,” says Eckles. Bad actors may circulate propaganda to no effect—only election day votes will reveal their impact.

And because online platforms and misinformation tactics evolve so quickly, academics run the risk of devotedly studying what went wrong in the past, only to miss entirely what happens next. Online platforms have mechanisms for reporting misleading images, but what happens when humans can’t tell a deepfaked image or video from a real one? As Eckles says: “When you adapt your defenses, your adversary adapts their offense.”

Even if academics can’t quite keep up, though, Eckles believes studying the impact of misinformation can guide policy responses. “What kinds of ad reviews should be conducted? What sort of info should platforms be compelled to provide to governments?” he says. The article in Science calls on all platforms to give academics access to data in order to better track exposure; the various algorithms Facebook uses, for example, create random variation that can help quantify the effects of exposure to manipulation, the article notes.
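That random variation is valuable because it amounts to a natural experiment. The sketch below is a hypothetical illustration, not anything from the paper: it assumes a platform randomly assigns users to feed-ranking variants that differ in how much manipulative content they surface, and it uses that random assignment to back out the effect of exposure on voting, a standard instrumental-variables (Wald) calculation.

```python
# Hypothetical sketch: exploiting a platform's randomized feed experiment.
# Because variant assignment is random, any difference in outcomes between
# variants can be traced to the exposure difference the variants induce.
# All inputs are illustrative assumptions; only platforms hold such logs.

def wald_estimate(assigned, exposed, voted):
    """Estimate the effect of exposure on voting, using random
    variant assignment as an instrument.

    assigned: 0/1 feed-variant assignment per user (randomized)
    exposed:  0/1 flag for whether the user saw the content
    voted:    0/1 voting outcome linked to each user
    """
    def mean_where(xs, flag):
        # Average of xs over users whose assignment equals flag.
        vals = [x for a, x in zip(assigned, xs) if a == flag]
        return sum(vals) / len(vals)

    # Effect of assignment on the outcome (intent-to-treat)...
    itt = mean_where(voted, 1) - mean_where(voted, 0)
    # ...scaled by the effect of assignment on exposure (first stage).
    first_stage = mean_where(exposed, 1) - mean_where(exposed, 0)
    return itt / first_stage
```

The catch, and the article’s point, is that only the platforms hold the assignment and exposure logs such an estimator needs, which is why the authors press them to open that data to researchers.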

It’s not all on Facebook and Twitter, though. In 2016, says Jamieson, who recently published a book presenting evidence that Russian hackers helped elect Trump, the Russian presence extended far beyond those sites. “It was YouTube, Pinterest, Instagram. Pick a platform, they were probably there,” she says. Her team is currently examining every possible platform, from WhatsApp to TikTok, to prepare for 2020. All the data in the world won’t mean anything unless researchers can point to examples of manipulative foreign messages, she says: “The question is, will we know that it’s there?”