Publication Date: March 2018
Photo: Melanie Gonick/MIT
A new study by three MIT scholars has found that false news spreads more rapidly on the social network Twitter than real news does — and by a substantial margin.
“We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says Sinan Aral, a professor at the MIT Sloan School of Management and co-author of a new paper detailing the findings.
“These findings shed new light on fundamental aspects of our online communication ecosystem,” says Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab and director of the Media Lab’s Laboratory for Social Machines (LSM), who is also a co-author of the study. Roy adds that the researchers were “somewhere between surprised and stunned” at the different trajectories of true and false news on Twitter.
Moreover, the scholars found, the spread of false information is not primarily due to bots programmed to disseminate inaccurate stories. Instead, false news spreads faster on Twitter because people retweet inaccurate news items.
“When we removed all of the bots in our dataset, [the] differences between the spread of false and true news stood,” says Soroush Vosoughi, a co-author of the new paper and a postdoc at LSM whose PhD research helped give rise to the current study.
The study provides a variety of ways of quantifying this phenomenon: For instance, false news stories are 70 percent more likely to be retweeted than true stories are. It also takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number of people. When it comes to Twitter’s “cascades,” or unbroken retweet chains, falsehoods reach a cascade depth of 10 about 20 times faster than facts. And falsehoods are retweeted by unique users more broadly than true statements at every depth of cascade.
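The cascade terminology above — depth as the length of an unbroken retweet chain, breadth as the number of unique users at each level — can be made concrete with a short sketch. This is illustrative only, not the authors' code, and it assumes a cascade is represented as a simple tree of retweets:

```python
# Illustrative sketch of the cascade metrics described above, assuming a
# cascade is a tree of retweets: each tweet id maps to the id it
# retweeted, and the root (the original tweet) maps to None.
from collections import Counter

def cascade_metrics(parents):
    """Return (depth, breadth_by_depth) for a retweet cascade.

    parents: dict mapping tweet_id -> parent_tweet_id (None for the root).
    A tweet's depth is the number of hops back to the root; the cascade's
    depth is the maximum over all tweets, and the breadth at depth d is
    the number of tweets at that depth.
    """
    def depth_of(tweet):
        d = 0
        while parents[tweet] is not None:
            tweet = parents[tweet]
            d += 1
        return d

    depths = [depth_of(t) for t in parents]
    return max(depths), Counter(depths)

# Example: original tweet A is retweeted by B and C; D retweets B.
example = {"A": None, "B": "A", "C": "A", "D": "B"}
depth, breadth = cascade_metrics(example)
# depth == 2; breadth: 1 tweet at depth 0, 2 at depth 1, 1 at depth 2
```

Under this framing, the study's finding is that for false stories the depth grows faster over time and the breadth at each depth is larger than for true stories.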
The paper, “The Spread of True and False News Online,” is published today in Science.
Why novelty may drive the spread of falsity
The genesis of the study involves the 2013 Boston Marathon bombings and subsequent casualties, which received massive attention on Twitter.
“Twitter became our main source of news,” Vosoughi says. But in the aftermath of the tragic events, he adds, “I realized that … a good chunk of what I was reading on social media was rumors; it was false news.” Subsequently, Vosoughi and Roy — Vosoughi’s graduate advisor at the time — decided to pivot Vosoughi’s PhD focus to develop a model that could predict the veracity of rumors on Twitter.
Later, after consultation with Aral — another of Vosoughi’s graduate advisors, who has studied social networks extensively — the three researchers decided to try the approach used in the new study: objectively identifying news stories as true or false, and charting their Twitter trajectories. Twitter provided support for the research and granted the MIT team full access to its historical archives. Roy served as Twitter’s chief media scientist from 2013 to 2017.
To conduct the study, the researchers tracked roughly 126,000 cascades of news stories spreading on Twitter, which were cumulatively tweeted over 4.5 million times by about 3 million people, from the years 2006 to 2017.
To determine whether stories were true or false, the team used the assessments of six fact-checking organizations (factcheck.org, hoax-slayer.com, politifact.com, snopes.com, truthorfiction.com, and urbanlegends.about.com), and found that their judgments overlapped more than 95 percent of the time.
Of the 126,000 cascades, politics was the largest news category, with about 45,000, followed by urban legends, business, terrorism, science, entertainment, and natural disasters. The spread of false stories was more pronounced for political news than for news in the other categories.
The researchers also settled on the term “false news” as their object of study, as distinct from the now-ubiquitous term “fake news,” which has taken on multiple broad meanings.
The bottom-line findings produce a basic question: Why do falsehoods spread more quickly than the truth, on Twitter? Aral, Roy, and Vosoughi suggest the answer may reside in human psychology: We like new things.
“False news is more novel, and people are more likely to share novel information,” says Aral, who is the David Austin Professor of Management. And on social networks, people can gain attention by being the first to share previously unknown (but possibly false) information. Thus, as Aral puts it, “people who share novel information are seen as being in the know.”
The MIT scholars examined this “novelty hypothesis” in their research by taking a random subsample of Twitter users who propagated false stories, and analyzing the content of the reactions to those stories.
The result? “We saw a different emotional profile for false news and true news,” Vosoughi says. “People respond to false news more with surprise and disgust,” he notes, whereas true stories produced replies more generally characterized by sadness, anticipation, and trust.
So while the researchers “cannot claim that novelty causes retweets” by itself, as they state in the paper, the surprise people register when they see false news fits with the idea that the novelty of falsehoods may be an important part of their propagation.
Directions for further research
While the three researchers all think the magnitude of the effect they found is highly significant, their views on its civic implications vary slightly. Aral says the result is “very scary” in civic terms, while Roy is a bit more sanguine. But the scholars agree it is important to think about ways to limit the spread of misinformation, and they hope their result will encourage more research on the subject.
On the first count, Aral notes, the recognition that humans, not bots, spread false news more quickly suggests a general approach to the problem.
“Now behavioral interventions become even more important in our fight to stop the spread of false news,” Aral says. “Whereas if it were just bots, we would need a technological solution.”
Vosoughi, for his part, suggests that if some people are deliberately spreading false news while others are doing so unwittingly, then the phenomenon is a two-part problem that may require multiple tactics in response. And Roy says the findings may help create “measurements or indicators that could become benchmarks” for social networks, advertisers, and other parties.
The MIT scholars say it is possible that the same phenomenon occurs on other social media platforms, including Facebook, but they emphasize that careful studies are needed on that and other related questions.
In that vein, Aral says, “science needs to have more support, both from industry and government, in order to do more studies.”
For now, Roy says, even well-meaning Twitter users might reflect on a simple idea: “Think before you retweet.”
Prof. Sinan Aral speaks with Marketplace reporter Molly Wood about the proliferation of fake news. “If platforms like Facebook are to be responsible for the spread of known falsities, then they could use policies, technologies or algorithms to reduce or dampen the spread of this type of news, which may reduce the incentive to create it in the first place,” Aral explains.
Researchers from the Media Lab and Sloan found that humans are more likely than bots to be “responsible for the spread of fake news,” writes Paul Chadwick for The Guardian. “More openness by the social media giants and greater collaboration by them with suitably qualified partners in tackling the problem of fake news is essential.”
Jordan Webber of The Guardian addresses the rise of “fake news,” citing research from the Media Lab and Sloan. “I believe that social media is a turning point in human communication,” said Sloan Prof. Sinan Aral. “I believe it is having dramatic effect on our democracies, our politics, even our health.”
In an op-ed for The Washington Post, Megan McArdle shares her thoughts on research from the Media Lab and Sloan that identifies “fake news” as traveling six times faster than factual news. “The difference between social media and ‘the media’ is that the gatekeeper model…does care more about the truth than ‘the narrative,’” McArdle writes.
Prof. Sinan Aral writes for The New York Times about research he co-authored with Postdoc Soroush Vosoughi and Associate Prof. Deb Roy, which found that false news spreads “disturbingly” faster than factual news. “It could be, for example, that labeling news stories, in much the same way we label food, could change the way people consume and share it,” writes Aral.
Larry Greenemeier of Scientific American writes about a study from researchers at Sloan and the Media Lab that finds “false news” is “70% more likely to be retweeted than information that faithfully reports actual events.” “Although it is tempting to blame automated ‘bot’ programs for this,” says Greenemeier, “human users are more at fault.”
Researchers from Sloan and the Media Lab examined why false news spreads on Twitter more quickly than factual information. “Twitter bots amplified true stories as much as they amplified false ones,” writes Robinson Meyer for The Atlantic. “Fake news prospers, the authors write, ‘because humans, not robots, are more likely to spread it.’”