We’re now living in an era where, thanks to the Internet, we have easier access to more raw information than at any other time in the planet’s history. Yet, despite these reams of data, the presence of facts has seemingly never mattered less in public discourse. Now, a team at Indiana University is hard at work trying to figure out why, and what can be done to reverse this trend.
The Internet was supposed to be a great, democratizing force when it came to knowledge, learning, and news dissemination. But in addition to creating new educational and journalistic paradigms, we have incidents like the one that happened in Washington, DC, this past weekend, where a disturbed gunman stormed into a pizzeria in search of pedophiles.
The incident was traced back to a leaked fundraising e-mail from Hillary Clinton’s campaign manager that was transmogrified by trolls on 4chan—despite a complete lack of real-world evidence—into reports of a child sex ring run by the Democratic Party. (Vox does a good job of explaining how the conspiracy theory went viral from there.)
Now that it’s becoming clear that fake news influenced the way people voted for president, the trend has been weighing heavily on the national consciousness. Once the annoying province of dittoheads and racist uncles, fake news has become commonplace on social media and is being taken as gospel by an increasing number of Americans.
Facebook, where a reported 44 percent of Americans get their news, has been in hot water for how it manages so-called trending topics, and the way its algorithms help promote fake news. In an October report, BuzzFeed found that one of Facebook’s more popular promoted posts was a humdinger about how George W. Bush and Barack Obama conspired to rig the 2008 election. BuzzFeed also found that 38 percent of posts on conservative pages and 19 percent of posts on liberal pages featured false or misleading content.
So, if the website where so many Americans get their news is rife with misinformation, what can be done to combat fake news? Professor Filippo Menczer leads a team of researchers at IU’s Center for Complex Networks and Systems Research that is trying to find out. In May, the center launched the Observatory on Social Media (OSoMe), a set of digital tools and a corresponding application programming interface (API) for anyone seeking to analyze social media trends.
Giovanni Luca Ciampaglia, an assistant research scientist with the center, says the purpose of OSoMe is to prove that computer techniques can automate part of fact-checking. The tools include Trends, which people can use to study the use of hashtags over time; Networks, which offers visualizations of retweets, mentions, and quotes; Map, which shows where conversations happen; Movies, which shows how conversations about a meme unfold over time; and BotOrNot, which helps determine whether a social media account is run by a human or software.
Ciampaglia says the toolkit is designed to help the general public get a better understanding of how information spreads on social media. “Even just using public resources like Wikipedia, you can extract a lot of knowledge that can be used for fact-checking,” he explains.
When the researchers initially tried to automate fact-checking, they began by programming an algorithm to pull information from the “info boxes” on Wikipedia that summarize the primary facts on any given topic. Those info boxes usually contain links to other information, Ciampaglia points out.
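To illustrate the kind of extraction involved, here is a minimal sketch of pulling field/value pairs out of an infobox. The wikitext sample and the parsing rules are simplified assumptions for illustration, not the researchers’ actual code; real infobox markup is considerably messier.

```python
import re

# Hand-made sample in Wikipedia's infobox style (an assumption for
# illustration; not fetched from Wikipedia).
sample_wikitext = """{{Infobox person
| name   = Barack Obama
| spouse = Michelle Obama
| office = President of the United States
}}"""

def parse_infobox(wikitext):
    """Return a dict of field -> value from simple '| key = value' lines."""
    facts = {}
    for line in wikitext.splitlines():
        match = re.match(r"\|\s*(\w+)\s*=\s*(.+)", line.strip())
        if match:
            key, value = match.groups()
            facts[key] = value.strip()
    return facts

facts = parse_infobox(sample_wikitext)
print(facts["spouse"])  # → Michelle Obama
```

Each extracted pair becomes a candidate fact (here, a “spouse” relation) that downstream fact-checking logic can use.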
“Each concept on Wikipedia is related to other concepts,” Ciampaglia says. “Imagine you didn’t know Barack Obama was married to Michelle Obama. Maybe you can reconstruct that relationship by seeing he has a daughter who is also the daughter of Michelle. The idea is the algorithm walks over the graph in search of related concepts.”
That might sound easy, he says, but there are immediate complications. For instance,