IU Researchers Create Tools to Track and Combat Fake News

Barack Obama is a human being, and so is Joseph Stalin. Since Stalin was a communist, an algorithm might be tempted to conclude that fellow human being Obama is one, too. “We wanted to make sure stuff like that doesn’t trick the algorithm,” Ciampaglia adds.
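To make that pitfall concrete, here is a minimal sketch in Python of a path-based check over a toy knowledge graph. The graph, the entities, and the degree-penalized scoring function are illustrative assumptions, not the IU team’s actual code; the point is only that a path running through a generic hub node like “human being” should count for far less than a direct statement.

```python
import math
from collections import defaultdict

# Toy undirected knowledge graph: nodes are entities or concepts,
# edges are stated relationships. Entirely illustrative.
edges = [
    ("Obama", "human being"),
    ("Stalin", "human being"),
    ("Stalin", "communist"),
    # "human being" is a hub that connects to almost everything,
    # so paths through it carry little evidence for a specific claim.
    ("Einstein", "human being"),
    ("Gandhi", "human being"),
    ("Curie", "human being"),
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def path_score(path):
    """Score a path: 1.0 for a direct edge; otherwise discount by the
    log-degree of every intermediate node, so generic hubs dilute it."""
    inner = path[1:-1]
    if not inner:
        return 1.0
    return 1.0 / (1.0 + sum(math.log(len(adj[n])) for n in inner))

# Naive transitive inference: Obama -> human being -> Stalin -> communist.
print(path_score(["Obama", "human being", "Stalin", "communist"]))  # ~0.30
print(path_score(["Stalin", "communist"]))  # 1.0, a direct statement
```

Discounting by the log-degree of intermediate nodes is one simple way to encode the intuition: the more things a node connects to, the less a path through it says about any particular claim.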

So the IU team ran further question-answering exercises, such as trying to create an algorithm that matches U.S. presidents with the correct First Ladies. Ciampaglia says there were many pitfalls to overcome in that exercise, not least teaching the algorithm to figure out the nature of the relationship between Barbara Bush and George W. Bush (she is his mother; she was First Lady as the wife of George H. W. Bush). Next, they tried matching U.S. states to capitals, and Oscar winners to the correct award categories.
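One way to picture those exercises is as a small evaluation harness: for each president, score every candidate First Lady and check whether the true spouse comes out on top. The sketch below is hypothetical; score_claim and its numbers are stand-ins for whatever scoring the team actually used.

```python
# Hypothetical matching harness. score_claim is a stand-in: a real
# system would score each (subject, object) claim against a knowledge
# graph rather than a hard-coded lookup.
def score_claim(subject: str, obj: str) -> float:
    toy_scores = {
        ("George W. Bush", "Laura Bush"): 0.9,
        ("George W. Bush", "Barbara Bush"): 0.6,  # related, but his mother
        ("Barack Obama", "Michelle Obama"): 0.9,
    }
    return toy_scores.get((subject, obj), 0.0)

truth = {"George W. Bush": "Laura Bush", "Barack Obama": "Michelle Obama"}
candidates = ["Laura Bush", "Barbara Bush", "Michelle Obama"]

correct = 0
for president, first_lady in truth.items():
    # Pick the highest-scoring candidate and compare to ground truth.
    best = max(candidates, key=lambda c: score_claim(president, c))
    correct += (best == first_lady)
print(f"top-1 accuracy: {correct / len(truth):.0%}")
```

The Barbara Bush case shows why this is not trivial: she is strongly related to George W. Bush, just not as his First Lady, so a scorer that measures only relatedness would get the match wrong.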

“We got very encouraging results then,” he says, adding that the team is optimistic that many facets of fact-checking can eventually be automated, thereby making it easier and faster to detect misinformation. “But we’re still far from a machine that understands natural language.”

IU researchers hope to integrate the tools they’ve created with the software fact-checking organizations already use, making the job of verifying information less tedious and more automatic. All of the tools OSoMe is developing are open source and, for now, intended purely for research rather than commercial use.

While IU researchers delve deeper into fact-checking, how can people mitigate the effects of fake news and misinformation? Ciampaglia says getting out of your echo chamber is the first step. “Social media amplifies cognitive bias. Modeling helps us understand how misinformation spreads—if you change the parameters, will that affect how it spreads? We’re seeing that if people are bombarded with information, they lose critical-thinking skills.”
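A hedged sketch of the kind of modeling Ciampaglia describes: a simple cascade simulation on a random network, where a single parameter, the probability that an exposed user reshares a story, decides whether misinformation fizzles or spreads widely. The network, the parameter values, and the independent-cascade model itself are illustrative assumptions, not one of OSoMe’s actual models.

```python
import random

def simulate_cascade(n_nodes=1000, avg_degree=8, p_share=0.1, seed=42):
    """Fraction of nodes reached by a cascade from one seed node
    in an Erdos-Renyi-style random graph (illustrative model)."""
    rng = random.Random(seed)
    p_edge = avg_degree / (n_nodes - 1)
    neighbors = [[] for _ in range(n_nodes)]
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p_edge:
                neighbors[i].append(j)
                neighbors[j].append(i)
    exposed = {0}      # node 0 posts the story
    frontier = [0]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in neighbors[node]:
                # Each frontier user passes the story to an unexposed
                # neighbor with probability p_share (independent cascade).
                if nb not in exposed and rng.random() < p_share:
                    exposed.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(exposed) / n_nodes

for p in (0.05, 0.10, 0.20):
    print(f"share probability {p:.2f} -> reach {simulate_cascade(p_share=p):.1%}")
```

In models like this, pushing the share probability past a critical threshold flips the outcome from a cascade that dies out to one that reaches much of the network, which is one way to see why exposure, not the raw number of fake articles, is the quantity that matters.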

He’s happy to see Facebook, Google, and Twitter beginning to crack down on the proliferation of fake news, although he correctly notes the issue is not how many fake articles are out there, but rather how much exposure they get.

“The real concern is the financial incentive to produce misinformation—it’s very easy to build a professional website, and the media landscape is so huge that it’s hard to tell what’s fake,” he says. “Cutting incentives will reduce the problem, but it won’t solve the root cause, which has more to do with polarization of society and the unintended consequences of personalized news. It can’t be solved purely through technology, but I’m moderately optimistic.”

Author: Sarah Schmid Stevenson

Sarah is a former Xconomy editor. Prior to joining Xconomy in 2011, she did communications work for the Michigan Economic Development Corporation and the Michigan House of Representatives. She has also worked as a reporter and copy editor at the Missoula Independent and the Lansing State Journal. She holds a bachelor's degree in Journalism and Native American Studies from the University of Montana and proudly calls Detroit "the most fascinating city I've ever lived in."