Panel examines strategies for detecting, regulating fake news

Tuesday, November 27, 2018 - 8:53pm

College of Engineering Professor Rada Mihalcea discusses the automatic detection of fake news at the Catching Fake News panel discussion presented by the Dissonance Event Series in Rackham Assembly Hall Tuesday.

Alec Cohen/Daily

Five speakers shared their research in regulating false information online to a crowd of approximately 50 students and faculty members Tuesday night at Rackham Assembly Hall. The panel, titled Catching Fake News, was part of the University of Michigan Dissonance Event Series, which focuses on the intersection of technology, privacy, policy, security and law.

Panelists included Mark Ackerman, a School of Information professor, Ceren Budak, a School of Information assistant professor, Engineering professor Rada Mihalcea and Fredrik Laurin, a Knight-Wallace fellow. The panel was moderated by Brendan Nyhan, a professor in the Ford School of Public Policy.

Nyhan opened the event by sharing the history of fake news. He said fake news is now more widely read than ever before. Nyhan also said humans struggle to handle the volume of content they are exposed to daily, so machines can be used to help ease the issue of volume.

“‘What should we do about it?’ is the values question implicated here,” Nyhan said. “There are a lot of questions about how we can more effectively identify that dubious content and potentially intervene to limit its reach.”

Budak is collaborating on a new book titled “Words That Matter.” She shared research on the overarching prevalence of fake news. Budak said she utilized tweets, URLs, interviews and domains to see the overall prevalence of fake news, how content has changed over time and its connection to election dynamics.

From this research, Budak said fake and non-fake news are comparable in terms of shares. However, because non-fake news is spread across a much larger set of domains, Budak said real news was still more prevalent overall than fake news.

Budak also noted the correlation between fake news and favorability in the 2016 election and her team’s use of the log-odds ratio, a measure that assigns each word a continuous score of how strongly it is associated with fake news, to identify words that are more uniquely associated with different types of news. She said one issue she ran into was deciding which lists of news domains to use in research.
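The log-odds ratio Budak mentions can be illustrated with a minimal sketch. The tiny corpora and smoothing constant below are invented for illustration; her team’s actual data and implementation are not described in detail here.

```python
import math
from collections import Counter

# Hypothetical toy corpora standing in for labeled fake/real news text.
fake_docs = ["shocking secret cure doctors hate", "shocking hoax exposed"]
real_docs = ["city council approves budget", "council debates budget plan"]

fake_counts = Counter(w for d in fake_docs for w in d.split())
real_counts = Counter(w for d in real_docs for w in d.split())
vocab = set(fake_counts) | set(real_counts)

def log_odds(word, alpha=0.5):
    """Smoothed log-odds ratio: positive scores mark words more
    strongly associated with the fake-news corpus."""
    f = fake_counts[word] + alpha
    r = real_counts[word] + alpha
    f_total = sum(fake_counts.values()) + alpha * len(vocab)
    r_total = sum(real_counts.values()) + alpha * len(vocab)
    return math.log(f / (f_total - f)) - math.log(r / (r_total - r))

# Rank every word from most "fake"-associated to most "real"-associated.
ranked = sorted(vocab, key=log_odds, reverse=True)
```

Each word gets a continuous score rather than a binary label, which is what lets researchers compare how distinctive a word is across the two kinds of news.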

“It is still concerning in that we are seeing how behavior seems to be aligned with the campaign dynamics,” Budak said. “How much does the prevalence of fake news matter? The research community doesn’t yet have a good understanding. Until we can do a better job, we have to be careful of what we use in our analysis.”

Similarly, Mihalcea began her portion of the panel by sharing the difficulty of finding fake news data sets. Because of this, she said her studies focused on false information naturally occurring in news as well as in celebrity news.

Mihalcea said she used a machine learning classifier to examine linguistic features of news text. Her results showed machines perform comparably to humans in detecting fake news.

She also touched on the qualitative properties of fake news. Mihalcea said fake news is less likely to use first- or second-person pronouns and phrases that reflect certainty.
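The kind of linguistic features Mihalcea describes can be sketched in a few lines. The word lists and the sample sentence below are invented for illustration, not taken from her study; a real classifier would learn weights over many such features.

```python
# Hypothetical feature extractor: rates of first/second-person pronouns
# and certainty phrases, two cues the panel linked to fake news.
FIRST_SECOND_PRONOUNS = {"i", "we", "you", "me", "us", "my", "our", "your"}
CERTAINTY_WORDS = {"always", "never", "definitely", "certainly", "undoubtedly"}

def linguistic_features(text):
    """Return per-word rates of two linguistic cues in a text."""
    words = text.lower().split()
    n = max(len(words), 1)  # guard against empty input
    return {
        "pronoun_rate": sum(w in FIRST_SECOND_PRONOUNS for w in words) / n,
        "certainty_rate": sum(w in CERTAINTY_WORDS for w in words) / n,
    }

feats = linguistic_features("We definitely saw the results ourselves")
```

On Mihalcea’s account, lower values of both rates would push a classifier toward labeling a text as fake, since fake news tends to avoid these markers.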

LSA sophomore Connor Cain noted the complexity of the issue of fake news. He said the abilities of machines in comparison to humans in detecting fake news was surprising.

“In general, this is just an important topic that is difficult to tackle,” Cain said. “I knew already that as humans we’re really bad at seeing this, but [that] a machine can do better than us is really interesting.”

Ackerman discussed the distinction between misinformation and disinformation. Specifically, he shared the complexities of misinformation that make fake news difficult to detect. While disinformation is meant to deceive, he said it is easier to detect, so the real problem lies in stopping the spread of misinformation.

He said because people who write misinformation are good at it and the public is generally bad at noticing falsities, he approaches the problem through the lens of conspiracy.

However, while he said machine programs have made strides in stopping fake news, no one program can solve the issue alone. He also noted that the decentering of expertise and the rise of epistemic communities play a role in supporting the success of misinformation, which in his opinion is a more timely issue than fake news alone.

“Fake news is just so 2017 and it now means almost everything so … It’s 14-year-olds in Eastern Europe putting out clickbait, it’s anything that our president doesn’t like, lots of different things,” Ackerman said. “The problem is that it’s a mashup of truth and just a little bit of disinformation leading you down the wrong path.”

After the speakers presented individually, the full panel convened to answer questions from the audience. The panelists touched on social media platforms and their role in housing and regulating misinformation, the scale of fake news and possible solutions.

Nyhan noted the scale of this issue beyond just the U.S.

“We’ve had a very U.S.-focused conversation here, but the evidence suggests that the problems are potentially much worse in countries that don’t have as strong of democratic institution[s], that don’t have as high of literacy and information technology numbers,” Nyhan said. “It’s being done at a vast, global scale that we can’t observe and so there’s a real social problem.”