What Will It Take to Combat Disinformation in the Digital Age?

Weekly Article
May 24, 2018

Last week, a Facebook post shared by America’s Last Line of Defense claimed that undocumented immigrants were queuing up to vote in Battsville, Arizona. But the image in the post was actually taken from a news report on an election in Mexico, and the town referenced in the post does not exist. It was just another example of the fake news stories that have circulated online recently in the United States with the aim of spreading disinformation and stoking societal tensions. Although disinformation and fake news have existed in societies around the world for hundreds of years, the digital landscape has magnified their potential impact. So where do we go from here?

On May 10, Future Tense, a partnership of Slate, New America, and Arizona State University; New America’s Education Policy Program and Open Technology Institute; and the First Amendment Coalition hosted a two-panel conversation that explored how policymakers, companies, and users can fight the growing threats of misinformation and fake news.

The first segment of the event focused on what role, if any, technology companies should play in moderating online speech that is believed to be false. For instance, Cambridge Analytica improperly acquired information about Facebook users so it could target potential voters—often with fake news and disinformation campaigns. As a result, policymakers have called on companies to ramp up their efforts to police falsified content on their platforms. Most companies, however, have shied away from taking responsibility for determining which forms of speech are legitimate and which are fake news. Facebook, for example, partnered with a number of fact-checking organizations to verify content on its behalf. Instead of removing questionable content, Facebook flagged it as potentially unreliable. The company recently adjusted this approach, however, after finding that the flags actually led users to share the flagged content more.

If companies were to begin policing content on their platforms based on concerns that it is falsified, it could lead to serious problems. Kevin Bankston, the director of the Open Technology Institute, pointed out that a large amount of the information that the mainstream media and online users alike call “fake news” is really “political opinion deployed in a difficult way.” Encouraging internet platforms to censor such content could raise serious concerns about freedom of speech and expression online, a perspective that is often overlooked in discussions of disinformation and fake news.

Facebook CEO Mark Zuckerberg’s recent testimony in Washington, D.C., demonstrated that policymakers are becoming increasingly skeptical of the powers major internet platforms wield. At the same time, they are encouraging these companies to use those powers to tackle issues such as fake news. Going forward, Bankston urged these companies, which act as the private managers of our online speech, to be more transparent about how they moderate that speech and to provide users with greater due process in this space.

Social media platforms may play an integral role in combating fake news and disinformation, but technical solutions alone cannot solve this issue. During the second panel of the event, the speakers emphasized that consumers also need to better educate themselves—which will require media literacy programs that, among other things, help people discern the fake from the legitimate.

During the panel, An-Me Chung, a senior fellow at the Mozilla Foundation, urged schools to focus on teaching students critical thinking skills. (The Education Policy Program’s Lisa Guernsey has written for Future Tense about how schools can integrate media literacy lessons in the classroom.) Chung stressed that there are significant equity and access issues to consider when designing and implementing these literacy curricula at scale. Currently, only some schools offer digital and information literacy education, and they are often the same schools that already have greater access to technology and related resources.

But many Americans still seem to be in denial about these problems, which is all the more reason to raise awareness of the issues and their potential solutions. This will undoubtedly be an uphill battle, but it is also a necessary one.

This article originally appeared in Future Tense, a collaboration among Arizona State University, New America, and Slate.