Trained for Deception: How Artificial Intelligence Fuels Online Disinformation

A report from the Coalition to Fight Digital Deception
Policy Paper
Sept. 14, 2021

Social media platforms increasingly rely on artificial intelligence (AI) and machine learning (ML)-based tools to moderate and curate organic content online and to target and deliver advertisements. Many of these tools are designed to maximize engagement, which means they can also amplify sensationalist and harmful content such as misinformation and disinformation. This memo explores how AI and ML-based tools used for ad targeting and delivery, content moderation, and content ranking and recommendation systems spread and amplify misinformation and disinformation online.

It also outlines existing legislative proposals in the United States and the European Union that aim to tackle these issues. It concludes with recommendations for how internet platforms and policymakers can better address the algorithmic amplification of misleading information online. These include encouraging platforms to: provide greater transparency around their policies, processes, and impact; direct more resources toward improving fact-checking, moderation efforts, and the development of effective AI and ML-based tools; give users access to more robust controls; and give researchers access to meaningful data and robust tools. Although platforms have made some progress in implementing such measures, we as a coalition believe that platforms can do more to meaningfully and effectively combat the spread of misinformation and disinformation online. However, recognizing the financial incentives underlying platforms’ advertising-driven business models—and their influence on platform approaches to misinformation and disinformation—we encourage lawmakers to pursue appropriate legislation and policies to promote greater transparency and accountability around online efforts to combat misleading information.

Editorial disclosure: This brief discusses policies by Amazon, Facebook (including Instagram), Google (including YouTube), Microsoft (including LinkedIn, whose co-founder is on New America’s Board), and Twitter, all of which are funders of work at New America but did not contribute funds directly to the research or writing of this piece. View our full list of donors at www.newamerica.org/our-funding.

Related Topics
Section 230 Content Moderation Algorithmic Decision-Making Platform Accountability