Report / In Depth

Everything in Moderation

An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content


Abstract

Internet platforms are increasingly adopting artificial intelligence and machine-learning tools to shape the content we see and engage with online. Algorithmic decision-making is becoming particularly prevalent in online content moderation, as companies attempt to comply with speech-related legal frameworks while also promoting safety, positive user experiences, and free expression on their platforms.

This report is the first in a series of four that will explore how internet platforms use automated tools to shape the content we see and to influence how that content is delivered to us. The reports will focus on content moderation based on a platform’s content policies, the ranking of content in newsfeeds and search results, the optimization and targeting of advertisement delivery, and content recommendations based on users’ prior content consumption. The series will also explore how internet platforms, policymakers, and researchers can better promote fairness, accountability, and transparency around these automated tools and decision-making practices.

Acknowledgments

In addition to the many stakeholders across civil society and industry that have taken the time to talk to us over the years about our work on content moderation and transparency reporting, we would particularly like to thank Nathalie Maréchal from Ranking Digital Rights for her help in drafting this report. We would also like to thank Craig Newmark Philanthropies for its generous support of our work in this area.

More About the Authors

Spandana Singh

Policy Analyst, Open Technology Institute
