Existing Frameworks
A variety of frameworks have been developed to understand, categorize, and address how misinformation and disinformation are created, spread, and mitigated. In this chapter, we explore the major existing frameworks, categorizing them based on their focus and approach.
Typology-Based Frameworks
Typology-based frameworks focus on categorizing misinformation and disinformation into distinct types based on various factors such as intent, content, and dissemination method.
Intent-Based Typologies
One of the most common typological approaches is to classify false information by the intent behind its creation and dissemination. For example, some frameworks distinguish between misinformation (unintentional) and disinformation (intentional) as primary categories. Further subcategories might include malinformation, which refers to the deliberate spread of truthful information with the intent to cause harm, such as doxing (the public release of private information).1
Content-Based Typologies
Another approach focuses on the nature of the content itself. These frameworks classify misinformation and disinformation based on the type of falsehood or distortion present in the content. For example, Wardle introduced a typology that categorizes false information into seven types: (1) satire or parody, (2) false connection, (3) misleading content, (4) false context, (5) impostor content, (6) manipulated content, and (7) fabricated content.2 Each type represents a different way in which the truth is distorted, providing a detailed map of the misinformation landscape.
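Because the typology is a fixed set of discrete categories, it translates directly into a data structure for annotation or tagging work. The following is a minimal sketch in Python; the type names follow Wardle's labels, while the one-line description strings are our own paraphrases of her definitions.

```python
from enum import Enum

class InformationDisorderType(Enum):
    """Wardle's seven types of mis- and disinformation (descriptions paraphrased)."""
    SATIRE_OR_PARODY = "no intent to harm, but potential to fool"
    FALSE_CONNECTION = "headline, visual, or caption does not support the content"
    MISLEADING_CONTENT = "information framed misleadingly about an issue or person"
    FALSE_CONTEXT = "genuine content shared with false contextual information"
    IMPOSTOR_CONTENT = "genuine sources impersonated"
    MANIPULATED_CONTENT = "genuine content or imagery altered to deceive"
    FABRICATED_CONTENT = "wholly invented content designed to deceive and cause harm"

# Example: tagging an item during annotation.
label = InformationDisorderType.FALSE_CONTEXT
print(label.name, "-", label.value)
```

An explicit enumeration like this keeps labels consistent across annotators and makes downstream tallies by type straightforward.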
Dissemination Method-Based Typologies
Some frameworks classify misinformation and disinformation based on the methods and channels used to spread them. These might include distinctions between organic spread (e.g., via social media sharing) and coordinated campaigns (e.g., through bot networks or paid advertisements). Understanding the dissemination methods helps identify the mechanisms by which false information reaches and influences audiences.3
Process-Oriented Frameworks
Process-oriented frameworks focus on the lifecycle of misinformation and disinformation, examining how these phenomena are created, disseminated, consumed, and ultimately affect audiences. These frameworks often draw from communication and media studies to map out the stages through which false information travels and the factors that influence each stage.
The Information Disorder Framework
This framework identifies three key stages in the lifecycle of false information: (1) creation, (2) production, and (3) distribution.4 It also distinguishes three elements: agents (creators, producers, and distributors), messages (the content itself), and interpreters (audiences). It is useful for understanding how misinformation and disinformation are constructed and spread across different platforms and contexts.
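To make the moving parts concrete, the framework's three elements and three stages can be sketched as a small data model. This is a minimal sketch in Python; apart from the element and phase labels themselves, the class and field names are our own illustrative choices, not terms from Wardle and Derakhshan.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    """The three stages of information disorder."""
    CREATION = "creation"          # the message is conceived
    PRODUCTION = "production"      # the message is turned into a media product
    DISTRIBUTION = "distribution"  # the product is published or pushed out

@dataclass
class Agent:
    """A creator, producer, or distributor of a message."""
    role: str        # "creator", "producer", or "distributor"
    motivation: str  # e.g., political, financial, social

@dataclass
class Message:
    """The content itself, tracked through its phases."""
    content: str
    phase: Phase

@dataclass
class Interpreter:
    """An audience member who decodes, and may re-share, the message."""
    audience: str
```

Separating the message from its phase makes it possible to represent the same content being re-produced and re-distributed by different agents.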
The Misinformation Lifecycle
Another process-oriented approach is the misinformation lifecycle model, which outlines the stages through which misinformation moves from its initial creation to its eventual impact on public perception. These stages typically include creation, amplification, dissemination, and correction.5 This model emphasizes the role of social media algorithms, news cycles, and audience engagement in the amplification and spread of misinformation.
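Read as a process, the lifecycle is essentially a state machine over the four stages. Below is a minimal sketch in Python: the linear chain follows the model's ordering as described above, while the loop back from dissemination to amplification is our own assumption about how items recirculate.

```python
from enum import Enum, auto

class Stage(Enum):
    CREATION = auto()
    AMPLIFICATION = auto()
    DISSEMINATION = auto()
    CORRECTION = auto()

# Permitted stage transitions. The chain follows the model's ordering;
# the dissemination -> amplification loop is an assumption, reflecting
# items that recirculate before any correction appears.
TRANSITIONS = {
    Stage.CREATION: {Stage.AMPLIFICATION},
    Stage.AMPLIFICATION: {Stage.DISSEMINATION},
    Stage.DISSEMINATION: {Stage.AMPLIFICATION, Stage.CORRECTION},
    Stage.CORRECTION: set(),
}

def advance(current: Stage, nxt: Stage) -> Stage:
    """Move an item to the next lifecycle stage, if the transition is allowed."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"invalid transition: {current.name} -> {nxt.name}")
    return nxt
```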
The Knowledge-Based Approach
This approach focuses on how individuals process and interpret misinformation.6 It examines the cognitive processes that occur when people encounter false information, including how they decide whether to believe it or share it. These frameworks suggest interventions at different stages of information processing, such as providing corrective information or promoting critical thinking skills, to reduce the spread and impact of misinformation.
Impact-Oriented Frameworks
Impact-oriented frameworks are concerned with the consequences of misinformation and disinformation for individuals, communities, and societies. They assess the effects of false information and help identify its broader implications for public opinion. Each model below represents a cluster of frameworks rather than a single framework; the label maps many individual frameworks onto one family.
The Trust Erosion Model
This family of frameworks explores how disinformation campaigns erode public trust in institutions, media, and democracy.7 It posits that repeated exposure to false information, especially when it aligns with existing biases or distrust, leads to a gradual decline in trust. These models are particularly relevant for understanding the long-term societal impacts of disinformation and the challenges of restoring trust once it has been damaged.
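The cited frameworks are qualitative, but the "gradual decline" dynamic can be illustrated with a deliberately simple toy model: treat trust as a quantity that decays multiplicatively with each exposure, decaying faster when the content aligns with existing biases. Everything below, including the function name, parameters, and constants, is a hypothetical illustration rather than anything drawn from the literature.

```python
def trust_after_exposures(initial_trust: float,
                          exposures: int,
                          decay: float = 0.97,
                          bias_alignment: float = 1.0) -> float:
    """Toy model: each exposure multiplies trust by a per-exposure decay factor.

    bias_alignment > 1 steepens the decline when the false content matches
    the audience's existing biases or distrust. All values are illustrative,
    not estimated from data.
    """
    effective_decay = decay ** bias_alignment
    return initial_trust * effective_decay ** exposures

# e.g., fifty exposures erode trust of 0.8 to roughly 0.17 at these settings
print(trust_after_exposures(0.8, 50))
```

The multiplicative form captures one claim of this family: each additional exposure chips away at whatever trust remains, which is part of why restoring trust after a sustained campaign is so difficult.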
The Public Health Impact Model
This family of models examines the spread of health-related misinformation (e.g., about vaccines or treatments) and its impact on public health, such as vaccine hesitancy or non-compliance with health guidelines.8 These models also consider the role of public health communication in countering misinformation and promoting accurate information.
The Behavioral Impact Model
This cluster of frameworks looks at how misinformation and disinformation influence individual and collective behavior.9 It considers factors such as cognitive biases, social influence, and emotional responses that lead individuals to accept or act on false information. These frameworks are useful for designing interventions that address the behavioral drivers of misinformation spread, such as social norms campaigns or behavioral nudges.
Actor-Centric Frameworks
Actor-centric frameworks focus on the roles and motivations of different actors involved in the creation, dissemination, and consumption of misinformation and disinformation. These frameworks analyze the behaviors, strategies, and networks of various stakeholders, including individuals, organizations, governments, and platforms.
Actor-Network Theory
This sociological framework explores the complex relationships between different actors (both human and non-human, such as algorithms) involved in the spread of misinformation and disinformation.10 Actor-network theory examines how these actors form networks that facilitate the dissemination of false information and how power dynamics within these networks influence the spread and impact of misinformation. The framework is useful for understanding the interconnectedness of different actors and the systemic nature of misinformation ecosystems.
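Purely as a data-structure illustration of the network view, the sketch below represents human and non-human actors as nodes with directed dissemination links and computes which actors a piece of content can reach. The actor names and link structure are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """A node in the network: human or non-human (e.g., an algorithm)."""
    name: str
    kind: str                                 # "human" or "non-human"
    links: set = field(default_factory=set)   # actors this one passes content to

# A toy network: a troll account feeds a ranking algorithm, which
# surfaces the content to a casual sharer.
network = {
    "troll_account": Actor("troll_account", "human", {"ranking_algorithm"}),
    "ranking_algorithm": Actor("ranking_algorithm", "non-human", {"casual_sharer"}),
    "casual_sharer": Actor("casual_sharer", "human"),
}

def reach(net: dict, start: str) -> set:
    """Actors reachable from `start` by following dissemination links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(net[node].links)
    return seen - {start}

print(reach(network, "troll_account"))  # {'ranking_algorithm', 'casual_sharer'}
```

Treating the algorithm as a node in its own right, rather than as a neutral channel, is the distinctive move the theory makes; the reachability query shows how a single non-human actor can sit on every path from creator to audience.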
The Political Economy Framework
This approach focuses on the economic and political motivations behind disinformation campaigns.11 It examines how state and non-state actors use disinformation as a tool for political gain, financial profit, or social influence. The framework also considers the role of media ownership, advertising revenue models, and regulatory environments in shaping the spread of misinformation and disinformation. Understanding these motivations is crucial for designing policies and interventions that address the root causes of disinformation.
The Platform Responsibility Framework
With the rise of social media and digital platforms, this framework addresses the responsibilities of these platforms in managing misinformation and disinformation.12 It examines the role of algorithms, content moderation policies, and platform governance in either exacerbating or mitigating the spread of false information. The framework also explores the ethical and legal implications of platform actions, such as content removal or algorithmic transparency.
Citations
1. Elinor Carmi, Simeon J. Yates, Eleanor Lockley, and Alicja Pawluczuk, “Data Citizenship: Rethinking Data Literacy in the Age of Disinformation, Misinformation, and Malinformation,” Internet Policy Review 9, no. 2 (2020): 1–22.
2. “Fake News. It’s Complicated,” First Draft, accessed September 25, 2024.
3. Yariv Tsfati, Hajo G. Boomgaarden, Jesper Strömbäck, Rens Vliegenthart, Alyt Damstra, and Elina Lindgren, “Causes and Consequences of Mainstream Media Dissemination of Fake News: Literature Review and Synthesis,” Annals of the International Communication Association 44, no. 2 (2020): 157–173.
4. Claire Wardle and Hossein Derakhshan, Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking (Strasbourg, France: Council of Europe, 2017).
5. Alisson Andery Puska and Roberto Pereira, “Exploring Digital Misinformation as a Sociotechnical Phenomenon: Insights from a Small-Scale Study,” in Proceedings of the XXII Brazilian Symposium on Human Factors in Computing Systems (2023), 1–12.
6. Leticia Bode and Emily K. Vraga, “See Something, Say Something: Correction of Global Health Misinformation on Social Media,” Health Communication 33, no. 9 (2017): 1131–1140.
7. Tosan Atele-Williams and Stephen Marsh, “Information Trust Model,” Cognitive Systems Research 80 (2023): 50–70.
8. Cristina M. Pulido, Laura Ruiz-Eugenio, Gisela Redondo-Sama, and Beatriz Villarejo-Carballido, “A New Application of Social Impact in Social Media for Overcoming Fake News in Health,” International Journal of Environmental Research and Public Health 17, no. 7 (2020): 2430.
9. Elie Alhajjar, “Alternate Reality—The Use of Disinformation to Normalize Extremism,” in The Great Power Competition Volume 3: Cyberspace: The Fifth Domain, ed. Adib Farhadi, Ronald P. Sanders, and Anthony Masys (New York: Springer International Publishing, 2022), 157–165.
10. Bruno Latour, Reassembling the Social: An Introduction to Actor-Network-Theory (Oxford: Oxford University Press, 2005).
11. Kathy Dobson and Jeremy Hunsinger, “The Political Economy of WikiLeaks: Transparency and Accountability through Digital and Alternative Media,” Interactions: Studies in Communication & Culture 7, no. 2 (2016): 217–233.
12. Varun Ramdas, “Identifying an Actionable Algorithmic Transparency Framework: A Comparative Analysis of Initiatives to Enhance Accountability of Social Media Platforms,” National Law University of Delhi Student Law Journal 4, no. 74 (2022).