Recommendations

The following recommendations offer high-level guidance on how internet platforms, government entities, and policymakers can promote greater FAT around the development and deployment of high-risk algorithmic systems. In general, all efforts to promote FAT around algorithmic systems should prioritize information that is meaningful, explainable, and comprehensible to the target audience.

Recommendations for Internet Platforms

  1. Provide comprehensible algorithmic system use policies that explain to consumers how the company uses algorithmic systems and for what purposes.
  2. Enable users to determine whether and how their personal data is used to train a company’s algorithmic systems and what data points are used to inform a company’s algorithmic systems. Where possible, users should also be able to opt out of having algorithmic systems shape their online experiences.
  3. Expand transparency efforts to include more quantitative and qualitative information on how platforms use algorithmic tools to deliver their services. Where relevant, this should include more information on how companies use algorithmic systems to moderate content and to target and deliver ads, what the error rates of these systems are, and what impact these systems have had on user speech and experiences.
  4. Submit to regular, independent external audits and commit to, at a minimum, publishing a public summary of the findings and undertaking subsequent mitigation efforts.
  5. Supplement potential external auditing efforts by conducting proactive, regular internal audits of algorithmic systems in order to identify potentially harmful outcomes related to privacy, freedom of expression, freedom of information, or cases of discrimination surfaced by community partners, civil society organizations, activists, etc. Companies should take steps to eliminate or mitigate any harms identified. Companies should also share summaries of the audits in a public and explainable manner.
  6. Collaborate with civil society, researchers, and government agencies to create standardized mechanisms for benchmarking ML documentation procedures. At a minimum, these efforts should include the creation of documentation procedures for datasets on which models are trained, intended use cases of models, and the performance characteristics of models. Ideally, documentation procedures will be flexible enough to accommodate the model-specific needs of an AI system while still following a standardized format, and strategically communicate information so that it is valuable to technical and non-technical stakeholders alike.
  7. Establish mechanisms for consumers to provide feedback on adverse outcomes that result from an algorithmic system, such as the appeals process currently available to users subject to content moderation processes. This will encourage greater company accountability to their consumers.
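The standardized documentation procedures described in recommendation 6 could take a machine-readable form so that the same record serves technical and non-technical audiences. The sketch below is purely illustrative: the field names and example values are our own assumptions, not an established documentation standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    """Hypothetical standardized record for an ML model, covering the
    minimum elements named in recommendation 6: training data, intended
    uses, and performance characteristics."""
    model_name: str
    version: str
    training_datasets: list   # datasets the model was trained on
    intended_use_cases: list  # uses the developer designed and tested for
    out_of_scope_uses: list   # uses the developer warns against
    performance: dict         # metric name -> value, ideally per subgroup

# Example record (all values are invented for illustration)
doc = ModelDocumentation(
    model_name="example-content-classifier",
    version="1.0",
    training_datasets=["internal-moderation-corpus"],
    intended_use_cases=["flagging policy-violating posts for human review"],
    out_of_scope_uses=["automated account suspension without human review"],
    performance={"accuracy": 0.91, "false_positive_rate": 0.04},
)

# Serializing to JSON keeps the record in a standardized format that
# both auditors and lay readers can inspect.
print(json.dumps(asdict(doc), indent=2))
```

Because every model fills in the same fields, records like this can be compared across systems while still accommodating model-specific detail in the values themselves.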

Recommendations for Government Entities and Policymakers

  1. Pass comprehensive federal privacy legislation that requires internet platforms to provide transparency, impact assessments, and regular audits to prevent algorithmic tools from being used in ways that disproportionately impact disadvantaged communities. These rules should also empower the FTC or a Data Protection Authority to enforce requirements.
  2. Require government agencies that develop and/or deploy algorithmic systems to conduct periodic algorithmic audits and impact assessments to identify and mitigate discrimination, bias, and other harms. One way to achieve this would be for the current administration to introduce an executive order (EO) requiring government agencies to evaluate any high-risk algorithmic systems pre-deployment. These systems should also be subject to continuous and periodic reviews to account for changes in how the systems operate. The EO should establish a clear definition of high-risk systems to ensure that any legal actions are proportionate. This definition should be developed through multi-stakeholder consultation and dialogue, be broadly applicable, and be dynamic enough to account for the fact that the risks an algorithmic system poses can change over time. Even when not required to do so, government agencies should conduct regular impact assessments and algorithmic audits of high-risk algorithmic systems.
  3. Supplement the EO discussed above with clear rules that require companies and government agencies to review their algorithmic systems, particularly their high-risk algorithmic systems, before they are deployed and to mitigate any identified harms. If these entities fail to do so, they can be held liable by a regulator.
  4. Task government agencies, particularly NIST, with establishing a robust set of auditing standards for internet platform use of algorithmic systems. In practice, these standards should give rise to a distinct professional auditing field that conducts external, independent audits of technology companies’ use of algorithmic systems. As with the EO discussed above, auditing standards should establish a clear definition of “high-risk systems” developed through multi-stakeholder consultation and dialogue, broadly applicable, and dynamic enough to account for the fact that the risks an algorithmic system poses can change over time.
  5. Establish a set of safeguards that enable internet platforms that develop and deploy high-risk algorithmic systems to share data with vetted researchers and auditors in a manner that mitigates privacy and trade secret concerns and facilitates meaningful research and evaluation. Internet platforms should also avoid using the Computer Fraud and Abuse Act (CFAA) as a mechanism for penalizing researchers and auditors who are working to promote FAT around algorithmic systems.
  6. Institute mechanisms, incentives, and programs for recruiting and upskilling technical talent to staff government agencies so AI tools can be developed internally, or at least overseen and evaluated for issues by in-house staff.
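One concrete check the audits recommended above might include is a comparison of a system's error rates across demographic or user groups, since a large gap is one signal of the kind of disparate impact an auditor would flag for further review. The sketch below is a minimal illustration under our own assumptions; real audits use more sophisticated fairness metrics and real decision records.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute each group's error rate from (group, predicted, actual)
    decision records. A wide gap between groups' rates warrants review."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy records: (group label, model prediction, correct label).
# Group labels and outcomes are invented for illustration.
decisions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

rates = error_rates_by_group(decisions)
print(rates)  # group_a errs on 1 of 4 records; group_b on 2 of 4
```

Publishing summary statistics like these, per recommendation 4 above, would let outside stakeholders verify whether mitigation efforts are actually narrowing the gap over time.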

As conversations around promoting FAT around high-risk algorithmic systems continue, we also recommend that a broad range of stakeholders—including internet platforms, government entities, civil society organizations, researchers, investors, and funders—come together to pursue the following recommendations. These recommendations encourage collaborative efforts with the aim of developing meaningful and actionable standards related to high-risk algorithmic systems.

  1. Develop a clear set of standards and methodologies to guide the implementation of algorithmic audits for internet platforms. These guidelines should include details on procedure, transparency, and strategies to mitigate harms. While these guidelines should offer a degree of standardization to auditors, they should also account for variances between algorithmic systems and harms they can cause and leave room for these unique characteristics to be considered.
  2. Collaborate with standards-setting bodies to establish clear standards around how companies can develop and deploy algorithmic systems, as well as clear ESG commitments related to the use of these systems. These standards should be accompanied by mechanisms that enable investors to evaluate whether companies are fulfilling their commitments. Standards-setting bodies should also establish mechanisms for soliciting feedback on standards from a broad range of stakeholders, including civil society groups and researchers.
  3. Direct funding and resources toward efforts seeking to establish a robust landscape of actors working on FAT around algorithmic systems. This includes funding for robust technical training for civil society organizations, journalists, government agencies, and auditing entities, as well as awareness and education events for policymakers and the public.

Conclusion

Internet platforms and government agencies are rapidly expanding their development and use of ML and AI-based technologies. As this report outlines, there are a broad range of mechanisms that these entities can use to promote FAT around high-risk algorithmic systems. Currently, however, efforts to promote FAT around high-risk algorithmic systems tend to focus on a selection of mechanisms and do not consider how these mechanisms can be deployed in concert with one another. Going forward, internet platforms, government entities, and other relevant stakeholders must work to develop comprehensive policies, standards, and roadmaps that can generate meaningful FAT around high-risk algorithmic systems.
