FAT Approaches Internet Platforms and Governments Can Implement

This section discusses four approaches to promoting FAT around high-risk algorithmic systems that both internet platforms and governments can implement: Algorithmic Impact Assessments, algorithmic audits, bias impact statements, and labels for algorithmic systems. It explores the strengths and limitations of each approach, as well as how each mechanism fits into the broader ecosystem of efforts to promote FAT around such systems.

Algorithmic Audits

Despite the ubiquitous use of algorithmic systems across industries today, both experts and the general public still have a limited understanding of how these tools work. This is because algorithms operate in an opaque manner. Depending on the algorithm, developers can know what inputs are fed into the system and what outputs are generated. However, even developers themselves often have very little insight into the inner workings of the algorithm. In this way, algorithms can function as black boxes.
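The input/output dynamic described above can be made concrete with a short sketch. Everything here is invented for illustration: `loan_model` stands in for an opaque production system, and an observer (or even the developer) can only see how its outputs respond to changed inputs, not why.

```python
# Illustration of the "black box" problem: an observer can vary inputs and
# record outputs, but the mapping in between stays hidden. `loan_model` is a
# toy stand-in for an opaque production model; its fields and logic are
# invented for this example.

def loan_model(applicant: dict) -> int:
    """Stand-in scoring model; in practice, only its output is observable."""
    score = 500
    score += 120 if applicant["zip_code"] == "19118" else 0
    score += min(applicant["income"] // 1_000, 100)
    return score

def probe(model, applicant: dict, field: str, values) -> dict:
    """Vary one input field and record how the black box's output shifts."""
    return {v: model({**applicant, field: v}) for v in values}

applicant = {"income": 45_000, "zip_code": "19104"}
print(probe(loan_model, applicant, "zip_code", ["19104", "19118"]))
# {'19104': 545, '19118': 665} -> the score shifts with neighborhood,
# but probing alone cannot explain why.
```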

Algorithmic audits can help address the opaque nature of algorithmic systems by allowing auditors to evaluate and scrutinize the inner workings of an algorithmic system as it is being deployed or after it has been deployed. Audits can help evaluate specific variables, such as those related to privacy, human rights impacts (e.g., freedom of expression), bias, and fairness. Audits can also help identify unintended consequences,1 examine concerns raised by external stakeholders (e.g., civil rights groups), and/or determine whether a system is aligned with certain company or government policies, industry standards, or regulations.2 Depending on how an audit is conducted, how transparently the results are published, and whether the audited entity takes meaningful steps to mitigate any problems identified during the audit, audits can be an effective tool for promoting greater FAT around algorithmic systems that are being implemented or are already in use.

Algorithmic audits can be useful in both a corporate and government context. The concept of auditing companies is a longstanding one that has transformed over the decades.3 Today, auditing is a critical mechanism for quality control in sectors such as aerospace and healthcare, and it is common practice for private companies to submit to financial audits.4 The practice of auditing for discrimination has also been in place in the public sector since the 1970s, when the research unit of the Department of Housing and Urban Development (HUD) carried out audits to detect racial discrimination in housing.5 As more companies and government entities rely on algorithms to share information about and make key decisions related to housing, finance, and other consequential areas of life, experts have proposed auditing algorithmic systems as a mechanism for promoting accountability around these tools.

Currently, the biggest issue with algorithmic audits is that there is no established auditing structure or landscape. The financial sector, by contrast, has well-established auditing standards: if an auditor or an audited entity seeks to evade transparency and accountability, both face reputational damage and legal liability. In the emerging algorithmic auditing space, no clear norms have been established, so neither party faces such consequences. This vagueness could serve as a disincentive for companies and government entities to participate in algorithmic audits at the current moment.6 This is an area where thoroughly researched standards or government oversight could help promote greater FAT via algorithmic audits.

Audits of a company’s or government agency’s algorithms can be conducted either internally (i.e., by the company or agency itself) or externally (i.e., by an independent third party). Researchers have noted that internal audits can be beneficial mechanisms for promoting FAT around algorithmic systems, as auditors will have access to a robust set of information on the relevant system. Internal auditors will also likely be better versed in the entity’s operations and technical infrastructure than external auditors. In order to promote transparency and reliability, companies must address any issues identified during internal audits and should publish at least an explainable summary of the audit’s findings to the public. If performed legitimately, internal audits could supplement external transparency and accountability efforts, including external audits.7 In other industries, such as the financial, chemical, food, and aviation sectors, internal auditing practices related to quality assurance are coupled with regulatory mechanisms that guide expectations and standards around internal audits.8 Without external checks like these, it is unlikely that company or government audits of their own internal systems will accrue legitimacy, as such audits allow the entity to essentially create its own tests and grade its own homework.

In the same vein, external audits conducted by independent third parties are likely to be more reliable and legitimate, but they are limited by the current lack of algorithmic auditing standards, which hampers external parties’ access to and understanding of an entity’s internal processes. When it comes to private companies, external auditors may only be able to access model outputs through alternative avenues, such as application programming interfaces (APIs). They also may not have access to critical information, such as intermediate models or training data, as these are often shielded as trade secrets by intellectual property claims. However, external auditing is widely accepted in practices such as financial auditing, where it often relies on the use of non-disclosure agreements (NDAs). As standards for algorithmic auditing are developed, private companies should adopt a similar external auditing structure.

Given the current limitations around conducting external audits on internet platforms’ algorithms, some researchers have proposed alternative avenues for conducting such audits in the short term, particularly in situations where researchers may not have full access to a company’s systems. In many of these instances, researchers do not have consent from companies to conduct these audits. This poses significant legal risk to researchers, limits the effectiveness of the audits, and underscores the need for auditing standards. Some of the alternative auditing methods proposed include:

  1. Code audits, in which a platform discloses its source code to researchers or the public. However, given company concerns including trade secrets, adversarial use of their algorithms, and the privacy of their users, companies would likely only provide such disclosures when compelled by the government. In addition, this approach is limited in that reading source code does not immediately facilitate the interpretation of algorithms or the identification of harmful outcomes. Rather, an algorithm’s outputs depend on its inputs, so researchers given only source code would have to rely on trial and error to identify harms. These limitations also underscore the fact that harmful outcomes arise from algorithmic systems and flawed datasets together, and the two must be considered in tandem.
  2. Noninvasive user audits, in which users agree to answer questions about—or provide researchers access to—data on their online behaviors so that inferences can be made about the operations of an algorithm. However, this approach does not involve actually testing the algorithm in any way, and is vulnerable to sampling issues as well as high error rates common with self-reporting mechanisms for data collection.
  3. Scraping audits, in which a researcher queries a platform, often through an API, and evaluates the results (see the sketch following this list). However, in the United States, the Computer Fraud and Abuse Act (CFAA) creates significant legal risks for researchers even though its purpose is to criminalize hacking.9 Many platforms also include stipulations in their terms of service that hinder research efforts.
  4. Sock puppet audits, in which researchers rely on computer programs to impersonate users on a platform. Because this approach requires deception, researchers or those creating the programs can incur similar legal consequences under the CFAA. The operator of an algorithm could also claim that the use of sock puppet accounts is harmful as they perturb an algorithm and could undermine its operations. Researchers therefore have to tread carefully if deploying this method.
  5. Crowdsourced/collaborative audits, in which users volunteer or are hired to perform tasks online to test a platform’s algorithms. This approach can be costly, but likely does not incur legal consequences under the CFAA.
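As a rough illustration of the third method above, the following sketch issues matched queries against a platform API and measures how much the result sets diverge. The endpoint, parameters, and response format are all assumptions invented for this example; any real scraping audit would also need to weigh the CFAA and terms-of-service risks discussed above.

```python
# A minimal sketch of a scraping-style audit (method 3 above): issue matched
# queries against a platform's API and measure how much the result sets
# diverge. The endpoint, parameters, and response format are hypothetical;
# a real audit must also weigh the CFAA and terms-of-service risks noted above.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.example-platform.com/search"  # hypothetical endpoint

def top_results(query: str, limit: int = 10) -> set[str]:
    """Fetch the IDs of the top results the platform returns for a query."""
    with urlopen(f"{API}?{urlencode({'q': query, 'limit': limit})}") as resp:
        return {item["id"] for item in json.load(resp)["results"]}

def overlap(query_a: str, query_b: str) -> float:
    """Jaccard similarity of two result sets; systematic divergence on
    otherwise-equivalent queries is a signal worth investigating."""
    a, b = top_results(query_a), top_results(query_b)
    return len(a & b) / max(len(a | b), 1)

# e.g., a near-zero overlap("loans for teachers", "loans for nurses") could
# prompt a closer look at how the ranking treats the two groups.
```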

In order to understand the role algorithmic audits can play in promoting FAT around algorithmic systems, it is important to consider their strengths and limitations. A sizable amount of an audit’s legitimacy will be derived from whether and how the auditing entity and the entity being audited communicate the audit’s results. If condensed information about the outcomes of audits is published in an explainable manner and met with oversight from a relevant body,10 it could help boost awareness of the potential harms of algorithmic systems and engender trust that they are being mitigated. However, given the current unstructured and non-compulsory auditing environment, companies and government agencies may be reluctant to voluntarily audit their algorithms and share outcomes, as they likely fear negative reactions. But if the results of audits are kept entirely private and there are no methods for oversight, there is also no way to ensure that companies and governments are being held accountable.

In order for algorithmic auditing to become a reliable mechanism for promoting FAT, relevant stakeholders—such as policymakers, civil society groups, and standards-setting bodies—need to develop appropriate standards of practice, training and credentialing for auditors, transparency conventions, and other mechanisms that essentially turn this practice into a professional field.11 The creation of standards for algorithmic auditing by relevant stakeholders is important for a number of reasons. Thus far, algorithmic auditing has been carried out by a range of actors, such as investigative journalists. However, without a set of conventions to guide how these audits are conducted, it is difficult to compare, contrast, and verify their results.12 Audits are also dependent upon human judgment, and they can therefore vary in their reliability.13 Standards can help combat this. Additionally, in their current form, algorithmic audits often seek to address different values and concerns (e.g., discrimination, media plurality, etc.) and integrate concepts from different disciplines (e.g., human-centered design, behavioral economics, ethics, etc.).14 These concepts and values are varied and can be subjectively defined. In order to mitigate subjectivity, audits must be designed and deployed using a clear, standardized methodology and process. These standards should also guide disclosure and transparency expectations and clearly define high-risk algorithmic systems in a manner that accounts for the fact that the risks an algorithmic system poses can vary over time. Clear standards will also guide companies and governments on how to design their algorithmic systems on the back end so that they are compatible with audit mechanisms.

As policymakers or standards-setting bodies seek to develop standards for algorithmic audits, they should also consider the different types of algorithmic systems that companies and government agencies operate, their use cases, and their potential to cause harm and create high-risk situations. It is also important for these actors to recognize that algorithmic systems are not static. They are constantly being retrained and redeployed. Accordingly, any efforts to encourage reviews of algorithmic systems, such as audits, must include plans for ongoing accountability, not just one evaluation.15
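One way to picture this ongoing-accountability requirement is as a check that gates every redeployment rather than a single pre-launch review. The sketch below is purely illustrative; `fairness_check` is a placeholder for whatever evaluation an eventual auditing standard would actually prescribe.

```python
# A minimal sketch of ongoing accountability: the same audit check is re-run
# every time the model is retrained, and every result is logged with the
# model version. `fairness_check` stands in for whatever evaluation an
# eventual auditing standard would actually prescribe.
from datetime import datetime, timezone

audit_log: list[dict] = []

def gate_redeployment(version: str, fairness_check) -> bool:
    """Audit this model version before it ships, and keep a dated record."""
    passed = bool(fairness_check())
    audit_log.append({
        "version": version,
        "passed": passed,
        "audited_at": datetime.now(timezone.utc).isoformat(),
    })
    return passed  # redeploy only if the fresh audit passes

# Called at every retraining cycle, not just the initial release:
gate_redeployment("content-ranker-v2", lambda: True)  # placeholder check
```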

Algorithmic Impact Assessments

Algorithmic Impact Assessments (AIAs) are another mechanism that seeks to promote FAT around algorithmic systems. AIAs evaluate algorithmic systems pre-deployment, documenting a system’s potential impact to help determine whether its use is appropriate in a given context.16 In general, AIAs are self-evaluations meant to be used by government agencies or private companies that intend to deploy AI systems, identifying a system’s potential harms and providing a holistic view of its impacts.

One of the first pieces of legislation attempting to regulate algorithmic systems leans heavily on AIAs to promote AI accountability: the Algorithmic Accountability Act. Proposed in the U.S. Congress in 2019, this legislation would require large companies to conduct impact assessments of the automated decision systems they deploy that may affect sensitive personal information.17 Notably, however, the bill does not provide meaningful details regarding how those AIAs should be structured or implemented. The AI Now Institute, the European Parliament, and the Canadian Government have all proposed versions of AIAs for governments and companies that draw directly from long-standing impact assessment frameworks in other policy domains, such as environmental protection, human rights, privacy, and data protection.18

At present, there is no consensus around what specific elements an AIA should contain. Current proposals generally suggest that AIAs should include a statement explaining the algorithmic system’s potential impacts and share relevant and widely interpretable details of how the algorithmic system operates. The AI Now Institute has proposed a useful framework for government implementation of AIAs, recommending that they be incorporated into the public procurement process so that the government can provide greater accountability to the public around its use of algorithmic systems. Under their proposal, AIAs would consist of five key elements: 1) government agencies would conduct a self-assessment of their existing and proposed automated decision systems, evaluating impacts on bias, fairness, and justice; 2) agencies would develop a meaningful external researcher review process; 3) agencies would disclose their definition of automated decision system to the public; 4) agencies would solicit public comments to clarify concerns and address outstanding questions; and 5) the government would provide due process mechanisms for affected individuals and communities to challenge inadequate agency self-assessments or harmful uses that an agency fails to mitigate or correct. The AI Now proposal recommends that AIAs be incorporated into the pre-acquisition stage of procurement processes, so that the agency can evaluate the adoption of an automated decision system and take public input into account before committing to its use.19 Other advocacy groups, including OTI, have suggested that AIAs should assess elements central to traditional privacy legislation, such as data minimization, retention periods for personal information, and whether users can access, challenge, or correct decisions made by an algorithmic system.20

As some researchers have laid out, AIAs could be valuable for promoting FAT around algorithmic systems that the government or a company seeks to use. As AI Now suggests, placing AIAs at the pre-acquisition stage of procurement would inform the public of the automated decision system’s functions and potential impacts, allowing them to identify concerns that may need to be negotiated or otherwise addressed before a contract is signed. This could allow the government to avoid harms before they can occur. Used in this way, AIAs would give government contractors that prioritize FAT in their algorithmic systems a competitive advantage in the public procurement process, incentivizing AI developers to adhere to FAT principles and practices. Likewise, companies would benefit from analyzing the impact of a proposed algorithmic system. Given the harms that internet companies’ algorithmic systems can cause, stakeholders have called on those companies to conduct risk assessments before an algorithmic system is deployed.21 An AIA framework may allow companies to evaluate their systems’ impact pre-deployment and assuage their stakeholders’ concerns.

Because there is little consensus on a standard AIA framework, it remains unclear how useful AIAs could be as an accountability mechanism.22 Ultimately, their usefulness may be limited because they are merely self-assessment tools, meaning potential harms discovered by a company or government agency may go unaddressed. This is especially concerning if an entity is considering deploying a high-risk algorithmic system. Because AIAs are self-administered, an entity would also determine on its own what constitutes an “automated decision system” and disclose only those systems that fall under its own definition. An overly broad definition could burden companies and agencies with disclosing irrelevant algorithmic systems, but an overly narrow definition could exclude systems that make critical and high-risk decisions about individuals’ lives. Another challenge surrounding AIAs in the private sector is balancing internet platforms’ alleged concerns around protecting trade secrets with the goal of disclosing meaningful information on the potential impacts of algorithmic systems.23

Further, when public or private actors create their impact assessment frameworks, tensions may arise between the different values that various stakeholders want to prioritize in evaluation practices.24 Another limitation of AIAs is the difficulty developers face in addressing representational harms in algorithmic systems (the ways a system may unintentionally reinforce the marginalization of certain social and cultural groups). Finally, the AIA framework risks turning government and private-sector reliance on external review into an unfunded tax on researchers and the affected communities with which they engage, who may end up monitoring algorithmic decision systems without resources or compensation.25

Bias Impact Statements

The issue of bias in ML models, datasets, and algorithmic systems more broadly is a well-documented problem, with examples ranging from racially discriminatory pretrial risk assessment tools26 to hiring algorithms that exhibited bias against female applicants.27 Bias can be introduced to the ML process via a number of entry points, including the use of an unrepresentative or incomplete training dataset, the use of a training dataset that reflects historical biases, poor framing or fluid definitions of the task the model is meant to automate, or the weighting of model attributes in a manner that produces biased outcomes. Researchers and advocates in the ML space have published a number of proposals to address this thorny issue, including the use of bias impact statements.
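As a concrete example of what one such check can look like, the sketch below computes per-group favorable-outcome rates and the disparate impact ratio, a heuristic borrowed from U.S. employment law’s “four-fifths rule.” The data and the 0.8 threshold are illustrative and not drawn from any of the proposals discussed here.

```python
# One concrete bias diagnostic that a self-assessment could include: compare
# a model's favorable-outcome rates across groups and compute the disparate
# impact ratio (the "four-fifths rule" heuristic from U.S. employment law).
# The data and the 0.8 threshold here are illustrative only.

def selection_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Favorable-prediction rate (1 = favorable) for each group."""
    rates = {}
    for g in sorted(set(groups)):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def disparate_impact_ratio(predictions: list[int], groups: list[str]) -> float:
    """Lowest group rate divided by highest; below ~0.8 is a common red flag."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favorable decision
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(selection_rates(preds, groups))         # {'a': 0.8, 'b': 0.4}
print(disparate_impact_ratio(preds, groups))  # 0.5 -> well below 0.8
```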

Bias impact statements are self-assessments that algorithm designers in both the government and private sector can use to evaluate the levels of bias in their model throughout the ML process. These assessments allow designers to investigate how, when, and why bias may be introduced. Bias impact statements help designers understand how a system might be biased toward certain groups and potentially inflict serious and disproportionate harm; the higher-risk the algorithm, therefore, the more crucial the impact statement. Whereas AIAs offer an overall picture of an algorithmic system’s impacts and harms, bias impact statements provide a focused assessment of the potential bias and discriminatory outcomes of an algorithmic model or system. Bias impact statements consist of a template of flexible questions that guide the algorithm designers’ considerations as they make critical design choices throughout the development of the model. Ideally, these evaluations serve to prevent—or, at the very least, mitigate—bias that may be introduced to the model during the algorithmic design and training processes, before deployment. In the event that bias is discovered and addressed during testing, bias impact statements also function as historical documentation of the model’s development that may be helpful in later testing and assessments. Documentation throughout the ML-training life cycle is important because it allows developers to track, revisit, and understand past design decisions; it also enables external reviewers to conduct a substantive audit of the algorithmic system.
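Because no standard template exists, the sketch below simply illustrates how such a statement might be kept as structured, versioned documentation across the ML life cycle; every field name and example entry is invented for illustration.

```python
# A minimal sketch of a bias impact statement kept as structured, versioned
# documentation across the ML life cycle. No standard template exists, so the
# fields and example entries below are invented; a real statement would follow
# whichever framework (e.g., Brookings') the organization adopts.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignDecision:
    when: date
    stage: str        # e.g., "data collection", "feature selection", "training"
    decision: str     # what was chosen
    bias_risk: str    # how the choice could introduce or amplify bias
    mitigation: str   # what was done about it

@dataclass
class BiasImpactStatement:
    system: str
    affected_groups: list[str]
    guiding_questions: dict[str, str]   # guiding question -> current answer
    decisions: list[DesignDecision] = field(default_factory=list)

    def record(self, d: DesignDecision) -> None:
        """Append to the historical log so later audits can revisit choices."""
        self.decisions.append(d)

stmt = BiasImpactStatement(
    system="resume-screening model",
    affected_groups=["women applicants", "non-native English speakers"],
    guiding_questions={"Is the training data representative?": "under review"},
)
stmt.record(DesignDecision(
    when=date(2021, 3, 1),
    stage="data collection",
    decision="trained on ten years of past hiring outcomes",
    bias_risk="historical outcomes may encode past hiring bias",
    mitigation="reweighted samples; flagged for pre-deployment fairness tests",
))
```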

The Brookings Institution has proposed a bias impact statement framework which suggests that automated decisions should be subject to scrutiny, user incentives, and stakeholder engagement.28 The authors advise that operators of algorithms begin by assessing the possibility of unintended or negative outcomes from the model, and continually question the legal, social, and economic effects and potential liabilities associated with the automated system’s design. They also recommend that private companies who successfully employ bias impact statements and produce fair algorithmic outcomes be publicly acknowledged for their best practices, setting an example for the rest of the industry, and that algorithm developers engage with multiple stakeholders during the design process, including civil society organizations. Civil society might also aid government and internet platforms by providing credible bias impact statement frameworks for them to use and by identifying best practices in this regard.

Bias impact statements can be helpful tools for identifying bias in algorithmic models and achieving fairer outcomes. They could provide valuable documentation of the considerations made by algorithm designers during the ML life cycle, thus helping hold decision makers accountable for their design choices. If made public, bias impact statements would offer a mechanism for greater transparency around AI development practices. However, bias impact statements would ultimately be a non-exhaustive self-assessment tool: there are many considerations beyond bias, such as user privacy and safety implications, that designers must weigh when evaluating a model’s impact. As a result, bias impact statements would have to be used in conjunction with other assessment methods. Their implementation is also not currently enforced, so their use is purely voluntary. Despite these limitations, the discovery of bias in a model is the first step toward understanding it and generating solutions. In this way, bias impact statements could be one useful component of a broader solution for promoting FAT around algorithmic systems developed by the government and internet companies.

Labels for Algorithmic Systems

Government entities’ and internet companies’ use of algorithmic labels may help promote FAT around algorithmic systems by evaluating how effectively a system operationalizes principles such as fairness, user privacy, and safety, with that quality measurement reflected in an algorithmic label or rating provided to consumers. A number of recent proposals have detailed frameworks for algorithmic labels or ratings as an approach to ensuring FAT around algorithmic systems, and while the specifics vary (e.g., the methodology for assigning the rating), the proposed frameworks generally follow this same high-level model.29

Algorithmic labels express how well an algorithmic system performs on a variety of indicators through a consumer-friendly rating. This transparency approach empowers consumers to make informed decisions about the technologies they use. Algorithmic labeling is inspired by rating systems in other industries, such as the Energy Star rating, which has become the industry standard for the energy efficiency of electronic appliances; the Better Business Bureau’s rating system for businesses; and the Food and Drug Administration’s (FDA) nutrition label.

The notion of algorithmic labels has gained traction in the EU, where the German Data Ethics Commission has recommended a mandatory labeling scheme that would apply to public and commercial algorithmic systems—including those used by internet companies—that pose any potential risk to people’s rights.30 The obligatory labeling scheme would require operators to clearly express whether algorithmic systems are in use, and to what extent. An interdisciplinary team of experts from academic institutions in the EU, known as Bertelsmann Stiftung’s AI Ethics Impact Group, has also developed a comprehensive prototype for an algorithmic labeling framework.31 The group’s 2020 working paper outlines a multi-method framework that includes an AI ethics labeling system with a rating for six key values: transparency, accountability, privacy, justice, reliability, and environmental sustainability.32 In this framework, organizations, like government bodies and internet companies, that develop and deploy AI systems would conduct the standardized labeling process. The label is intended to account for the relevant ethical principles and to be a standardized rating that has value for all stakeholders—regulatory bodies, developers, and consumers alike.
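The rating logic described in the working paper can be pictured with a short sketch: a system earns a grade for a given value only if all of that grade’s observable requirements are met. Aside from the end-to-end-encryption example the paper itself gives (see note 32), the observables below are invented for illustration.

```python
# A small sketch of the VCIO-style rating logic described in the working
# paper: a system earns a grade for a value only if all of that grade's
# observable requirements are met. Apart from the paper's own example that an
# 'A' for privacy requires end-to-end encryption, the observables below are
# invented for illustration.

REQUIREMENTS = {  # value -> grade -> observables that must all be present
    "privacy": {
        "C": {"data minimization policy"},
        "B": {"data minimization policy", "user-initiated data deletion"},
        "A": {"data minimization policy", "user-initiated data deletion",
              "end-to-end encryption"},
    },
}

def grade(value: str, observables: set[str]) -> str:
    """Best grade whose observable requirements are all satisfied."""
    best = "D"  # floor grade when no tier's requirements are met
    for g in ("C", "B", "A"):
        if REQUIREMENTS[value][g] <= observables:  # subset test
            best = g
    return best

print(grade("privacy", {"data minimization policy",
                        "user-initiated data deletion"}))  # B
```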

Algorithmic labels are appealing because they feature built-in expressions of how well a model performs on important indicators in a compact, standardized rating, allowing algorithmic systems to be compared via common metrics. The inherent transparency of an algorithmic label puts power into the hands of the consumers of products or platforms that employ algorithmic systems, enabling them to make informed decisions about the algorithmic systems that affect their lives. This is critical information that consumers have not previously been given in a meaningful and understandable way. In the case of high-risk algorithmic systems, algorithmic labels could play a critical role in expressing a system’s potential harms to consumers, allowing them to avoid or otherwise mitigate harms they might have experienced without a greater understanding of how the system works.

Algorithmic labels have numerous potential benefits, but implementing such a system involves many difficulties that may limit labels’ effectiveness in certain situations. Because the process of evaluating and rating algorithms cannot be generalized, assessments must focus on context-specific applications, which may produce inconsistent results across evaluations. Further, the important indicators of a well-performing model will differ for each algorithm—for instance, explainability will be a much more important indicator for a public-facing algorithm than for one used internally by developers. The rating system will therefore likely be most helpful to those who know what kind of outcome they are looking for in the algorithm in question, limiting its usefulness for everyday consumers.

Constructing and implementing a successful algorithmic label framework will require society to reach a consensus around which values should be prioritized in different contexts. In a labeling process, two competing values, such as privacy and transparency, may need to be balanced against each other, and determining which to prioritize is difficult and subjective. Additionally, it is currently unclear who is best positioned to serve as the algorithmic labeling body, whether a current or new government entity or an independent committee or other body established for this sole purpose. This remains a significant open question that would affect the ultimate impact of algorithmic labels.

Citations
  1. Alex C. Engler, "Independent Auditors Are Struggling to Hold AI Companies Accountable," Fast Company, January 26, 2021, source.
  2. Inioluwa Deborah Raji et al., "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing," in FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, source.
  3. Howard B. Levy, "History of the Auditing World, Part 1," The CPA Journal, source.
  4. Raji et al., "Closing the AI Accountability Gap".
  5. Christian Sandvig et al., Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms, May 22, 2014, source.
  6. Engler, "Independent Auditors".
  7. Raji et al., "Closing the AI Accountability Gap".
  8. Raji et al., "Closing the AI Accountability Gap".
  9. The CFAA prohibits accessing a computer without authorization in order to curb hacking, but does not define “authorized access,” leaving it to website operators to determine, and thwarting research and other legitimate access in the process.
  10. James Guszcza et al., "Why We Need to Audit Algorithms," Harvard Business Review, November 28, 2018, source.
  11. Guszcza et al., "Why We Need".
  12. Guszcza et al., "Why We Need".
  13. Raji et al., "Closing the AI Accountability Gap".
  14. Guszcza et al., "Why We Need".
  15. AI Now Institute, Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force, December 4, 2019, source.
  16. AI Now Institute, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” April 2018, source.
  17. “H.R.2231 – Algorithmic Accountability Act of 2019,” The United States Congress, April 2019, source.
  18. European Parliamentary Research Service, A Governance.
  19. Critically, AI Now’s AIA structure allots a due process challenge period, which provides a path for the public to challenge the adoption of the automated system if the agency fails to comply with AIA requirements or performs a substandard self-assessment.
  20. Bannan and Blase, Automated Intrusion.
  21. Ranking Digital Rights, "2019 RDR Index Methodology," Ranking Digital Rights, 2019, source.
  22. “Governing with Algorithmic Impact Assessments: Six Observations,” Data & Society, April 24, 2020, source.
  23. Rieke and Bogen, Leveling the Platform.
  24. One well-known example of an impact assessment framework is a human rights impact assessment (HRIA), which draws from human rights principles, such as those on freedom of expression and privacy, and can be used to measure the impact that technologies have on individuals’ fundamental rights. HRIAs are a valuable mechanism for assessing FAT in a human rights context. However, a persistent challenge related to conducting an HRIA-like evaluation is that these assessments require a large amount of data, some of which may not be disclosed publicly by companies and some of which may be sensitive personal information. It is unclear how to best address this fundamental tension between wanting to implement impact assessments on groups that may be the most harmed by technologies and wanting to uphold strong privacy standards by not collecting huge datasets, which may include sensitive information that could be weaponized against said groups. Ranking Digital Rights, "2019 RDR Index," Ranking Digital Rights; Nora Götzmann, ed., Handbook on Human Rights Impact Assessment (Edward Elgar Publishing, 2019).
  25. AI Now Institute, Confronting Black.
  26. “Machine Bias,” ProPublica, May 23, 2016, source.
  27. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018, source.
  28. “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms,” Brookings Institution, May 22, 2019, source.
  29. “Could Rating Systems Encourage Responsible AI?,” Brookings Institution, October 1, 2020, source.
  30. In 2018, the Chancellor of Germany, Angela Merkel, tasked the German Data Ethics Commission with producing recommendations for rules around AI to protect individual rights, preserve social cohesion, and safeguard and promote prosperity in the information age. The report recommended that algorithmic systems should be designed to protect democracy and people’s rights and freedoms, be secure, and avoid bias and discrimination. “Data Ethics Commission,” Federal Ministry of the Interior, Building, and Community, September 2018, source; “Opinion of the Data Ethics Commission,” Daten Ethik Kommission, October 2019, source.
  31. “From Principles to Practice: How Can We Make AI Ethics Measurable?,” Ethics of Algorithms, April 2, 2020, source.
  32. The AI ethics label is supplemented by a risk matrix and a VCIO (Values, Criteria, Indicators, Observables) model. To reach a specific rating level for a given value, the minimum requirements of observable factors must be met (e.g., to receive an ‘A’ rating for privacy, end-to-end encryption must be a feature of the AI system). The value ratings of the ethics label are underpinned by the VCIO model, which identifies a key AI ethics value and specifies criteria that define the fulfillment or violation of that value, acknowledging that values are often in conflict with each other and proposing a process for hierarchizing values against one another when this is the case. The framework’s risk matrix accounts for the context-dependent nature of AI applications with its two-dimensional classification system.