Creating AI Systems That Take Culture into Account

Blog Post
April 9, 2019

This is part of The Ethical Machine: Big ideas for designing fairer AI and algorithms, an ongoing series about AI and ethics, curated by Dipayan Ghosh, a former Public Interest Technology fellow. You can see the full series on the Harvard Shorenstein Center website.

KENNETH D. FORBUS
WALTER P. MURPHY PROFESSOR OF COMPUTER SCIENCE AND PROFESSOR OF EDUCATION AT NORTHWESTERN UNIVERSITY

Understanding the influence of culture on reasoning is important for conflict resolution, prediction, and decision-making. People’s choices are rooted in their environment, upbringing, and experience. AI systems that can take culturally influenced reasoning into account are therefore crucial for building accurate and effective computational support for analysts, policymakers, consumers, and citizens. If we want our AI systems to reflect our standards and values, so that their reasoning and decisions are in line with our expectations and desires, how can we create AI systems that take culture into account?

Let us look first at moral decision-making. Psychological studies have shown that people are not strictly utilitarian in moral reasoning. People are influenced by their protected values (sometimes called sacred values), or cultural standards concerning what kinds of actions are not allowable. Trolley problems are a classic way to explore protected values: suppose that, by throwing a switch, you could divert a locomotive so that one person tied to the tracks dies but several others are saved. Would you do it? Even though throwing the switch saves more lives, it poses an ethical dilemma, since your intervention would directly cause someone to die.

There are other examples that especially highlight how protected values can vary across cultures. Suppose a wrestler, who has never lost a match, is praying the night before his next contest. He overhears a woman, his opponent’s mother, praying that her son will win the match so that he can use the prize money to get married. The next day, the wrestler deliberately loses the match, and his opponent gets married. Researcher Morteza Dehghani and his collaborators used this story, among others, with students at Tehran University and Northwestern University to assess whether participants from different cultural backgrounds reason by analogy to identify protected values in a situation. And that indeed is what they found: Iranian students, who knew analogous stories of sacrifice, advocated throwing the match, whereas U.S. students, who did not know those stories, did not. However, when the story was changed so that key relationships were not analogous (e.g., the mother wanted the prize money to buy herself nice clothing), the Iranian students no longer advocated sacrifice. One source of the difference in decision-making in these scenarios was the stories that people had internalized from their cultures. This insight suggests that AI systems that reason by analogy could more accurately capture the influence of culture on people’s choices.

Recent progress in computational modeling of analogy in cognitive science has produced systems that form the basis for a new analogy-based technology for AI. That is, given a new problem, a system can use a human-like retrieval process to find a similar prior situation and ascertain how it applies. In addition to learning by accumulating cases, it can also construct generalizations from those cases, leading to broader, more transferable knowledge and ending up with rule-like structures. This has already been done with MoralDM, a computational model of moral decision-making developed by Dehghani during his Ph.D. work at Northwestern [1]. MoralDM takes a decision problem, stated in simple English, and works through what to do. It uses analogies with culturally specific stories and prior problems to make a decision. Its reasoning can be inspected, including the values identified and their source.
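
To make the retrieval idea concrete, here is a minimal Python sketch of finding the most structurally similar prior case. The names (`Case`, `structural_overlap`, `retrieve_similar`) and the similarity measure are invented for illustration; they are a crude stand-in for the structure-mapping retrieval models used in this line of work, not MoralDM’s actual code.

```python
# Toy sketch: retrieving the most similar stored case for a new problem.
# Cases are sets of relational facts, e.g. ("cause", "lose-match", "marriage").
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    facts: frozenset  # relational facts describing the situation

def structural_overlap(probe: Case, stored: Case) -> int:
    """Crude similarity: shared whole facts count more than shared predicates."""
    shared_facts = len(probe.facts & stored.facts)
    shared_predicates = len({f[0] for f in probe.facts} &
                            {f[0] for f in stored.facts})
    return 2 * shared_facts + shared_predicates

def retrieve_similar(probe: Case, memory: list) -> Case:
    """Return the stored case that best matches the new problem."""
    return max(memory, key=lambda stored: structural_overlap(probe, stored))

sacrifice_story = Case("sacrifice-story", frozenset({
    ("goal", "mother", "son-marries"),
    ("cause", "lose-match", "son-marries"),
    ("protected-value", "helping-others")}))
wrestler_dilemma = Case("wrestler-dilemma", frozenset({
    ("goal", "mother", "son-marries"),
    ("cause", "lose-match", "son-marries")}))

print(retrieve_similar(wrestler_dilemma, [sacrifice_story]).name)  # sacrifice-story
```

The point of the sketch is only that retrieval operates over explicit relational structure rather than feature vectors, which is what lets a retrieved analog bring its protected values along with it.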

Importantly, changing the stories available to MoralDM to reflect those of different cultures (e.g., Iranian versus American) causes its decisions to change accordingly. Without stories of sacrifice, MoralDM suggested that the wrestler should try to win the match, since that provides more utility to the agent. But with stories of sacrifice, the higher good of helping an opponent get married dominated, and the system advocated deliberately losing to achieve this benefit for someone else.

Recently Joe Blass and I have extended this model to use analogical generalization, a learning process that helps lift common patterns out of stories. The advantage of analogical generalization is that the number of examples needed to train systems using it can be very small. Even 10 examples can be enough for robust performance on many tasks, in contrast to statistical machine learning systems, and especially deep learning systems, which can take millions of examples to achieve reasonable performance. This data efficiency arises, we believe, from using more human-like representations than are typically used in machine learning. These human-like representations explicitly encode relationships, including intentions, reasons, and arguments. This provides an additional benefit of being able to inspect the assumptions and reasoning behind any decision that the software suggests. This explainability seems crucial for building AI systems that can be trusted.
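
As a rough illustration of analogical generalization, the sketch below treats it as frequency-weighted merging of relational facts across cases: facts shared by most cases become the lifted, rule-like pattern. It assumes the cases already use a shared vocabulary, so the structural alignment that real models perform is omitted, and the class and method names are hypothetical.

```python
# Minimal sketch: merging cases into a generalization by counting how often
# each relational fact appears; frequent facts form the rule-like core.
from collections import Counter

class Generalization:
    def __init__(self):
        self.n_cases = 0
        self.fact_counts = Counter()

    def add_case(self, facts):
        """Merge one case (a set of relational facts) into the generalization."""
        self.n_cases += 1
        self.fact_counts.update(facts)

    def core_facts(self, threshold=0.8):
        """Facts present in most merged cases: the common pattern lifted out."""
        return {fact for fact, count in self.fact_counts.items()
                if count / self.n_cases >= threshold}

g = Generalization()
g.add_case({("protected-value", "helping-others"),
            ("cause", "lose-match", "opponent-marries")})
g.add_case({("protected-value", "helping-others"),
            ("cause", "give-money", "stranger-marries")})
print(g.core_facts())  # {('protected-value', 'helping-others')}
```

Because each fact keeps an explicit identity, the resulting generalization can still be inspected and traced back to the handful of stories that produced it, which is the explainability property discussed above.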

Our approach suggests a new methodology for computational social science [2]. Mathematical models for cultural phenomena typically introduce many parameters, whose relationship to obtainable data is at best complicated and at worst nonexistent. Simple agent-based models have been useful for capturing aspects of phenomena that arise from large numbers of simple interactions, again using simple numerical parameters and a set of hand-coded rules to specify an agent’s behavior. Such formalisms do not have the expressive power to represent the complex beliefs that are responsible for human judgements. By contrast, cultural products such as stories, religious texts, and folktales provide a reliable source of data for cultural modeling. Such cultural products are often formed and honed over generations, providing a historical memory and moral framework illustrated by examples that help ground decisions in everyday life. Cultural narratives thereby provide a form of moral compass. This suggests a new way of modeling aspects of a culture: gather its cultural narratives, and make them available to AI systems in forms that they can understand and use.

Can this be done? So far there have only been small pilot experiments [3], which indicate that the approach is promising. In these experiments, the stories for MoralDM were hand-translated into simplified English syntax, and a natural language system was used to extract semantic representations of the events, actors, and motivations contained therein. Further progress in natural language understanding will make this process easier and more scalable, ultimately taking in cultural products in their original forms. Thus the pipeline for building a cultural model for moral decision-making would be: (1) gather a set of representative cultural products, (2) translate them into whatever simplified natural language form can currently be understood automatically, and (3) feed them to the analogical learning system. Note that the first two steps provide a natural audit trail, since they both involve natural language. Adding interactive dialogue and test-taking facilities would simplify checking whether the translation to formal representations was accurate (which is currently done by AI experts inspecting the representations). Here is an illustration of this pipeline in action, from our experiments, followed by a schematic sketch of the same steps:

[Figure: an illustration of the pipeline in action, from the pilot experiments described above.]
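
Below is a schematic Python sketch of those three steps. The translation step is a stub standing in for the natural language understanding system used in the pilot experiments, and the function names are invented for illustration; real input would be simplified-English versions of the cultural products.

```python
# Schematic sketch of the three-step pipeline: gather cultural products,
# translate them into relational facts, and hand the cases to a learner.

def to_semantic_representation(text):
    """Step 2 (stub): map a story into a set of relational facts.
    Here each sentence just becomes a placeholder ("states", i, ...) fact;
    the pilot experiments used a real natural language system instead."""
    return frozenset(("states", i, sentence.strip())
                     for i, sentence in enumerate(text.split("."))
                     if sentence.strip())

def build_cultural_case_library(cultural_products):
    """Steps 1 and 2 over a collection: gather the stories and translate each
    one; the resulting cases would then be fed to the analogical learner
    (step 3), e.g. the generalization sketch above."""
    return [to_semantic_representation(text) for text in cultural_products]

stories = ["The wrestler hears the mother pray. He loses the match on purpose."]
print(build_cultural_case_library(stories))
```

Because the inputs to the first two steps are natural language, anyone reviewing the model can read exactly which stories went in and how they were paraphrased, which is the audit trail mentioned above.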

This model has several advantages over traditional machine learning or deep learning systems. First, all the reasoning is inspectable. The analogs and the generalizations built from stories are expressed using relational representations that can straightforwardly be translated into natural language. Someone using the model can drill down into each step of every decision, seeing exactly what information was used. Second, the data-efficient nature of analogical learning reduces the number of cultural products required to build a model. It also simplifies carrying out studies to understand why the models are working the way they do. For example, sensitivity analyses re-run the model over the same data, systematically varying parameters to see how the results depend on specific choices. Ablation studies cut out parts of a system, to examine how different aspects of it contribute to its results.
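
To show what such studies look like mechanically, here is a generic, hypothetical sketch; `run_model` and its parameters are placeholders invented for illustration and do not correspond to MoralDM’s actual interface.

```python
# Generic sketch of sensitivity and ablation loops over a small model.

def run_model(cases, threshold=0.8, use_generalizations=True):
    """Placeholder scoring function standing in for a real evaluation run."""
    return round(0.5 + (0.3 if use_generalizations else 0.0)
                 + 0.1 * (1 - threshold), 3)

def sensitivity_analysis(cases, thresholds=(0.6, 0.7, 0.8, 0.9)):
    """Re-run the model on the same data, varying one parameter."""
    return {t: run_model(cases, threshold=t) for t in thresholds}

def ablation_study(cases):
    """Cut out one component (here, the generalizations) and compare."""
    return {"full system": run_model(cases, use_generalizations=True),
            "exemplars only": run_model(cases, use_generalizations=False)}

print(sensitivity_analysis([]))
print(ablation_study([]))
```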

I suspect that this approach, building cultural models via analogical learning from a culture’s narratives, could be used to explore other aspects of cultural reasoning, including making accurate predictions about the attitudes, choices, and reactions of cultural groups in a broad range of circumstances. This in turn could lead to systems that help policymakers understand how different groups might react to new regulations, and help negotiators find common ground. Moreover, storytelling is a natural activity for people, so many people who could not otherwise contribute to an AI system’s values would be able to do so. Thus, future AI systems could fit better with the cultures that they are part of, since they will be guided by those cultures’ narratives and values.

As AI systems become more intelligent and flexible, having them become full-fledged partners in our culture seems like a promising way to ensure that they are beneficial in their impacts.

References

  1. Morteza Dehghani, “A Cognitive Model of Recognition-Based Moral Decision Making,” Northwestern University dissertation, December 2009, available at http://www.qrg.northwestern.edu/papers/Files/QRG_Dist_Files/QRG_2009/Dehghani_dissert_09.pdf.
  2. Joseph A. Blass and Kenneth D. Forbus, “Moral Decision-Making by Analogy: Generalizations vs. Exemplars,” Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, Texas, 2015, available at http://www.qrg.northwestern.edu/papers/Files/QRG_Dist_Files/QRG_2015/Blass-Forbus-AAAI%2015.pdf.
  3. Morteza Dehghani et al., “An Integrated Reasoning Approach to Moral Decision-Making,” Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI), Chicago, Illinois, 2008, available at http://www.qrg.northwestern.edu/papers/Files/QRG_Dist_Files/QRG_2008/AAAI08-MoralDM.pdf.