How to Be Good

Article/Op-Ed in Future Tense
April 20, 2016

Adam Elkus wrote for Future Tense explaining why you can't teach human values to artificial intelligence:

If you encountered a robot on the street, you would want it to give you the right of way instead of just rolling over your foot, right? Making room for a passerby is simple, but it’s just one of the many human “values” that we seek to make our increasingly prolific machine creations obey.
Computer scientists like Stuart Russell and technologists in companies building advanced artificial intelligence platforms say that they want to see A.I. “provably aligned with human values.” A scientist at the A.I. startup Anki recently assured Elon Musk and others that A.I. will be “friend”—not “foe.”
At first glance, little of this is objectionable. We have been conditioned ever since Isaac Asimov’s famous Three Laws of Robotics to believe that, without our moral guidance, artificially intelligent beings will make erroneous and even catastrophically harmful decisions. Computers are powerful but frustratingly dumb in their inability to grasp ambiguity and context. Russell and others want to ensure that computers make the “right” decisions when placed in contact with humans; these range from the simple right-of-way norm described above to more complex and fraught issues, such as deciding whose life to prioritize in a car accident.
However, Russell and others ignore the lessons of the last time we seriously worried about how machines embedded in society reflect human beliefs and values. Twenty years ago, social scientists came to the conclusion that intelligent machines will always reflect the knowledge and experiences of the communities they are embedded within. The question is not whether machines can be made to obey human values but which humans ought to decide those values.