April 3, 2016
Quartz published a piece on the moral realities of trusting robots, quoting Patrick Lin:
And this isn’t simply a matter of arguing until we figure out the right answer. Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, says ethics may not be internally consistent, which would make it impossible to reduce ethics to programs. “The whole system may crash when it encounters paradoxes or unresolvable conflicts,” he says.
Anderson and the other professors I spoke to agree that machines should not function in areas where there’s moral controversy. And Lin adds that he questions whether it’s ethical, on principle, to offload the hard work of ethical decisions onto machines. “How can a person grow as a person or develop character without using their moral muscle?” he says. “Imagine if we had an exoskeleton that could make us move, run, lift, and do all other physical things better. Would our lives really be better off if we outsourced physical activity to machines, instead of exercising our own muscles?”