Moral Machines Making Life and Death Decisions

As artificial intelligence and robotics advance, questions of human morality have begun to arise. Consider self-driving cars. When an autonomous car faces an unexpected scenario, like a child chasing a ball into the road, it's easy to agree that the car should automatically stop or swerve to avoid causing harm. But what if the situation presents only two bad options? What if the brakes fail and the car must choose between continuing straight ahead and killing the child or veering into a cement truck and killing the driver? How should the car choose?

As machines take on more human tasks, they must have answers to impossible choices programmed into them (or we must accept whatever default behavior results when nothing is programmed). MIT is exploring how to answer these questions with a platform called the Moral Machine.
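To make the distinction between a programmed answer and an accepted default concrete, here is a minimal, purely hypothetical sketch in Python. The dilemma names, actions, and lookup-table structure are illustrative assumptions, not how any real vehicle or the Moral Machine actually works: an explicit table holds pre-programmed choices, and anything not covered falls through to a default.

```python
# Hypothetical sketch: explicit choices for known dilemmas, plus a default
# fallback for anything left unprogrammed. All names are illustrative only.
PROGRAMMED_CHOICES = {
    "child_in_road_vs_cement_truck": "brake_and_stay_in_lane",
    "pedestrian_vs_oncoming_traffic": "brake_and_stay_in_lane",
}

DEFAULT_CHOICE = "brake_and_stay_in_lane"  # the "default" we accept when nothing is programmed


def choose_action(dilemma: str) -> str:
    """Return the pre-programmed action for a dilemma, or fall back to the default."""
    return PROGRAMMED_CHOICES.get(dilemma, DEFAULT_CHOICE)


if __name__ == "__main__":
    print(choose_action("child_in_road_vs_cement_truck"))  # an explicitly programmed answer
    print(choose_action("unforeseen_scenario"))            # falls through to the default
```

The point of the sketch is only that someone, somewhere, has to decide what goes in the table and what the fallback is; the Moral Machine asks the public to weigh in on those decisions.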

You can visit the Moral Machine and add your perspective on how machines should solve moral dilemmas here.

