How the ‘trolley problem’ applies to self-driving cars

Our existing social contract around driving should apply to automated vehicles, say researchers, essentially solving the “trolley problem.”

The classic thought experiment asks: Should you pull a lever to divert a runaway trolley so that it kills one person rather than five? Alternatively: What if you had to push someone onto the tracks to stop the trolley? What is the moral choice in each of these instances?

For decades, philosophers have debated whether we should prefer the utilitarian solution (what’s better for society; i.e., fewer deaths) or a solution that values individual rights (such as the right not to be intentionally put in harm’s way).

In recent years, automated vehicle designers have also pondered how AVs facing unexpected driving situations might solve similar dilemmas. For example: What should the AV do if a bicycle suddenly enters its lane? Should it swerve into oncoming traffic or hit the bicycle?

According to Chris Gerdes, professor emeritus of mechanical engineering and co-director of the Center for Automotive Research at Stanford (CARS), the solution is right in front of us. It’s built into the social contract we already have with other drivers, as set out in our traffic laws and their interpretation by courts. Along with collaborators at Ford Motor Co., Gerdes recently published a solution to the trolley problem in the AV context in the Journal of Law and Mobility. Gerdes describes that work and suggests that it will engender greater trust in AVs.
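To make the contrast concrete, here is a minimal sketch, in Python, of how a decision rule grounded in traffic-law obligations, rather than in a utilitarian harm count, might look. It is an illustration only: the Maneuver fields, the ranking rules, and the example options are assumptions invented for this sketch, not Gerdes’s published method.

```python
from dataclasses import dataclass


# Hypothetical illustration only: rank candidate maneuvers by whether they
# respect the driver's legal obligations (don't take another road user's
# right of way, stay out of oncoming traffic) before weighing anything else.
# These fields and rules are invented for this sketch, not taken from the paper.
@dataclass
class Maneuver:
    name: str
    violates_right_of_way: bool   # takes right of way from another road user
    expected_collision: bool      # a collision is still expected on this path
    stays_in_lane: bool           # the AV keeps to its own lane


def duty_of_care_rank(m: Maneuver) -> tuple:
    """Lower tuples sort first: lawful maneuvers beat unlawful ones,
    then collision-free maneuvers beat expected collisions."""
    return (m.violates_right_of_way, m.expected_collision, not m.stays_in_lane)


def choose(options: list[Maneuver]) -> Maneuver:
    return min(options, key=duty_of_care_rank)


if __name__ == "__main__":
    options = [
        Maneuver("brake hard in lane", violates_right_of_way=False,
                 expected_collision=True, stays_in_lane=True),
        Maneuver("move into clear adjacent lane", violates_right_of_way=False,
                 expected_collision=False, stays_in_lane=False),
        Maneuver("swerve into oncoming traffic", violates_right_of_way=True,
                 expected_collision=False, stays_in_lane=False),
    ]
    print(choose(options).name)  # -> "move into clear adjacent lane"
```

Under this kind of ordering, swerving into oncoming traffic is never selected, because it breaches the duty the vehicle owes other road users regardless of how the expected harm tallies up.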

Source: Katharine Miller for Stanford University


Author Profile

Futurity is a nonprofit website that aggregates news articles about scientific research conducted at prominent universities in the United States, the United Kingdom, Canada, Europe, Asia, and Australia. It is hosted and edited by the University of Rochester.