CBS’s “Bull,” a drama about a jury consultant (played by “NCIS” alum Michael Weatherly), is an ethics mess…but then, so is the former jury consultant Weatherly’s character is loosely based on: “Dr.” Phil McGraw. The show does find some interesting ethics issues, however. A couple of weeks ago, the story involved the programming of an experimental self-driving car. The issue: is it ethical for such a car to be programmed to kill its passenger if it has to make a life-or-death choice?
The ethical conflict involved is the so-called “trolley problem,” which is, as the name suggests, over a hundred years old. British philosopher Philippa Foot developed it into a series of hypotheticals in 1967. In 1985, American philosopher Judith Jarvis Thomson scrutinized and expanded on Foot’s ideas in The Yale Law Journal. Here is one of Thomson’s scenarios:
“Suppose you are the driver of a trolley. The trolley rounds a bend, and there come into view ahead five track workmen, who have been repairing the track. The track goes through a bit of a valley at that point, and the sides are steep, so you must stop the trolley if you are to avoid running the five men down. You step on the brakes, but alas they don’t work. Now you suddenly see a spur of track leading off to the right. You can turn the trolley onto it, and thus save the five men on the straight track ahead. Unfortunately,…there is one track workman on that spur of track. He can no more get off the track in time than the five can, so you will kill him if you turn the trolley onto him.”
The problem: Now what, and why?
A. Throw the switch in order to maximize well-being (five people surviving is greater than one).
B. Throw the switch because you are a virtuous person, and saving five lives is the type of charitable and compassionate act a virtuous person performs.
C. Do not throw the switch because that would be a form of killing, and killing is inherently wrong.
D. Do not throw the switch because you are a Christian, and the Ten Commandments teach that killing is against the will of God.
E. Do not throw the switch because you feel aiding in a person’s death would be culturally inappropriate and illegal.
You throw the switch. Either A or B is an ethical answer, and the Ethics Alarms position is that it doesn’t matter why you throw the switch; throwing it is the right thing to do, and leads to the most ethical result. (And if you recognize that worker as someone you have been tracking down to kill anyway? Moral luck. It doesn’t make the choice wrong, just right for a wrong reason among the right ones.)
This situation can and will arise with so-called “autonomous vehicles,” or AVs. “Every time [the AV] makes a complex maneuver, it is implicitly making trade-offs in terms of risks to different parties,” wrote Iyad Rahwan, an MIT cognitive scientist. If a child wanders into the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the young pedestrian, what should it do?
Now, I, being an ethicist and all, might well make the choice to hit the barrier. No, really. But what if my son were in the car, and his seat belt were not fastened? What if the car threatened to hit a woman pushing a baby carriage if I didn’t swerve into the barrier? What if the pedestrian is an ancient homeless person? An escaped fugitive killer, whom I recognize from the evening news? Stephen Hawking, in his automated wheelchair? The President of the United States?
The NEXT President of the United States?
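Consider how literal Rahwan’s “trade-offs” are. Here is a minimal sketch in Python (the Maneuver class, the probabilities, and the weights are all my inventions for illustration, not anyone’s actual code) of the child-in-the-road scenario reduced to arithmetic:

```python
# A hypothetical sketch, not any manufacturer's code: the "trade-offs"
# Rahwan describes, made explicit. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_passenger: float   # estimated chance of serious harm to the passenger
    p_harm_pedestrian: float  # estimated chance of serious harm to the pedestrian

def least_expected_harm(options: list[Maneuver],
                        passenger_weight: float = 1.0,
                        pedestrian_weight: float = 1.0) -> Maneuver:
    """Choose the maneuver with the lowest weighted expected harm.

    The two weights ARE the ethical policy: raise passenger_weight and
    you have a car that protects its owner; keep the weights equal and
    you have a car that may sacrifice its owner. Someone must pick.
    """
    return min(options,
               key=lambda m: passenger_weight * m.p_harm_passenger
                             + pedestrian_weight * m.p_harm_pedestrian)

# The child-in-the-road scenario above, reduced to two options:
choices = [
    Maneuver("swerve_into_barrier", p_harm_passenger=0.4, p_harm_pedestrian=0.0),
    Maneuver("brake_straight",      p_harm_passenger=0.0, p_harm_pedestrian=0.7),
]
print(least_expected_harm(choices).name)                        # swerve_into_barrier
print(least_expected_harm(choices, passenger_weight=3.0).name)  # brake_straight
```

Note that nothing in the math tells you what the weights should be. That is the ethics question, and it doesn’t go away because it has been buried in a parameter default.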
Even the non-life-and-death choices are difficult. How careful should a vehicle driver be? Does ethics require that the risk to life always be minimized to the greatest possible extent? “When you drive down the street, you’re putting everyone around you at risk,” Ryan Jenkins, a philosophy professor at Cal Poly, told Business Insider. “[W]hen we’re driving past a bicyclist, when we’re driving past a jogger, we like to give them an extra bit of space because we think it safer; even if we’re very confident that we’re not about to crash, we also realize that unexpected things can happen and cause us to swerve, or the biker might fall off their bike, or the jogger might slip and fall into the street.” Noah Goodall, a scientist with the Virginia Transportation Research Council, added, “To truly guarantee a pedestrian’s safety, an AV would have to slow to a crawl any time a pedestrian is walking nearby on a sidewalk, in case the pedestrian decided to throw themselves in front of the vehicle.”
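Goodall’s “slow to a crawl” is not hyperbole. Any pedestrian-proximity rule has to draw a curve between caution and actually getting anywhere, and every point on that curve is a choice. A toy sketch, with every name and number invented by me:

```python
# Hypothetical illustration of Goodall's point: the closer the nearest
# pedestrian, the slower the car. The thresholds and speeds are invented.
def speed_cap_mph(distance_to_nearest_pedestrian_m: float,
                  cruise_mph: float = 35.0,
                  crawl_mph: float = 3.0,
                  safe_distance_m: float = 20.0) -> float:
    """Cap speed based on proximity to the nearest pedestrian.

    At or beyond safe_distance_m, drive normally; at zero distance, crawl.
    Everything in between is a judgment call someone had to encode.
    """
    if distance_to_nearest_pedestrian_m >= safe_distance_m:
        return cruise_mph
    fraction = distance_to_nearest_pedestrian_m / safe_distance_m
    return crawl_mph + fraction * (cruise_mph - crawl_mph)

print(speed_cap_mph(20.0))  # 35.0 -- open road
print(speed_cap_mph(5.0))   # 11.0 -- jogger nearby
print(speed_cap_mph(0.5))   # 3.8  -- pedestrian at the curb
```

Where that curve sits between “crawl” and “cruise” is a value judgment, not an engineering constant.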
Human drivers make these split-second calls behind the wheel using experience, judgment, skill, and intuition. AVs, however, have to be programmed to make them. How? “AV programmers must either define explicit rules for each of these situations or rely on general driving rules and hope things work out,” Business Insider concluded.
“Hope things work out”? Life, chaos theory and “Jurassic Park” tell us that such hope is foolish and futile.
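To picture the Business Insider dichotomy, here is a hypothetical sketch (all the situation names and responses are invented): explicit rules for the cases the programmers anticipated, and a catch-all for the infinity of cases they didn’t:

```python
# Hypothetical sketch of the "explicit rules or general rules" choice.
# Every situation name and response here is invented for illustration.
EXPLICIT_RULES = {
    "pedestrian_in_crosswalk":  "stop",
    "cyclist_ahead_in_lane":    "slow_and_give_wide_berth",
    "emergency_vehicle_behind": "pull_over",
}

def decide(situation: str) -> str:
    # An explicit rule, if the programmers anticipated this situation...
    if situation in EXPLICIT_RULES:
        return EXPLICIT_RULES[situation]
    # ...otherwise fall back on generalities, and hope things work out.
    return "follow_general_driving_rules"

print(decide("cyclist_ahead_in_lane"))       # slow_and_give_wide_berth
print(decide("child_chasing_ball_at_dusk"))  # follow_general_driving_rules
```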
Do you want to own a car that chauffeurs you to your destination, but is programmed to sacrifice you, its owner, in a trolley problem situation? I certainly would want a say in the matter, wouldn’t you? Last fall, a Daimler AG executive told “Car and Driver” that the Mercedes-Benz AV would protect its passengers at all costs, causing ethics critics to pounce. “No no!” the company insisted. Denying such programming, it claimed that “neither programmers nor automated systems are entitled to weigh the value of human lives.”
Huh? That’s nonsense. If the AV is driving itself, it has to weigh such values. Daimler went on to say that trolley problems weren’t really an issue at all, as the company “focuses on completely avoiding dilemma situations by, for example, implementing a risk-avoiding operating strategy.”
Authentic Frontier Gibberish! All that means is that the car will try to avoid accidents. Good, but only a fool would believe that any programming will be 100% successful. When a trolley problem arises, it is there; it must be dealt with, and choices must be made.
Here is Google’s solution, so far:
Back in 2014, Google X founder Sebastian Thrun said the company’s cars would choose to hit the smaller of two objects: “If it happens that there is a situation where the car couldn’t escape, it would go for the smaller thing.” A 2014 Google patent involving lateral lane positioning (which may or may not be in use) followed a similar logic, describing how an AV might move away from a truck in one lane and closer to a car in another lane, since it’s safer to crash into a smaller object.
Hitting the smaller object is, of course, an ethical decision: it’s a choice to protect the passengers by minimizing their crash damage. It could also be seen, though, as shifting risk onto pedestrians or passengers of small cars. Indeed, as Patrick Lin, a philosophy professor at Cal Poly, points out in an email, “the smaller object could be a baby stroller or a small child.”
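For illustration, the “go for the smaller thing” heuristic can be this crude; the sketch below (the Obstacle type and all its numbers are mine, not Google’s) shows exactly where Lin’s objection bites:

```python
# A hypothetical sketch of Thrun's "go for the smaller thing" rule.
# The Obstacle type and all figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str
    frontal_area_m2: float  # stand-in for "smaller"

def unavoidable_impact_target(obstacles: list[Obstacle]) -> Obstacle:
    """If the car cannot escape, aim for the smallest obstacle."""
    return min(obstacles, key=lambda o: o.frontal_area_m2)

print(unavoidable_impact_target([
    Obstacle("semi_truck",  9.0),
    Obstacle("compact_car", 2.5),
]).kind)  # compact_car -- safer for the passengers

print(unavoidable_impact_target([
    Obstacle("parked_van",    5.0),
    Obstacle("baby_stroller", 0.4),
]).kind)  # baby_stroller -- Lin's objection, in one line of output
```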
In March 2016, Google’s AV leader at that time, Chris Urmson, described more sophisticated rules to the LA Times: “Our cars are going to try hardest to avoid hitting unprotected road users: cyclists and pedestrians. Then after that they’re going to try hard to avoid moving things.”
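Read literally, that is a tiered penalty scheme. A hypothetical sketch, with the tiers ordered as Urmson described but the weights and class names invented by me:

```python
# A hypothetical sketch of Urmson's ordering: try hardest to avoid
# unprotected road users, then moving things, then everything else.
# The tier numbers and class names are invented; the ordering is his.
AVOIDANCE_TIER = {
    "pedestrian":     0,  # avoid at all costs
    "cyclist":        0,
    "moving_vehicle": 1,
    "static_object":  2,  # e.g., a traffic barrier -- with you behind it
}

def preferred_impact(unavoidable: list[str]) -> str:
    """Among unavoidable impacts, choose the lowest-priority class."""
    return max(unavoidable, key=lambda c: AVOIDANCE_TIER[c])

print(preferred_impact(["cyclist", "static_object"]))  # static_object
```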
Wait, does that mean it will kill me rather than that cyclist?
These programming choices will affect insurance and liability. Consumer Watchdog’s Wayne Simpson, who doubts whether these and other problems will ever be solved sufficiently to make AVs viable, testified before NHTSA,
“The public has a right to know when a robot car is barreling down the street whether it’s prioritizing the life of the passenger, the driver, or the pedestrian, and what factors it takes into consideration. If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”
There is another complication to the ethics calculations, however. Eventually, it will be undeniable that AVs, a.k.a. “robot cars,” will save hundreds of thousands of lives. Users of such cars may have to accept the fact that their insurance policies and the law will mandate that their vehicles drive themselves, and will choose to kill them under certain rare but possible circumstances…for the greater good, of course.
But won’t such laws be vulnerable to Constitutional challenge? Can the government force me to accept that my car will kill me as a condition of traveling on the roads and highways, because other drivers are too dangerous to allow behind the wheel?
You can explore this problem more thoroughly in the scenarios presented at this website.