Self-Driving Car Ethics: Who Do They Decide To Kill? You?


CBS’s “Bull,” a drama about a jury consultant (played by “NCIS” alum Michael Weatherly), is an ethics mess…but then, so is the former jury consultant on whom Weatherly’s character is loosely based: “Dr.” Phil McGraw. The show does find some interesting ethics issues, however. A couple of weeks ago the story involved the programming in an experimental self-driving car. The issue: is it ethical for such a car to be programmed to kill its passenger if it has to make a life or death choice?

The ethical conflict involved is the so-called “trolley problem,” which is, as the name suggests, over a hundred years old. British philosopher Philippa Foot developed it into a series of hypotheticals in 1967. In 1985, American philosopher Judith Jarvis Thomson scrutinized and expanded on Foot’s ideas in The Yale Law Journal. Here is one of Thomson’s scenarios:

“Suppose you are the driver of a trolley. The trolley rounds a bend, and there come into view ahead five track workmen, who have been repairing the track. The track goes through a bit of a valley at that point, and the sides are steep, so you must stop the trolley if you are to avoid running the five men down. You step on the brakes, but alas they don’t work. Now you suddenly see a spur of track leading off to the right. You can turn the trolley onto it, and thus save the five men on the straight track ahead. Unfortunately,…there is one track workman on that spur of track. He can no more get off the track in time than the five can, so you will kill him if you turn the trolley onto him.”

The problem: Now what, and why?

A. Throw the switch in order to maximize well-being (five people surviving is greater than one).
B. Throw the switch because you are a virtuous person, and saving five lives is the type of charitable and compassionate act a virtuous person performs.
C. Do not throw the switch because that would be a form of killing, and killing is inherently wrong.
D. Do not throw the switch because you are a Christian, and the Ten Commandments teach that killing is against the will of God.
E. Do not throw the switch because you feel aiding in a person’s death would be culturally inappropriate and illegal.

You throw the switch. Either A or B is an ethical answer, and the Ethics Alarms position is that it doesn’t matter why you throw the switch; throwing it is the right thing to do, and leads to the most ethical result. (And if you recognize that worker as someone you have been tracking down to kill anyway? Moral luck. It doesn’t make the choice wrong, just right for a wrong reason among the right ones.)

This situation can and will arise with so-called “autonomous vehicles,” or AVs. “Every time [the AV] makes a complex maneuver, it is implicitly making a trade-off in terms of risks to different parties,” wrote Iyad Rahwan, an MIT cognitive scientist. If a child wanders into the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the young pedestrian, what should it do?
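The trade-off Rahwan describes can be made concrete with a toy sketch. Nothing below reflects any real AV’s code: the maneuver names, the fatality probabilities, and the decision to weigh passenger and pedestrian equally are all assumptions made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_passenger_fatality: float   # estimated chance this maneuver kills the passenger
    p_pedestrian_fatality: float  # estimated chance it kills the pedestrian

def least_total_harm(options):
    """Pick the maneuver with the lowest total expected fatalities.

    Summing the two risks with equal weight is itself an ethical choice:
    a different weighting encodes a different answer to the trolley problem.
    """
    return min(options, key=lambda m: m.p_passenger_fatality + m.p_pedestrian_fatality)

options = [
    Maneuver("swerve into barrier", p_passenger_fatality=0.3, p_pedestrian_fatality=0.0),
    Maneuver("brake straight ahead", p_passenger_fatality=0.0, p_pedestrian_fatality=0.6),
]
print(least_total_harm(options).name)  # -> swerve into barrier
```

Even this “neutral” rule of summing fatalities with equal weight is an ethical commitment; change the weights and you have answered the trolley problem differently.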

Now, I, being an ethicist and all, might well make the choice to hit the barrier. No, really. But what if my son were in the car, and his seat belt was not fastened? What if the car threatened to hit a woman pushing a baby carriage if I didn’t swerve into the barrier? What if the pedestrian is an ancient homeless person? An escaped fugitive killer, whom I recognize from the evening news? Stephen Hawking, in his automated wheelchair? The President of the United States?

The NEXT President of the United States?

Even the non-life and death choices are difficult. How careful should a vehicle driver be? Does ethics require that the risk to life always be minimized to the greatest extent? “When you drive down the street, you’re putting everyone around you at risk,” Ryan Jenkins, a philosophy professor at Cal Poly, told Business Insider. “[W]hen we’re driving past a bicyclist, when we’re driving past a jogger, we like to give them an extra bit of space because we think it’s safer; even if we’re very confident that we’re not about to crash, we also realize that unexpected things can happen and cause us to swerve, or the biker might fall off their bike, or the jogger might slip and fall into the street.” Noah Goodall, a scientist with the Virginia Transportation Research Council, added, “To truly guarantee a pedestrian’s safety, an AV would have to slow to a crawl any time a pedestrian is walking nearby on a sidewalk, in case the pedestrian decided to throw themselves in front of the vehicle.”

Human drivers make these quick judgments behind the wheel using experience, judgment, skill and intuition. AVs, however, have to be programmed to make them. How? “AV programmers must either define explicit rules for each of these situations or rely on general driving rules and hope things work out,” Business Insider concluded.

“Hope things work out”? Life, chaos theory and “Jurassic Park” tell us that such hope is foolish and futile.

Do you want to own a car that chauffeurs you to your destination, but is programmed to sacrifice you, its owner, in a trolley problem situation? I certainly would want a say in the matter, wouldn’t you? Last fall, a Daimler AG executive told “Car and Driver” that the Mercedes-Benz AV would protect its passengers at all costs, causing ethics critics to pounce. “No no!” the company insisted. Denying such programming, it claimed that “neither programmers nor automated systems are entitled to weigh the value of human lives.”

Huh? That’s nonsense. If the AV is driving itself, it has to weigh such values. Daimler went on to say that trolley problems weren’t really an issue at all, as the company “focuses on completely avoiding dilemma situation by, for example, implementing a risk-avoiding operating strategy.”

Authentic Frontier Gibberish! All that means is that the car will try to avoid accidents. Good, but only a fool would believe that any programming will be 100% successful. When a trolley problem arises, it is there; it must be dealt with, and choices must be made.

Here is Google’s solution, so far:

Back in 2014, Google X founder Sebastian Thrun said the company’s cars would choose to hit the smaller of two objects: “If it happens that there is a situation where the car couldn’t escape, it would go for the smaller thing.” A 2014 Google patent involving lateral lane positioning (which may or may not be in use) followed a similar logic, describing how an AV might move away from a truck in one lane and closer to a car in another lane, since it’s safer to crash into a smaller object.
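Read as an algorithm, the “smaller object” rule amounts to a lateral bias away from the larger of two neighbors. Here is a hedged reconstruction of that idea, not the patent’s actual method; the function name, object sizes, and offset are invented for the example.

```python
def lateral_bias(left_object_size_m, right_object_size_m, max_offset_m=0.5):
    """Return a lateral offset in meters: positive shifts right, negative shifts left.

    The vehicle drifts away from the larger neighbor (say, a truck) and toward the
    smaller one (say, a sedan), on the theory that if a crash happens anyway,
    hitting the smaller object does less damage.
    """
    if left_object_size_m == right_object_size_m:
        return 0.0
    return max_offset_m if left_object_size_m > right_object_size_m else -max_offset_m

# Truck on the left, sedan on the right: shift half a meter toward the sedan.
print(lateral_bias(left_object_size_m=12.0, right_object_size_m=4.5))  # 0.5
```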

Hitting the smaller object is, of course, an ethical decision: it’s a choice to protect the passengers by minimizing their crash damage. It could also be seen, though, as shifting risk onto pedestrians or passengers of small cars. Indeed, as Patrick Lin, a philosophy professor at Cal Poly, points out in an email, “the smaller object could be a baby stroller or a small child.”

In March 2016, Google’s AV leader at that time, Chris Urmson, described more sophisticated rules to the LA Times: “Our cars are going to try hardest to avoid hitting unprotected road users: cyclists and pedestrians. Then after that they’re going to try hard to avoid moving things.”
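Urmson’s ordering reads like a tiered penalty scheme: unprotected road users first, then other moving things, then everything else. The sketch below shows what such a hierarchy could look like; the categories are paraphrased from his statement, and the penalty numbers are arbitrary placeholders, not Google’s values.

```python
# Arbitrary placeholder penalties expressing the stated ordering,
# from "try hardest to avoid" down to "everything else."
PENALTY = {
    "pedestrian": 1_000_000,
    "cyclist": 1_000_000,
    "moving_vehicle": 10_000,
    "static_object": 100,
}

def collision_cost(obstacle_type, p_collision):
    """Expected penalty for a candidate trajectory that risks hitting one obstacle."""
    return PENALTY[obstacle_type] * p_collision

# Under this hierarchy, a 10% chance of clipping a barrier scores far better
# than even a 1% chance of hitting a cyclist.
print(collision_cost("static_object", 0.10))  # 10.0
print(collision_cost("cyclist", 0.01))        # 10000.0
```

Note that the passenger’s own risk never appears explicitly in this hierarchy, which is exactly what prompts the next question.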

Wait, does that mean it will kill me rather than that cyclist?

These programming choices will affect insurance and liability. Consumer Watchdog’s Wayne Simpson, who doubts whether these and other problems will ever be solved sufficiently to make AVs viable, testified before NHTSA,

“The public has a right to know when a robot car is barreling down the street whether it’s prioritizing the life of the passenger, the driver, or the pedestrian, and what factors it takes into consideration. If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”

There is another complication to the ethics calculations, however. Eventually, it will be undeniable that AVs, a.k.a. “robot cars,” will save hundreds of thousands of lives. Users of such cars may have to accept the fact that their insurance and laws will mandate that their vehicles must drive themselves and will choose to kill them, under certain rare but possible circumstances…for the greater good, of course.

But won’t such laws be vulnerable to Constitutional challenge? Can the government force me to accept that my car will kill me as a condition of traveling on the roads and highways, because other drivers are too dangerous to allow behind the wheel?

You can explore this problem more thoroughly in the scenarios presented at this website.


Filed under Government & Politics, Health and Medicine, Law & Law Enforcement, Rights, Science & Technology

25 responses to “Self-Driving Car Ethics: Who Do They Decide To Kill? You?”

  1. First impression: In any given situation where you have “brake failure”, you have to get the car stopped. What good is it to run over people if you can’t get the car stopped and you’ll be faced with that decision again and again? Might as well hit something to disable the vehicle…besides, the vehicle should be designed with safety features to protect the occupants from impact. Programming should be as predictable as possible: 5 MPH under the speed limit, prior planning for direction of travel.

    Second impression: Fully autonomous vehicles are worthless. I’d like to see them handle a snow storm. My car has “TCS” or Traction Control System. The purpose of this “Default On” feature is to stop you from accelerating when one tire starts to spin. In the ideal scenario, if you’re over-accelerating and your right tire spins without traction, without TCS you might continue to accelerate and the traction of your left tire will swing your car off the road. When TCS kicks in, it stops power from reaching any of the tires and you glide through the event. What typically happens is one of these two scenarios.

    1) I’m waiting to enter a long line of traffic that’s moving 45MPH. There’s a gap big enough for me and I can get up to speed quickly…but one tire slips on gravel as I pull into the lane of traffic and TCS cuts acceleration power. I have to glide for 1.5 – 2.5 seconds until power re-engages. Meanwhile, I didn’t accelerate as fast as I could have / thought I would and the next car is up my ass.

    2) I’m going slow in a neighborhood during a snow storm, but I’m not stopping anywhere. Continued motion is key to navigating snow storms. If I stop, my tires stick in a rut and I can’t get going again – one tire might have traction, but because the other tire is “slipping”, power is cut entirely, including to the tire that has traction.

    (Thank god there’s a way to turn TCS off – I turn it off all of the time, even in great weather.)

    The point is – these geniuses can’t be bothered to figure out how to make that piddly feature from 2004 work properly….there are thousands of other scenarios and features that they won’t ever consider or perfect.

    I wonder how many “Miraculous Saves” people make each day in this country driving? The common refrain is: “If everyone’s car is automated…” but that’s simply not going to happen….not unless these technological wonders are less than $5k brand new. No….these automated vehicles will be the prized possession of the upper class for 10 years and they’ll have to navigate the same traffic scenarios as the rest of us. I take that back. They’ll have to deal with more.

    How’d you like to be the victim of a robbery? How does an AV recognize that someone cut it off intentionally to force it to stop, and that the person that exits that vehicle in front is a threat? They’ve got a gun / knife and now the occupants are sitting ducks because the AV won’t get going again until the aggressor’s car moves. A normal person would reverse, get around and get the hell out of there. In an AV – you’re trapped in a little prison and road pirates will just force you to stop and hand over your valuables.

    You think criminals want to be stuck in vehicles that can take orders from a government agency when a warrant hits the wire? I doubt they’d be pleased that their AV redirected to bring them to the police station or pulled off to the side of the road and locked them in to await the arrival of police. So – we know criminals (who probably can’t afford the AV) won’t be driving them. They’ll be among the hold-outs, with the car enthusiasts (collectors) and the libertarian minded.

    • Alex

      >The point is – these geniuses can’t be bothered to figure out how to make that piddly feature from 2004 work properly….there are thousands of other scenarios and features that they won’t ever consider or perfect.

      As one of “these geniuses” (no offense taken), let me tell you that there is a ton of low-hanging fruit like this to fix before we even get to the point where the system has to decide who to kill.

    • Matthew B

      I’ve seen plenty of badly implemented TCS, ABS, and all wheel drive. That doesn’t mean they’re all bad. Many are done quite well. I’ve yet to turn the TCS off in my Ford Escape, and I’ve had it in some really bad traction situations.

    • Anonymous Coward

      >In any given situation where you have “brake failure” etc., etc.
      I don’t think the point of this is to pick apart the scenario, but to explore the problems it represents. Veritasium on YouTube has a video on similar things (this topic is very popular right now, as a lot of car manufacturers brought self-driving prototypes to CES this year).

      I also think it’s rather disingenuous to compare TC on your particular car to billions being invested in autonomous technology.

      > one tire might have traction, but because the other tire is “slipping”, power is cut to the other tire that has traction.
      Unless you have locking differentials, this is not how this situation plays out.

      When one tire loses traction in a normal differential, it actually gets all the power; it’s a side effect of how they work. So the wheel without traction would spin spin spin while the wheel with traction just sits there. TC tries to help by cutting power to the spinning wheel, hoping to cause either of them to grip and pull you out, but that’s not really what it was designed for.
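      A rough sketch of what that looks like, with made-up thresholds (a cartoon of an open differential plus engine-power-cutting traction control, not any manufacturer’s calibration):

```python
def open_diff_behavior(left_grip, right_grip):
    """Cartoon of an open differential: torque follows the path of least
    resistance, so the wheel with less grip is the one that spins."""
    return "left wheel spins" if left_grip < right_grip else "right wheel spins"

def traction_control(left_wheel_speed, right_wheel_speed, slip_threshold=5.0):
    """Cut engine power when one wheel is turning much faster than the other."""
    if abs(left_wheel_speed - right_wheel_speed) > slip_threshold:
        return "cut engine power until wheel speeds converge"
    return "no intervention"

print(open_diff_behavior(left_grip=0.1, right_grip=0.9))               # left wheel spins
print(traction_control(left_wheel_speed=40.0, right_wheel_speed=8.0))  # cut engine power...
```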

      Also, for what it’s worth, TC systems vary by manufacturer, and while I do not know your car/truck, my Toyota with TRAC and TCS does a good job in the rain, especially if I take a turn a little too fast and start to understeer into the next lane. Could I keep control of the car without it? Sure, but that’s not the point.

  2. Alex

    Oh my! Prepare for a rant. This is a topic near and dear to my heart, since I have worked in artificial intelligence in the past and currently work in an industry where two-fault tolerance (i.e., two credible failures across the complete system will never result in a catastrophic failure – which includes the death of a human) is the rule.

    First, fully autonomous vehicles are not gonna happen within our lifetime, or our children’s lifetimes. There are a number of technical reasons for this, but if you really want to know why, look at the air transportation industry. Modern airliners are perfectly capable of flying themselves from the moment they’re aligned with the runway until they leave it after landing. Yet there is always a pilot – actually two – who can override the automated systems if they fail. Most importantly, they are there to give *goals* to the automated systems. Pilots do not get in a plane in Seattle and just enter the coordinates for New York in the flight computer. They create and file a flight plan, manage communications and “program” the airplane to follow the specified route, making adjustments as required and monitoring the outcome. In the self-driving car world, which is much less sophisticated, what we will see is vehicles that can drive autonomously on the freeway, while the driver must still take control on less predictable roads. Then we will get driving assists (braking when an object is detected is already a feature) that get more sophisticated and prevalent, just like ABS is today. As an aside, give a twenty-something an old car without ABS on a mildly wet road: fun times!

    Second, the driving force on technology adoption will be insurance. Not the law (which might at first spur progress, but then halt it when it fails to adapt to updated technology), not consumer demand for the latest gadget and not technological progress (which is leaps ahead of what’s being offered in the market). Once enough people have an automated car that never has accidents while on a freeway, insurance companies will offer better rates for those. If you become proficient at handing control of the car over to the automated system and taking it back, your rate goes down (see all those gadgets that can now track your driving habits and adjust your rate accordingly). If you have more of these safety options you will get better rates (say, small object detection and avoidance). It will be slow but steady adoption. Just like driving a stick is now mostly an obsolete skill, merging onto the highway will become mostly useless in a couple of decades. A decent automated system can do it much better than any human can (in both cases). Driving will keep changing, but it won’t jump to a fully automated point in the span of a few years.

    Third, self-driving systems will be fault tolerant, even multiple-fault tolerant. I’ll take one of the examples trolley-problem lovers use to illustrate this: complete and sudden brake failure. “Do you run over the pregnant woman pushing a stroller or the group of teenagers?” Guess what, complete and sudden brake failure is virtually impossible. One brake pad may fail, but you still have three. The electronic braking system may fail, there is still a mechanical backup. Your brake pedal gets stuck under the floor mat, you still have your hand brake. Any fault tolerant system will take this into consideration. Probabilities will be computed and analyzed: not all accidents lead to fatalities, injury severity is considered in the decision making process, probabilistic modeling of the physics of the system will be part of the software package. This will minimize actual death risk to the same level as airline flight, if not better. Don’t believe me? Please tell me when was the last time a software error in a mass produced plane caused a fatal crash and how many people died.
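    To put rough numbers on that redundancy argument: if the ways of slowing the car really were independent, the chance of all of them failing at once would be the product of the individual failure rates. The figures below are invented purely to show the arithmetic (real failure modes are not perfectly independent):

```python
def p_all_fail(failure_probabilities):
    """Probability that every (assumed independent) way of slowing the car
    fails at the same time: the product of the individual failure rates."""
    p = 1.0
    for p_fail in failure_probabilities:
        p *= p_fail
    return p

# Hydraulic brakes, electronic braking, engine braking, hand brake (made-up rates):
print(p_all_fail([1e-4, 1e-3, 1e-2, 1e-2]))  # ~1e-11: "complete and sudden" failure is vanishingly rare
```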

    Will it be a smooth ride to get there? No. Will there be high-profile accidents in the meantime? Probably, but it is not a problem with the software making “bad” ethical decisions. The issues will be much more pedestrian, like forcing the technology out before it’s ready, failing to inform users, or not providing a clear and safe interface to the new capabilities.

    In summary, focusing on the “ethical decision making” program is a fun philosophical exercise, but it highlights the wrong problem. If this ‘ethics module’ becomes an issue when deciding who to kill in an accident, you have much, much, much bigger problems that got you there in the first place.

    • Alex

      One further note, and I hate to mention it because it sounds like an ad hominem, but in this case, credentials and experience are important.

      The three guys behind the moral machine design are a Media Arts & Sciences professor, a Psychological Scientist, and a Professor of Psychology. I’m sure this thought experiment will tell you more about the participants and designers than the actual technological and ethical issues.

      Want a more practical and nuanced look? Read Rodney Brooks’ take: http://rodneybrooks.com/unexpected-consequences-of-self-driving-cars/ (he is one of the guys behind iRobot, the makers of the Roomba, and he has been doing robotics longer than I’ve been alive).

      • If it sounds like an ad hominem, that’s probably because it is an ad hominem, with a dash of authority appeal as seasoning. You’re implying that the people involved shouldn’t be listened to, because they lack the technical expertise to understand the system.

        It’s also the reason why many people regard AI researchers with a certain level of disdain (see Tim’s sarcastic use of “geniuses,” earlier), or question whether they can be trusted to make life and death decisions (and, by extension, make a machine that might also make those decisions). You’re caught up on the technical aspects, and lecture lay people on why their concerns aren’t valid because of where the technology currently is… which fundamentally misses the point of the concerns they’re raising, which are about where the technology might END UP.

        Even the article you link blathers on about the practicality of the questions that are being asked, and how rarely they’re going to be made. THAT DOESN’T MATTER! We’re talking about a machine, and that means that if the question can be asked, there should be a predictable, explicable, and consistent course of action taken by the machine, which can be extrapolated from how it has been instructed to act. As users of the machines, we have a right, and a vested interest, in knowing, or being able to be informed, about what it will do in response to a hypothetical (or real) situation.

        Also, if you’re going to link an article to an ethics debate where the writer’s conclusion to the trolley problem is that his preferred answer is “[a two year old] moves the singleton to lie on the same track as the other five, then drives his train into all six of them, scatters them all, and declares “oh, oh”!”, because hypotheticals are absurd… I’m going to view his conclusions as questionable at best. Many of the questions we deal with in ethical debates are hypothetical (how often will you be in a position to actually defend yourself from a tyrant? Or choose between a convict’s life and a child’s?), but we have the debates and discussions so that we can practice making ethical decisions, and hone the process by which we do so. By practicing, we prepare for the unlikely scenario if one of those situations comes up, and also for the multitudinous, far more likely, scenarios that are less extreme than what has been discussed.

        Sure, the self-driving car killing someone (passenger or driver) is an unlikely and extreme scenario. But by working to figure out what the answer to this question SHOULD be, ethically (and explaining why), we are also helping ourselves to prepare for and answer the less extreme and more likely scenarios: Should self-driving cars be hard coded to always follow the speed limit? Even if you’re trying to get to the hospital with someone in need of treatment? Must they always follow traffic laws? What happens if the direction of traffic on a one way street has been reversed for the day, because one end of it is closed due to utility work?

        • Alex

          Well, the people behind Moral Machine should not be listened to because their argument is stupid. You can look at the current and historical state of AI to see why. You can look into actual systems engineering principles (like fault tolerance) to make an argument about whether addressing this hypothetical is a priority. Heck, you can address the arguments in my original post instead of rat-holing on the addendum post where I point out the weakness of the argument and encourage you to go learn more about it instead of swallowing it blindly.

          Let’s look at an analogy: If a 20-story building is to be constructed between a school and a hospital, should we spend months deciding whether it should fall to the left or the right in case of an earthquake? Should we make that part of the city code, instead of making an effort to create standards that will PREVENT the building from collapsing in the first place?

          Do we ask electricians to wire houses in such a way that a short will burn the living room first instead of the master bedroom?

          I expect any reasonable architect or engineer to laugh at me if I ask those questions.

          Should we be skeptical of the engineers behind the current crop of self-driving cars? Yes: not because their designs fail to account for the last-resort decision, but because their process does not guarantee an outcome that reduces harm to an acceptable level. We should be spending time pushing for real engineering (think civil or structural) practices in the software and electronics domain. Aviation software is mostly there, and medical devices are making good progress, but the formal preparation for our Computer Science graduates does not even begin to touch on these topics.

          Also, you probably missed the tags in the final section of Brooks’ article. 🙂

          PS – Go read about Superintelligence, understand Bostrom’s and Yudkowsky’s arguments, see the amount of effort and top-notch knowledge being spent on a beyond-remote possibility to see what will happen to self-driving cars if we focus on the least-probable/most-scandalous outcome. Or don’t if you won’t read them with a critical eye, because you’ll never sleep again.

    • Alex

      There’s a nuanced conversation to be had about how better design might have helped prevent the accident, but even an amateur pilot will tell you that pulling the nose back is the worst possible thing to do when the stall warning is ringing (this comes straight from the article: http://www.telegraph.co.uk/technology/9231855/Air-France-Flight-447-Damn-it-were-going-to-crash.html)

      • Rick M.

        Sometimes the technology causes a delayed reaction. Critical situations have to be overridden. That is the difference between the Boeing and Airbus approaches. Who has final say? Computer or pilot?

        I would never even consider a self-driven car. I love driving. Have had many sports cars.

        • Alex

          I agree on the Airbus vs. Boeing approach (disclosure: I’m rooting for the hometown team). The flight stick is an abomination. It replaces the interface that virtually all pilots learned with (the flight wheel) with one that (a) provides no feedback and (b) is controlled with the non-dominant hand by the captain.

  3. Errol

    This is the first time I have seen the ‘trolley problem’ where the switch is actually on the trolley. Every other time I’ve read it, the switch has been outside beside the track. Does being the driver of the trolley make a difference from being a bystander? Also the ‘trolley problem’ is usually followed by ‘The fat man’ dilemma. https://en.wikipedia.org/wiki/Trolley_problem
    The fat man dilemma – ‘As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?’

  4. Matthew B

    In the example where someone steps into the path of the AV, the person doing the stepping has some culpability. Presumably, the AV would stop in time if it were possible, as it would have a reaction time far faster than any human’s; the only way it would risk hitting a pedestrian is if the pedestrian stepped into the path of the AV carelessly.

    So the equation does alter some: should the innocent AV occupant die to save the life of a careless pedestrian?

  5. Wendy

    Based on the example pictured, I hope these cars are somehow programmed to know when they’re approaching a crosswalk, and not to be going at a speed at which they wouldn’t be able to stop. In any event, what I was wondering was why should I have to be killed/injured because someone steps into the road with a car coming? I’m just talking about the person who just isn’t thinking and is oblivious to the world (perhaps texting?). Of course I don’t want to see anyone get hurt, but is it unethical for me not to want to get hurt as a result of someone else’s stupidity?

  6. isolumikko

    I’m not sure how it relates to ethics, because utilitarianism is in my view not always ethical, but surely the most likely route for AI behaviour is convergence on rules which minimise insurance payouts.

    The whole argument for self-driving cars is that they are on average safer than human-driven cars. The trolley-switch dilemma is irrelevant because it is a very specific and highly unlikely case. Real accidents involve incomplete knowledge and rapidly diminishing options, of the sort: steer off the road vs. risk a head-on with approaching traffic. How we can quantify “safer” is by the total cost of unsafety, and insurance payouts make a reasonable proxy for this.
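    A minimal sketch of that proxy, assuming invented probabilities and payout figures: score each candidate maneuver by its expected insurance payout and pick the cheapest.

```python
def expected_payout(outcomes):
    """Sum of probability * payout over the possible outcomes of a maneuver."""
    return sum(p * payout for p, payout in outcomes)

# (probability, payout-in-dollars) pairs -- all numbers invented for illustration:
swerve_off_road = [(0.20, 30_000), (0.01, 1_000_000)]  # property damage, occupant injury claim
risk_head_on = [(0.05, 2_000_000)]                      # multi-party fatality claim
candidates = {"swerve off road": swerve_off_road, "risk head-on": risk_head_on}
print(min(candidates, key=lambda name: expected_payout(candidates[name])))  # swerve off road
```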

    Finally, I think the US will be late to adopt self-driving on public roads because the legal/political system will not face up to harsh utilitarianism. Self-driving golf buggies, maybe. Self-driving quarry trucks, too, but the first commercial use on public roads will be elsewhere.

    • Dwayne N. Zechman

      Another example is self-driving farm tractors. I have a farmer friend who tells me that they’re quite effective because they make very straight lines and get the width of each pass as uniform as possible by using GPS.

      –Dwayne

  7. “Self-Driving Car Ethics: Who Do They Decide To Kill? You?”

    But you take this risk willingly every time you let your wife drive.

  8. Isaac

    Subways and trains are already more or less incapable of reacting if someone jumps or falls into their path. So one foreseeable benefit of AVs is that pedestrians will be much more careful around roads, and not assume that drivers will just react to their presence. (This is also why you will never see AVs in India, where all pedestrian crossing is based on throwing yourself into the path of oncoming cars.)

  9. That I, Robot clip Rick posted was in jest, I suspect, but the protagonist of the movie hates robots specifically because one chose to save him from a car crash rather than a little girl, which is similar to the problem we’re discussing here, except that the robot actually factored in their individual chances of survival.

    Tim Hayes is right that the question should be asked and answered deliberately during the programming of cars. Alex is right that, rather than people’s behavior remaining the same and dictating the answers to the questions, it is far more feasible that we answer the questions decisively and that individuals adjust their behavior according to what they know about the cars’ decision processes and safety protocols. That is how technological change works. That’s why the streets became dominated by cars instead of by pedestrians (see Adam Ruins Everything: Why Jaywalking is a Crime). Why program cars to make human decisions when humans will make decisions based on how the cars work? The decisions about how to program the cars will revolve around how difficult it will be for people to plan around car programming.

    On a more overarching note, during a discussion about the trolley problem I once heard someone draw a parallel between the classic trolley problem, or at least the version where you can push a fat man into the path of the trolley, and a situation where five people each needed a different organ to live, and you could choose to take the organs from a healthy person. In both cases, the question is whether it is right to save one person who is currently on track to live (no pun intended), or save five people who are on track to die.

    After some thought, I concluded that a relevant question here that people often overlook is what kind of world is created by the decision (so yes, the implications this decision has under the Kantian categorical imperative). Do we really want to live in a world where people can never be sure from day to day that they won’t be actively sacrificed against their will to save someone else? It’s one thing to manage your own risks, but managing the risks and choices of others lest you be forced to take their place is another matter entirely. What sort of choices would people be forced to make in the future? If a person loses weight, they may not be considered as a sacrifice to stop the trolley, but would they then possibly be condemning people to death at an unspecified point in the future? Would everyone have a stake in enforcing eugenics and healthy behavior so nobody will have organ failure that could call for a sacrifice?

    Back to the original problem of the ethics quiz, if all six people are working on the tracks, they presumably all accept an equal risk that a trolley might crash into them, and so it does not disrupt society as a whole to sacrifice the one to save the five. Taking organs from healthy people to save the sick, however, annihilates the confidence that a person has that their fellows will protect their current state. If people are willing to turn on each other whenever a less fortunate soul appears, society cannot function, even if the system is designed so that “worthy” souls are the ones who survive. The worthiness of those individual souls is not sufficient compensation for the evaporation of trust between people.

    Pure compassion (chaotic eusocial* behavior) might hold that every person share the burden of everyone else, but humans and human society can’t endure that. At some point, honor (orderly eusocial behavior) has to enforce boundaries that insulate people from obligation to share the misfortune of others.

    Although people need chaotic concepts like compassion to lend life meaning, we need honor in order to merely coexist. Honor is vital enough that societies that are initially in anarchy will inevitably draft rules, created and enforced by those who manage to accumulate power in the chaos that preceded. They make rules because power is easier to hold and wield with rules in place, and things tend to be more stable when people know what to expect.

    Certainty and security, in addition to their value, have a certain seductive quality for those who are already comfortable. Compassion, hope, possibility, and other chaotic concepts, meanwhile, are attractive to those who are currently all but certain that bad things will happen in the future, or that good things will fail to happen. Ethics would tend to be more concerned with honor, since it represents the obligations that we impose on people and ourselves to promote a better society. However, compassion can resolve many ethics conflicts that honor cannot, when someone must yield but no one can be compelled to (as in a noise war, where people are unpleasant to each other within their rights in retaliation for each other’s unpleasantness).

    *Eusocial here means “good”, having the intention of sacrificing from oneself to benefit others. It can be misguided or even outright stupid, but for the purposes of this word it’s really the thought that counts. It’s possible to advocate eusocial behavior in others while remaining merely neutral oneself, only doing the minimum one is compensated for.
