[Featured image: a Lego train with two tracks and a Lego man holding a lever]

Unpublished Draft: The Trolley Won’t Be A Problem

In this baby era of self-driving cars, everyone seems to be talking about the trolley problem:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?


Personal Action

The whole point of the trolley problem is to explore consequentialism, human moral psychology and personal responsibility. We already know that when an inanimate object or coincidence causes a death we feel more or less fine about it, compared with when we ourselves must take an action that leads to a death. People don’t like being the one to pull that lever. Even so, the majority of people surveyed take the utilitarian option and kill one person instead of five.

Self-driving cars are a brilliant solution to trolley-problem-style situations, should such situations ever exist. If the trolley itself can see the five people on one track and the one person on the other, and can make that ‘decision’ instantly without human intervention (albeit with shadowy far-off programmers having written the software), then the human is relieved of action entirely. The trolley is a perfect utilitarian machine.
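
Taken literally, that ‘perfect utilitarian machine’ is barely even software. Here’s a toy sketch in Python (the function and track names are mine, invented purely for illustration, not anyone’s real control code):

    # Toy utilitarian trolley: switch to whichever track kills fewest.
    # All names hypothetical; a sketch, not an implementation.
    def choose_track(people_at_risk: dict[str, int]) -> str:
        """Return the track with the fewest people at risk."""
        return min(people_at_risk, key=people_at_risk.get)

    print(choose_track({"main": 5, "side": 1}))  # -> side

No anguish, no lever, no moral residue: the machine just counts.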

Self-driving cars tell us nothing new about the trolley problem, because the only argument in the dilemma outside of utilitarianism revolves around personal participation in moral wrong, and the self-driving car has removed that participation. Programmers will almost certainly take the utilitarian option when writing the software, because it is the obvious choice.

However, there is another reason that the trolley won’t be a problem.

Trolleys Just Don’t Happen

Recently I’ve been reading sites like Meaningness and Metamoderna, where the authors attempt to think beyond postmodernism and to describe how to function when “nothing means anything” or “everything is relative”.

One of the most interesting points Meaningness makes about how to make judgements in a time when all systems can be seen as relative is this: in the real world, the only one we physically inhabit, there are no abstract problems devoid of context. Context is inextricable from life. So while we can invent abstract thought experiments to probe our intuitions, a trolley problem will never exist for a real self-driving car. When the car has plenty of context, there will always be a clear best case for its programming to follow, and that best case will almost certainly include “no death” as one of the options.

Predictions

My prediction is that trolley-problem-style deaths will be very rare, partly because the cars have been programmed that way, but partly because truly ambiguous circumstances will be very rare. Further, I predict that other, different death-by-autonomous-vehicle problems will arise that no thought experiment described, and that these will outnumber the trolley-style deaths. I predict total deaths will be fewer than in the early days of other new transport technologies, e.g. train and air travel. Further, compared with human drivers, autonomous vehicles will in fact save many more lives than they end, in both trolley and non-trolley situations.

Real Problems

One of the most disturbing things about people going on about the trolley problem is that

a) they think there is an answer to it

and

b) they let the same people who program self-driving cars carry on trying to build AI while we still don’t have the answer to a).

If anyone actually is concerned about the ethics of self-driving cars, then why the hell are we ok with AI? Hmm? Shouldn’t we wait until we find the answer? No one seems to take this view though.

I don’t care as much, because I don’t think there is an answer to a) out of context; in context there is a fairly obvious answer, and we can write some simple statements to deal with it (a sketch follows below). I also don’t think we’re capable of creating AI, because of similar, equally intractable philosophical problems, so DON’T WORRY WE’RE ALL VERY SAFE.
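
To show how obvious I think the in-context answer is, here are those ‘simple statements’ as a hedged Python sketch (manoeuvre names and death counts are hypothetical, and I’m assuming the car’s perception already supplies predicted death counts per manoeuvre): prefer any option with zero predicted deaths, and only count bodies if context offers none.

    # A sketch of the 'simple statements' idea, under the assumptions above.
    def pick_option(predicted_deaths: dict[str, int]) -> str:
        """predicted_deaths maps a manoeuvre to its predicted death count."""
        no_death = [m for m, d in predicted_deaths.items() if d == 0]
        if no_death:
            return no_death[0]  # real context almost always offers one of these
        return min(predicted_deaths, key=predicted_deaths.get)  # boring utilitarian fallback

    print(pick_option({"brake": 0, "swerve": 1, "carry_on": 5}))  # -> brake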

Stop Being Boring

[Image: “stop boring me and think”]

Anyway, the trolley problem is boring, so let’s dispense with it in favour of:

a) talking more deeply about ethics for programmers in multiple domains

b) talking about other kinds of problems for self-driving cars

I personally think the worst problem will be how to let urban traffic actually drive fast enough. These cars will likely be programmed to slow right down and stop if a living being that is probably a human passes in front of them, because we must avoid even one death at all costs. As soon as this becomes common knowledge, pedestrians will deliberately step out into traffic whenever they like, knowing that every vehicle will have to slow down and stop. This will grind traffic to a near-permanent standstill during busy times.
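
The rule that produces this gridlock is trivially simple. A toy sketch (names invented for illustration, assuming perception hands the planner a ‘possible human ahead’ flag):

    # Stop-for-anything-possibly-human: the rule pedestrians will learn to exploit.
    def target_speed(cruise_speed_kmh: float, possible_human_ahead: bool) -> float:
        if possible_human_ahead:
            return 0.0  # avoid even one death at all costs
        return cruise_speed_kmh

    print(target_speed(40.0, possible_human_ahead=True))  # -> 0.0

Every pedestrian who knows this rule is carrying a free stop button for all nearby traffic.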

Maybe the solution will be to program cars to be randomly less safe again: a 1/1000 chance of deliberately hitting a pedestrian, because fuck you, pedestrians. Maybe maintaining a healthy fear of cars is what will be needed to maintain the supremacy of the individual vehicle. The American Dream. Maybe lengthy discussion of the trolley problem, i.e. continuously imagining ourselves being horribly killed in our cars, is serving this exact purpose for our auto-overlords.

I’m never going to get away from it, am I?

(Featured image by Ryan Howerter)
