Self-Driving Cars Could Be Programmed To Kill You In Order To Save Other Lives


Written by John Vibes at trueactivist.com

 

For the past few years, Google’s self-driving cars have been traveling around the country so the company can test how well they work and how safe they are. The cars have been involved in a total of 13 accidents; however, the self-driving vehicles were not responsible for any of them.

Most of the accidents occurred when the cars were rear-ended by other drivers who were not paying attention. The cars have logged about 1,011,338 miles without a driver since 2009, and nearly another million with a driver.

The cars are maneuvered by complex algorithms that account for everything from weather conditions and red lights to evasive moves in accident situations.

As these algorithms are tweaked, some very strange ethical questions have begun to arise. For example, is it ethical for a car to be programmed to kill you if doing so would save the lives of many other people?

That is a question that researchers at the University of Alabama at Birmingham are currently considering.

UAB researcher Ameen Barghi framed the dilemma as a version of the classic trolley problem:

“Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a bus-full of others, but not both. What do you do?”

Barghi continued:

“Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he explained. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former. Deontology, on the other hand, argues that some values are categorically true: murder, for example, is always wrong, and we should never commit it. Even if shifting the trolley would save five lives, we shouldn’t do it, because we would be actively killing one.
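
To make the contrast concrete, here is a minimal, purely illustrative sketch of how the two rules could be written as maneuver-selection policies. Nothing here reflects Google’s actual software; the Maneuver class, the casualty estimates, and the actively_harms flag are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical evasive option the car could take (illustrative only)."""
    name: str
    expected_deaths: float   # estimated casualties if this maneuver is chosen
    actively_harms: bool     # True if the maneuver deliberately kills someone

def utilitarian_choice(options):
    # Utilitarian rule: pick whichever maneuver minimizes total expected deaths,
    # even if that means sacrificing the car's own passenger.
    return min(options, key=lambda m: m.expected_deaths)

def deontological_choice(options, default):
    # Deontological rule: refuse any maneuver that actively kills someone;
    # among the rest, still prefer fewer deaths, else fall back to the default.
    permissible = [m for m in options if not m.actively_harms]
    return min(permissible, key=lambda m: m.expected_deaths) if permissible else default

if __name__ == "__main__":
    brake_straight = Maneuver("hit the oncoming bus", expected_deaths=6, actively_harms=False)
    swerve_to_wall = Maneuver("swerve into the concrete wall", expected_deaths=1, actively_harms=True)
    options = [brake_straight, swerve_to_wall]
    print("Utilitarian rule picks:", utilitarian_choice(options).name)
    print("Deontological rule picks:", deontological_choice(options, brake_straight).name)
```

On the article’s own example the two policies disagree: the utilitarian rule sacrifices the passenger to spare the bus, while the deontological rule refuses any maneuver that actively kills someone.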

Would you drive a car that was programmed to kill you in certain situations, even if it meant saving the lives of many other people?


COMMENTS

  1. “One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times…” – Morals and the machine (The Economist)

  2. I think that given the rise in drunk driving, texting, and so on, we should embrace this technology. An advanced computer system can analyze and respond far more quickly than a human can. I think the priority should be the safety of the passenger, as logically any driver (human or CPU) can only be responsible for its own decisions. Choosing a safe speed and taking other common precautions ensures that the risk of a crash is statistically lower. When unforeseen events come into play, the driver must avoid harm to those in its care, as they have an implied trust in the driver to keep them as safe as possible. The outside party has placed no such trust and therefore cannot expect a sacrificial act of protection; i.e., a school bus that pulled out in front has been driven in a dangerous way, and the fault lies with its driver. If surrounding traffic responds by crashing into the sides to avoid killing children, more cars are now drawn into the risk. If the CPU instead slams the brakes and hits the bus straight on, it may result in casualties, but the chain effect is minimized. The consequences of the bus driver’s actions are then more closely tied to those who consented to be transported; in other words, it is only the bus passengers who should suffer from the bus driver’s error. (A rough sketch of this passenger-first rule appears after the comments.)

  3. If the optimal outcome is to save the most lives, then the logical thinker would be okay with this. I would trust a computer-programmed car more than I would a human driver. If all cars were self-driven, there would probably be few incidents caused by ‘human’ errors, such as not paying attention.
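
Purely as an illustration of the rule the second commenter describes, and not anything from the article or from Google, here is a hypothetical sketch of a passenger-first policy: rank candidate maneuvers by risk to the car’s own passengers, then by how many uninvolved vehicles would be drawn in, then by risk to bystanders. The Candidate class and all of the numbers are made up.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    """A hypothetical evasive option; every field is an illustrative estimate."""
    name: str
    passenger_risk: float     # probability of serious harm to the car's own passengers
    vehicles_drawn_in: int    # how many additional vehicles the maneuver would involve
    bystander_risk: float     # probability of serious harm to uninvolved third parties

def passenger_priority_choice(options: List[Candidate]) -> Candidate:
    # Lexicographic priority: protect the car's own passengers first (they placed
    # their trust in the driver), then minimize the chain effect, then bystander risk.
    return min(options, key=lambda m: (m.passenger_risk, m.vehicles_drawn_in, m.bystander_risk))

if __name__ == "__main__":
    brake_hard = Candidate("brake hard and hit the bus straight on", 0.4, 1, 0.3)
    swerve_across = Candidate("swerve across adjacent lanes", 0.5, 3, 0.6)
    print(passenger_priority_choice([brake_hard, swerve_across]).name)
```

Under these made-up numbers the policy chooses to brake rather than swerve, which is the outcome the commenter argues for; with different estimates it could choose otherwise.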
