Issues In Contemporary Society


Fashion Institute of Design & Merchandising

The Ethical Dilemma of Self-Driving Cars

Is it acceptable for algorithms to kill people?

Mungyiu Ma

GNST 3900 Ethics in Business

Kent Hammond

06/12/2023

TABLE OF CONTENTS

 

Definition Section

Genesis

Connotation

Opposing/ Divergent Views

 

Thesis

 

Review of the Literature

Introduction and Overview

 

Personal Perspective

 

Works Cited

Definition Section

Genesis

With the rapid development of technology, self-driving cars appear to be an inevitable part of the future: they promise to make human life more convenient and to significantly reduce the probability of traffic accidents. With the introduction of these new technologies, however, we now face ethical dilemmas that were scarcely discussed until the last few years: Is it acceptable for an algorithm to kill one party in order to protect another? And who should take the responsibility?

To be clear, six levels of driving automation are currently recognized (Choksey and Wardlaw):

  • Level 0 – No Driving Automation
  • Level 1 – Driver Assistance
  • Level 2 – Partial Driving Automation
  • Level 3 – Conditional Driving Automation
  • Level 4 – High Driving Automation
  • Level 5 – Full Driving Automation

Now, in 2023, we are at the beginning of the Level 3 stage.

In this research paper, I will focus on Level 5 driving automation: fully self-driving technology that requires no human control at all. Although we are still a long way from this level of technology being widely used, and some technical challenges remain to be overcome, solving them is arguably only a matter of time. It is reasonable to believe that in the near future, fully automated AI vehicles will break through these technical difficulties and come into wide use, much as ChatGPT recently burst onto the scene.

 

Connotation

  • The relevance of the topic today

Although Level 5 full self-driving is an as-yet-unrealized future application, the ethical dilemmas it raises are well worth discussing today, because they concern how we should set the rules, write the code, and improve the legal regulation of this technology before it becomes commonplace.

 

  • The universal significance of full driving automation

Humans can get sleepy behind the wheel, make mistakes, be negligent or careless, drive drunk, look at their phones, and even intentionally drive into people on the street. AI does not have these problems; it can drive with fewer errors than humans. Moreover, as self-driving becomes more common, cars will be able to connect to one another and to road systems. If most of the cars on the road are autonomous, they will form a network that further reduces the probability of accidents.

 

  • Why is this an important contemporary issue?

While fully autonomous vehicles offer significant benefits to humanity as a whole, we must consider the potential consequences and social-ethical implications of how their algorithms are designed in a few extreme cases. When an authority puts forward a principle, that principle should conform to universal human moral intuition. No matter how perfect a principle is in its logic, reasoning, and formal derivation, if it is not in line with human nature, then we have reason to reject it (Johnson and Cureton). We must be human first; if enhancing the overall utility of humanity requires us to live an anti-human life, one that is alien and distorted, we would rather forgo that utility.

 

  • Who is discussing it and why?

Those discussing the issue now include legal regulators, car developers, and worried future car owners. Because full driving automation will usher in a new phase of human development, we have no precedent to draw on. Every potentially responsible party, along with the makers of the regulatory system, is seeking a balanced approach to serve as the ethical basis for popularizing the new technology.

 

Opposing/ Divergent Views

  • Opponents: Algorithmic killing is unacceptable

The opposition argues that we should stop the development of self-driving cars because of their ethically unacceptable consequences. When a human driver causes traffic casualties, the event can usually be classified as an accident or as negligence. When a self-driving car hits someone, however, it is not an accident at all; it is an algorithm killing someone on purpose (Emerging Technology).

The classic trolley problem can illustrate this point of view (Arfini, Spinelli, and Chiffi). Imagine a speeding AI self-driving car that turns onto a road and detects a pedestrian, Jack, directly ahead. The car is moving too fast to stop and is about to hit him. Its sensors, however, recognize a fork in the road ahead, but the side road is a dead end with a wall across it; if the car hits the wall, the owner of the car will be killed. Should a fully self-driving car be programmed to turn or not to turn? The programmer must consider such a scenario when writing and training the AI beforehand, and no matter which design he chooses, the death in this traffic accident is not an accident but an intentional one.
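
To make this dilemma concrete, here is a minimal, purely hypothetical sketch in Python of what such a pre-programmed choice might look like. The class, function, and policy names are my own illustrative assumptions, not any manufacturer's actual code; the point is only that the fatal outcome follows from a rule someone wrote in advance.

```python
# Hypothetical sketch: the swerve/no-swerve choice must be coded in advance.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str          # e.g., "stay_course" or "swerve"
    victims: list[str]   # who is expected to die under this action

def choose_action(outcomes: list[Outcome], policy: str) -> Outcome:
    """Select an action according to a pre-programmed policy.

    Whichever policy is chosen, the resulting death is a design
    decision made by a programmer, not a chance event.
    """
    if policy == "protect_occupant":
        # Never take an action expected to kill the occupant.
        safe = [o for o in outcomes if "occupant" not in o.victims]
        return safe[0] if safe else outcomes[0]
    if policy == "minimize_deaths":
        # Utilitarian rule: the fewest expected victims wins.
        return min(outcomes, key=lambda o: len(o.victims))
    raise ValueError(f"unknown policy: {policy}")

# Jack's scenario: stay the course (Jack dies) or swerve into the wall (owner dies).
scenario = [
    Outcome("stay_course", victims=["pedestrian Jack"]),
    Outcome("swerve", victims=["occupant"]),
]
print(choose_action(scenario, policy="protect_occupant").action)  # stay_course
```

Either policy produces a death that was selected by a rule written long before the crash, which is exactly the opponents' point.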

Moreover, let us imagine another situation. The same self-driving car is running at high speed, still too fast to stop, and is about to hit the pedestrians in front of it; this time, however, there are five pedestrians instead of one. The car's sensors find another fork in the road, with only one pedestrian on that side road. Should self-driving cars be programmed to take the turn and save five lives? If we code utilitarianism into self-driving cars, the results could be dire. In the same situation, what if the two roads diverged and one held an old man and the other a child? What if one held a man and the other a woman? What if one person were Black and the other white?

This is where programmers build systematic bias into the AI when writing the code beforehand. Although self-driving cars can greatly reduce the probability of traffic accidents, we should not sacrifice innocent individuals simply because traffic deaths and injuries decline overall. Once we adopt a purely utilitarian society, it comes at the cost of distorting the moral and legal systems of human society against humanity (Arfini, Spinelli, and Chiffi).
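
To see how such bias creeps in, consider a second minimal, hypothetical sketch: a utilitarian "cost" function for ranking crash outcomes. Every weight below is an assumption I invented for illustration, and that is precisely the ethical problem: someone must choose these weights, and the choice silently encodes whose life counts for more.

```python
# Hypothetical sketch of a utilitarian cost function. Every weight is an
# invented assumption; a real programmer would have to pick them, silently
# encoding a bias about whose life counts for more.

def crash_cost(people: list[dict]) -> float:
    """Sum a 'cost' over the people a maneuver is expected to harm."""
    cost = 0.0
    for person in people:
        weight = 1.0
        if person.get("age", 30) < 12:   # value children more? says who?
            weight *= 1.5
        if person.get("age", 30) > 70:   # value the elderly less? says who?
            weight *= 0.8
        cost += weight
    return cost

# The old-man-versus-child variant: "minimizing harm" now quietly ranks lives.
straight_path = [{"age": 75}]  # one elderly pedestrian straight ahead
side_road = [{"age": 8}]       # one child on the fork
print("swerve" if crash_cost(side_road) < crash_cost(straight_path) else "stay")
# -> "stay": the hard-coded weights, not chance, decide who is hit.
```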

Opponents further argue that if a self-driving car hits someone, it creates a responsibility gap, meaning it is hard to assign blame (Hansson, Belin, and Lundgren). From the car owner's point of view, the driving of the car is out of the owner's control, because he is not a driver but a passenger. From the manufacturer's point of view, the engineers who write the code cannot predict exactly what will happen after the car is delivered. It is even more unrealistic to blame the car itself, because the car has no free will, is not a moral agent, and is simply incapable of taking responsibility for its actions. The result is a situation in which, once a self-driving car hits someone, there is no one to blame. This is profoundly at odds with our moral intuitions, because we normally assume that if an action harms someone, then someone must be held responsible. It would also create a huge dilemma for the existing legal system. For example, should we classify such accidents as vehicular manslaughter, negligent homicide, intentional homicide, or the manufacture and sale of products that do not meet safety standards?

 

  • Proponents: Algorithmic killing is acceptable

Proponents of self-driving technology cite a theory of self-defense as an ethically acceptable justification for self-driving crashes: the Doctrine of Double Effect (Arfini, Spinelli, and Chiffi). The doctrine was first developed by the medieval European theologian Thomas Aquinas, who asserted that killing can be justified if a person kills someone in order to protect himself. Specifically, an action conforms to the doctrine when it produces two effects: one intended by the actor, and the other an incidental side effect unrelated to the actor's intention (Arfini, Spinelli, and Chiffi). Self-defense is justified on this view because there is no intention to harm others, only a foreseeable but unintended side effect of defending one's own life.

To satisfy the doctrine of double effect, four conditions are required: first, the action envisaged is itself either morally good or morally indifferent; second, the bad effect is not intended, even though it may be foreseen; third, the good outcome is not a direct causal consequence of the bad outcome; and fourth, the good outcome is "proportional" to the bad outcome (Solomon). Proponents of the principle argue that when all four of these conditions are met, the action under consideration is morally permissible despite its bad outcome.

Suppose we apply this doctrine of double effect to the scenario of a self-driving car hitting a person. While it is true that a self-driving car may hit a person, we cannot say that building self-driving cars amounts to using an algorithm to kill people intentionally. Manufacturers design the algorithms to reduce traffic fatalities overall, and when the cars end up killing individuals, that is a foreseeable but unintended side effect (Arfini, Spinelli, and Chiffi). On this reasoning, the algorithms of autonomous driving technology may legitimately prioritize the owner's interests and may also legitimately follow a certain degree of utilitarianism.

On the other hand, when it comes to attributing responsibility for AI cars hitting people, supporters believe that we should go beyond the traditional moral framework of finding wrongdoers and punishing them. In the new technological landscape, we need to propose a new model of liability attribution rather than agonize over who is to blame. For example, a no-fault insurance scheme or a victims' compensation fund could be set up, into which all owners of fully self-driving cars would pay on a regular basis, with the regulatory body compensating victims uniformly in the event of an accident (Uzair).

 

Thesis

The potential development of autonomous vehicles, especially Level 5 autonomous vehicles, has drawn the attention of many different stakeholders. However, there are still few international regulations governing them and no publicly accepted solutions to some of their outstanding ethical issues. This paper therefore aims to discuss these ethical and practical dilemmas: Is it acceptable for algorithms to kill people?

In particular, I will discuss how AV programming handles the trolley problem from two different points of view, and who is responsible for these types of decisions. Opponents argue that algorithmic killing violates basic human moral intuition and creates a liability gap. Proponents, on the other hand, argue that we can use the Doctrine of Double Effect to resolve the moral dilemmas of algorithmic killing and to build a new framework for the attribution of responsibility. In fact, because mature autonomous driving technology will significantly reduce the number of deaths and disabilities caused by car accidents and improve the overall welfare of humanity, its development has become unstoppable. However, we should guard against pure utilitarianism and establish a sound legal and regulatory system before the technology is widely used.

 

 

 

 

 

 

Works Cited

Choksey, Jessica Shea, and Christian Wardlaw. "Levels of Autonomous Driving, Explained." J.D. Power, 5 May 2021, https://www.jdpower.com/cars/shopping-guides/levels-of-autonomous-driving-explained

 

Johnson, Robert, and Adam Cureton. "Kant's Moral Philosophy." The Stanford Encyclopedia of Philosophy, Fall 2022 Edition, edited by Edward N. Zalta and Uri Nodelman, https://plato.stanford.edu/archives/fall2022/entries/kant-moral/

 

Arfini, S., D. Spinelli, and D. Chiffi. "Ethics of Self-driving Cars: A Naturalistic Approach." Minds & Machines, vol. 32, 2022, pp. 717–734, https://doi.org/10.1007/s11023-022-09604-y

Hansson, S. O., M. Å. Belin, and B. Lundgren. "Self-Driving Vehicles—an Ethical Overview." Philosophy & Technology, vol. 34, 2021, pp. 1383–1408, https://doi.org/10.1007/s13347-021-00464-5

Emerging Technology. "Why Self-Driving Cars Must Be Programmed to Kill." MIT Technology Review, 22 Oct. 2015, https://www.technologyreview.com/2015/10/22/165469/why-self-driving-cars-must-be-programmed-to-kill/

Solomon, Wm. David. "Double Effect." The Encyclopedia of Ethics, edited by Lawrence C. Becker, https://sites.saintmarys.edu/~incandel/doubleeffect.html

Uzair, M. "Who Is Liable When a Driverless Car Crashes?" World Electric Vehicle Journal, vol. 12, no. 2, 2021, p. 62, https://doi.org/10.3390/wevj12020062

 

 


 

 

 

 
