Ethics in Autonomous Driving

LGST 612 – Responsibility in Professional Services
Final Paper
Moral problems in autonomous driving
Prof. Alan Strudler
October 21, 2016
Matthias GERG

Table of Contents
1.) Introduction
2.) Legal basis for autonomous cars
3.) Self-driving cars and ethics
3.1 Non-identity theory
3.2 Utilitarian view (Consequentialist)
3.3 Kantian view (Deontological)
4.) Personal recommendation for robotic car programming
* Course concepts and theories covered in class in bold print

Introduction

For decades, autonomous or self-driving cars seemed like wishful thinking and totally unrealistic. However, the progress that has recently been made in the industry suggests that the motor industry will be disrupted "far sooner, faster and more powerfully than one might expect." In May, BMW CEO Harald Kruger announced that the company will launch a fully autonomous car by 2021, and tech giants like Google are working on the same technology. McKinsey & Company predicts that by 2030 up to 15% of new vehicles sold will be self-driving, and a heated discussion has started about the various implications this would have for driver safety and people's everyday lives. First of all, experts project a decrease in traffic-related deaths; road accidents are the largest accidental killer of Americans, claiming 35,092 lives in 2015. Even though safety standards have improved significantly, the human beings who control vehicles are fallible, and accidents therefore often end fatally. Thus, "autonomous cars remove the human element from driving and by doing so, theoretically, the number of traffic related deaths caused by humans behind the wheel will drop." However, accidents will still happen while humans and robots share the road, and questions of liability and morality therefore inevitably arise.
Should self-driving cars protect their owners under all circumstances, or is there a trade-off to be made in order to achieve the best possible outcome in terms of casualties?
The following paper will discuss moral and ethical issues related to autonomous cars by analyzing different ethical viewpoints and theories. Finally, the paper will conclude by taking a clear stand on how car manufacturers ought to program and design the self-driving cars of the future in order to act morally and protect humans from harm in the most "ethical" way.
Legal basis for autonomous cars

In many fields, detailed legislative texts provide clear guidance and orientation. The subject matter of self-driving cars, however, is so groundbreaking that the law simply hasn't caught up yet, and a "legal framework for autonomous vehicles does not yet exist (…)". And even if it did, what is legal and what is morally right often diverge in road traffic. Just imagine breaking the speed limit while taking your expectant wife to a nearby hospital. Should self-driving cars never disobey the law in autonomous mode? As Lin (2013) underlined, "our laws are ill-equipped to deal with the rise of these vehicles"6 and this is exactly where ethics come into play. Since a legal basis has not yet been created, "we have the opportunity to build one that is informed by ethics."6 Consequently, the following section will identify ethical "costs and benefits" in the context of robotic cars and then scrutinize and evaluate the matter from different angles of ethical theory.
Self-driving cars and ethics

To begin with, various arguments strongly favor the introduction of autonomous cars from an ethical perspective. They will enable disabled or physically impaired people, as well as the elderly, to travel in their own vehicles, which would increase their mobility and quality of life. In addition, experts predict that self-driving cars will have a smaller environmental footprint because they drive more rationally and efficiently than humans. Most importantly, however, self-driving cars promise to be safer and less error-prone than human drivers and will therefore decrease the traffic-related death toll.
This, on the other hand, leads directly to the downsides that have to be considered from an ethical perspective. Passenger safety, which is the focus of this paper's analysis, comes with the morally delicate topic of trading off lives. Most likely, the victims of future crash scenarios involving robotic cars will not be the same people who would have died in a regular car accident, and vice versa. Before applying various ethical theories to this issue, one also has to mention another important downside of autonomous cars: privacy and data security concerns. If hackers could take control of a self-driving car, which is far from unrealistic in times of cyber-crime, they could not only infringe privacy rights but also purposefully put the lives of car owners at risk.
As stated before, the key matter of debate among industry experts and scientists is currently how to program car computers and which algorithms should guide their decision-making process. Especially in worst-case scenarios, when an accident seems inevitable, different ethical theories offer different solutions.
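To make this contrast concrete, the following is a minimal, entirely hypothetical Python sketch of how two such decision rules could rank unavoidable-collision options. The CrashOption fields, the numbers, and both rules are illustrative assumptions for this paper's discussion, not any manufacturer's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver in an unavoidable-accident scenario (hypothetical model)."""
    name: str
    expected_casualties: float  # estimated total deaths across all parties
    passenger_risk: float       # estimated probability the car's own occupants die

def utilitarian_choice(options):
    """Consequentialist rule: minimize expected casualties overall,
    regardless of who the victims are."""
    return min(options, key=lambda o: o.expected_casualties)

def passenger_first_choice(options):
    """Owner-protective rule: minimize risk to the car's own occupants,
    breaking ties by total expected casualties."""
    return min(options, key=lambda o: (o.passenger_risk, o.expected_casualties))

if __name__ == "__main__":
    options = [
        CrashOption("swerve into wall", expected_casualties=1.0, passenger_risk=0.9),
        CrashOption("stay in lane", expected_casualties=3.0, passenger_risk=0.1),
    ]
    # The two rules pick opposite maneuvers for the same scenario:
    print(utilitarian_choice(options).name)       # swerve into wall
    print(passenger_first_choice(options).name)   # stay in lane
```

Under these assumed numbers, the consequentialist rule sacrifices the occupants to minimize the total death toll, while the owner-protective rule does the opposite, which is precisely the programming trade-off the theories below disagree about.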
Non-identity Problem

Oxford philosopher Derek Parfit introduced the non-identity problem in the early 1980s, referring to the conflict between the well-being of current and future generations. In Parfit's theory, there is a policy choice to be made between depleting a natural resource and conserving it. Thus, the quality of life of the current generation competes with the well-being of future generations. Lin (2013) relates this to the potential benefit of autonomous cars cutting fatality rates in half. Even so, a new group of victims would most likely be affected, which is a troubling circumstance. So should we conclude that enjoying the safety benefits autonomous cars offer, at the cost of harming these future victims, is unethical?
Even though a non-identity problem exists in principle in this