Robotic Agency & the Ethics of Automated Driving

Giorgi Vachnadze
3 min readJan 13, 2021

This is another article by Sven Nyholm concentrating on the ethical debates and controversies that surround self-driving cars. How is human agency different from robotic agency? What is the optimal human-robot coordination schema? The ethics of automated driving is an emerging field in the philosophy of robotics. Traffic automation is no longer a hypothetical scenario: automated vehicles are currently being deployed by major players in the industry, and several accidents have already taken place since 2015.

We should keep in mind that full automation is only one of many different versions of how this technology could be deployed. There are also options of partial automation, or automation with the possibility of human override.

Another important name in the field is Noah Goodall, a civil engineer at the University of Virginia specializing in traffic operations, intelligent transportation systems, and vehicle communication/automation. We will be exploring some of Goodall’s research and findings in the area as well.

In his article Can You Program Ethics Into a Self-Driving Car?, Goodall introduces some interesting topics concerning automated vehicles and ethical programming. Overall, research suggests that self-driving cars can and have reduced the number of traffic accidents. But when accidents do occur and an automated vehicle is involved, the legal and ethical questions of guilt and responsibility are much more difficult to resolve.

With human-operated traffic accidents, one may often appeal to concepts like instinct, emotion, fright, or reflexive reaction, all of which help shed light on how we should proceed legally. With automated vehicles, it is usually the manufacturer who has to bear the burden, and the whole process could be turned upside down, since there is, strictly speaking, a driverless traffic accident. How does one prosecute an algorithm? Self-driving cars will take risk management and questions of liability into new and uncharted territories.

Automation is a partial and gradual process. There are plenty of borderline cases between an entirely human-operated and a fully automated car. After all, the step from manual to automatic transmission is one of the many steps taken in the direction of automation. Similarly, other functions within the vehicle could be automated: brakes, steering, acceleration and so on.

This leads us to another difficulty: the lack of a sharp boundary between a human-operated and a fully automated vehicle. But more importantly, how do you program, not just a set of behaviours, but an ethical attitude into software? Ethics comprises an irreducible level of complexity; there is no algorithm that could match up with, translate or re-code morality. There is no ethical “system,” strictly speaking, and this, being one of the basic existential problems in philosophy, has been demonstrated by many thinkers, both ancient and contemporary.

Algorithms are perfect rule-followers, while humans are much better at bending rules and creating new ones. The question is: which type of autonomy is preferable at the wheel? But also, could algorithms acquire human-level complexity? How autonomous can they be?

References

  1. Goodall, N. J. (2014). Ethical Decision Making during Automated Vehicle Crashes. Transportation Research Record, 2424(1), 58–65. https://doi.org/10.3141/2424-07
  2. Nyholm, S., Smids, J. Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. Ethics Inf Technol 22, 335–344 (2020). https://doi.org/10.1007/s10676-018-9445-9
