Recently there was a tragic accident in which a self-driving Uber vehicle struck and killed a pedestrian near Phoenix. In March 2018, a Tesla Model X on Autopilot crashed in Mountain View, California, killing the driver. More people are starting to think about setting realistic expectations for self-driving cars. The essential question is whether they can be expected to completely avoid fatalities or whether it is good enough that they reduce them.

According to an article in USA Today – The Journal News on April 8, 2018 by Bob O'Donnell, the ethical implications are far reaching. What makes the question troublesome is that it ties computing technology to life-and-death consequences. The technology built into self-driving cars such as the ones involved in the aforementioned accidents generates significant amounts of data, which are already making the process of determining the cause of a crash much faster and more definitive than traditional investigative processes. From a technical perspective, many of the questions about safety have to do with the sensors that collect all the data. Most self-driving cars have a collection of traditional cameras, radar, and lidar (a type of sensor that bounces laser light off nearby objects) built into them. In theory, these components work together to provide the car with all the information it needs to make real-time driving decisions. Radar and lidar can essentially see through objects, allowing them to provide views and perspectives that humans cannot.
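To make the idea of "sensors working together" concrete, here is a minimal sketch of a fusion rule in Python. Everything in it is hypothetical: the Detection type, the thresholds, and the rule itself are illustrative assumptions, not any carmaker's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # hypothetical label: "camera", "radar", or "lidar"
    distance_m: float  # estimated distance to the detected object, in meters
    confidence: float  # detection confidence, 0.0 to 1.0

def should_brake(detections, max_distance_m=30.0, min_confidence=0.5):
    """Toy fusion rule: brake if ANY sensor reports a confident, close object.

    Because radar and lidar can detect objects a camera cannot see
    (e.g., a pedestrian occluded by a parked car), a single confident
    return from either sensor alone is treated as sufficient here.
    Real systems use far more sophisticated fusion; this only shows the
    shape of the decision.
    """
    return any(
        d.confidence >= min_confidence and d.distance_m <= max_distance_m
        for d in detections
    )

# A pedestrian hidden from the camera but picked up by lidar still triggers braking.
print(should_brake([Detection("lidar", 18.0, 0.9)]))   # True
print(should_brake([Detection("camera", 45.0, 0.9)]))  # False: too far away
```

The point of the sketch is the "any sensor suffices" design: the redundancy across camera, radar, and lidar is what is supposed to catch objects a human driver, or any single sensor, would miss.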

In the Phoenix Uber accident, the technology should have been able to detect that there was a pedestrian on the side of the road, even if she was hidden from human view by cars or other objects, and apply the brakes. These vehicles are supposed to see things that people can't and react faster and better than any human ever could.

Many in the tech industry have focused on convenience, but the fundamental benefit most carmakers talk about, and most consumers want, is safety. As a result of the Uber accident, tech companies have changed their self-driving car testing plans, and governmental agencies, including the State of Arizona, have revisited regulatory permissions. The article points out that it may be difficult to prevent deaths completely with self-driving cars because human-driven and self-driving cars will coexist for decades to come. Making the testing process safer will require different approaches, such as using simulated virtual driving environments. While simulated systems can't completely replace real-world tests, they can offer critical benefits and reduce accidents. They enable more test miles to be driven and more scenarios to be tested than is possible with real-world driving alone. Challenges for autonomous cars still remain, and the realistic time frame for getting them onto the roadway will likely lengthen as a result of these recent accidents.
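One reason simulation can cover so many more scenarios than road testing is combinatorial: each parameter a simulator can vary multiplies the number of distinct test cases. The sketch below is a hypothetical illustration of that arithmetic, with made-up parameter lists rather than any real test plan.

```python
import itertools

# Hypothetical scenario parameters a simulator might sweep over.
speeds_kmh = [30, 50, 70]            # vehicle speed
pedestrian_offsets_m = [0, 2, 5]     # pedestrian distance from the curb
lighting = ["day", "night"]          # lighting conditions

# Every combination is one simulated test case, run without risking anyone.
scenarios = list(itertools.product(speeds_kmh, pedestrian_offsets_m, lighting))
print(len(scenarios))  # 18 distinct cases from just three small parameter lists
```

Adding even one more parameter, say road surface with three values, would triple the count to 54, which is why virtual environments can exercise rare edge cases that might never arise in millions of real-world miles.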