Recently a self-driving Uber vehicle struck and killed a pedestrian near Phoenix, and in March 2018 a Tesla Model X on Autopilot crashed in Mountain View, California, killing the driver. These tragedies have more people thinking about what realistic expectations for self-driving cars should be. The essential question is whether they can be expected to avoid fatalities entirely, or whether it is good enough that they reduce them.
According to an article by Bob O’Donnell in USA Today – The Journal News on April 8, 2018, the ethical implications are far-reaching. What makes the question troublesome is that it ties computing technology to life-and-death consequences. The technology built into self-driving cars such as the ones involved in these accidents generates a significant amount of data, which is already making the process of determining the cause of a crash much faster and more definitive than traditional investigative processes.

From a technical perspective, many of the questions about safety center on the sensors that collect all of that data. Most self-driving cars carry a combination of traditional cameras, radar, and lidar (a type of sensor that bounces laser light off nearby objects). In theory, these components work together to give the car all the information it needs to make real-time driving decisions. Radar can detect objects through rain, fog, and some physical obstructions, and lidar builds a detailed three-dimensional picture even in darkness, providing views and perspectives that no human driver can match.
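To make the fusion idea concrete, here is a minimal sketch in Python of how readings from several sensors might be merged into a single list of obstacles. Everything in it, the Detection class, the fuse function, and the matching tolerance, is a hypothetical illustration under simplified assumptions, not code from any actual autonomous-driving system.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """One object sighting from one sensor (all names are illustrative)."""
        sensor: str         # "camera", "radar", or "lidar"
        distance_m: float   # range to the object in meters
        bearing_deg: float  # angle relative to the vehicle's heading
        confidence: float   # sensor-reported confidence, 0.0 to 1.0

    def fuse(detections: list[Detection], match_tolerance_m: float = 1.5) -> list[dict]:
        """Naive fusion: group detections that lie close together and combine
        their confidences, so an object seen by radar AND lidar outranks one
        seen by a single sensor."""
        fused: list[dict] = []
        for d in detections:
            for obj in fused:
                if abs(obj["distance_m"] - d.distance_m) < match_tolerance_m:
                    obj["sensors"].add(d.sensor)
                    # Treat sensors as independent: 1 - prod(1 - c_i)
                    obj["confidence"] = 1 - (1 - obj["confidence"]) * (1 - d.confidence)
                    break
            else:
                fused.append({"distance_m": d.distance_m,
                              "sensors": {d.sensor},
                              "confidence": d.confidence})
        return fused

The point of the combination rule is exactly the redundancy described above: agreement between two independent sensors pushes confidence higher than either one alone could.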
In the Phoenix Uber accident, the technology should have been able to detect that there was a pedestrian at the side of the road, even if she was hidden from human view by cars or other objects, and apply the brakes. These vehicles are supposed to see things that people cannot, and to react faster and more reliably than any human driver ever could.
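Continuing the sketch above, the decision the public expects of these systems can be stated just as simply: brake when a sufficiently confident obstacle falls within the vehicle's stopping distance, v^2 / (2a) for speed v and deceleration a. The function and every threshold below are again purely illustrative choices, not a real vendor's logic.

    def should_emergency_brake(obstacle: dict, speed_mps: float,
                               max_decel_mps2: float = 6.0,
                               confidence_floor: float = 0.5) -> bool:
        """Brake when a confident obstacle lies within stopping range.

        Stopping distance from speed v under constant deceleration a
        is v**2 / (2 * a); a fixed safety margin is added on top.
        """
        stopping_distance_m = speed_mps ** 2 / (2 * max_decel_mps2)
        safety_margin_m = 2.0  # arbitrary illustrative buffer
        return (obstacle["confidence"] >= confidence_floor
                and obstacle["distance_m"] <= stopping_distance_m + safety_margin_m)

    # A pedestrian occluded from the camera but picked up by radar and lidar
    # still accumulates enough combined confidence to trigger braking:
    readings = [Detection("radar", distance_m=18.0, bearing_deg=-3.0, confidence=0.6),
                Detection("lidar", distance_m=18.4, bearing_deg=-2.5, confidence=0.7)]
    for obstacle in fuse(readings):
        print(should_emergency_brake(obstacle, speed_mps=17.0))  # about 61 km/h -> True

Even in this toy version, a pedestrian invisible to the camera can still clear the confidence floor on radar and lidar returns alone, which is precisely the capability the fatal accidents have called into question.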