Autonomous Vehicles – Why Industry has Every Right to Not Wait for Regulatory and Academic Leadership to Arise

by Christopher Jackson

chrisjackson@saras.ucla.edu

 

What are legislators, regulators and academics doing to help the introduction of Autonomous Vehicles (AVs)? I don’t know either.


One of the sessions of the 2017 Autonomous Vehicle Safety Regulation World Congress, held in Novi, Michigan, was devoted to ethics. The idea is that AVs must be taught what to do when death is unavoidable (hold that thought). That is, if an accident is imminent, does the AV kill the old lady or the three-month-old baby? Does the AV protect the driver or others around it? Many media outlets, journals and blogs emphasize this conundrum. The MIT Technology Review published 'Why Self-Driving Cars Must Be Programmed to Kill,' which discussed the behaviors that need to be embedded into AVs to control casualties. Some of you may be familiar with MIT's Moral Machine, an online survey aimed at understanding what the public thinks AVs should do in the event of an accident that involves fatalities.

But this discussion has conveniently hurdled over a prior question – do AVs need to be programmed to kill at all? Because the answer is absolutely not. There is no compelling argument for anyone to expect manufacturers to design this sort of capability into their vehicles. In fact, it is likely going to make matters worse.

Why is it unfair to make AVs safer than what already passes for safe?

No doubt some of you believe that AVs should be programmed to at least 'control the loss of life.' But let's think on that for a minute. Let's say that you own a non-autonomous vehicle manufacturing company. You produce regular old cars with steering wheels. Your competitor lives just down the road, and builds vehicles similar to yours.

How would you feel if the relevant regulatory body required your cars to have technology that can (for example) perform facial recognition, so that in the event of an imminent accident that will likely involve fatalities, your vehicle can advise the driver to turn left rather than right to preferentially hit or miss an 80-year-old lady or a 3-month-old baby? Let's further say that the same regulatory body doesn't impose this functionality on your competitor. You would likely be irate over the perceived lack of fairness.

As of today, no regulatory body requires current 'vehicle systems' to be able to preferentially kill an 80-year-old woman or a 3-month-old baby. A vehicle system in this context is a vehicle and its driver. Remember that. The driver is subject to state and local licensing requirements. None of these requirements involves training drivers on what to do in the event of an accident. There is no driving test a driver will fail if they cannot demonstrate the capacity to minimize the loss of life in a likely fatal crash.

So this is what society deems to be safe. All we expect is that drivers have the skills to avoid accidents. Period. Not that they control the harm they cause.

So why should the AV manufacturers be held to a higher standard? There is no reason apart from highly speculative ‘wouldn’t it be nice’ thinking. But it gets much worse.

And why is it ‘bad’ to make AVs safer than what already passes for safe?

AV manufacturers are effectively putting the finishing touches on self-driving technology that focuses solely on avoiding accidents. That is, vehicles can sense the three-dimensional world around them, identify which of the things in it are other vehicles and which are features like lane lines, and make informed driving decisions. The ability to also identify human beings in this three-dimensional carpet – let alone to distinguish old ones from young ones, or those doing the right thing from those who are jaywalking – is an astronomical leap in technology. If we seriously require AVs to do this, we have just delayed the deployment of this technology by decades.

If we are having problems trying to regulate or test vehicles that essentially need to avoid hitting 'things,' what are we going to do to test a vehicle that must kill a few rather than many in certain scenarios?

And the reason why is a lack of leadership.

Throughout the whole AV journey, regulators, governments, standards bodies and academia have stood by admiring the problem of AV safety and reliability – not doing anything about it. But manufacturers are doing things about it every day as they improve AV technology.

Take (for example) a response from a state Department of Motor Vehicles (DMV) representative when asked what is happening in terms of state jurisdictions. His response was:

… it is better to be fifth than to be first.

And the other DMV representatives from various states all agreed. So we have a set of key regulatory bodies essentially hoping someone else will do it. This is not leadership.

And it is not necessarily their fault. The DMVs are exclusively charged (in this context) with traditional metrics of safety. They reap no organizational benefit from the lives saved by safer vehicles, but they are conditioned to fear the lives that aren't – regardless of the net or overall benefit to society.

And we have done this before. In a previous article, Red Flags, Autonomous System Safety, and the importance of looking back before looking forward, I talk about how lawmakers in the late 1800s got it horribly wrong when they tried to legislate for 'safe' automobiles. Their attempts were so bad that it was some 60 years before they were replaced with meaningful regulations (the Federal Motor Vehicle Safety Standards). And even then, each of these standards formalized good ideas generated by manufacturers – ideas that had been standard in commercial vehicles for at least a decade in each case.

So … manufacturers have always led automobile safety. There is no way to get around it. And presented with an opportunity to show leadership – lawmakers and academia are failing to provide clarity.

Just drive on, manufacturers

It wasn't that long ago that Tesla introduced its 'autopilot' driving mode to much (primarily doomsday) commentary. It has since been shown to reduce the likelihood of vehicle accidents by 40 per cent. We know that human error accounts for around 19 out of 20 crashes. So we know that removing the human will be the single most important thing for improving vehicle safety.

So what? From here, we all need to demand more from academia and lawmakers … or respectfully ask them to get out of the way. Before we talk about the ethics of striking an 80 year old lady versus a 3 month old baby, lawmakers need to meaningfully address the additional technological cost, how much this will divert focus away from AVs that avoid accidents in the first place, and the hundreds of thousands of lives that will continue to be lost during the corresponding delay in introduction.

The fact is AV systems will not be perfect. But they will be better. Much better.

Failing to learn from history means that you are destined to repeat it. To that end (and to amend a phrase from Nike), AV manufacturers are simply going to have to 'do it.' With AV technology on the horizon for as long as it has been, we know that lawmakers will continue to look at each other, waiting for someone else to take a leadership role (meaning that no one will). Academia will focus on what academia wants to focus on, without evaluating its premises.

Which means sooner rather than later, lawmakers and academia will become irrelevant to the discussion, only commentating or legislating using the ‘rear vision mirrors’ of the non-autonomous vehicles.

It is sad, but true. Long live industrial innovation.