Autonomous Vehicle Regulation - Could Less Actually Be More?

by Chris Jackson

(chrisjackson@saras.ucla.edu)

 

Autonomous Vehicles (AVs) are still futuristic – but plenty of people are thinking about them and what they would mean, particularly as they relate to safety. And when they do, they invariably take how vehicles are currently regulated as a starting point. We envisage perhaps more regulation, standards and rules – because AVs are more complex and complicated. But with every regulation, standard and rule, we take responsibility away from the manufacturer. Why? Because all the manufacturer needs to do is ensure that their AV meets each regulation, standard and rule for them not to be liable for subsequent accidents (this is a simplistic interpretation to be sure … but satisfactory for the sake of this article). Is this desirable? Is this possible?

There is a simplistic (and perhaps misguided) belief that AVs need to simply be subjected to a set of checks, tests and inspections before they can safely get on the road. This is ‘simplistic’ as a system that is more like a decision-making computer cannot really be subject to this sort of assurance. And perhaps ‘misguided’ because most people think that is how today’s non-autonomous cars are declared safe – when this is not entirely correct.

But aren’t today’s cars subject to sets of checks, tests and inspections for them to be declared ‘roadworthy’ or safe? Well yes they are, but these checks, tests and inspections are based on what auto manufacturers came up with before the regulations did. Vehicle regulations have forever played catch-up to each and every safety-related innovation. In fact, cars were being driven for almost a hundred years before the first regulatory standard was ever established.

Take a look at the timeline of the United States Federal Motor Vehicle Safety Standards (FMVSS). These standards form part of United States regulations that control the design of vehicles – primarily to make them safe. The first FMVSS was established in 1967 and dealt with seatbelts – a hugely effective safety device. Seatbelts were invented in the mid-19th century, first patented in 1885, became optional on some cars from 1949, and became standard on others from 1958. Even something as simple as a seatbelt was never ‘predicted’ or ‘mandated in advance’ by a regulatory agency. It took virtually a hundred years of development by manufacturers before the seatbelt became a mandatory part of a vehicle.

But perhaps you think that cars are a special case – first developed in a time when laws and regulations did not meet the level of ‘societal-technological enlightenment’ we assume we live in today. In case you think this … please read my article which looked at some extraordinary regulatory efforts from 1860 to 1900 to try and ‘get ahead’ of the manufacturers in the name of safety. The 19th century regulators and legislators I talked about probably thought they were themselves in a period of ‘societal-technological enlightenment’ as they watched the industrial revolution change the world around them. But their regulatory efforts appear somewhat farcical from today’s perspective. The rules that these regulators and legislators came up with (in some instances) required vehicles to be disassembled and hidden in bushes if they encountered livestock!

Of the many mistakes these 19th century lawmakers and regulators made, one of them was that they thought vehicles posed the biggest risk to those on the outside – not those on the inside. But perhaps the biggest mistake was the idea that these lawmakers and regulators could successfully understand and predict how these machines would function, how they would be used and the manner in which their underlying technology would be included in design. These assumptions resulted in spectacular regulatory failures.

But maybe you think that we are much more ‘illuminated’ than those lawmakers and regulators, and we can do a much better job now. And by this I mean come up with more stringent checks, tests and inspections that meaningfully relate to how AVs will operate. Well … we already (try to) do this for a number of other machines, and the results are not always great.

Those of you involved in governmental or military contracting for advanced physical systems are probably aware of the many things manufacturers are made to do (including testing) in the hope that, if followed, the resultant system is safe and reliable. Acquisition contracts can contain an exhaustive (and exhausting) list of ‘safety’ activities. This approach routinely fails - think of the F-35 Joint Strike Fighter, the British Type 45 destroyers that break down in warm water, and so on. The F-35 Joint Strike Fighter is a particularly useful comparison, as it involves $50 billion dedicated to research and development on a platform that is hugely dependent on autonomous or ‘autonomous-like’ control software. We are seeing firsthand how problematic it can be to ‘stipulate’ your way to safe operation for something like an AV.

And this doesn’t even consider manufacturers who are willing to pervert checks, tests and inspections for their own selfish aims. Think of how Volkswagen designed vehicles to pass (as in ‘cheat’) emissions tests but continue to belch out nitrogen oxides at unacceptably high rates when driven normally. Think of how Toyota tried to divert attention away from its faulty cars when they suddenly accelerated by themselves, killing many people. Simply having a set of checks, tests and inspections creates a battle that needs to be continually waged between the regulator and the regulated.

Well, perhaps we can take a step back, and instead of saying ‘the system needs to have this gadget in it,’ we can simply say that ‘the system needs to be safe in this scenario.’ John Simpson from Consumer Watchdog agrees, saying “What you want is performance standards. You don’t say: ‘This is how you make the car stop within so many feet of having the brakes applied.’ All you say is, ‘It has to be able to stop.’”
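To make the distinction concrete, here is a minimal sketch of what a performance-based check looks like in practice. The vehicle interface, scenario and thresholds are hypothetical placeholders of my own choosing; the point is simply that the test assesses the outcome of a scenario, and says nothing about how the design achieves it.

    # A sketch of a performance-based acceptance check (hypothetical vehicle
    # interface and numbers): from roughly 60 mph, the vehicle must come to a
    # stop before an obstacle 60 m ahead. Nothing is said about brakes,
    # sensors or software - only the outcome of the scenario is assessed.
    def passes_stopping_scenario(vehicle, initial_speed_mps=27.0, obstacle_distance_m=60.0):
        result = vehicle.run_scenario(          # 'run_scenario' is an assumed test-harness call
            initial_speed_mps=initial_speed_mps,
            obstacle_distance_m=obstacle_distance_m,
        )
        return result.stopped and result.distance_travelled_m < obstacle_distance_m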

We can turn to the nuclear industry for a little history in this regard. Nuclear plants were built in accordance with a ‘design basis:’ the set of events or conditions that the plant needs to be able to encounter and successfully deal with. And this approach has worked well – nuclear power is still relatively safe and reliable, notwithstanding a few catastrophic counter-examples (we’ll come back to these).

In 1975, a new approach to nuclear power plant safety was investigated. The United States Nuclear Regulatory Commission (NRC) sponsored the ‘Reactor Safety Study.’ It used a technique known as Probabilistic Risk Assessment (PRA) to quantify nuclear power plant safety – something that the ‘design basis’ approach could never do. PRA incorporates every piece of information ranging from expert opinion through to operational data. It also includes every aspect of system operation – including human error. The study did not initially receive a lot of attention. That changed four years later.

The 1979 ‘Three Mile Island Accident’ was a Loss of Coolant Accident (LOCA) which turned out to be the primary risk contributor identified by the Reactor Safety Study. The people who did not pay attention to it in 1975 suddenly became more enamored with PRA. The LOCA scenario was somewhat overlooked in the ‘design basis’ approach. And the accidents at Chernobyl and Fukushima involved substantial human error (and even negligence) which the ‘design basis’ approach struggles to deal with. We can design something based on every conceivable scenario and make it relatively safe - but we must not kid ourselves that ‘all conceivable scenarios’ equals ‘all scenarios.’

So what does AV regulation need to look like in the future? Perhaps less is more. As mentioned at the start of this article, the FMVSS limits manufacturer liability. That means if a vehicle meets the standards, the manufacturer is (generally) not liable for the consequences of subsequent vehicle accidents. The driver becomes liable, which is why we have insurance. So if we create a set of checks, tests and inspections for AVs, we may simply be giving manufacturers a ‘get out of jail free’ card when it comes to designing safe cars. If regulators can’t keep up with and predict technological development (think of the 19th century ‘red flags’ mentioned above), then meeting their standards cannot be what makes an AV ‘safe.’ In fact, having a set of standards for AVs may hamper the development of safe cars, given there is so much we can never know about any emerging technology.

So if we have fewer standards, the manufacturers have to accept more liability. This will require current liability laws to be amended. But this may not be a bad thing. Why? Because it is all about assurance through motivation.

‘Assurance’ literally means a state of certainty. The terms ‘safety assurance,’ ‘reliability assurance’ and ‘quality assurance’ traditionally refer to a list of things one does for something to be considered ‘safe,’ ‘reliable’ or ‘high quality.’ But as hinted at above, these things (or activities) can never guarantee a good outcome. Those of you who have been involved with reliability demonstration tests know there is always a risk (the consumer’s risk) that something is ‘unreliable’ even if it passes the test designed to ‘demonstrate’ otherwise. Many acquisition contracts allow this risk to be as high as 20 per cent. So assurance and certainty are misnomers.
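To illustrate the size of that risk, here is a minimal worked sketch, using a hypothetical zero-failure test plan of my own choosing, of how much consumer's risk a typical demonstration test leaves behind.

    # A sketch (hypothetical test plan) of consumer's risk in a zero-failure
    # reliability demonstration test: 16 units are tested for the mission
    # duration, and the product is 'demonstrated' reliable only if none fail.
    n_units = 16

    # If the product's true per-unit reliability is only 0.90 (i.e. it is
    # 'unreliable' by the contract's definition), the probability that all
    # 16 units survive - and the product passes anyway - is:
    consumers_risk = 0.90 ** n_units
    print(f"Consumer's risk = {consumers_risk:.2f}")   # about 0.19, close to the 20 per cent often allowed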

A better way to think of ‘assurance’ is as something that motivates system designers to create safe and reliable things. A product doesn’t magically become better if you test it or impose a standard. The reason we have tests is to motivate designers to create a system that will pass that test. So we need to think about how designers are motivated, and what we can do to help them. And this may mean asking them to do ‘less.’

If you make the designer do too many things, too much of their time is spent in meetings and videoconferences going through reports and tables to convince you that they have done what you have asked for (or finding ways to get around it). This can reduce reliability, as the design team is focused on ‘compliance’ and not ‘performance.’ Many of you know this to be true from experience … and those who need a little more convincing would do well to read the National Academies of Sciences, Engineering, and Medicine’s report ‘Beyond Compliance,’ which looks at this very issue and how it contributed to the Deepwater Horizon-Macondo blowout, explosion and spill in 2010.

If we blindly apply a regulatory certification, standards and compliance approach to AVs, we will see all the historical problems discussed above. One can only imagine how many ‘FMVSS-like’ standards would need to be in place for a brand new AV system whose safety revolves around algorithmic decision-making processes. And we cannot escape the fact that Chernobyl, Fukushima, the Deepwater Horizon et cetera were all declared ‘safe’ by their respective regulators the day these accidents happened.

Perhaps we need to take the ‘scary’ path of actually imposing fewer standards on AVs, and just making manufacturers liable for everything. And that reality may already be here. Volvo Car Group President and CEO Håkan Samuelsson said that the company will accept full liability whenever one of its cars is in autonomous mode. That means the driver won’t need insurance (or at least insurance in the way we currently know it). It also means that Volvo will be motivated to continually improve the safety of the car. They are effectively providing their own insurance, and their ‘premium’ will go down the safer their vehicles get. It is worth noting that AV proponents such as Samuelsson see regulation – not technology – as the biggest risk to an AV future.

To be clear, there will always be a place for checks, tests and inspections underpinned by a set of standards. There is a lot of commentary to that end – and the fact I haven’t dedicated a lot of this article to talking about the good aspects of regulatory oversight should not be interpreted as a suggestion that there is no place for it. For example, a future standard may preclude an AV from travelling if any passenger does not have his or her seatbelt on.

But for what it is worth, I would be much happier driving an AV where the manufacturer is liable for its performance, versus another AV which has passed some checks, tests and inspections with liability then passed to me. I would be much happier again for an optimal mixture of both.

Liability and the way it is accepted by manufacturers could be the single most important thing that makes AVs safe. Scary for some right now. Maybe less scary when viewed in hindsight.

 

Red Flags, Autonomous System Safety, and the Importance of Looking Back Before Looking Forward

by Chris Jackson (chrisjackson@saras.ucla.edu)

Have we gone through the introduction of autonomous vehicles before? In other words, have we gone through the introduction of a new, potentially hazardous but wonderfully promising technology?

Of course we have. Many times. And we make many of the same mistakes each time.

When the first automobiles were introduced in the 1800s, mild legislative hysteria ensued. A flurry of ‘red flag’ traffic acts were passed in both the United Kingdom and the United States. Many of these acts required self-propelled locomotives to have at least three people operating them, to travel no faster than four miles per hour, and to have someone (on foot) carry a red flag around 60 yards in front.

The red flag is a historical symbol for impending danger. Perhaps the most famous red flag traffic act was one in Pennsylvania that required the entire vehicle to be disassembled with all parts hidden in bushes should it encounter livestock (this bill was passed by legislators in both houses but was ultimately vetoed by the governor).

These acts seem humorous now, but it would be ill-advised to miss the key lessons from this and other technological revolutions.

The first red flag lesson is that society instinctively hoists red flags in the absence of information – we are seeing this now with autonomous vehicles. Why? Perhaps it is because, without information, people tend to focus on specific narratives and not on net benefit. Steven Overly’s article in the Washington Post talks about the reaction people will likely have to autonomous systems, noting that humans are not ‘rational when it comes to fear based decision making.’ Overly quotes Harvard University’s Calestous Juma, who writes in his book Innovation and Its Enemies: Why People Resist New Technologies about how consumers were initially wary of refrigerators when they were first introduced in the 1920s. The prospect of refrigerators catching fire weighed heavily on people’s minds regardless of the obvious health benefits of storing food safely.

So what happened? Three things. First, the Agriculture Department publicly advocated the health benefits of refrigeration. Second, once refrigerators became ubiquitous as a result of those efforts, they became safer as manufacturers learnt from their mistakes. And the third thing deals with experience (which is the next lesson to be learnt).

The second red flag lesson is that consumers don’t trust experts. Take the current issue of drunk driving. Autonomous vehicle proponents argue that autonomous vehicles will effectively eliminate crashes (and deaths) caused by drunk driving. And this makes theoretical sense – with 94 per cent of current vehicular crashes caused by human error, surely autonomous vehicles (which remove the ‘human’) should effectively eliminate these crashes. But the broader population is not so sure.

A 2015 Harris Poll survey found that 53% of United States drivers believe that autonomous vehicles will reduce the prevalence of drunk driving. The same figure applies to distracted driving. This means that 47% of people can’t see the link between autonomous vehicles and fewer crashes caused by inebriated or distracted drivers. To be clear, 47% of the population ‘is not stupid,’ so the experts simply have not been able to sell the safety message – yet.

The third red flag lesson is that governments (and the regulators they appoint) will control the deployment of new safety-critical technology. Politicians are not scientists. They are a special subset of society who are conservative in their thought process and inherently inclined to demand red flags. They have their collective strengths and weaknesses, but what most of the voting public cannot empathize with is the responsibility they carry for virtually everything. But perhaps today’s governments are more open-minded about autonomous vehicle technology, no doubt because they are hoping for commensurate economic benefits. Some are no doubt waiting for other governments to take the plunge and set a precedent they can follow. And there is no doubt that some lawmakers are looking at the tangible economic benefits their city will hopefully reap if theirs is amongst the first to deploy this technology.

The fourth red flag lesson is that we tend to incorrectly gauge the performance of new technology using perspectives of the old. In the 1800s, the main safety concern about self-propelled locomotives focused on those outside of the vehicle. So safety became a measure of how well this technology avoided inducing panic in man, woman and beast alike. But we quickly learned that instead of looking outwards, automobile safety needed to look inwards. That is, we needed to focus on passenger safety in the event of a crash. As it turns out, livestock and pedestrians could easily live in a self-propelled locomotive world. Irish author and scientist Mary Ward became the first automobile fatality when she was ejected from the steam-powered vehicle her cousin built. And when vehicles became more popular, it became clear that passengers and drivers were more likely to be killed or injured than anyone else.

In the early 1900s, vehicles were built with hydraulic brakes and safety glass. Crash tests started in the 1920s. General Motors performed the first barrier test in 1934. So today, vehicle safety is largely about people inside it - not outside it.

Why is there limited focus on those outside vehicles? Because we have human drivers. Drivers who are assumed to be trained, licensed and able to avoid hazards and people. But this is about to change.

There are many more red flag lessons to be learnt, but for now we will stop at four.

So where to from here? Perhaps the most relevant red flag lesson is the last. The first two lessons are largely societal, and can be resolved by better communication with the driving population. And because autonomous vehicles are yet to hit the marketplace, we can assume that the car makers (virtually every one of them) now investing in autonomous vehicles are yet to unleash their full marketing arsenal. Which they will. And we are seeing some governments, at all levels, leaning further forward than others, probably because they think this will make more financial sense, as mentioned above.

But we need to (much) better understand how we will create safe autonomous vehicles in a way that can be certified. Take the Tesla Model S that crashed into the side of a tractor trailer in Florida while in “Autopilot” mode, killing its driver. This was an event that many in the industry feared - the first public autonomous vehicle related fatality. The National Highway Traffic Safety Administration (NHTSA) report into the circumstances surrounding the accident determined that the driver had seven seconds to do something about the tractor trailer in his path, but was clearly distracted. The driver is required to remain attentive when autopilot mode is enabled.

But isn’t autopilot going to make drivers less attentive and cause more crashes? The answer is no - the NHTSA report found that crash rates for the Tesla Model S dropped by around 40% after Autopilot’s Autosteer capability was installed (noting that some of the statistics that autonomous vehicle makers have used in the past to demonstrate safety have attracted widespread criticism). So if we don’t focus on the unfortunate narrative of the crash above, we can see a clear safety improvement with even a basic level of autonomy (assuming the figure in the report is right).

But it is what Tesla did next that is telling. Tesla updated their vehicles’ on-board software to better identify tractor trailers cutting across the driving path. So a safe autonomous vehicle will be more like an iOS or Windows operating system - one that is constantly maintained from afar in the same way Apple and Microsoft maintain theirs. We won’t be able to slap a sticker of certification on an autonomous vehicle as it rolls out the factory door. The manufacturer’s ongoing support system will be as much a part of safety as the braking system. Moving from one-time certification to ongoing safety demonstration will likely be the most challenging aspect of autonomous vehicle reliability. And while the National Transportation Safety Board (NTSB) is still investigating the specifics of the Tesla Model S crash, there is a chance that any issue they identify has already been resolved without an expensive recall (both of which are good).

And as we continue to experience autonomy, we must brace ourselves. The name of the driver killed in the Tesla crash was Joshua Brown. He has a family who mourn his loss. We cannot list the people who are alive because of Tesla’s Autopilot mode. And we won’t be able to list those who will be alive in the future because of what Tesla learned from the crash. But we know they exist, even if they don’t know it themselves. We need to be thinking of them when we decide what red flags we choose to raise in the future.

Machines Making Decisions Based on Risk

Risk Assessment of Autonomous System Control Software

Autonomous systems (such as autonomous vehicles and ships) will change the characteristics of traffic and transportation. Several organizations (public and private) are developing and testing such systems. Regulatory bodies require proof that these systems are safe to operate in public. One set of tools previously used in other industries is probabilistic risk assessment (PRA). PRAs have been successfully applied to nuclear, chemical, and oil and gas installations to verify safety and (perhaps more importantly) to identify risk reduction measures.
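As a flavour of what a PRA actually computes, here is a minimal sketch of the fault-tree logic that a full PRA builds up at far larger scale. The events and probabilities are hypothetical and purely illustrative.

    # A minimal fault-tree sketch (hypothetical numbers, for illustration only):
    # the autonomous system fails to avoid an obstacle if BOTH of its redundant
    # sensors miss it (AND gate), OR its planner mishandles a correct detection
    # (OR gate). A real PRA chains hundreds of such events, with uncertainties
    # attached to each probability.
    p_sensor_a_miss = 1e-3     # assumed probability per encounter
    p_sensor_b_miss = 2e-3     # assumed probability per encounter
    p_planner_error = 5e-5     # assumed probability per encounter

    # AND gate: both (assumed independent) sensors miss the obstacle
    p_detection_fails = p_sensor_a_miss * p_sensor_b_miss

    # OR gate: detection fails, or the planner mishandles a good detection
    p_fails_to_avoid = 1 - (1 - p_detection_fails) * (1 - p_planner_error)

    print(f"P(fails to avoid obstacle, per encounter) = {p_fails_to_avoid:.1e}")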

The brain of an autonomous system is its control system, specifically designed for each application. Autonomous control systems execute four tasks:

  1. collect information (such as the data from sensors);

  2. orient themselves based on the observed information;

  3. decide on actions based on the current situation or state of knowledge; and

  4. implement these actions, through control signals to actuators and other subsystems.

This process is analogous to the famous 'OODA' loop (observe, orient, decide, then act) developed by United States Air Force Colonel John Boyd in the 1960s and 1970s. The OODA loop has been used extensively in many fields beyond military strategy, including operations, business and law.
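To make the four tasks concrete, here is a minimal sketch of one pass through such a loop. The class, sensor and actuator interfaces are hypothetical placeholders of my own; a real control system would run this cycle continuously and with far richer state.

    # One pass through an observe-orient-decide-act style control cycle.
    # 'sensors.read_lidar()' and 'actuators.command()' are assumed interfaces,
    # used only for illustration.
    from dataclasses import dataclass

    @dataclass
    class WorldModel:
        obstacle_ahead: bool = False

    def observe(sensors):
        # 1. collect information from sensors
        return {"lidar_range_m": sensors.read_lidar()}

    def orient(raw, model):
        # 2. interpret the observations into a model of the current situation
        model.obstacle_ahead = raw["lidar_range_m"] < 30.0
        return model

    def decide(model):
        # 3. choose an action based on the current state of knowledge
        return "brake" if model.obstacle_ahead else "maintain_speed"

    def act(action, actuators):
        # 4. implement the action through control signals to actuators
        actuators.command(action)

    def control_cycle(sensors, actuators, model):
        act(decide(orient(observe(sensors), model)), actuators)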

The control system is the cornerstone of any autonomous system. To determine whether an autonomous system is safe, we therefore focus on its control system. It becomes important to assess the control system in terms of risk and its possible contribution to accidents ... otherwise we cannot make that same assessment for the autonomous system more broadly.

Control systems are mainly made up of software. Software behaves differently from hardware components with respect to failure patterns. Software does not fail randomly or through ageing effects. Software faults are present in the software from the beginning of operation, or are introduced during operation through updates. They are caused by insufficient specifications, erroneous specifications, or errors introduced during programming and implementation. In other words, software faults are already in the software, which makes them deterministic. Testing and verification procedures attempt to remove the errors that might lead to failures. PRA can be used to quantify the uncertainty associated with the faults that remain in the software.
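One simple way to picture that quantification is a Bayesian update. The sketch below uses a hypothetical prior and hypothetical test results of my own choosing (it is not the SARAS method itself) to estimate the probability that a remaining fault causes a failure on demand, given a body of fault-free testing.

    # Beta-Binomial sketch (illustrative numbers only) of quantifying the
    # uncertainty left by residual software faults, expressed as a
    # probability of failure on demand (pfd).
    a, b = 1.0, 99.0           # prior belief: Beta(1, 99), prior mean pfd = 0.01

    n, k = 10_000, 0           # evidence: 10,000 simulated scenarios executed, 0 failures observed

    # Conjugate Bayesian update of the Beta prior with Binomial evidence
    a_post, b_post = a + k, b + n - k
    posterior_mean_pfd = a_post / (a_post + b_post)

    print(f"Posterior mean pfd = {posterior_mean_pfd:.1e}")   # roughly 1e-4 after the fault-free testing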

Ongoing SARAS research aims at synthesizing and enhancing existing methods to analyze software and its implicit risk contribution. These methods will assist system designers and operators in verifying autonomous systems’ safety. The method we are developing will be embedded within PRA software to make it both practical and applicable.

The research is carried out in cooperation with the Norwegian Centre of Excellence for Autonomous Marine Operations and Systems (AMOS) of the Norwegian University of Science and Technology (NTNU). AMOS intends to initially apply the method to underwater robots and autonomous ships, but the applications for autonomous systems (such as autonomous vehicles) are clear.

This research is supported as part of a student exchange. We are proud to welcome NTNU's Christoph Thieme to help us. Christoph is a PhD student whose research specifically focuses on the safety of autonomous marine systems. He was awarded an MSc by NTNU's Department of Marine Technology. He will be working at SARAS within the B. John Garrick Institute for the Risk Sciences throughout 2017.

SARAS and the 'Guiding Hands'

by Chris Jackson (chrisjackson@saras.ucla.edu)

It became apparent in 2016 to several key staff at UCLA that something was missing in the field of autonomous systems. Autonomous vehicles are poised to revolutionize transport, and the apparent safety and reliability benefits are considerable. But how do we ensure that an autonomous system is safe and reliable before we use it? How do we test or demonstrate autonomous system safety and reliability? How do we measure this? What design frameworks need to be implemented to realize robust systems? Is there some sort of guidebook for the regulators and regulated to meet these challenges? We want to be the ones to provide these frameworks and guidelines.

Everything about autonomous systems is meant to be safe and reliable. Sensors are designed to detect people and obstacles so that the vehicle can avoid them. An often quoted statistic is that around 94% of vehicle accidents are caused by human error, a mechanism that is by definition removed from something that is autonomous. But when presented with an autonomous system, how does a regulatory body make an assessment that it is ‘safe and reliable’ enough? How does the manufacturer drive its design team to make sure they do what is necessary to ensure that the horror scenarios associated with autonomous systems do not materialize? This was the catalyst for us forming the Center for the Safety And Reliability of Autonomous Systems (SARAS).

We realized that there was something we could do. We have access to some of the leading experts and organizations regarding autonomous system technology. And we hope in the near future we will start contributing to the area in very real and tangible ways. But we first had to establish a presence and communicate what we are about. Which brings us to the ‘guiding hands’ logo.

When we set out to create a representative image, we understood that there were some key things we needed to communicate. The right hand of the logo is robotic, and represents the ‘machine’ that will become our autonomous system. The right hand was chosen to be robotic because it is through the right hand that we typically interact with the world around us and, importantly, control the things we want to. It is the right hand that guides the system, and this will not be ‘human’ in an autonomous system.

The left hand is human. There will always be a human element in every autonomous system – and this should never be forgotten. Autonomous systems are used to achieve human goals. Without human guidance, systems can never truly be autonomous. Their pseudo-decision making ability is learned from us for as long as they are used.

The last element of our logo is the networked globe. Autonomous systems may be somewhat of a misnomer. They will generally need to be connected to many systems around them. Systems that we think are autonomous may actually gain their ‘autonomy’ from other systems that transmit this capability on an ongoing basis. Further, autonomous systems can actually provide substantial benefit to us all by being networked. Knowing where other systems are, where accidents have occurred, what local weather conditions are (and so on) allows autonomous systems to adapt in ways that we humans cannot. And they can learn how to do this.

The creation of SARAS is based on a holistic approach to autonomous system reliability and safety. We need to understand that the ‘system’ is actually ‘everything’ - people, public infrastructure, the environment and so on. This is where we start, and hopefully where we end is safe and reliable.

[Image: the SARAS ‘guiding hands’ logo]