Machines Making Decisions Based on Risk

Risk Assessment of Autonomous System Control Software

Autonomous systems, such as autonomous vehicles and ships, will change the characteristics of traffic and transportation. Several organizations, public and private, develop and test such systems, and regulatory bodies require proof that these systems are safe to operate in public. One set of tools that has previously been used in other industries is probabilistic risk assessment (PRA). PRA has been successfully applied to nuclear, chemical, and oil and gas installations, both for the verification of safety and (perhaps more importantly) for the identification of risk reduction measures.

The brain of an autonomous system is its control system, designed specifically for each application. Autonomous control systems execute four tasks:

  1. collect information (such as data from sensors);

  2. orient themselves based on the observed information;

  3. decide on actions based on the current situation or state of knowledge; and

  4. implement these actions, through control signals to actuators and other subsystems.

This process is analogous to the well-known 'OODA' loop (observe, orient, decide, act) developed by United States Air Force Colonel John Boyd in the 1960s and 1970s. The OODA loop has since been used extensively in many fields beyond military strategy, including operations, business, and law.
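To make the four tasks concrete, the sketch below shows a minimal observe-orient-decide-act cycle in Python. The sensor and actuator classes, the threshold rule, and all names here are hypothetical placeholders for illustration, not the interface of any actual system.

```python
import time

class RangeSensor:
    """Hypothetical sensor stub that reports a fixed obstacle distance."""
    name = "obstacle_distance_m"
    def read(self):
        return 42.0  # placeholder measurement

class Thruster:
    """Hypothetical actuator stub that simply logs its command."""
    def command(self, action):
        print(f"thrust set to {action['thrust']}")

class Controller:
    """Runs the observe-orient-decide-act cycle of an autonomous system."""
    def __init__(self, sensors, actuators):
        self.sensors, self.actuators = sensors, actuators
        self.state = {}

    def observe(self):
        # 1. Collect information from the sensors.
        return {s.name: s.read() for s in self.sensors}

    def orient(self, observations):
        # 2. Update the internal picture of the situation.
        self.state.update(observations)
        return self.state

    def decide(self, state):
        # 3. Choose an action based on the current state of knowledge.
        #    A trivial threshold rule stands in for real decision logic.
        stop = state.get("obstacle_distance_m", 0.0) < 10.0
        return {"thrust": 0.0 if stop else 0.5}

    def act(self, action):
        # 4. Implement the action via control signals to the actuators.
        for actuator in self.actuators:
            actuator.command(action)

    def run(self, cycles=3, period_s=0.1):
        for _ in range(cycles):
            self.act(self.decide(self.orient(self.observe())))
            time.sleep(period_s)

Controller([RangeSensor()], [Thruster()]).run()
```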

The control system is the cornerstone of any autonomous system. To determine whether an autonomous system is safe, we therefore focus on its control system: the control system must be assessed in terms of its risk and possible contribution to accidents, otherwise no such assessment can be made for the autonomous system as a whole.

Control systems are mainly made up of software, and software behaves differently from hardware components with respect to failure patterns. Software does not fail randomly or through ageing effects. Instead, software faults are present from the beginning of operation or are introduced during operation through updates. They are caused by insufficient or erroneous specifications, or by errors introduced during programming and implementation. Software faults are thus already in the software, which makes them deterministic. Testing and verification procedures attempt to remove the errors that might lead to failures. PRA can then be used to quantify the uncertainty associated with the faults remaining in the software.
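As an illustration of what such quantification can look like (a standard Bayesian treatment, not necessarily the specific method being developed here), suppose a software function has passed n independent test demands without failure. With a uniform Beta(1, 1) prior on its probability of failure on demand (PFD), the posterior is Beta(1, n + 1), which has a closed-form mean and credible upper bound. The function name and numbers below are purely hypothetical.

```python
def posterior_pfd(n_successes, credibility=0.95):
    """Bayesian estimate of the probability of failure on demand (PFD).

    Illustrative sketch only: assumes a uniform Beta(1, 1) prior and
    n failure-free test demands, giving a Beta(1, n + 1) posterior.
    A real PRA would typically use informed priors and operating data.
    """
    n = n_successes
    mean = 1.0 / (n + 2)  # posterior mean of the PFD
    # Solve 1 - (1 - p)**(n + 1) = credibility for the upper bound p.
    upper = 1.0 - (1.0 - credibility) ** (1.0 / (n + 1))
    return mean, upper

mean, upper = posterior_pfd(10_000)
print(f"posterior mean PFD: {mean:.1e}")  # ~1.0e-04
print(f"95% upper bound:    {upper:.1e}")  # ~3.0e-04
```

The point of the example is the epistemic reading of probability: the faults themselves are deterministic, but our knowledge of whether any remain is uncertain, and it is that uncertainty which the posterior expresses.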

Ongoing SARAS research aims to synthesize and enhance existing methods for analyzing software and its implicit contribution to risk. These methods will assist system designers and operators in verifying the safety of autonomous systems. The method we are developing will be embedded in PRA software to make it both practical and applicable.

The research is carried out in cooperation with the Norwegian Centre of Excellence for Autonomous Marine Operations and Systems (AMOS) at the Norwegian University of Science and Technology (NTNU). AMOS intends to apply the method initially to underwater robots and autonomous ships, but the applications to other autonomous systems (such as autonomous vehicles) are clear.

This research is supported as part of a student exchange. We are proud to welcome NTNU's Christoph Thieme, a PhD student whose research focuses specifically on the safety of autonomous marine systems. He holds an MSc from NTNU's Department of Marine Technology and will be working at SARAS within the B. John Garrick Institute for the Risk Sciences throughout 2017.