We know that around 94 per cent of all United States road deaths (and similarly high percentages of road deaths worldwide) are caused by human error. We know that humans are currently engaged in risky endeavors in all fields and industries where autonomous systems could help make things safer. But what we might not know is the extent to which autonomy is already making the world better … be it on aircraft or vehicles with electronic stability control. The next step is to cede control completely to a machine – which appears inevitable in many domains.
We at the University of California, Los Angeles, saw a need for academic leadership on autonomous system safety and reliability at a level beyond technology development. Autonomous technologies are ‘inherently’ trying to be safe. The challenge then becomes: how do we design these technologies to be ‘safe enough’? And importantly, how do we make these systems reliable so that the ‘inherent’ safety of the autonomous system is always present?
SARAS was formed in 2016 to assist the regulators and the regulated in assuring that these autonomous systems are safe and reliable. Our multidisciplinary team draws on expertise from areas ranging from machine-learning lane-recognition algorithms to public planning. Importantly, we leverage our expertise in the risk and reliability sciences to provide a coherent paradigm within which these disciplines can effectively combine to realize our goals.
Being a Center within the B. John Garrick Institute for the Risk Sciences gives us a unique advantage in terms of experience. Our leadership and senior fellows have all helped the nuclear, aviation, defense and other industries work out how to make systems ‘safe and reliable’ enough. These industries themselves involved disruptive technologies, where the questions we ask ourselves today about the safety and reliability of autonomous systems have already been asked (and answered) in various contexts in the past.
Our mission is to help manufacturers, regulators and planners design for, and enforce, the ongoing safety and reliability of autonomous systems.
What is the difference between a ‘vision’ and a ‘mission’? The mission refers to what we want to do. The vision refers to what we want the world to look like when we have done it.
Our vision is necessarily futuristic. We see a world where many of the tasks and activities we undertake today are more effectively and efficiently undertaken by autonomous systems. We see global productivity increasing as a result. We see an environment where emerging ideas about how autonomy can help society more broadly quickly become realities.
Our vision is that the decision about whether an autonomous system is safe and reliable is not a barrier to its successful implementation.
This does not mean that autonomous system safety and reliability is not paramount or (potentially) resource intensive. What it does mean is that the engineers of tomorrow need to be provided with clear guidance and architectures that allow them to fundamentally know how to design safe and reliable autonomous systems. It also means that the regulators and certifiers of tomorrow are provided with clear guidance and architectures that allow them to decide if an autonomous system is safe and reliable enough.
Want to get involved?
We provide services to all autonomous system stakeholders, including governments, regulatory bodies, manufacturers, planners and research organizations. We see an important element of our role as the development of guides and handbooks based on collaborative, industry-wide efforts. We are interested in forming partnerships and undertaking tailored research.