Machines Learning to be Reliable
Machine learning is not a new concept: machines are very good at quickly working through data and, once taught how, can fairly easily identify patterns. If a machine identifies a pattern in failure data, it may have identified a causal relationship; that is, it may have found the cause of a particular form of failure. But in the field of reliability engineering, we already have many models of failure mechanisms. These are better than 'patterns' because they are based on science. Can we combine the benefits of machine learning with our 'human' understanding of why things fail? Can we motivate a machine to work out how to 'operate itself' better and so be more reliable? These are questions we are trying to answer at SARAS.
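To make the idea of a machine spotting a pattern in failure data concrete, here is a minimal sketch. The records, temperature bands, and threshold are hypothetical, not data or methods from SARAS; the point is only that a simple statistic over failure data can surface a candidate causal factor that a reliability engineer would then check against known failure mechanisms.

```python
# Hypothetical failure records: (operating_temperature_C, failed).
records = [
    (40, False), (45, False), (50, False), (55, False),
    (60, False), (65, True), (70, True), (75, True),
    (80, True), (85, True),
]

def failure_rate(records, lo, hi):
    """Fraction of units operating in the temperature band [lo, hi) that failed."""
    band = [failed for temp, failed in records if lo <= temp < hi]
    return sum(band) / len(band) if band else 0.0

# Compare failure rates between a cooler and a hotter operating band.
cool = failure_rate(records, 40, 65)
hot = failure_rate(records, 65, 90)

# A large gap flags temperature as a candidate cause, but correlation is not
# causation: a physics-of-failure model is needed to confirm the mechanism.
print(f"cool band: {cool:.2f}, hot band: {hot:.2f}")
```

A physics-based model (for example, Arrhenius-type temperature acceleration) would tell us whether the flagged pattern is a plausible mechanism or a spurious correlation.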
Autonomous Systems Control Software - Continuous Risk Assessment
How do we assure that an autonomous system does what it is supposed to do? Autonomous systems are controlled by software, which is expected to always make the "right" decisions. Probabilistic risk assessment is one way to assess and verify that a system is safe enough to operate. However, software behaves unlike physical system components, so its assessment requires adapted methods. The aim of this ongoing research is to develop a method for assessing the impact of software control systems on the risk level of operating autonomous systems.
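The probabilistic risk assessment mentioned above can be sketched as a simple fault-tree style calculation. The gates below are standard (independent OR/AND combinations), but the component names and probabilities are hypothetical illustrations, not the method under development.

```python
def p_or(*probs):
    """Probability that at least one independent event occurs (OR gate)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def p_and(*probs):
    """Probability that all independent events occur (AND gate)."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical per-mission failure probabilities.
p_sensor = 1e-4      # sensing element reports a wrong value
p_software = 1e-3    # control software makes a wrong decision
p_actuator = 5e-5    # actuator fails to execute the command

# Top event: unsafe behaviour if any link in the sense-decide-act
# chain fails (a series configuration, modelled as an OR gate).
p_top = p_or(p_sensor, p_software, p_actuator)
print(f"P(top event) = {p_top:.6f}")
```

The difficulty the research addresses is the middle term: unlike the sensor or actuator, software does not fail randomly from wear, so assigning and validating a number like `p_software` requires methods adapted to software behaviour.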