By Davy Pissoort
Autonomous systems offer humankind tremendous opportunities: freeing us from mundane tasks, carrying out risky procedures and, in general, giving us more time to enjoy the things we like doing. As long as these systems operate in environments without humans, such as enclosed areas of a factory, they are readily accepted. However, people lack trust in autonomous systems when their own safety depends on a machine’s correct operation. Whenever someone mentions self-driving cars, the question of safety is raised immediately. Even though the widespread introduction of autonomous vehicles could almost eliminate the more than 20,000 deaths on European roads each year, it will not happen until we can provide assurance that these systems will be safe and perform as intended, no matter what.
Unfortunately, existing safety assurance approaches and standards were developed primarily for systems in which a human can take over in an emergency and, hence, do not extend to fully autonomous systems. What’s more, current safety assurance approaches generally assume that, once deployed, a system will not learn or evolve by itself. This ignores many of the new safety challenges that come with autonomous systems: unsafe software behavior, their open-context nature, functional insufficiencies, and human-machine interaction, to name just a few.