Archived: Governance and Regulation Seminar 1 – Lorenzo Strigini, City, University of London

This content has been archived. It may no longer be relevant.

Wednesday 9th June, 2pm

Hosted by Stuart Anderson.

“How safe is this autonomous vehicle? Difficulties of probabilistic predictions and some ideas for improving trust in them”

Abstract: Claiming that a system is “safe enough” implies a prediction that it will cause accidents infrequently enough. These quantitative predictions are especially hard for some autonomous systems, like self-driving road vehicles for unrestricted use. I will briefly discuss the possible quantitative requirements and the difficulties of demonstrating that they are satisfied. These difficulties may arise from insufficient evidence (we may simply not have enough relevant experience with these novel systems) and/or from weak arguments (statistical reasoning undermined by reliance on doubtful assumptions). I will outline research using forms of probabilistic argument that try to avoid the latter problem. These formal probabilistic arguments mimic common, informal arguments for system safety, but allow one to study to what extent they should bolster confidence and what their limits are. These forms of mathematical argument might support more modest safety claims than desired; but they enable risk-aware decisions about these novel systems, and indicate directions for progress towards greater confidence in greater safety.
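To illustrate the "insufficient evidence" difficulty the abstract mentions, here is a minimal sketch of the standard zero-failure statistical argument (not the speaker's own method): if a fleet drives n miles with no fatal accidents, how large must n be before the upper one-sided confidence bound on the per-mile accident rate falls below a target? The target rate and confidence level below are illustrative assumptions only.

```python
import math

def miles_needed(target_rate, confidence=0.95):
    """Failure-free miles required so that, at the given confidence,
    the per-mile accident rate can be claimed below target_rate.
    Derivation: P(no accident in n miles) = (1 - p)^n; requiring
    this to be <= 1 - confidence when p = target_rate gives
    n >= ln(1 - confidence) / ln(1 - target_rate)."""
    return math.log(1.0 - confidence) / math.log(1.0 - target_rate)

# Illustrative target (an assumption, not from the talk): one fatal
# accident per 100 million miles, roughly the order of magnitude of
# human-driver fatality rates often cited in this literature.
n = miles_needed(1e-8)
print(f"{n:.3g} failure-free miles needed")
```

Roughly 300 million failure-free miles are needed for this single claim, which conveys why purely empirical demonstration is impractical for novel vehicles and why stronger probabilistic arguments are of interest.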

Lorenzo Strigini has worked for approximately forty years in the dependable computing area. He is a professor and the director of the Centre for Software Reliability at City, University of London. He joined City in 1995, after ten years with the National Research Council of Italy. His work has mostly addressed fault tolerance against design faults, human error and attacks, as well as probabilistic assessment of system dependability attributes to provide insight, steer design and support acceptance decisions. He has published on theoretical advances as well as applications to areas including safety systems for nuclear reactors, computer-aided medical decisions, decision making about acceptance of safety-critical systems, and autonomous vehicles.