Seminars and Events

Governance and Regulation Seminar 7 – Robin Bloomfield, Adelard

Wednesday 8th September 2021, 2pm

Assurance 2.0 and impact of AI/ML on regulation

In this talk I will address two themes: one is the impact of AI/ML on regulation, based on an analysis we have done of the UK Nuclear Safety Regime; the other is the approach we have dubbed Assurance 2.0.

The development of Assurance 2.0 has been driven by the need to assure new AI/ML-based systems, the need for security-informed safety assurance, and the potential for automating assurance. We present Assurance 2.0 as an enabler that supports innovation and continuous, incremental assurance. Perhaps unexpectedly, it does so by making assurance more rigorous, with an increased focus on the reasoning and evidence employed and on the explicit identification of defeaters and counterevidence. I will argue that it also provides a framework within which to focus and assess the work of the TAS nodes.

In terms of the impact of AI/ML on regulation, I will summarize a detailed analysis we have done of the UK ONR’s safety assessment principles to identify areas that may be affected by, or support, AI/ML assurance. Changes to regulation could take the form of augmenting the safety assessment principles, creating separate principles that account for, and clarify, AI/ML-specific topics, or including an additional technical assessment guide covering topics such as data, security, and autonomy frameworks.

Governance and Regulation Seminar 6 – Svetlin Penkov & Daniel Angelov, Efemarai

Wednesday 25th August 2021, 2pm

Extending the industrial MLOps pipeline with the needs for robustness and continual improvement

The industrial machine learning operations pipeline needs to meet demands that go beyond the standard stages of (1) experiment management; (2) model development and training; (3) deployment; and (4) monitoring. The deployment of safety-critical applications has shown that current evaluation strategies lack the ability to correctly evaluate the robustness of models and their expected behaviour once deployed in the wild.

Drawing parallels from software engineering and electronic circuit board design to aircraft safety, an independent quality assurance step is long overdue. We take inspiration from the latest in property-based testing for software and translate it to the world of machine learning.
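To give a flavour of what such a translation can look like, here is a minimal sketch of a property-based robustness test written with the Python `hypothesis` library. This is not the Efemarai Continuum API; the stand-in classifier, the perturbation budget and the property itself are illustrative assumptions. The property says that, away from the decision boundary, perturbations within the operational domain should not change the model's prediction.

```python
# Minimal sketch of the idea, not the Efemarai Continuum API: the model,
# EPSILON and the property below are illustrative assumptions only.
from hypothesis import assume, given, settings, strategies as st

EPSILON = 1e-3  # assumed perturbation budget within the operational domain


def model_predict(x: float) -> int:
    """Hypothetical stand-in for a trained classifier."""
    return 1 if x > 0.5 else 0


@settings(max_examples=500)
@given(
    x=st.floats(min_value=0.0, max_value=1.0, allow_nan=False),
    delta=st.floats(min_value=-EPSILON, max_value=EPSILON, allow_nan=False),
)
def test_local_robustness(x: float, delta: float) -> None:
    # Property: away from the decision boundary, predictions are stable
    # under perturbations no larger than EPSILON.
    assume(abs(x - 0.5) > EPSILON)
    assert model_predict(x) == model_predict(x + delta)
```

A platform aimed at robustness evaluation would search the operational domain for violations of properties like this at scale; the sketch only shows the shape of such a property.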

During this talk we’ll walk through the Efemarai Continuum platform and how it helps measure the robustness of machine learning models within an operational domain.
We’ll highlight results from our public tests and show what industrial applications may expect as an outcome of using our platform.

Governance and Regulation Seminar 5 – Helen Hastie, Heriot-Watt University

Wednesday, 11th August 2021, 2pm

Trustworthy Autonomous Systems – The Human Perspective

In this talk, the UKRI “Node on Trust” will be presented, which is part of the UKRI Trustworthy Autonomous Systems UK-wide Programme. The Node on Trust is a collaboration between Heriot-Watt University, Imperial College London and The University of Manchester and will explore how to best establish, maintain and repair trust by incorporating the subjective view of human trust towards autonomous systems. I will present our multidisciplinary approach, which is grounded in psychology and cognitive science and consists of three “pillars of trust”: 1) computational models of human trust in autonomous systems including Theory of Mind; 2) adaptation of these models in the face of errors and uncontrolled environments; and 3) user validation and evaluation across a broad range of sectors in realistic scenarios.

Bio

Helen Hastie is a Professor of Computer Science at Heriot-Watt University, Director of the EPSRC Centre for Doctoral Training in Robotic and Autonomous Systems at the Edinburgh Centre of Robotics, and Academic Lead for the National Robotarium, opening in 2022 in Edinburgh. She is currently PI on both the UKRI Trustworthy Autonomous Systems Node on Trust and the EPSRC Hume Prosperity Partnership, as well as HRI theme lead for the EPSRC ORCA Hub. She recently held a Royal Academy of Engineering/Leverhulme Senior Research Fellowship. Her research covers multimodal and spoken dialogue systems, human-robot interaction and trustworthy autonomous systems. She was coordinator of the EU project PARLANCE, has over 100 publications, and has held positions on many scientific committees and advisory boards, including recently for the Scottish Government AI Strategy.

http://www.macs.hw.ac.uk/~hh117/

Governance and Regulation Seminar 4 – Jack Stilgoe, UCL

Wednesday, 28th July 2021, 2pm

How can we know a self-driving car is safe? 

Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. How safe is safe enough? Using interviews with more than 50 people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’ 
https://link.springer.com/article/10.1007/s10676-021-09602-1

https://www.ucl.ac.uk/sts/people/dr-jack-stilgoe

Governance and Regulation Seminar 3 – Joel Fischer, University of Nottingham

Wednesday 14th July 2021, 2pm

What we talk about when we talk about Trustworthy Autonomous Systems

In this talk I will provide an overview of the research within the UKRI Trustworthy Autonomous Systems (TAS) Hub, with some motivating real-world examples and perspectives to frame a hopefully productive sense of the term TAS. We take a broad view of Autonomous Systems (AS): systems involving software applications, machines, and people that are able to take actions with little or no human supervision (see https://www.tas.ac.uk/our-definitions/). Some Autonomous Systems are already pervasive in society (e.g., algorithmic decision-making) while others are nascent (e.g., autonomous vehicles); and while there are many potential benefits, we unfortunately too often witness wide-ranging negative consequences when AS ‘go wrong’, from downgrading A-Level results to spreading hate speech, wrongful convictions, and fatal accidents. Tackling the challenges societies face requires expertise from a wide range of disciplines, including computer science and engineering, the social sciences and humanities, and law and regulation. I will present some of the research within the TAS programme that is starting to address these challenges.

http://www.cs.nott.ac.uk/~pszjf1/

Governance and Regulation Seminar 2 – Ekaterina Komendantskaya, Heriot-Watt University

Wednesday 23rd June 2021, 2pm

Analyse, search, reason, interpret, verify: the “Maslow Pyramid” of AI modelling.

Speakers: E. Komendantskaya, A. Hill, M. Daggitt.

Abstract: In this talk, we will consider the “pyramid” of methods deployed in modelling and implementing complex autonomous systems, from the low-level perception methods in the style of statistical machine learning, to the algorithms that perform search, planning and reasoning. We will focus in particular on the more enigmatic group of methods situated at the top of the pyramid, responsible for interpretation and verification of the outputs of the algorithms deployed on the lower levels. Our recent work (https://arxiv.org/pdf/2105.11267.pdf) has focused on developing a safe and correct-by-construction embedding of the outputs of AI planners such as STRIPS into higher-order reasoning and proof environments (such as the dependently-typed language Agda), with a view to establishing a general methodology for higher-order reasoning, interpretation and verification of complex AI systems. In the second half of the talk, we will discuss some illustrative examples and technical details of this methodology at work.

Governance and Regulation Seminar 1 – Lorenzo Strigini, City, University of London

Wednesday 9th June 2021, 2pm

Hosted by Stuart Anderson.

“How safe is this autonomous vehicle? Difficulties of probabilistic predictions and some ideas for improving trust in them”

Abstract: Claiming that a system is “safe enough” implies a prediction that it will cause accidents infrequently enough. These quantitative predictions are especially hard for some autonomous systems, like self-driving road vehicles for unrestricted use. I will briefly discuss the possible quantitative requirements and the difficulties of demonstrating that they are satisfied. These difficulties may arise from insufficient evidence (we may simply not have enough relevant experience with these novel systems) and/or from weak arguments (statistical reasoning that is undermined by relying on doubtful assumptions). I will outline research using forms of probabilistic argument that try to avoid the latter problem. These formal probabilistic arguments mimic common, informal arguments for system safety but allow one to study to what extent they should bolster confidence and what their limits are. These forms of mathematical argument may support more modest safety claims than desired, but they support risk-aware decisions about these novel systems and indicate directions for progress towards greater confidence in greater safety.
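As a rough illustration of the “insufficient evidence” point (not an example taken from the talk), the sketch below computes how much failure-free operation classical statistics would demand before a given failure rate could be claimed at a given confidence, assuming independent, identically distributed miles and zero observed failures; the target rate and confidence level are chosen purely for illustration.

```python
# Illustrative back-of-the-envelope calculation, not from the talk: assumes
# independent, identically distributed miles and zero observed failures.
import math


def required_failure_free_miles(target_rate: float, confidence: float) -> float:
    """Smallest n such that (1 - target_rate)**n <= 1 - confidence, i.e. n
    failure-free miles support the claim 'failure rate <= target_rate per mile'
    at the stated confidence level."""
    return math.log(1.0 - confidence) / math.log(1.0 - target_rate)


# Claiming fewer than one failure per 10^8 miles at 99% confidence would need
# roughly 4.6e8 failure-free miles.
print(f"{required_failure_free_miles(1e-8, 0.99):,.0f} miles")
```

The point mirrors the abstract: operational evidence alone can support only comparatively modest claims, which is why the structure and assumptions of the supporting argument matter.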

Lorenzo Strigini has worked for approximately forty years in the dependable computing area. He is a professor and the director of the Centre for Software Reliability at City, University of London. He joined City in 1995, after ten years with the National Research Council of Italy. His work has mostly addressed fault tolerance against design faults, human error and attacks, as well as probabilistic assessment of system dependability attributes to provide insight, steer design and support acceptance decisions. He has published on theoretical advances as well as their application to areas including safety systems for nuclear reactors, computer-aided medical decisions, decision making about the acceptance of safety-critical systems, and autonomous vehicles.

https://www.city.ac.uk/about/people/academics/lorenzo-strigini