Governance and Regulation Seminar 6 – Svetlin Penkov & Daniel Angelov, Efemarai

Wednesday, 25th August 2021, 2pm

Extending the industrial MLOps pipeline to meet the needs of robustness and continual improvement

The industrial machine learning operations pipeline needs to meet demands that go beyond the standard stages of (1) experiment management; (2) model development and training; (3) deployment; and (4) monitoring. The deployment of safety-critical applications has shown that current evaluation strategies lack the ability to correctly assess the robustness of models and their expected behaviour once deployed in the wild.

Drawing parallels with software engineering, electronic circuit board design and aircraft safety, we argue that an independent quality assurance step for machine learning is long overdue. We take inspiration from the latest in property-based testing for software and translate it to the world of machine learning.
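As a minimal sketch of what such a translation might look like (an illustrative example only, not the Efemarai API): the property below asserts that a classifier's prediction should not change under a small, bounded input perturbation, and a property-based testing library such as Hypothesis searches for counterexamples within that bound. The `predict` function and the `EPS` bound are assumptions made up for illustration.

```python
# Illustrative sketch only: `predict` is a hypothetical stand-in model and
# EPS is an assumed operational perturbation bound; none of this reflects
# the Efemarai Continuum API.
import numpy as np
from hypothesis import given, settings
from hypothesis import strategies as st
from hypothesis.extra.numpy import arrays

EPS = 1e-3  # assumed bound on allowed input perturbations


def predict(x: np.ndarray) -> int:
    """Stand-in for a trained classifier; returns a class label."""
    return int(np.argmax(x))


inputs = arrays(np.float64, (4,), elements=st.floats(0.0, 1.0))
perturbations = arrays(np.float64, (4,), elements=st.floats(-EPS, EPS))


@settings(max_examples=200)
@given(x=inputs, delta=perturbations)
def test_label_stable_under_small_perturbation(x, delta):
    # Property: within the operational domain, an EPS-bounded perturbation
    # must not change the predicted label. When the model violates this,
    # Hypothesis reports a minimal counterexample -- exactly the kind of
    # robustness failure an independent QA step should surface.
    assert predict(x + delta) == predict(x)
```

Note that for the toy `predict` above the property can fail near decision boundaries (where two components of `x` are within EPS of each other); finding and shrinking such counterexamples is precisely what this style of testing is for.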

During this talk we’ll walk through the Efemarai Continuum platform and how it helps measure the robustness of machine learning models within an operational domain.
We’ll highlight results from our public tests and show what outcomes industrial applications can expect from using our platform.

Governance and Regulation Seminar 5 – Helen Hastie, Heriot-Watt University

Wednesday, 11th August 2021, 2pm

Trustworthy Autonomous Systems – The Human Perspective

In this talk, the UKRI “Node on Trust” will be presented, which is part of the UKRI Trustworthy Autonomous Systems UK-wide Programme. The Node on Trust is a collaboration between Heriot-Watt University, Imperial College London and The University of Manchester and will explore how to best establish, maintain and repair trust by incorporating the subjective view of human trust towards autonomous systems. I will present our multidisciplinary approach, which is grounded in psychology and cognitive science and consists of three “pillars of trust”: 1) computational models of human trust in autonomous systems including Theory of Mind; 2) adaptation of these models in the face of errors and uncontrolled environments; and 3) user validation and evaluation across a broad range of sectors in realistic scenarios.

Bio

Helen Hastie is a Professor of Computer Science at Heriot-Watt University, Director of the EPSRC Centre for Doctoral Training in Robotic and Autonomous Systems at the Edinburgh Centre of Robotics, and Academic Lead for the National Robotarium, opening in 2022 in Edinburgh. She is currently PI on both the UKRI Trustworthy Autonomous Systems Node on Trust and the EPSRC Hume Prosperity Partnership, as well as HRI theme lead for the EPSRC ORCA Hub. She recently held a Royal Academy of Engineering/Leverhulme Senior Research Fellowship. Her research spans multimodal and spoken dialogue systems, human-robot interaction and trustworthy autonomous systems. She was co-ordinator of the EU project PARLANCE, has over 100 publications and has held positions on many scientific committees and advisory boards, including recently for the Scottish Government AI Strategy.

http://www.macs.hw.ac.uk/~hh117/

Governance and Regulation Seminar 4 – Jack Stilgoe, UCL

Wednesday, 28th July 2021, 2pm

How can we know a self-driving car is safe? 

Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. How safe is safe enough? Using interviews with more than 50 people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’ 
https://link.springer.com/article/10.1007/s10676-021-09602-1

https://www.ucl.ac.uk/sts/people/dr-jack-stilgoe