Workshop on the regulation of software as a medical device/AI as a medical device

We are hosting a workshop, held under the Chatham House Rule, on the governance and regulation of software as a medical device/AI as a medical device.

The goal of the workshop will be to produce a structured account of the key issues and lines of debate.

Tuesday 25th January 2022, 2–5pm, online.

Governance and Regulation Seminar 6 – Svetlin Penkov & Daniel Angelov, Efemarai

Wednesday 25th August 2021, 2pm

Extending the industrial MLOps pipeline with the needs for robustness and continual improvement

The industrial machine learning operations (MLOps) pipeline must meet demands that go beyond the standard stages of (1) experiment management, (2) model development and training, (3) deployment, and (4) monitoring. The deployment of safety-critical applications has shown that current evaluation strategies cannot adequately assess the robustness of models or their expected behaviour once deployed in the wild.

Drawing parallels from fields ranging from software engineering and electronic circuit board design to aircraft safety, we argue that an independent quality assurance step is long overdue. We take inspiration from the latest in property-based testing for software and translate it to the world of machine learning.
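As a rough illustration of the idea (a minimal sketch, not the Efemarai Continuum API or the talk's own materials), property-based testing of a model might look like the following, where the placeholder classifier, the perturbation bounds, and the operational-domain sample are all hypothetical stand-ins:

    # Sketch: property-based robustness testing of an image classifier,
    # using the hypothesis library to sample perturbations. The model,
    # bounds, and base image below are illustrative placeholders.
    import numpy as np
    from hypothesis import given, settings, strategies as st

    def model_predict(image: np.ndarray) -> int:
        """Placeholder classifier: swap in a real model's predict call."""
        return int(image.mean() > 0.5)

    BASE_IMAGE = np.full((32, 32), 0.7)  # a sample from the operational domain

    @settings(max_examples=200)
    @given(brightness=st.floats(min_value=-0.05, max_value=0.05))
    def test_prediction_stable_under_brightness_shift(brightness):
        # Property: small brightness shifts within the operational domain
        # must not flip the model's prediction.
        perturbed = np.clip(BASE_IMAGE + brightness, 0.0, 1.0)
        assert model_predict(perturbed) == model_predict(BASE_IMAGE)

Rather than checking a handful of hand-picked inputs, such a test asserts an invariant over a whole family of perturbations sampled from the operational domain, which is the shift from example-based to property-based evaluation described above.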

During this talk we’ll walk through the Efemarai Continuum platform and show how it helps measure the robustness of machine learning models within an operational domain.
We’ll highlight results from our public tests and show what outcomes industrial applications can expect from using the platform.

Governance and Regulation Seminar 5 – Helen Hastie, Heriot-Watt University

Wednesday, 11th August 2021, 2pm

Trustworthy Autonomous Systems – The Human Perspective

In this talk, the UKRI “Node on Trust” will be presented, which is part of the UKRI Trustworthy Autonomous Systems UK-wide Programme. The Node on Trust is a collaboration between Heriot-Watt University, Imperial College London and The University of Manchester and will explore how best to establish, maintain and repair trust by incorporating the subjective view of human trust towards autonomous systems. I will present our multidisciplinary approach, which is grounded in psychology and cognitive science and consists of three “pillars of trust”: 1) computational models of human trust in autonomous systems, including Theory of Mind; 2) adaptation of these models in the face of errors and uncontrolled environments; and 3) user validation and evaluation across a broad range of sectors in realistic scenarios.

Bio

Helen Hastie is a Professor of Computer Science at Heriot-Watt University, Director of the EPSRC Centre for Doctoral Training in Robotic and Autonomous Systems at the Edinburgh Centre of Robotics, and Academic Lead for the National Robotarium, opening in 2022 in Edinburgh. She is currently PI on both the UKRI Trustworthy Autonomous Systems Node on Trust and the EPSRC Hume Prosperity Partnership, as well as being HRI theme lead for the EPSRC ORCA Hub. She recently held a Royal Academy of Engineering/Leverhulme Senior Research Fellowship. Her research spans multimodal and spoken dialogue systems, human-robot interaction, and trustworthy autonomous systems. She was co-ordinator of the EU project PARLANCE, has over 100 publications and has held positions on many scientific committees and advisory boards, including recently for the Scottish Government AI Strategy.

http://www.macs.hw.ac.uk/~hh117/

Governance and Regulation Seminar 4 – Jack Stilgoe, UCL

Wednesday, 28th July 2021, 2pm

How can we know a self-driving car is safe? 

Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. How safe is safe enough? Using interviews with more than 50 people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’ 
https://link.springer.com/article/10.1007/s10676-021-09602-1

https://www.ucl.ac.uk/sts/people/dr-jack-stilgoe

Biomedical AI CDT seminar 26 May: Kevin Wiggert

Seminar co-organised by Node on Governance and Regulation

Biomedical AI CDT Seminar series

Wednesday 26 May, 3pm

Title: Scripting the Use of Medical Technology – The Case of Data-based Clinical Decision Support Systems

Speaker: Kevin Wiggert, Department of Sociology of Technology and Innovation, Technical University of Berlin.

Register to join via Zoom Webinar

Abstract: Newly developed Clinical Decision Support Systems (CDSSs), which are supposed to provide information or support decisions, are increasingly systemically opaque and thus less comprehensible for the physician. It becomes more and more unclear to the clinical user, and to the developers themselves, which data sources and which information these technologies are built on, and how the technology computes its reasoning from those data. For the actors involved to better comprehend the reasoning of these technologies, it is necessary not only to better understand the inner processes of the technology but also to gain more insight into the assumptions the technology is built on. This, in turn, requires a better understanding of the imaginations and ideas about use that are inscribed into the technology during its development. To exemplify this, I draw on a case study of the development of a CDSS meant to support treatment decisions in cardiology. This CDSS is at least partly not based on expert knowledge, that is, knowledge from medical experts and/or clinical guidelines, but instead uses techniques of machine learning or simulation: it derives its reasoning from patterns recognised in the data it operates on. This can lead to new findings and discoveries that rest not on pre-existing knowledge represented in clinical guidelines but on (potential) correlations between data points in a data corpus. By combining and extending the theoretical concepts of situational scenarios in technology development and the notion of scripts written into technology, I show that the building of the CDSS prototype and the negotiation over the components of future situations of use co-evolved, leading to specific scripts that steer the envisioned user towards using the technology in particular ways and receiving recommendations based on a so-called virtual patient, a data corpus “representing” the real patient under treatment. In the case study, it was primarily the engineers’ perspective on the application context, rather than the clinicians’, that got implemented. The result was less recognition of the patient’s role in the consultation, as well as an assumed passivity on the part of the clinician as a recipient of information delivered by the technology.

Speaker’s Bio
I am a Doctoral Researcher at the Department of Sociology of Technology and Innovation at the Technical University of Berlin. I have a special interest in the development and application of artificially intelligent machines, particularly within medical and health contexts. At present, I am part of a research project that focusses on the social construction of human-robot co-work by means of prototype work settings in the care and healthcare sector. My doctoral thesis is about trust in artificially intelligent machines in health work contexts, using the examples of assistive medical robots and decision support systems.

Ram Ramamoorthy to speak to the Multi-Agent Systems Group

Prof Ramamoorthy will give a talk in the Multi-Agent Systems seminar series at the Turing Institute at 10:30am on 20th May 2021. The talk will give an overview of the UKRI Research Node on TAS Governance and Regulation; Prof Helen Hastie from Heriot-Watt University will also speak, presenting the Research Node on Trust.

https://www.turing.ac.uk/research/interest-groups/multi-agent-systems