Governance and Regulation Seminar – Glenn McGarry, University of Nottingham

Wednesday 6th July 2022, 2pm

Risk-based Approaches to Trustworthiness in Medical Devices

Our fieldwork explores the practices surrounding the development and regulation of trustworthy medical devices. Processes in this domain take a risk-based approach that involves tightly defined product specifications, testing, and clinical trials, complemented by quality management procedures throughout the product’s life span. The certified accuracy of a device must be reproducible in manufacture and testing, and sustained in service, meaning that software components, where they exist, are routinely subjected to change-freeze and locked into a corresponding hardware specification. Similarly, the safety integrity of a device’s design must be assured through documented risk assessment, mitigation, testing, and post-market surveillance and reporting. Our findings reflect on the response of this well-established regulatory approach to the prospective use of medical AI and ‘software as a medical device’, and the implications this has for the trustworthiness of autonomous systems in other high-risk domains.

Governance and Regulation Seminar – Andre Burgess, NPL

Wednesday 20th April 2022, 2pm

An Assurance Framework for Maritime Autonomous Systems

NPL, in collaboration with leading international and national maritime organizations, has identified the importance of developing an internationally accepted assurance framework for maritime autonomy. The presentation will show how such an assurance framework must bring together a diverse range of competencies and expertise, including the development of reliable virtual testing environments alongside the associated tools, test methodologies, data standards, curated assets and infrastructure needed to assure maritime autonomous systems (MAS) at scale. Such a framework is required to ensure the safe and reliable operation of MAS, as well as to enable the integration and interoperability of these systems with those of international partners and operators.

https://www.npl.co.uk/digital

Governance and Regulation Seminar – Zoë Porter and Ibrahim Habli, University of York

Wednesday 22nd June 2022, 2pm

A Principled Ethical Assurance Argument for the Use of AI and Autonomous Systems

Assurance cases are arguments, supported by evidence, that are typically used by safety teams to establish and communicate confidence in a system’s safety properties. One emerging proposal within the trustworthy or ‘responsible’ AI/AS research community is to extend the tool of assurance cases to a much broader range of ethical and normative properties, such as respect for human autonomy and fairness. This looks to be a promising method to achieve justified confidence that a system’s use in its intended context will be ethically acceptable. We aim to bring the idea of ethical assurance arguments to life. First, we ground the enterprise of ethical assurance in four ethical principles, adapted from biomedical ethics. Second, we propose the approach of using assurance cases to translate these ethical principles into practice. Third, we provide a worked example of a principled ethical assurance argument pattern, presented graphically in Goal Structuring Notation.

Governance and Regulation Seminar 11 – Marina Jirotka, University of Oxford

Wednesday 8th December 2021

Responsible Robotics: What could possibly go wrong?

Researchers and developers in the Digital Economy (DE) face significant challenges thrown up by uncertainty about where responsibilities lie for tackling harmful outcomes of technological innovation. There are controversies about who should, for example, monitor the actions of algorithms, or remedy unfairness and bias in machine learning. These questions of responsibility are often driven by economic considerations, but the issues run much deeper and are implicated in a wider reshaping of institutional, cultural and democratic responses to governing technology. How to divide responsibilities in the DE between developers, civic authorities, users of the technology and automated systems represents a series of as yet unsolved problems, and challenges remain over how to embed responsibility into the processes of technological design and development. Furthermore, as the pace of innovation continues to grow, the tension between profit and responsibility grows stronger; users can feel increasingly ill at ease in the digital economy, with values of trust, identity, privacy and security potentially undermined.

In this talk, I will discuss the RoboTIPS project, which is entwining socio-legal and technical responses in its development of an ethical black box (EBB). Black box ‘flight recorders’ are familiar from aviation, and RoboTIPS is developing its EBB for use in social robots to provide data for the investigation of accidents and incidents. Crucially, it is not the data alone that is relied on in such investigations – the work draws on the important social aspects of investigation surrounding the data. This approach to new notions of responsibility that can emerge through novel configurations of technology, society and governance is being expanded throughout the newly launched Responsible Technology Institute (RTI) at Oxford. I will discuss a few research projects carried out at the RTI as examples of embedding responsibility in the design and development of new technologies. The work of all these projects draws on interdisciplinary understandings to establish new ways of considering ‘responsibility’ that can address the challenges outlined above.

Workshop on Regulation of Software as a Medical Device / AI as a Medical Device

We are hosting a “Chatham House” workshop on the governance and regulation of software as a medical device / AI as a medical device.

The goal of the workshop is to produce a structured account of the key issues and lines of debate.

Tuesday 25th January 2022, 2–5pm, online.