Governance and Regulation Seminar – Luke Moffat, Joe Deville, Catherine Easton – Lancaster University (Security Node)

Wednesday 8th June 2022, 2pm

TAS-S and National Highways: Creative Methods for Collaboration

Abstract: As the complexities of autonomous systems (AS) rapidly amplify, so too does the need for guidance on their ethical, legal and social implications. As part of the Security node, our research strand collaborates with organisations to co-create ethical, legal and social issues (ELSI) guidance for working with autonomous systems developers, legislators, and users. In this talk we summarise some recent work with National Highways, in relation to their growing focus on autonomous systems across diverse public and private domains. In particular, we highlight some of the creative and arts-based methods that can help open spaces for critical and circumspect reflection on difficult and multifaceted issues such as public engagement, third-party contacting, and customer protection. We conclude with some initial insights from this work and their potential wider relevance for governance.

Governance and Regulation Seminar – Andre Burgess, NPL

Wednesday 20th April 2022, 2pm

An Assurance Framework for Maritime Autonomous Systems

NPL, in collaboration with leading international and national maritime organisations, has identified the importance of developing an internationally accepted assurance framework for maritime autonomous systems (MAS). The presentation will show how such an assurance framework must bring together a diverse range of competencies and expertise, including the development of reliable virtual testing environments alongside the associated tools, test methodologies, data standards, curated assets and infrastructure needed to assure MAS at scale. Such a framework is required to ensure the safe and reliable operation of maritime autonomous systems, as well as to enable the integration and interoperability of these systems with those of international partners and operators.

Governance and Regulation Seminar – Zoë Porter and Ibrahim Habli, University of York

Wednesday 22nd June 2022, 2pm

A Principled Ethical Assurance Argument for the Use of AI and Autonomous Systems
Assurance cases are arguments, supported by evidence, that are typically used by safety teams to establish and communicate confidence in a system’s safety properties. One emerging proposal within the trustworthy or ‘responsible’ AI/AS research community is to extend the tool of assurance cases to a much broader range of ethical and normative properties, such as respect for human autonomy and fairness. This looks to be a promising method to achieve justified confidence that a system’s use in its intended context will be ethically acceptable. We aim to bring the idea of ethical assurance arguments to life. First, we ground the enterprise of ethical assurance in four ethical principles, adapted from biomedical ethics. Second, we propose the approach of using assurance cases to translate these ethical principles into practice. Third, we provide a worked example of a principled ethical assurance argument pattern, presented graphically in Goal Structuring Notation.

Governance and Regulation Seminar 11 – Marina Jirotka, University of Oxford

Wednesday 8th December 2021

Responsible Robotics: What could possibly go wrong?

Researchers and developers in the Digital Economy (DE) face significant challenges thrown up by uncertainty about where responsibilities lie for tackling harmful outcomes of technological innovation. There are controversies about who should, for example, monitor the actions of algorithms, or remedy unfairness and bias in machine learning. These questions of responsibility are often driven by economic considerations, but the issues run much deeper and are implicated in a wider reshaping of institutional, cultural and democratic responses to governing technology. How to divide responsibilities in the DE between developers, civic authorities, users of the technology and automated systems represents a series of as yet unsolved problems, and challenges remain over how to embed responsibility into the processes of technological design and development. Furthermore, as the pace of innovation continues to grow, the tension between profit and responsibility grows stronger; users can feel increasingly ill at ease in the digital economy, with values of trust, identity, privacy and security potentially undermined.

In this talk, I will discuss the RoboTIPS project, which is entwining socio-legal and technical responses in its development of an ethical black box (EBB). Black box ‘flight recorders’ are familiar from aviation, and RoboTIPS is developing its EBB for use in social robots to provide data for the investigation of accidents and incidents. Crucially, it is not the data alone that is to be relied on in such investigations – the work draws on the important social aspects of investigation surrounding the data. This approach to new notions of responsibility that can emerge through novel configurations of technology, society and governance is being expanded throughout the newly launched Responsible Technology Institute (RTI) at Oxford. I will discuss a few research projects carried out at the RTI as examples of embedding responsibility in the design and development of new technologies. The work of all these projects draws on interdisciplinary understandings to establish new ways of considering ‘responsibility’ that can address the challenges outlined above.

Governance and Regulation Seminar 10 – Jane Fenn, BAE Systems

Wednesday 27th October 2021, 2pm

Evolution in Standards and Regulation in the Aerospace Domain

This talk describes what it feels like to work in a highly regulated industry such as aerospace, and the radical changes in philosophical approach that have emerged over the last 20 years. It draws on personal experiences from an industrial perspective, as well as anecdotal feedback from the regulator community, gleaned through work on cooperative initiatives to develop the standards, regulations and associated guidance for those applying them in both the regulator and regulated communities.

Governance and Regulation Seminar 9 – Carina Prunkl, University of Oxford

Wednesday 13th October 2021, 2pm

Human Autonomy and the Governance of Autonomous Systems

What does it take to protect human autonomy? Principles of ‘human autonomy’ abound in current guidelines on the responsible development of AI. Yet these guidelines exhibit substantial differences in what they take the risks to human autonomy to be. In this talk, I argue that we need to distinguish between different dimensions of autonomy before we can begin addressing these risks. Each dimension will require a distinct set of governance requirements to be implemented by developers and policy bodies.

Governance and Regulation Seminar 8 – John Downer, University of Bristol

Wednesday 22nd September 2021, 2pm

Overview of TAS Node in Functionality

This talk will give an overview of the work and goals of the TAS node in Functionality, based at the University of Bristol and the Bristol Robotics Laboratory. It will involve an outline of the node’s members, projects, and their related work packages, followed by a more extended discussion of its ‘regulation’ work package.