Wednesday 8th June 2022, 2pm
TAS-S and National Highways: Creative Methods for Collaboration
Abstract: As the complexities of autonomous systems (AS) rapidly amplify, so do the ethical, legal and social issues (ELSI) they raise. As part of the Security node, our research strand collaborates with organisations to co-create ELSI guidance in working with autonomous systems developers, legislators, and users. In this talk we summarise some recent work with National Highways, in relation to their growing focus on autonomous systems across diverse public and private domains. In particular, we highlight some of the creative and arts-based methods that can help open spaces for critical and circumspect reflection on difficult and multifaceted issues around public engagement, third party contacting, and customer protection. We conclude with some initial insights from this work and their potential wider relevance for governance.
Wednesday 20th April 2022, 2pm
An Assurance Framework for Maritime Autonomous Systems
NPL, in collaboration with leading international and national maritime organisations, has identified the importance of developing an internationally accepted assurance framework for maritime autonomy. The presentation will show how such an assurance framework must bring together a diverse range of competencies and expertise, including the development of reliable virtual testing environments alongside the associated tools, test methodologies, data standards, curated assets and infrastructure needed to assure maritime autonomous systems (MAS) at scale. Such a framework is required to ensure the safe and reliable operation of MAS, as well as to enable the integration and interoperability of these systems with those of international partners and operators.
Wednesday 22nd June, 2pm
A Principled Ethical Assurance Argument for the Use of AI and Autonomous Systems
Assurance cases are arguments, supported by evidence, that are typically used by safety teams to establish and communicate confidence in a system’s safety properties. One emerging proposal within the trustworthy or ‘responsible’ AI/AS research community is to extend the tool of assurance cases to a much broader range of ethical and normative properties, such as respect for human autonomy and fairness. This looks to be a promising method to achieve justified confidence that a system’s use in its intended context will be ethically acceptable. We aim to bring the idea of ethical assurance arguments to life. First, we ground the enterprise of ethical assurance in four ethical principles, adapted from biomedical ethics. Second, we propose the approach of using assurance cases to translate these ethical principles into practice. Third, we provide a worked example of a principled ethical assurance argument pattern, presented graphically in Goal Structuring Notation.
Wednesday 26th January 2022
Simulators: where they fit, where they do not and why
Wednesday 8th December 2021
Responsible Robotics: What could possibly go wrong?
Researchers and developers in the Digital Economy (DE) face significant challenges thrown up by uncertainty about where responsibilities lie for tackling harmful outcomes of technological innovation. There are controversies about who should, for example, monitor the actions of algorithms, or remedy unfairness and bias in machine learning. These questions of responsibility are often driven by economic considerations, but the issues run much deeper and are implicated in a wider reshaping of institutional, cultural and democratic responses to governing technology. How to divide responsibilities in the DE between developers, civic authorities, users of the technology and automated systems represents a series of as yet unsolved problems. And challenges remain over how to embed responsibility into the processes of technological design and development. Furthermore, as the pace of innovation continues to grow, the tension between profit and responsibility also grows stronger; users can feel increasingly ill at ease in the digital economy, with values of trust, identity, privacy and security potentially undermined.
In this talk, I will discuss the RoboTIPS project, which is entwining socio-legal and technical responses in its development of an ethical black box (EBB). Black box ‘flight recorders’ are familiar from aviation, and RoboTIPS is developing its EBB for use in social robots to provide data for use in the investigation of accidents and incidents. Crucially, it is not the data alone which is to be relied on in such investigations – the work draws on the important social aspects of investigation surrounding the data. This approach to new notions of responsibility that can emerge through novel configurations of technology, society and governance is being expanded throughout the newly launched Responsible Technology Institute (RTI) at Oxford. I will discuss a few of the RTI’s research projects as examples of embedding responsibility in the design and development of new technologies. The work of all these projects draws on interdisciplinary understandings to establish new ways of considering ‘responsibility’ that can address the challenges outlined above.
Wednesday 27 October 2021, 2pm
Evolution in Standards and Regulation in the Aerospace Domain
This talk reflects on what it feels like to work in a highly regulated industry such as aerospace, and on the radical changes in philosophical approach that have emerged over the last 20 years. It draws on personal experiences from an industrial perspective, as well as anecdotal feedback from those in the regulator community, gleaned through work on cooperative initiatives to develop the standards, regulations and associated guidance for those applying them in both the regulator and regulated communities.
Wednesday 13th October 2021, 2pm
Human Autonomy and the Governance of Autonomous Systems
What does it take to protect human autonomy? Principles of ‘human autonomy’ abound in current guidelines on the responsible development of AI. Yet, these guidelines exhibit substantial differences in what they take the risks to human autonomy to be. In this talk, I argue that we need to distinguish between different dimensions of autonomy before we can begin addressing these risks. Each dimension will require a distinct set of governance requirements to be implemented by developers and policy bodies.
Wednesday 22 September 2021, 2pm
Overview of TAS Node in Functionality
This talk will give an overview of the work and goals of the TAS node in Functionality, based at the University of Bristol and the Bristol Robotics Laboratory. It will involve an outline of the node’s members, projects, and their related work packages, followed by a more extended discussion of its ‘regulation’ work package.
Wednesday 8th September 2021, 2pm
Assurance 2.0 and impact of AI/ML on regulation
In this talk I will address two themes: one is the impact of AI/ML on regulation, based on an analysis we have done of the UK Nuclear Safety Regime, and the other is the approach we have dubbed Assurance 2.0.
The development of Assurance 2.0 has been driven by the need to assure new AI/ML-based systems, the need for security-informed safety assurance, and the potential for automating assurance. We present Assurance 2.0 as an enabler that supports innovation and continuous incremental assurance. Perhaps unexpectedly, it does so by making assurance more rigorous, with increased focus on the reasoning and evidence employed, and explicit identification of defeaters and counterevidence. I will argue that it also provides a framework within which to focus and assess the work of the TAS nodes.
In terms of the impact of AI/ML on regulation, I will summarize some detailed analysis we have done of the UK Office for Nuclear Regulation’s (ONR) safety assessment principles to identify areas that may be affected by, or support, AI/ML assurance. Changes to regulation could take the form of augmenting the safety assessment principles, creating separate principles that account for, and clarify, AI/ML-specific topics, or including an additional technical assessment guide to cover topics such as data, security, and autonomy frameworks.
Wednesday 14th July 2021, 2pm
What we talk about when we talk about Trustworthy Autonomous Systems
In this talk I will provide an overview of the research within the UKRI Trustworthy Autonomous Systems (TAS) Hub, providing some motivating real-world examples and perspectives to frame a hopefully productive sense of the term TAS. We take a broad view on Autonomous Systems (AS); we view AS as systems involving software applications, machines, and people, that are able to take actions with little or no human supervision (see https://www.tas.ac.uk/our-definitions/). Some Autonomous Systems are already pervasive in society (e.g., algorithmic decision-making) while others are nascent (e.g., autonomous vehicles); and while there are many potential benefits, we unfortunately too often witness the wide-ranging negative consequences when AS ‘go wrong’: from downgrading A-Level results, to spreading hate speech, to wrongful conviction, to fatal accidents. We need expertise crossing a wide range of disciplines to tackle the challenges societies face, including computer science and engineering, the social sciences and humanities, and law and regulation. I will present some of the research within the TAS programme that is starting to address some of these challenges.