Wednesday 18th May 2022, 2pm
Talk details to follow.
Wednesday 6th July 2022, 2pm
Risk-based Approaches to Trustworthiness in Medical Devices
Our fieldwork explores the practices surrounding the development and regulation of trustworthy medical devices. Processes in this domain take a risk-based approach that involves tightly defined product specifications, testing, and clinical trials, further complemented by quality management procedures throughout the product’s life span. The certified accuracy of a device must be reproducible in manufacture and testing, and sustained in service, meaning that software components, where they exist, are routinely subjected to change-freeze and locked into a corresponding hardware specification. Similarly, the safety integrity of a device’s design must also be assured through documented risk assessment, mitigation, testing, and post-market surveillance and reporting. Our findings reflect on the response of this well-established regulatory approach to the prospective use of medical AI and ‘software as a medical device’, and the implications this has for the trustworthiness of autonomous systems in other high-risk domains.
Wednesday 8th June 2022, 2pm
Creative Methods for More Secure, More Ethical, Autonomous Systems Design
Abstract to follow.
Wednesday 20th April 2022, 2pm
An Assurance Framework for Maritime Autonomous Systems
NPL, in collaboration with leading international and national maritime organizations, has identified the importance of developing an internationally accepted assurance framework for maritime autonomy. This presentation will show how such a framework must bring together a diverse range of competencies and expertise, including the development of reliable virtual testing environments alongside the associated tools, test methodologies, data standards, curated assets and infrastructure needed to assure maritime autonomous systems (MAS) at scale. Such a framework is required to ensure the safe and reliable operation of MAS, as well as to enable the integration and interoperability of these systems with those of international partners and operators.
Wednesday 22nd June 2022, 2pm
A Principled Ethical Assurance Argument for the Use of AI and Autonomous Systems
Assurance cases are arguments, supported by evidence, that are typically used by safety teams to establish and communicate confidence in a system’s safety properties. One emerging proposal within the trustworthy or ‘responsible’ AI/AS research community is to extend the tool of assurance cases to a much broader range of ethical and normative properties, such as respect for human autonomy and fairness. This looks to be a promising method to achieve justified confidence that a system’s use in its intended context will be ethically acceptable. We aim to bring the idea of ethical assurance arguments to life. First, we ground the enterprise of ethical assurance in four ethical principles, adapted from biomedical ethics. Second, we propose the approach of using assurance cases to translate these ethical principles into practice. Third, we provide a worked example of a principled ethical assurance argument pattern, presented graphically in Goal Structuring Notation.
Wednesday 26th January 2022
Simulators: where they fit, where they do not, and why
Wednesday 8th December 2021
Responsible Robotics: What could possibly go wrong?
Researchers and developers in the Digital Economy (DE) face significant challenges thrown up by uncertainty about where responsibilities lie for tackling harmful outcomes of technological innovation. There are controversies about who should, for example, monitor the actions of algorithms, or remedy unfairness and bias in machine learning. These questions of responsibility are often driven by economic considerations, but the issues run much deeper and are implicated in a wider reshaping of institutional, cultural and democratic responses to governing technology. How to divide responsibilities in the DE between developers, civic authorities, users of the technology and automated systems represents a series of as yet unsolved problems, and challenges remain over how to embed responsibility into the processes of technological design and development. Furthermore, as the pace of innovation continues to grow, the tension between profit and responsibility grows stronger; users can feel increasingly ill at ease in the digital economy, with values of trust, identity, privacy and security potentially undermined.
In this talk, I will discuss the RoboTIPS project, which is entwining socio-legal and technical responses in its development of an ethical black box (EBB). Black box ‘flight recorders’ are familiar from aviation, and RoboTIPS is developing its EBB for use in social robots to provide data for the investigation of accidents and incidents. Crucially, it is not the data alone which is to be relied on in such investigations – the work draws on the important social aspects of investigation surrounding the data. This approach to new notions of responsibility that can emerge through novel configurations of technology, society and governance is being expanded throughout the newly launched Responsible Technology Institute (RTI) at Oxford. I will discuss a few of the RTI’s research projects as examples of embedding responsibility in the design and development of new technologies. The work of all these projects draws on interdisciplinary understandings to establish new ways of considering ‘responsibility’ that can address the challenges outlined above.
Wednesday 27th October 2021, 2pm
Evolution in Standards and Regulation in the Aerospace Domain
This talk reflects on what it feels like to work in a highly regulated industry such as aerospace, and on the radical changes in philosophical approach that have emerged over the last 20 years. It draws on personal experiences from an industrial perspective, as well as anecdotal feedback from the regulator community, gleaned through work on cooperative initiatives to develop the standards, regulations and associated guidance for those applying them in both the regulator and regulated communities.
Wednesday 13th October 2021, 2pm
Human Autonomy and the Governance of Autonomous Systems
What does it take to protect human autonomy? Principles of ‘human autonomy’ abound in current guidelines on the responsible development of AI. Yet these guidelines exhibit substantial differences in what they take the risks to human autonomy to be. In this talk, I argue that we need to distinguish between different dimensions of autonomy before we can begin addressing these risks. Each dimension will require a distinct set of governance requirements to be implemented by developers and policy bodies.
Wednesday 22nd September 2021, 2pm
Overview of TAS Node in Functionality
This talk will give an overview of the work and goals of the TAS node in Functionality, based at the University of Bristol and the Bristol Robotics Laboratory. It will involve an outline of the node’s members, projects, and their related work packages, followed by a more extended discussion of its ‘regulation’ work package.