This content has been archived. It may no longer be relevant.
Wednesday 8th September 2021, 2pm
Assurance 2.0 and impact of AI/ML on regulation
In this talk I will address two themes: one is the impact of AI/ML on regulation, based on an analysis we have done of the UK Nuclear Safety Regime, and the other is the approach we have dubbed Assurance 2.0.
The development of Assurance 2.0 has been driven by the need to assure new AI/ML-based systems, the need for security-informed safety assurance, and the potential for automating assurance. We see Assurance 2.0 as an enabler that supports innovation and continuous incremental assurance. Perhaps unexpectedly, it does so by making assurance more rigorous, with an increased focus on the reasoning and evidence employed, and explicit identification of defeaters and counterevidence. I will argue that it also provides a framework within which to focus and assess the work of the TAS nodes.
Regarding the impact of AI/ML on regulation, I will summarize a detailed analysis we have done of the UK ONR’s safety assessment principles to identify areas that may be affected by, or support, AI/ML assurance. Changes to regulation could take the form of augmenting the safety assessment principles, creating separate principles that account for, and clarify, AI/ML-specific topics, or including an additional technical assessment guide to cover topics such as data, security, and autonomy frameworks.