Wednesday 22nd June, 2pm
A Principled Ethical Assurance Argument for the Use of AI and Autonomous Systems
Assurance cases are arguments, supported by evidence, that are typically used by safety teams to establish and communicate confidence in a system’s safety properties. One emerging proposal within the trustworthy or ‘responsible’ AI/AS research community is to extend assurance cases to a much broader range of ethical and normative properties, such as respect for human autonomy and fairness. This is a promising method for achieving justified confidence that a system’s use in its intended context will be ethically acceptable. We aim to bring the idea of ethical assurance arguments to life. First, we ground the enterprise of ethical assurance in four ethical principles, adapted from biomedical ethics. Second, we propose using assurance cases to translate these ethical principles into practice. Third, we provide a worked example of a principled ethical assurance argument pattern, presented graphically in Goal Structuring Notation.