Authors:
Paul M. Salmon, Chris Baber, Scott McLean, Neville A. Stanton, Tony Carden, Brandon J. King, Gemma J.M. Read
Abstract:
There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity;
however, as AGI does not yet exist, it is difficult to prospectively identify risks and develop the requisite controls. We
applied the Work Domain Analysis-Broken Nodes (WDA-BN) and Event Analysis of Systemic Teamwork-Broken
Links (EAST-BL) methods to identify potential risks in a future ‘envisioned world’ AGI-based uncrewed combat
aerial vehicle system. The findings suggest five main categories of risk in this context: sub-optimal performance
risks, goal alignment risks, super-intelligence risks, over-control risks, and enfeeblement risks. Two of these
categories, goal alignment risks and super-intelligence risks, have not previously been encountered or addressed
in conventional safety management systems. Whereas most of the identified sub-optimal performance risks can
be managed through existing defence design lifecycle processes, we argue that further work is required to develop
controls to manage the other identified risks. These include controls on AGI developers, controls within the AGI
itself, and broader sociotechnical system controls.