Regulating for ‘normal AI accidents’: Operational lessons for the responsible governance of artificial intelligence deployment

Abstract

New technologies, particularly those which are deployed rapidly across sectors, or which have to operate in competitive conditions, can disrupt previously stable technology governance regimes. This creates a precarious need to balance caution against performance while exploring the resulting ‘safe operating space’. This paper argues that Artificial Intelligence is one such critical technology, the responsible deployment of which is likely to prove especially complex, because even narrow AI applications often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments. This makes such systems prone to ‘normal accident’-type failures, which can cascade rapidly and are hard to contain, or even to detect, in time. Legal and governance approaches to the deployment of AI will have to reckon with the specific causes and features of such ‘normal accidents’. While this suggests that large-scale, cascading errors in AI systems are inevitable, an examination of the operational features that lead technologies to exhibit ‘normal accidents’ enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safe(r) deployment of AI systems. This may help enhance the safety and security of these systems in the public sphere, in both the short and the long term.
Original language: English
Title of host publication: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans
Publisher: Association for Computing Machinery (ACM)
Publication status: Accepted/In press - 2018
