TY - CPAPER
T1 - Regulating for ‘normal AI accidents’
T2 - Operational lessons for the responsible governance of artificial intelligence deployment
AU - Maas, Matthijs Michiel
N1 - Matthijs M. Maas. 2018. Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of artificial intelligence deployment. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans, LA. https://doi.org/10.1145/3278721.3278766
PY - 2018
AB - New technologies, particularly those which are deployed rapidly across sectors or which must operate under competitive conditions, can disrupt previously stable technology governance regimes. This creates a precarious need to balance caution against performance while exploring the resulting ‘safe operating space’. This paper argues that Artificial Intelligence is one such critical technology, the responsible deployment of which is likely to prove especially complex, because even narrow AI applications often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments. Such systems are therefore prone to ‘normal accident’-type failures, which can cascade rapidly and are hard to contain, or even to detect, in time. Legal and governance approaches to the deployment of AI will have to reckon with the specific causes and features of such ‘normal accidents’. While this suggests that large-scale, cascading errors in AI systems are inevitable, an examination of the operational features that lead technologies to exhibit ‘normal accidents’ enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safe(r) deployment of AI systems. This may help enhance the safety and security of these systems in the public sphere, in both the short and the long term.
M3 - Article in proceedings
BT - Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans, LA
PB - Association for Computing Machinery (ACM)
ER -