Two Lessons from Nuclear Arms Control for the Responsible Governance of Military Artificial Intelligence


Abstract

New technologies which offer potential strategic or military advantages to state principals can disrupt previously stable power distributions and global governance arrangements. Artificial intelligence is one such critical technology, with many anticipating ‘arms races’ amongst major powers to weaponize AI for widespread use on the battlefield. This raises the question of whether, and how, one may design stable and effective global governance arrangements to contain or channel the militarization of AI. Two perceptions implicit in many debates are that (i) “AI arms races are inevitable,” and (ii) “states will be deaf to calls for governance where that conflicts with perceived interests.” Drawing a parallel with the history of nuclear arms control, I argue that this history suggests that (1) horizontal proliferation and arms races are not inevitable, but may be slowed, channeled, or even averted; and that (2) small communities of experts, appropriately mobilized, can catalyze arms control and curb the vertical proliferation of AI weapons.

Original language: English
Title of host publication: Envisioning Robots in Society - Power, Politics, and Public Space: Proceedings of Robophilosophy 2018 / TRANSOR 2018
Place of publication: Amsterdam
Publisher: IOS Press
Publication date: 2018
Pages: 347-356
ISBN (Print): 978-1-61499-930-0
DOIs
Publication status: Published - 2018
Series: Frontiers in Artificial Intelligence and Applications
Number: 11
ISSN: 0922-6389
