TY - GEN
T1 - Two Lessons from Nuclear Arms Control for the Responsible Governance of Military Artificial Intelligence
AU - Maas, Matthijs Michiel
N1 - Maas, Matthijs M., “Two Lessons from Nuclear Arms Control for the Responsible Governance of Military Artificial Intelligence” in Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt & Marco Nørskov (eds), Envisioning Robots in Society - Power, Politics, and Public Space: Proceedings of Robophilosophy 2018 / TRANSOR 2018, IOS Press, Frontiers in Artificial Intelligence and Applications (2018), pp. 347-356.
PY - 2018
AB - New technologies which offer potential strategic or military advantages to state principals can disrupt previously stable power distributions and global governance arrangements. Artificial intelligence is one such critical technology, with many anticipating ‘arms races’ amongst major powers to weaponize AI for widespread use on the battlefield. This raises the question of whether, or how, one may design stable and effective global governance arrangements to contain or channel the militarization of AI. Two perceptions implicit in many debates are that (i) “AI arms races are inevitable,” and (ii) “states will be deaf to calls for governance where that conflicts with perceived interests.” Drawing a parallel with historical experiences in nuclear arms control, I argue that this history suggests that (1) horizontal proliferation and arms races are not inevitable, but may be slowed, channeled or even averted; and that (2) small communities of experts, appropriately mobilized, can catalyze arms control and curb vertical proliferation of AI weapons.
DO - 10.3233/978-1-61499-931-7-347
M3 - Article in proceedings
SN - 978-1-61499-930-0
T3 - Frontiers in Artificial Intelligence and Applications
SP - 347
EP - 356
BT - Envisioning Robots in Society - Power, Politics, and Public Space
PB - IOS Press
CY - Amsterdam
ER -