Two Lessons from Nuclear Arms Control for the Responsible Governance of Military Artificial Intelligence


Abstract

New technologies that offer potential strategic or military advantages to state principals can disrupt previously stable power distributions and global governance arrangements. Artificial intelligence is one such critical technology, with many anticipating ‘arms races’ amongst major powers to weaponize AI for widespread use on the battlefield. This raises the question of whether, or how, stable and effective global governance arrangements can be designed to contain or channel the militarization of AI. Two perceptions implicit in many debates are that (i) “AI arms races are inevitable,” and (ii) “states will be deaf to calls for governance where that conflicts with perceived interests.” Drawing on the historical experience of nuclear arms control, I argue that this history suggests that (1) horizontal proliferation and arms races are not inevitable, but may be slowed, channeled, or even averted; and that (2) small communities of experts, appropriately mobilized, can catalyze arms control and curb the vertical proliferation of AI weapons.

Original language: English
Title: Envisioning Robots in Society - Power, Politics, and Public Space : Proceedings of Robophilosophy 2018 / TRANSOR 2018
Place of publication: Amsterdam
Publisher: IOS Press
Publication date: 2018
Pages: 347-356
ISBN (Print): 978-1-61499-930-0
DOI
Status: Published - 2018
Series name: Frontiers in Artificial Intelligence and Applications
Number: 11
ISSN: 0922-6389
