Innovation-Proof Governance for Military AI? How I Learned to Stop Worrying and Love the Bot

Abstract

Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI remains focused on preventing the use of lethal autonomous weapons systems (LAWS). Yet ‘killer robots’ hardly exhaust the potentially problematic capabilities that innovation in military AI (MAI) is set to unlock. Governance initiatives narrowly focused on preserving ‘meaningful human control’ over LAWS therefore risk being bypassed by the technological state of the art. This paper takes as its starting point the question: how can we formulate ‘innovation-proof governance’ approaches that are resilient or adaptive to future developments in military AI? I develop a typology of the ways in which MAI innovation can disrupt existing international legal frameworks. This includes ‘direct’ disruption – as new types of MAI capabilities elude categorization under existing regimes – as well as ‘indirect’ disruption, where new capabilities shift the risk landscape of military AI, or change the incentives or values of the states developing them. After discussing two potential objections to ‘innovation-proof governance’, I explore the advantages and shortcomings of three possible approaches to innovation-proof governance for military AI. While no definitive blueprint is offered, I suggest key considerations for governance strategies that seek to ensure that military AI remains lawful, ethical, stabilizing, and safe.

Original language: English
Journal: Journal of International Humanitarian Legal Studies
Volume: 10
Issue number: 1
Pages (from-to): 129-157
Number of pages: 29
ISSN: 1878-1373
DOI:
Status: Published - Jun 2019

