Innovation-Proof Governance for Military AI? How I Learned to Stop Worrying and Love the Bot

Abstract

Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems (LAWS). Yet ‘killer robots’ hardly exhaust the potentially problematic capabilities that innovation in military AI (MAI) is set to unlock. Governance initiatives narrowly focused on preserving ‘meaningful human control’ over LAWS therefore risk being bypassed by the technological state of the art. This paper proceeds from the question: how can we formulate ‘innovation-proof governance’ approaches that are resilient or adaptive to future developments in military AI? I develop a typology of the ways in which MAI innovation can disrupt existing international legal frameworks. This includes ‘direct’ disruption – as new types of MAI capabilities elude categorization under existing regimes – as well as ‘indirect’ disruption, where new capabilities shift the risk landscape of military AI, or change the incentives or values of the states developing them. After discussing two potential objections to ‘innovation-proof governance’, I explore the advantages and shortcomings of three possible approaches to innovation-proof governance for military AI. While no definitive blueprint is offered, I suggest key considerations for governance strategies that seek to ensure that military AI remains lawful, ethical, stabilizing, and safe.

Original language: English
Journal: Journal of International Humanitarian Legal Studies
Volume: 10
Issue number: 1
Pages (from-to): 129-157
Number of pages: 29
ISSN: 1878-1373
DOIs
Publication status: Published - Jun 2019

Keywords

  • military
  • artificial intelligence
  • arms control
  • governance
