
Apples, oranges, robots: Four misunderstandings in today's debate on the legal status of AI systems

Pagallo U.
2018-01-01

Abstract

Scholars have increasingly discussed the legal status(es) of robots and artificial intelligence (AI) systems over the past three decades; however, the 2017 resolution of the EU Parliament on the 'electronic personhood' of AI robots has reignited the debate and even made it ideological. Against this background, the aim of the paper is twofold. First, the intent is to show how often today's discussion on the legal status(es) of AI systems leads to different kinds of misunderstanding regarding both the legal personhood of AI robots and their status as accountable agents establishing rights and obligations in contracts and business law. Second, the paper claims that whether or not the legal status of AI systems as accountable agents in civil (as opposed to criminal) law may make sense is an empirical issue, which should not be 'politicized'. Rather, a pragmatic approach seems preferable, as shown by methods of competitive federalism and legal experimentation. In the light of the classical distinction between primary and secondary rules of the law, examples of competitive federalism and legal experimentation aim to show how the secondary rules of the law can help us understand what kind of primary rules we may wish for our AI robots. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.
2018
376
2133
20180168
20180176
http://rsta.royalsocietypublishing.org/
Accountability; Artificial intelligence; Legal experimentation; Liability; Robotics
Files in this record:

Pagallo_applesorangesrobot_articolo.pdf
File type: publisher's PDF
Size: 288.01 kB
Format: Adobe PDF
Access: restricted (copy available on request)

Royal Society (the proofs)_clean.pdf
File type: postprint (author's final version)
Size: 295.06 kB
Format: Adobe PDF
Access: open

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1728333
Citations
  • PMC 2
  • Scopus 15
  • Web of Science 12