LegalAize: Tackling the normative challenges of artificial intelligence and robotics through the secondary rules of law

Ugo Pagallo
2017-01-01

Abstract

A considerable number of studies have been devoted over the past years to stressing the risks, threats, and challenges brought on by the breathtaking advancements of technology in the fields of artificial intelligence (AI) and robotics. The intent of this chapter is to address this set of risks, threats, and challenges from a threefold legal perspective. First, the focus is on the aim of the law to govern the process of technological innovation, and on the different ways or techniques to attain that aim. Second, attention is drawn to matters of legal responsibility, especially in the civilian sector, by taking into account methods of accident control that either cut back on the scale of the activity via, e.g., strict liability rules, or aim to prevent such activities through the precautionary principle. Third, the focus is on the risk that legislation may hinder research in AI and robotics. Since several applications can provide services useful to human well-being, the aim should be to prevent legislators from making individuals think twice before using or producing AI and robots. The overall idea is to flesh out specific secondary legal rules that should allow us to understand what kind of primary legal rules we may need. More particularly, the creation of legally de-regulated, or special, zones for AI and robotics appears a smart way to overcome current deadlocks of the law and to further theoretical frameworks with which we can better appreciate the space of potential systems that avoid undesirable behavior.
Year: 2017
Series: Perspectives in Law, Business and Innovation
Publisher: Springer
Pages: 281-300
ISBN: 978-981-10-5037-4; 978-981-10-5038-1
Keywords: Accident control; Artificial intelligence; Liability; Robotics; Secondary rules


Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1892794
Citations: Scopus 5