Even angels need the rules: AI, roboethics, and the law

Pagallo U.
2016-01-01

Abstract

Over the past years, scholars have increasingly debated the reasons why we should, or should not, deploy specimens of AI technology, such as robots, on the battlefield, in the market, or in our homes. Amongst the moral theories that discuss what is right, or what is wrong, about a robot's behaviour, virtue ethics, rather than utilitarianism or deontology, offers a fruitful approach to the debate. The context sensitivity and bottom-up methodology of virtue ethics fit hand in glove with the unpredictability of robotic behaviour, for they involve trial-and-error learning of what makes the behaviour of a given robot good, or bad. However, even advocates of virtue ethics admit the limits of their approach: all in all, the more complex societies become, the less effective shared virtues are, and the more we need rules on rights and duties. By reversing the Kantian idea that a nation of devils can establish a state of good citizens, provided they "have understanding," we can say that even a nation of angels would need the law in order to further their coordination and collaboration. Accordingly, the aim of this paper is not only to show that a set of perfect moral agents, namely a bunch of angelic robots, would need rules, but also that no single moral theory can instruct us as to how to legally bind our artificial agents through AI research and robotic programming.
2016
Twenty-second European Conference on Artificial Intelligence
The Hague, Netherlands
29 August 2016 - 2 September 2016
Proceedings of the Twenty-second European Conference on Artificial Intelligence
IOS Press
Volume 285, pp. 209-215
9781614996712
http://www.iospress.nl/loadtop/load.php?isbn=19057415
Files in this record:
978-1-61499-672-9-209.pdf (Publisher's PDF, Open Access, 169.58 kB)
Description: Proceedings

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1728347
Citations
  • Scopus: 8
  • Web of Science: 4