
AI: from rational agents to socially responsible agents

Vetrò, Antonio; Santangelo, Antonio Dante; Beretta, Elena
2019-01-01

Abstract

Paper category: Conceptual paper.

Purpose: The paper analyses the limitations of the mainstream definition of Artificial Intelligence (AI) as a rational agent, which currently drives the development of most AI systems. The authors advocate the need for a wider range of guiding ethical principles for designing more socially responsible AI agents.

Design/methodology/approach: The authors follow an experience-based line of reasoning by argument to identify the limitations of the mainstream definition of AI, which is based on the concept of rational agents that select, among their designed actions, those that produce the maximum expected utility in the environment in which they operate. Then, taking as an example the problem of bias in the data used by AI, a small proof of concept with real datasets is provided.

Findings: The authors observe that bias measurements on the datasets are sufficient to demonstrate potential risks of discrimination when those data are used in AI rational agents. Starting from this example, the authors discuss other open issues connected to AI rational agents and provide a few general ethical principles derived from the experience of the White Paper "Artificial Intelligence at the service of the citizen" (AgID, 2018).

Originality/value: The paper contributes to the scientific debate on the governance and ethics of Artificial Intelligence with a novel perspective, taken from an analysis of the mainstream definition of AI.
2019, pp. 291-304
https://www.emerald.com/insight/content/doi/10.1108/DPRG-08-2018-0049/full/html
Keywords: Artificial Intelligence, Data Ethics, Digital technologies and society, Moral philosophy
Vetrò, Antonio; Santangelo, Antonio Dante; Beretta, Elena; De Martin, Juan Carlos
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1732484
Citations
  • PMC: n/a
  • Scopus: 21
  • Web of Science: 14