
Reframing Deception for Human-Centered AI

Umbrello, Steven (co-first author); Natale, Simone (co-first author)

2024-01-01

Abstract

The philosophical and legal literature on artificial intelligence (AI) has explored the ethical implications of these systems and the values they affect. One aspect that has been only partially explored, however, is the role of deception. Owing to the negative connotations of the term, research in AI and Human-Computer Interaction (HCI) has mainly invoked deception to describe exceptional situations in which the technology either does not work or is used for malicious purposes. Recent theoretical and historical work, however, has shown that deception is a more structural component of AI than is usually acknowledged. AI systems that communicate with users forcefully invite reactions such as attributions of gender, personality, and empathy, even in the absence of malicious intent, and often with potentially positive or functional effects on the interaction. This paper aims to operationalise the Human-Centred AI (HCAI) framework to develop the implications of this body of work for practical approaches to AI ethics in HCI and design. To achieve this goal, we take up the analytical distinction between “banal” and “strong” deception, originally proposed in theoretical and historical scholarship on AI, as a starting point for ethical reflections that equip designers and developers with practical ways to address the problems raised by the complex relationship between deception and communicative AI. The paper considers how HCAI can be applied to conversational AI (CAI) systems so that they are designed to harness banal deception for social good while avoiding its potential risks.
Keywords: deception, artificial intelligence, AI, human-centered AI, HCAI, design, applied ethics
File: s12369-024-01184-4.pdf (editorial PDF, Adobe PDF format, 1.47 MB; restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2028321