How do people react to ChatGPT's unpredictable behavior? Anthropomorphism, uncanniness, and fear of AI: A qualitative study on individuals’ perceptions and understandings of LLMs’ nonsensical hallucinations

Rapp A.; Di Lodovico C.; Di Caro L.
2025-01-01

Abstract

Large Language Models (LLMs) have shown impressive capabilities in producing texts whose quality and fluency are similar to those of human-written texts. Despite their increasing use, however, many aspects of how the broader population experiences interaction with LLMs remain underexplored. This study investigates how diverse individuals perceive and account for “nonsensical hallucinations”, namely, an LLM's unpredictable and meaningless behavior provided as a response to a user's request. We asked 20 participants to interact with ChatGPT 3.5 and experience its hallucinations. Through semi-structured interviews, we found that participants with a computer science background or consistent previous use of LLMs interpret unpredictable nonsensical responses as errors, while novices perceive them as the model's autonomous behaviors. Moreover, we discovered that such responses produce an abrupt shift in participants' perceptions and understandings of the LLM's nature. From a soothing and polite entity, ChatGPT becomes either an obscure and unfamiliar “alien” or a human-like being potentially hostile to humankind, also eliciting unsettling feelings that may unveil an underlying fear of Artificial Intelligence. The study contributes to the literature on how people react to the unfamiliarity of a technology that may be perceived as alien and yet extremely human-like, generating “uncanny effects,” as well as to research on the anthropomorphizing of technology.
2025; volume 198; pp. 1-21
Keywords: AI; Anthropomorphizing; Generative AI; Humanness; LLM; Uncanny valley
Files in this record:
File: 2025-IJHCS-b.pdf
Access: open access
File type: publisher's PDF
Size: 2.99 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2096014
Citations
  • PMC: not available
  • Scopus: 8
  • Web of Science (ISI): 7