What do AIs think about the AI Act?
Umberto Nizza
In press
Abstract
This paper investigates the perspectives of four large language models (Llama, ChatGPT, Gemini, and Claude) on the European Union's regulation on Artificial Intelligence (AI Act). Through a series of semi-structured interviews, the study aims to uncover the concerns, sentiments, and viewpoints these AI entities exhibit regarding the regulatory landscape outlined in the AI Act. The analysis employs text-analysis techniques, including word-cloud visualization and sentiment analysis, to explore the prevalent themes, normative considerations, apprehensions, and perceived implications of the regulation across the different models. The findings suggest a spectrum of responses: some AIs express reservations about potential constraints on innovation and development, while others see the regulation as a constructive guide promoting responsible AI practices. The paper offers a unique lens into the AI community's perspectives, highlighting areas of alignment with regulatory principles as well as factors that provoke reservations or opposition. It thereby contributes to a broader understanding of AI entities' views on emerging regulatory efforts and the complex interplay between innovation and societal safeguards in the rapidly evolving field of artificial intelligence.