Is Explanation the Real Key Factor for Innovation?

Matteo Baldoni; Cristina Baroglio; Roberto Micalizio; Stefano Tedeschi
2020-01-01

Abstract

Explainability is becoming a key requirement of AI applications. The availability of meaningful explanations of decisions is seen as crucial to ensure a wide range of system properties such as trustability, transparency, robustness, and innovation. Our claim is that this need for explanation is part of a broader problem related to the fact that most of the current architectures lack properly devised channels for collecting and for propagating feedback about decisions and actions: that is, they do not envisage nor support accountability. The aim of this paper is to clarify the differences between the concepts of explainability and accountability, which are often (and wrongly) used interchangeably. We draw a line of thought seeing in accountability a key factor for innovation in AI applications, and we suggest a paradigm shift from a need for explanation to a need for accountability.
Year: 2020
Venue: Italian Workshop on Explainable Artificial Intelligence 2020, Torino, November 25-26
Published in: Proceedings of the Italian Workshop on Explainable Artificial Intelligence, CEUR-WS, Vol. 2742, pp. 87-95
URL: http://ceur-ws.org/Vol-2742/short2.pdf
Authors: Matteo Baldoni, Cristina Baroglio, Roberto Micalizio, Stefano Tedeschi
Files in this product:
2020_XAI.pdf (open access, publisher's PDF, 267.25 kB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1762726
Citations
  • Scopus: 3