Recent Studies of XAI - Review
Hu Z. F.; Kuflik T.
2021-01-01
Abstract
In recent years, there has been growing concern about the risk of bias and discrimination in algorithmic systems, a topic that has received significant attention in the research community. To ensure a system's fairness, various methods and techniques have been developed to assess and mitigate potential biases. Such methods, also known as "Formal Fairness", examine various aspects of a system's reasoning mechanism and outcomes, with techniques ranging from local explanations (at the feature level) to visual explanations (saliency maps). An equally important aspect is users' perception of the system's fairness. Even if a decision system is provably "fair", users who find it difficult to understand how its decisions were made will refrain from trusting, accepting, and ultimately using the system altogether. This has raised the issue of "Perceived Fairness", which looks at means of reassuring users of a system's trustworthiness. In that sense, providing users with some form of explanation of why and how certain outcomes were reached is highly relevant, especially as reasoning mechanisms grow in complexity and computational power. Recent studies suggest a plethora of explanation types. The current work reviews recent progress in explaining systems' reasoning and outcomes, and categorizes and presents it as a reference on the state of the art in fairness-related explanations.
File | Size | Format | Access
---|---|---|---
3450614.3463354.pdf (publisher's PDF) | 1.8 MB | Adobe PDF | Restricted access (request a copy)
FairUMAP2021_Recent_studies_of_XAI___a_review(3).pdf (author's final postprint) | 1.77 MB | Adobe PDF | Open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.