
Fair Pairwise Learning to Rank

Mattia Cerrato; Roberto Esposito
2020-01-01

Abstract

Ranking algorithms based on Neural Networks have been a topic of recent research. Ranking is employed in everyday applications such as product recommendations, search results, and even finding good candidates for hire. However, Neural Networks are mostly opaque tools, and it is hard to evaluate why, for instance, a specific candidate was not considered. Therefore, for neural-based ranking methods to be trustworthy, it is crucial to guarantee that the outcome is fair and that the decisions do not discriminate against people according to sensitive attributes such as gender, sexual orientation, or ethnicity. In this work, we present a family of fair pairwise learning to rank approaches based on Neural Networks, which are able to produce balanced outcomes for underprivileged groups and, at the same time, build fair representations of the data, i.e., new vector representations that are uncorrelated with a sensitive attribute. We compare our approaches to recent work on fair ranking and evaluate them using both relevance and fairness metrics. Our results show that the introduced fair pairwise ranking methods compare favorably to other methods when considering the fairness/relevance trade-off.
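The two ingredients named in the abstract — a pairwise learning-to-rank objective and representations uncorrelated with a sensitive attribute — can be sketched as follows. This is purely an illustrative NumPy sketch, not the authors' actual method: the RankNet-style pairwise logistic loss and the Pearson-correlation fairness penalty, as well as the function names `pairwise_rank_loss` and `correlation_penalty`, are assumptions chosen for illustration.

```python
import numpy as np

def pairwise_rank_loss(scores, labels):
    """RankNet-style pairwise logistic loss (illustrative, not the paper's
    exact objective): for every pair (i, j) with labels[i] > labels[j],
    penalize the model when scores[i] is not above scores[j]."""
    loss, n_pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:
                # P(item i ranked above item j) under the logistic model
                p_ij = 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))
                loss += -np.log(p_ij)
                n_pairs += 1
    return loss / max(n_pairs, 1)

def correlation_penalty(representations, sensitive):
    """Fairness penalty (one possible choice): mean absolute Pearson
    correlation between each representation dimension and the sensitive
    attribute. Zero when the representation carries no linear trace of it."""
    s = (sensitive - sensitive.mean()) / (sensitive.std() + 1e-8)
    z = representations - representations.mean(axis=0)
    z = z / (representations.std(axis=0) + 1e-8)
    return np.abs(z.T @ s / len(s)).mean()
```

A hypothetical training objective would then combine the two terms, e.g. `pairwise_rank_loss(scores, labels) + lam * correlation_penalty(z, s)`, trading off relevance against fairness via the weight `lam`.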
2020
The 7th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2020)
Sydney, Australia, 6-9 October 2020
IEEE
CFP20DSB-ART
pp. 729-738
ISBN: 978-1-7281-8206-3
Keywords: Fairness, Neural Networks, Ranking
Mattia Cerrato, Marius Köppel†, Alexander Segner†, Roberto Esposito, Stefan Kramer†
Files in this item:
fair_ranking.pdf (Adobe PDF, 442.38 kB) — Open access
Description: Main article
File type: Preprint (first draft)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1763894
Citations
  • Scopus: 2
  • Web of Science (ISI): 2