
Invariant Representations with Stochastically Quantized Neural Networks

Cerrato M.; Koppel M.; Esposito R.; Kramer S.
2023-01-01

Abstract

Representation learning algorithms offer the opportunity to learn invariant representations of the input data with regard to nuisance factors. Many authors have leveraged such strategies to learn fair representations, i.e., vectors where information about sensitive attributes is removed. These methods are attractive as they may be interpreted as minimizing the mutual information between a neural layer’s activations and a sensitive attribute. However, the theoretical grounding of such methods relies either on the computation of infinitely accurate adversaries or on minimizing a variational upper bound of a mutual information estimate. In this paper, we propose a methodology for direct computation of the mutual information between neurons in a layer and a sensitive attribute. We employ stochastically-activated binary neural networks, which lets us treat neurons as random variables. Our method is therefore able to minimize an upper bound on the mutual information between the neural representations and a sensitive attribute. We show that this method compares favorably with the state of the art in fair representation learning and that the learned representations display a higher level of invariance compared to full-precision neural networks.
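As a high-level illustration of the idea described in the abstract — not the authors' exact training objective — treating a stochastically-activated binary neuron Z as a Bernoulli random variable makes the mutual information I(Z; S) with a binary sensitive attribute S exactly computable in closed form, since both variables take only two values. The sketch below (function name and interface are hypothetical) computes this quantity from per-sample activation probabilities:

```python
import numpy as np

def binary_entropy(p):
    # Entropy of a Bernoulli(p) variable in nats, with 0*log(0) treated as 0.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def neuron_sensitive_mi(act_probs, s):
    """Exact I(Z; S) for one stochastic binary neuron Z ~ Bernoulli(act_probs[i])
    and a binary sensitive attribute s[i] in {0, 1}, with the input marginalized
    over the empirical sample."""
    act_probs = np.asarray(act_probs, dtype=float)
    s = np.asarray(s)
    p_s1 = s.mean()                        # P(S = 1)
    p_z1 = act_probs.mean()                # P(Z = 1)
    p_z1_s0 = act_probs[s == 0].mean()     # P(Z = 1 | S = 0)
    p_z1_s1 = act_probs[s == 1].mean()     # P(Z = 1 | S = 1)
    # I(Z; S) = H(Z) - H(Z | S)
    return binary_entropy(p_z1) - ((1 - p_s1) * binary_entropy(p_z1_s0)
                                   + p_s1 * binary_entropy(p_z1_s1))
```

When the activation probabilities are identical across sensitive groups the quantity is zero, and it is bounded above by log 2 for a single binary neuron; minimizing such a term during training would push the neuron toward group-invariant behavior. The paper's method operates on full layers of such neurons and minimizes an upper bound rather than this single-neuron quantity.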
Year: 2023
Conference: 37th AAAI Conference on Artificial Intelligence (AAAI 2023), USA
Published in: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Publisher: AAAI Press
Volume 37, pages 6962–6970
Cerrato M.; Koppel M.; Esposito R.; Kramer S.
Files in this record:
Invariant Representations with Stochastically Quantized NN.pdf — Main manuscript, published PDF (open access), 185.97 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1948081
Citations
  • Scopus 2