Inversion dynamics of class manifolds in deep learning reveals tradeoffs underlying generalization

Osella M.; Valle F.
2024-01-01

Abstract

To achieve near-zero training error in a classification problem, the layers of a feed-forward network have to disentangle the manifolds of data points with different labels to facilitate the discrimination. However, excessive class separation can lead to overfitting because good generalization requires learning invariant features, which involve some level of entanglement. We report on numerical experiments showing how the optimization dynamics finds representations that balance these opposing tendencies with a non-monotonic trend. After a fast segregation phase, a slower rearrangement (conserved across datasets and architectures) increases the class entanglement. The training error at the inversion is stable under subsampling and across network initializations and optimizers, which characterizes it as a property solely of the data structure and (very weakly) of the architecture. The inversion is the manifestation of tradeoffs elicited by well-defined and maximally stable elements of the training set called 'stragglers', which are particularly influential for generalization.

Feed-forward neural networks have become powerful tools in machine learning, but their behaviour during optimization is still not well understood. Ciceri and colleagues find that during optimization, class representations first separate and then rejoin, prompted by specific elements of the training set.
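The phenomenon the abstract describes can be probed by tracking how segregated the class manifolds are in a hidden-layer representation over the course of training. The following is a minimal, purely illustrative sketch (not the authors' code): it assumes PyTorch and MNIST, and uses a crude separation score (mean between-class distance over mean within-class distance) as a stand-in for the entanglement measure, which is not specified in this record.

```python
# Illustrative sketch: monitor a simple class-separation score of a
# hidden representation during training. The metric here is an
# assumption; the paper may quantify entanglement differently.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def separation_score(feats, labels):
    # Ratio of mean between-class distance to mean within-class
    # distance: higher values = more segregated class manifolds.
    d = torch.cdist(feats, feats)                      # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    within = d[same & off_diag].mean()
    between = d[~same].mean()
    return (between / within).item()

train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=256, shuffle=True)

# Small MLP; the penultimate activations serve as the representation.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU())
head = nn.Linear(128, 10)
opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()),
                      lr=0.1)
loss_fn = nn.CrossEntropyLoss()

probe_x, probe_y = next(iter(loader))  # fixed probe batch for the metric

for epoch in range(30):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(head(model(x)), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        score = separation_score(model(probe_x), probe_y)
    print(f"epoch {epoch:2d}  separation {score:.3f}")
```

Under the paper's account, such a score would rise quickly during the initial segregation phase and then partially reverse; where the turnover occurs, and how it depends on the data, is what the article analyses.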
Year: 2024
Volume: 6
Issue: 1
Pages: 40-47
Ciceri S.; Cassani L.; Osella M.; Rotondo P.; Valle F.; Gherardi M.
Files in this record:

File: s42256-023-00772-9.pdf
Access: open access
File type: publisher's PDF
Size: 3.18 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2020830
Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science: 1