Inversion dynamics of class manifolds in deep learning reveals tradeoffs underlying generalization
Osella M.; Valle F.
2024-01-01
Abstract
To achieve near-zero training error in a classification problem, the layers of a feed-forward network have to disentangle the manifolds of data points with different labels to facilitate the discrimination. However, excessive class separation can lead to overfitting, because good generalization requires learning invariant features, which involve some level of entanglement. We report on numerical experiments showing how the optimization dynamics finds representations that balance these opposing tendencies, following a non-monotonic trend. After a fast segregation phase, a slower rearrangement (conserved across datasets and architectures) increases the class entanglement. The training error at the inversion is stable under subsampling and across network initializations and optimizers, which characterizes it as a property of the data structure alone and, only very weakly, of the architecture. The inversion is the manifestation of tradeoffs elicited by well-defined, maximally stable elements of the training set called 'stragglers', which are particularly influential for generalization.

Feed-forward neural networks have become powerful tools in machine learning, but their behaviour during optimization is still not well understood. Ciceri and colleagues find that during optimization, class representations first separate and then rejoin, prompted by specific elements of the training set.
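To make the tracked quantity concrete, the sketch below monitors a simple entanglement proxy during training: the fraction of each point's nearest neighbours, measured in the penultimate-layer representation, that carry a different label (0 means fully segregated classes; higher values mean more entanglement). This is an illustrative stand-in, not the metric used in the paper, and the network, synthetic data and hyperparameters here are hypothetical choices; in such a toy setting one mainly observes the fast segregation phase, while the later inversion reported by the authors emerges on realistic datasets.

```python
# Minimal sketch: track a class-entanglement proxy across training epochs.
# The k-NN label-mixing score below is an illustrative proxy, NOT the
# paper's exact measure; data and architecture are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic two-class dataset (stand-in for a real benchmark).
n, d = 400, 20
X = torch.randn(n, d)
y = (X[:, 0] + 0.5 * torch.randn(n) > 0).long()

model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def entanglement(h, labels, k=10):
    """Fraction of each point's k nearest neighbours (in representation
    space) that carry a different label: 0 = segregated, higher = entangled."""
    dist = torch.cdist(h, h)
    dist.fill_diagonal_(float('inf'))          # exclude self-matches
    nn_idx = dist.topk(k, largest=False).indices   # (n, k) neighbour indices
    mixed = (labels[nn_idx] != labels[:, None]).float()
    return mixed.mean().item()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if epoch % 20 == 0:
        with torch.no_grad():
            h = model[:-1](X)  # penultimate-layer representation
            print(f"epoch {epoch:3d}  loss {loss.item():.3f}  "
                  f"entanglement {entanglement(h, y):.3f}")
```

Plotting the printed entanglement values against epochs gives the kind of curve the abstract describes: a sharp initial drop during segregation and, on richer data, a later rise that marks the inversion.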
| File | Size | Format | |
|---|---|---|---|
| s42256-023-00772-9.pdf (open access; publisher's PDF) | 3.18 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.