
TEP-ones: A simple yet effective approach for transferability estimation of pruned backbones

Spadaro G.; Bragagnolo A.; Renzulli R.; Grangetto M.; Fiandrotti A.
2026-01-01

Abstract

In deep learning, the conventional transfer learning paradigm involves fine-tuning a model pre-trained on a complex source task to adapt it to a simpler target task, capitalizing on abundant training data. Concurrently, the paradigm of neural network pruning has emerged as a powerful strategy for enhancing model efficiency, reducing complexity, and optimizing resource utilization. This paper focuses on pruned model transferability estimation for resource-constrained scenarios, where the goal is to rank the performance of pruned pre-trained models on a downstream task without fine-tuning. To this end, from a formal analysis of the intra-class mutual information between samples belonging to the same target class, we observe that, as pruning increases, a sweet phase naturally arises, where the model benefits from better features at the encoder's output. From this, we derive a Transferability Estimation for Pruned Backbones (TEP-ones) that eases the choice of the best candidate pruned model for transfer learning, without the need to train the classifier. We publicly release the code and pre-trained pruned models at https://github.com/EIDOSLAB/TEP-ones.
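The abstract's core idea — ranking frozen pruned encoders by intra-class statistics of their output features, with no classifier training — can be illustrated with a simplified stand-in score. The sketch below uses average intra-class cosine similarity as a rough proxy; the function name, the toy encoders `feats_a`/`feats_b`, and the synthetic data are all illustrative assumptions, not the paper's actual TEP-ones estimator:

```python
import numpy as np

def intra_class_score(features, labels):
    """Average pairwise cosine similarity between features of the same
    class: a simplified proxy for the intra-class statistics discussed
    in the abstract (NOT the paper's exact TEP-ones estimator)."""
    # L2-normalize so dot products become cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    per_class = []
    for c in np.unique(labels):
        fc = f[labels == c]
        n = len(fc)
        sim = fc @ fc.T  # n x n cosine similarities (diagonal is 1)
        per_class.append((sim.sum() - n) / (n * (n - 1)))  # off-diagonal mean
    return float(np.mean(per_class))

# Toy ranking of two hypothetical pruned encoders by feature quality:
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 10)
feats_a = rng.normal(size=(20, 8)) + 2.0 * labels[:, None]  # class-separated
feats_b = rng.normal(size=(20, 8))                          # unstructured
scores = {"a": intra_class_score(feats_a, labels),
          "b": intra_class_score(feats_b, labels)}
ranked = sorted(scores, key=scores.get, reverse=True)  # best candidate first
```

In this toy setup the encoder whose features cluster by class receives the higher score, matching the intuition that a pruned backbone producing better-separated target-class features is the stronger transfer candidate.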
2026; Volume 668; Pages 1–13
https://www.sciencedirect.com/science/article/pii/S0925231225028814
Pruning; Transfer learning; Transferability estimation
Spadaro G.; Bragagnolo A.; Renzulli R.; Grangetto M.; Giraldo J.H.; Fiandrotti A.; Tartaglione E.
Files in this record:

NEUROCOMPUTING_TNNLS_NeurIPS_2024_TEPOnes.pdf

Open access

File type: POSTPRINT (final author's version)
Size: 7.64 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2116793
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0