BatMan-CLR: Making Few-Shots Meta-learners Resilient Against Label Noise

Birke, Robert;
2025-01-01

Abstract

The negative impact of label noise is well studied in classical supervised learning yet remains an open research question in meta-learning. Meta-learners aim to adapt to unseen tasks by learning a good initial model in meta-training and fine-tuning it to new tasks during meta-testing. In this paper, we present an extensive analysis of the impact of label noise on the performance of meta-learners, specifically gradient-based N-way K-shot learners. We show that the accuracy of Reptile, iMAML, and foMAML drops by up to 34% when meta-training is affected by label noise on three representative datasets: Omniglot, CifarFS, and MiniImageNet. To strengthen resilience against label noise, we propose two sampling techniques, namely manifold (Man) and batch manifold (BatMan), which transform the noisy supervised learners into semi-supervised learners to increase the utility of noisy labels. We construct N-way 2-contrastive-shot tasks through augmentation, learn the embedding via a contrastive loss in meta-training, and perform classification through zeroing on the embeddings in meta-testing. We show that our approach can effectively mitigate the impact of meta-training label noise. Even with 60% wrong labels, BatMan and Man limit the meta-testing accuracy drop to 2.5, 9.4, and 1.1 percentage points with existing meta-learners across Omniglot, CifarFS, and MiniImageNet, respectively. We provide our code online: https://gitlab.ewi.tudelft.nl/dmls/publications/batman-clr-noisy-meta-learning.
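The construction of N-way 2-contrastive-shot tasks described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `augment` function, the linear encoder `W`, and all dimensions are hypothetical stand-ins, and the loss shown is a generic NT-Xent-style contrastive loss over the two augmented views per way.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    # Toy augmentation: additive Gaussian noise (stand-in for crops/flips).
    return x + 0.1 * rng.standard_normal(x.shape)

def build_contrastive_task(samples, rng):
    """Turn N samples (one per way) into an N-way 2-contrastive-shot task:
    each sample yields two augmented views sharing a pseudo-label, so the
    task no longer depends on the (possibly noisy) original labels."""
    views, labels = [], []
    for pseudo_label, x in enumerate(samples):
        views += [augment(x, rng), augment(x, rng)]
        labels += [pseudo_label, pseudo_label]
    return np.stack(views), np.array(labels)

def nt_xent_loss(z, labels, tau=0.5):
    """NT-Xent-style contrastive loss over L2-normalized embeddings."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)    # positives are the paired views only
    return -log_prob[pos].mean()

# 5-way toy task: five 16-d "images", two views each -> 10 contrastive shots.
samples = [rng.standard_normal(16) for _ in range(5)]
views, labels = build_contrastive_task(samples, rng)
W = rng.standard_normal((16, 8))    # toy linear encoder
loss = nt_xent_loss(views @ W, labels)
print(views.shape, labels.shape, float(loss))
```

Because the pseudo-labels come from the sampling itself rather than the dataset annotations, corrupted labels only affect which classes end up in a task, not the supervision signal inside it.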
Venue: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2025
Location: Porto, Portugal
Year: 2025
Series: Lecture Notes in Computer Science
Publisher: Springer Science and Business Media Deutschland GmbH
Volume: 16018 LNCS
Pages: 254–271
ISBN: 9783032061058; 9783032061065
Authors: Galjaard, Jeroen M.; Birke, Robert; Pérez, Juan F.; Chen, Lydia Y.
Files in this record:
sub_336.pdf — open access; file type: postprint (author's final version); 762.06 kB; Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2104560