
A Differentiable Entropy Model for Learned Image Compression

Presta A.; Fiandrotti A.; Tartaglione E.; Grangetto M.
2023-01-01

Abstract

In an end-to-end learned image compression framework, an encoder projects the image onto a low-dimensional, quantized latent space while a decoder recovers the original image. The encoder and decoder are jointly trained with standard gradient backpropagation to minimize a rate-distortion (RD) cost function that accounts for both the distortion between the original and reconstructed images and the rate of the quantized latent space. State-of-the-art methods rely on an auxiliary neural network to estimate the rate R of the latent space. We propose a non-parametric entropy model that estimates the statistical frequencies of the quantized latent space during training. The proposed model is differentiable, so it can be plugged into the cost function as a rate proxy to be minimized, and it can be adapted to a given context without retraining. Our experiments show performance comparable with a learned rate estimator, and better performance when the model is adapted over a temporal context.
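The rate term described above can be illustrated with a minimal PyTorch sketch: a soft histogram assigns the quantized latent values to an integer symbol grid so that the estimated symbol frequencies, and therefore the entropy used as the rate proxy R, stay differentiable and can enter the RD loss directly. This is only a sketch of the general idea, not the paper's formulation; the symbol range, temperature, straight-through quantization and lambda weight below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_frequencies(latent, symbols, temperature=1.0):
    # Squared distance of every latent element to every candidate symbol,
    # turned into a soft one-hot assignment so the result stays differentiable.
    d = (latent.reshape(-1, 1) - symbols.reshape(1, -1)) ** 2
    assign = torch.softmax(-d / temperature, dim=1)
    return assign.mean(dim=0)  # estimated symbol frequencies

def rate_proxy_bits(latent, symbols, eps=1e-9):
    # Shannon entropy (bits per latent element) of the soft frequencies.
    p = soft_frequencies(latent, symbols)
    return -(p * torch.log2(p + eps)).sum()

# Toy usage inside a rate-distortion objective: distortion + lambda * rate.
x = torch.rand(1, 3, 64, 64)                          # original image (dummy)
x_hat = torch.rand(1, 3, 64, 64, requires_grad=True)  # reconstruction (dummy)
y = torch.randn(1, 192, 4, 4, requires_grad=True)     # latent before rounding (dummy)
y_hat = y + (torch.round(y) - y).detach()             # straight-through quantization
symbols = torch.arange(-32.0, 33.0)                   # assumed integer symbol alphabet
loss = F.mse_loss(x_hat, x) + 0.01 * rate_proxy_bits(y_hat, symbols)  # assumed lambda
loss.backward()                                       # gradients reach x_hat and y
```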
2023
English
contribution
1 - Conference
Proceedings of the 22nd International Conference on Image Analysis and Processing, ICIAP 2023
ita
2023
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Anonymous experts
Springer Science and Business Media Deutschland GmbH
Heidelberg
GERMANY
14233
328
339
12
978-3-031-43147-0
978-3-031-43148-7
https://link.springer.com/chapter/10.1007/978-3-031-43148-7_28
autoencoder; differentiable entropy; entropy estimation; image compression; Learned image coding
FRANCE
1 – product with file in Open Access version (the file will be attached at step 6 - Upload)
4
info:eu-repo/semantics/conferenceObject
04-CONTRIBUTION IN CONFERENCE PROCEEDINGS::04A-Conference paper in volume
Presta A.; Fiandrotti A.; Tartaglione E.; Grangetto M.
273
embargoed_20250904
Files in this product:
File: PrestaICIAP23.pdf
Open access with embargo until 04/09/2025
File type: POSTPRINT (AUTHOR'S FINAL VERSION)
Size: 760.75 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1951390
Citations
  • PMC: ND
  • Scopus: 0
  • ISI: 0