filename : Rel24b.pdf
entry : conference : Workshop on Machine Learning and Compression, NeurIPS 2024
pages :
year : 2024
month : December
title : Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression
subtitle :
author : Lucas Relic, Roberto Azevedo, Yang Zhang, Markus Gross, Christopher Schroers
booktitle :
ISSN/ISBN :
editor :
publisher : OpenReview
publ.place :
volume :
issue :
language : English
keywords : image compression, latent diffusion, generative models
abstract : By leveraging the similarities between quantization error and additive noise, diffusion-based image compression codecs can be built by using a diffusion model to "denoise" the artifacts introduced by quantization. However, we identify three gaps in this approach which cause the quantized data to fall outside the distribution of the diffusion model: gaps in noise level and noise type, and a gap caused by discretization. To address these issues, we propose a novel quantization-based forward diffusion process that is theoretically founded and bridges all three aforementioned gaps. This is achieved through universal quantization with a carefully tailored quantization schedule, as well as a diffusion model trained for uniform noise. Compared to previous work, our proposed architecture produces consistently realistic and detailed results, even at extremely low bitrates, while maintaining strong faithfulness to the original images.
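The abstract's link between quantization error and additive noise rests on a standard property of universal (subtractive dithered) quantization: if a dither signal shared between encoder and decoder is added before rounding and subtracted afterwards, the overall quantization error is uniformly distributed and independent of the input. The sketch below is an illustrative NumPy demonstration of this property, not the paper's codec; the step size `delta` and helper name are assumptions for the example.

```python
import numpy as np

def universal_quantize(x, delta, u):
    """Subtractive dithered quantization with shared dither u ~ U(-delta/2, delta/2).

    The decoder knows u (e.g. from a shared seed), so it can subtract it back;
    the net error (output - x) is uniform on [-delta/2, delta/2], independent of x.
    """
    return delta * np.round((x + u) / delta) - u

rng = np.random.default_rng(0)
delta = 0.5                                   # assumed step size for illustration
x = rng.normal(size=100_000)                  # arbitrary source signal
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)

err = universal_quantize(x, delta, u) - x
# err behaves like additive uniform noise: bounded by delta/2,
# zero-mean, with variance delta**2 / 12
```

This is what lets a denoiser treat decoded latents as noisy samples: the "noise" injected by the codec has a known uniform distribution, which the abstract's uniform-noise diffusion model is trained to remove.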