Computer Graphics Laboratory

Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression

L. Relic, R. Azevedo, Y. Zhang, M. Gross, C. Schroers

Workshop on Machine Learning and Compression, NeurIPS 2024 (Vancouver, Canada, December 15, 2024)

Abstract

By leveraging the similarity between quantization error and additive noise, diffusion-based image compression codecs can be built by using a diffusion model to “denoise” the artifacts introduced by quantization. However, we identify three gaps in this approach that cause the quantized data to fall outside the distribution the diffusion model was trained on: a gap in noise level, a gap in noise type, and a gap caused by discretization. To address these issues, we propose a novel, theoretically founded, quantization-based forward diffusion process that bridges all three gaps. This is achieved through universal quantization with a carefully tailored quantization schedule, together with a diffusion model trained for uniform noise. Compared to previous work, our proposed architecture produces consistently realistic and detailed results, even at extremely low bitrates, while maintaining strong fidelity to the original images.
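The key property the abstract relies on is that universal (subtractive dithered) quantization makes quantization error behave exactly like additive uniform noise, independent of the signal. Below is a minimal NumPy sketch illustrating this; the function name `universal_quantize` and the fixed step size `delta` are illustrative choices, and the paper's tailored quantization schedule is not reproduced here.

```python
import numpy as np

def universal_quantize(x, delta, rng):
    """Subtractive dithered (universal) quantization with step size delta.

    Encoder and decoder share the dither u ~ U(-delta/2, delta/2),
    e.g., via a synchronized PRNG seed. The reconstruction error
    x_hat - x is then uniform on [-delta/2, delta/2] and independent
    of x, i.e., quantization acts exactly like additive uniform noise.
    """
    u = rng.uniform(-delta / 2, delta / 2, size=x.shape)
    q = np.round((x + u) / delta)   # integer symbols to be entropy coded
    x_hat = q * delta - u           # decoder subtracts the shared dither
    return q, x_hat

x = np.random.default_rng(0).normal(size=100_000)
_, x_hat = universal_quantize(x, delta=0.5, rng=np.random.default_rng(42))
err = x_hat - x
print(err.min(), err.max())         # stays within [-0.25, 0.25]
print(np.corrcoef(x, err)[0, 1])    # ~0: error is independent of the signal
```

Because the resulting error is uniform rather than Gaussian, a diffusion model intended to remove it must be trained on uniform noise, which is why the abstract pairs universal quantization with a uniform-noise diffusion model.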

Downloads

Download Paper [PDF]
Download Citation [BibTeX]