Schedule and Accepted Papers

The workshop will be held on Sunday 7th July 2024 at the Athenaeum Intercontinental Athens in Athens, Greece.

A subset of the accepted papers was selected for spotlight presentations (see the schedule below).

The list of keynote speakers and spotlight papers can be found on the main page.

Time (UTC+3)   Event: Speaker / Spotlight Paper

08:00 - 08:30  Coffee break
08:40 - 08:45  Opening remarks
08:45 - 09:30  Keynote presentation 1: Dr. Johannes Ballé
09:30 - 09:50  Spotlight presentation 1: "Rate-Distortion-Perception Tradeoff for Vector Gaussian Sources" (Jingjing Qian, Sadaf Salehkalaibar, Jun Chen, Ashish Khisti, Wei Yu, Wuxian Shi, Yiqun Ge, Wen Tong)
10:00 - 10:30  Coffee break
10:30 - 11:15  Keynote presentation 2: Prof. José Miguel Hernández-Lobato
11:15 - 11:35  Spotlight presentation 2: "Some Notes on the Sample Complexity of Approximate Channel Simulation" (Gergely Flamich, Lennie Wells)
11:35 - 11:55  Spotlight presentation 3: "Staggered Quantizers for Perfect Perceptual Quality: A Connection between Quantizers with Common Randomness and Without" (Ruida Zhou, Chao Tian)
12:00 - 13:30  Lunch break
13:45 - 14:30  Keynote presentation 3: Dr. Lucas Theis
14:30 - 16:00  Poster session
15:00 - 15:30  Coffee break
16:00 - 16:45  Keynote presentation 4: Prof. Shirin Jalali
16:45 - 17:05  Spotlight presentation 4: "Estimation of Rate-Distortion Function for Computing with Decoder Side Information" (Heasung Kim, Hyeji Kim, Gustavo De Veciana)
17:05 - 17:25  Open discussion
17:25 - 17:30  Closing remarks + award reveal

There will be a welcome reception for workshop participants from 18:00 to 20:00 (further details can be found here).

Accepted posters:

Keynotes:

Speaker: Dr. Johannes Ballé.

Title: Learned Image Compression.

Abstract: Since its emergence roughly 7 years ago, the field of learned data compression has attracted considerable attention from both the machine learning and information theory communities. Data-driven source coding promises faster innovation cycles, as well as better adaptation to novel types of sources and unconventional models of distortion. For example, image codecs can now be end-to-end optimized to perform best for specific types of images, simply by replacing the training set. They may also be designed to minimize a given perceptual image metric, or indeed any differentiable perceptual loss function. In this talk, I will review nonlinear transform coding (NTC), a framework of techniques which over the past few years has superseded the state of the art of hand-crafted image compression methods (such as the family of JPEG and MPEG standards) in terms of subjective quality vs. rate. I'll illustrate the empirical rate–distortion performance of NTC with the help of simple, analytically characterized data sources. Furthermore, I will discuss a recent direction of ongoing work, the search for better measures of perceptual quality, as captured by realism ("How realistic is an image?") and fidelity ("How similar is an image to a reference?"). I will present Wasserstein Distortion, a measure that unifies the two, grounded in neuroscientific models of peripheral vision.
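As a rough illustration only (not code from the talk), the NTC recipe of "analysis transform, quantization, synthesis transform, rate-distortion loss" can be sketched on a toy scalar source; the transforms f and g, the Laplacian entropy model, and all parameter values below are assumptions for the sketch, standing in for the learned networks of a real codec.

```python
import math

# Toy sketch of the nonlinear transform coding (NTC) objective
# L = R + lambda * D on a scalar source. In a real codec, f and g are
# learned neural networks; here they are fixed, hypothetical functions.

def f(x):
    # toy "analysis transform" (assumption: a scaled tanh)
    return math.tanh(x) * 4.0

def g(y):
    # toy "synthesis transform", approximate inverse of f
    return math.atanh(max(-0.999, min(0.999, y / 4.0)))

def rate_bits(y_hat, b=1.0):
    # code length (in bits) of the quantized latent under a unit-width
    # bin of a Laplace(0, b) entropy model, via the CDF difference
    def cdf(v):
        return 0.5 * math.exp(v / b) if v < 0 else 1.0 - 0.5 * math.exp(-v / b)
    p = max(cdf(y_hat + 0.5) - cdf(y_hat - 0.5), 1e-12)
    return -math.log2(p)

def rd_loss(x, lam=0.1):
    y_hat = round(f(x))        # quantize the latent to the nearest integer
    x_hat = g(y_hat)           # reconstruct
    R = rate_bits(y_hat)       # estimated bits to encode y_hat
    D = (x - x_hat) ** 2       # squared-error distortion
    return R + lam * D, R, D
```

In an end-to-end learned codec the same objective is minimized over the parameters of f, g, and the entropy model, with quantization relaxed (e.g. by additive uniform noise) to keep the loss differentiable.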

Speaker: Prof. José Miguel Hernández-Lobato.

Title: Accelerating Relative Entropy Coding with Space Partitioning.

Abstract: Relative entropy coding (REC) algorithms aim to transmit a random sample following distribution Q, using a prior distribution P shared between the sender and receiver. General REC algorithms suffer from prohibitive runtimes and existing fast REC algorithms have been limited to very specific problem settings. In this talk, I will introduce a new REC method that utilizes space partitioning to potentially reduce runtime in more practical scenarios than previous scalable REC algorithms. We provide theoretical results for our proposed method and demonstrate its efficiency through both toy examples and practical applications in neural compression. While our approach does not achieve polynomial time complexity, it enables handling larger REC problems much more efficiently. This results in not only faster REC encoding processes but also reduced codelength overhead, thereby offering performance improvements in neural compression applications.
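To make the REC setup concrete, here is a minimal sketch of one standard (slow) REC scheme, importance-sampling-based "minimal random coding"; the talk's space-partitioning method is a different, faster algorithm. The Gaussian choices for P and Q and all parameters below are assumptions for the sketch. Sender and receiver share a PRNG seed, so both can regenerate the same K candidates from the prior P, and only the selected index (about log2 K bits) is transmitted.

```python
import math
import random

def q_over_p(x, mu_q=1.0, sd_q=0.5):
    # importance weight Q(x)/P(x) for P = N(0, 1), Q = N(mu_q, sd_q^2)
    log_p = -0.5 * x * x - 0.5 * math.log(2 * math.pi)
    log_q = (-0.5 * ((x - mu_q) / sd_q) ** 2
             - math.log(sd_q) - 0.5 * math.log(2 * math.pi))
    return math.exp(log_q - log_p)

def encode(seed, K=1024):
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(K)]   # candidates from P
    ws = [q_over_p(x) for x in xs]
    # pick index i with probability proportional to Q(x_i)/P(x_i)
    u = random.random() * sum(ws)
    acc = 0.0
    for i, w in enumerate(ws):
        acc += w
        if acc >= u:
            return i                               # transmitted index
    return K - 1

def decode(seed, index, K=1024):
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(K)]   # same candidates
    return xs[index]
```

The decoded sample is approximately distributed according to Q when K is large enough relative to the divergence between Q and P, which is exactly where the prohibitive runtime of general REC algorithms comes from.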

Speaker: Dr. Lucas Theis.

Title: Lossy Compression with Diffusion.

Abstract: This talk explores new methods for lossy image compression based on diffusion and channel simulation. By simulating a Gaussian channel, any diffusion generative model can be appropriated for compression. The resulting approach is notably different from the transform coding approach that underpins modern codecs and almost all neural compression methods. However, we find that it works surprisingly well despite the lack of an analysis transform and despite its conceptual simplicity. We further find that this simplicity makes the approach very amenable to theoretical analysis, and I will offer initial results on its rate-distortion performance under realism constraints.

Speaker: Prof. Shirin Jalali.

Title: Compression Codes: Bridging Theory and Algorithms in Signal Processing and Learning.

Abstract: In the realm of signal processing and machine learning, a foundational challenge lies in developing robust theoretical frameworks that guide the analysis and design of effective solutions. This talk explores the power of compression codes as a unifying framework for these tasks. By leveraging the principles of data compression, we can derive insightful theoretical perspectives that enhance our understanding of inference and learning problems. This framework not only provides a novel lens for theoretical analysis but also informs the creation of practically sound and theoretically-grounded algorithms. We will examine how compression codes bridge this gap, paving the way for advancements in signal processing and machine learning research.