Schedule and Accepted Papers
The workshop will be held on Sunday, 7 July 2024, at the Athenaeum InterContinental in Athens, Greece.
Some accepted papers have been selected for spotlight presentations (see below).
The list of keynote speakers and spotlight papers can be found on the main page.
Time (UTC +3) | Event | Speaker / Spotlight Paper |
---|---|---|
08:00 - 08:30 | Coffee break | |
08:40 - 08:45 | Opening remarks | |
08:45 - 09:30 | Keynote presentation 1 | Dr. Johannes Ballé |
09:30 - 09:50 | Spotlight presentation 1 | Rate-Distortion-Perception Tradeoff for Vector Gaussian Sources (Jingjing Qian, Sadaf Salehkalaibar, Jun Chen, Ashish Khisti, Wei Yu, Wuxian Shi, Yiqun Ge, Wen Tong) |
10:00 - 10:30 | Coffee break | |
10:30 - 11:15 | Keynote presentation 2 | Prof. José Miguel Hernández-Lobato |
11:15 - 11:35 | Spotlight presentation 2 | Some Notes on the Sample Complexity of Approximate Channel Simulation (Gergely Flamich, Lennie Wells) |
11:35 - 11:55 | Spotlight presentation 3 | Staggered Quantizers for Perfect Perceptual Quality: A Connection between Quantizers with Common Randomness and Without (Ruida Zhou, Chao Tian) |
12:00 - 13:30 | Lunch break | |
13:45 - 14:30 | Keynote presentation 3 | Dr. Lucas Theis |
14:30 - 16:00 | Poster session | |
15:00 - 15:30 | Coffee break | |
16:00 - 16:45 | Keynote presentation 4 | Prof. Shirin Jalali |
16:45 - 17:05 | Spotlight presentation 4 | Estimation of Rate-Distortion Function for Computing with Decoder Side Information (Heasung Kim, Hyeji Kim, Gustavo De Veciana) |
17:05 - 17:25 | Open discussion | |
17:25 - 17:30 | Closing remarks + award reveal | |
A welcome reception for workshop participants will take place from 18:00 to 20:00 (further details can be found here).
Accepted posters:
- Rate-Distortion-Perception Tradeoff for Vector Gaussian Sources [spotlight presentation]. Jingjing Qian, Sadaf Salehkalaibar, Jun Chen, Ashish Khisti, Wei Yu, Wuxian Shi, Yiqun Ge, Wen Tong. [Poster #1]
- Some Notes on the Sample Complexity of Approximate Channel Simulation [spotlight presentation]. Gergely Flamich, Lennie Wells. [Poster #2]
- Staggered Quantizers for Perfect Perceptual Quality: A Connection between Quantizers with Common Randomness and Without [spotlight presentation]. Ruida Zhou, Chao Tian. [Poster #3]
- Estimation of Rate-Distortion Function for Computing with Decoder Side Information [spotlight presentation]. Heasung Kim, Hyeji Kim, Gustavo De Veciana. [Poster #4]
- Alternate Learning and Compression approaching R(D). Ram Zamir, Kenneth Rose. [Poster #5]
- Semantic Compression with Information Lattice Learning. Haizi Yu, Lav R. Varshney. [Poster #15]
- Towards Hyperparameter Optimization of Sparse Bayesian Learning Based on Stein’s Unbiased Risk Estimator. Fangqing Xiao, Dirk Slock. [Poster #6]
- Task-aware Distributed Source Coding under Dynamic Bandwidth. Po-han Li, Sravan Kumar Ankireddy, Ruihan Zhao, Hossein Nourkhiz Mahjoub, Ehsan Moradi Pari, Ufuk Topcu, Sandeep P. Chinchali, Hyeji Kim. [Poster #7]
- Robust Distributed Compression with Learned Heegard–Berger Scheme. Eyyup Tasci, Ezgi Ozyilkan, Oguzhan Kubilay Ulger, Elza Erkip. [Poster #8]
- Semantic Image Compression using Textual Transforms. Lara Arikan, Tsachy Weissman. [Poster #9]
- Combining Batch and Online Prediction. Yaniv Fogel, Meir Feder. [Poster #10]
- Deep-Learned Compression for Radio-Frequency Signal Classification. Armani Rodriguez, Yagna Kaasaragadda, Silvija Kokalj-Filipovic. [Poster #11]
- Learned Lossless Compression via an Extension of the Bayes Codes. Yuta Nakahara, Shota Saito, Koshi Shimada, Toshiyasu Matsushima. [Poster #12]
- An Efficient Difference-of-Convex Solver for Privacy Funnel. Teng-Hui Huang, Hesham El Gamal. [Poster #13]
- The Likelihood Gain of a Language Model as a Metric for Text Summarization. Dana Levin, Alon Kipnis. [Poster #14]
- Semi-Joint Source-Channel Coding over Wireless Networks: A Pragmatic Approach via Multi-Level Reliability Interface. Tze-Yang Tung, Homa Esfahanizadeh, Jinfeng Du, Harish Viswanathan. [Poster #16]
Keynotes:
Speaker: Dr. Johannes Ballé.
Title: Learned Image Compression.
Abstract: Since its emergence roughly seven years ago, the field of learned data compression has attracted considerable attention from both the machine learning and information theory communities. Data-driven source coding promises faster innovation cycles, as well as better adaptation to novel types of sources and unconventional models of distortion. For example, image codecs can now be end-to-end optimized to perform best for specific types of images, simply by replacing the training set. They can also be designed to minimize a given perceptual image metric, or in fact any differentiable perceptual loss function. In this talk, I will review nonlinear transform coding (NTC), a framework of techniques that has, over the past few years, superseded state-of-the-art hand-crafted image compression methods (such as the JPEG and MPEG families of standards) in terms of subjective quality vs. rate. I will illustrate the empirical rate–distortion performance of NTC with the help of simple, analytically characterized data sources. Furthermore, I will discuss a recent direction of ongoing work: the search for better measures of perceptual quality, as captured by realism (“How realistic is an image?”) and fidelity (“How similar is an image to a reference?”). I will present Wasserstein Distortion, a measure that unifies the two, grounded in neuroscientific models of peripheral vision.
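To make the end-to-end rate–distortion objective behind NTC concrete, here is a minimal sketch in Python/PyTorch. The `ToyNTC` module, its layer sizes, the unit-Gaussian stand-in for a learned entropy model, and the trade-off weight `lam` are illustrative assumptions for this page, not the models discussed in the talk.

```python
import math
import torch
import torch.nn as nn

class ToyNTC(nn.Module):
    """Toy nonlinear transform codec: analysis transform, additive-noise
    quantization proxy, synthesis transform."""
    def __init__(self):
        super().__init__()
        self.analysis = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.GELU(),
            nn.Conv2d(64, 32, 5, stride=2, padding=2))
        self.synthesis = nn.Sequential(
            nn.ConvTranspose2d(32, 64, 5, stride=2, padding=2, output_padding=1), nn.GELU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1))

    def loss(self, x, lam=0.01):
        y = self.analysis(x)
        y_tilde = y + torch.rand_like(y) - 0.5   # additive uniform noise: differentiable proxy for rounding
        x_hat = self.synthesis(y_tilde)
        # Rate term in bits under a stand-in unit-Gaussian prior; real codecs
        # learn this entropy model jointly with the transforms.
        rate = -torch.distributions.Normal(0.0, 1.0).log_prob(y_tilde).sum() / math.log(2)
        distortion = ((x - x_hat) ** 2).mean()   # MSE; any differentiable perceptual loss could stand here
        return distortion + lam * rate

model = ToyNTC()
x = torch.rand(1, 3, 64, 64)    # toy "image" batch
print(model.loss(x))            # scalar rate-distortion objective to minimize
```

Sweeping `lam` traces out the rate–distortion trade-off: larger values favor fewer bits, smaller values favor reconstruction quality.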
Speaker: Prof. José Miguel Hernández-Lobato.
Title: Accelerating Relative Entropy Coding with Space Partitioning.
Abstract: Relative entropy coding (REC) algorithms aim to transmit a random sample following a distribution Q, using a prior distribution P shared between the sender and receiver. General REC algorithms suffer from prohibitive runtimes, and existing fast REC algorithms have been limited to very specific problem settings. In this talk, I will introduce a new REC method that utilizes space partitioning to potentially reduce runtime in more practical scenarios than previous scalable REC algorithms. We provide theoretical results for the proposed method and demonstrate its efficiency through both toy examples and practical applications in neural compression. While our approach does not achieve polynomial time complexity, it enables handling much larger REC problems efficiently. This results not only in faster REC encoding but also in reduced codelength overhead, thereby offering performance improvements in neural compression applications.
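As a concrete reference point for the problem setting, here is a minimal sketch of perhaps the simplest REC scheme: importance sampling over shared candidates, in the spirit of minimal random coding. It illustrates only the basic REC idea, not the space-partitioning method of the talk; the 1-D Gaussian Q and standard-normal prior P are assumptions for this example.

```python
import numpy as np
from scipy.stats import norm

N = 1 << 12                                    # number of shared candidates

def rec_encode(q_mean, q_std, seed=0):
    """Pick an index whose candidate is approximately a sample from
    Q = N(q_mean, q_std^2), using candidates from the shared prior P = N(0, 1)."""
    rng = np.random.default_rng(seed)          # the seed acts as common randomness
    candidates = rng.standard_normal(N)
    log_w = norm.logpdf(candidates, loc=q_mean, scale=q_std) - norm.logpdf(candidates)
    w = np.exp(log_w - log_w.max())            # importance weights Q/P, stabilized
    return rng.choice(N, p=w / w.sum())        # the index costs about log2(N) bits

def rec_decode(index, seed=0):
    rng = np.random.default_rng(seed)          # regenerate the same shared candidates
    return rng.standard_normal(N)[index]

idx = rec_encode(q_mean=1.5, q_std=0.5)
print(rec_decode(idx))                         # approximately distributed as Q
```

Making the selected sample approximately follow Q requires on the order of 2^KL(Q||P) candidates, which is exactly the exponential cost that scalable REC methods such as the one in this talk aim to tame.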
Speaker: Dr. Lucas Theis.
Title: Lossy Compression with Diffusion.
Abstract: This talk explores new methods for lossy image compression based on diffusion and channel simulation. By simulating a Gaussian channel, any diffusion generative model can be appropriated for compression. The resulting approach is notably different from the transform coding approach that underpins modern codecs and almost all neural compression approaches. However, we find that it works surprisingly well despite the lack of an analysis transform and despite its conceptual simplicity. We further find that this simplicity makes it very amenable to theoretical analysis, and we offer initial results on its rate–distortion performance under realism constraints.
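A minimal sketch of the two-step idea, under strong simplifying assumptions: (1) the encoder communicates a noisy observation y ~ N(x, sigma^2 I) of the image x via Gaussian channel simulation, idealized below as simply adding noise rather than running an actual coding scheme, and (2) the decoder denoises y. The closed-form Gaussian-source denoiser stands in for a diffusion model's learned denoiser; neither it nor `simulate_gaussian_channel` is the method of the talk.

```python
import numpy as np

def simulate_gaussian_channel(x, sigma, rng):
    # Stand-in for channel simulation: in the actual approach, a reverse-channel
    # coding scheme lets the decoder obtain y ~ N(x, sigma^2 I) at a bit cost
    # close to the mutual information I(X; Y).
    return x + sigma * rng.standard_normal(x.shape)

def denoise(y, sigma, s=1.0):
    # Closed-form posterior mean E[X | Y = y] for X ~ N(0, s^2 I): a tractable
    # stand-in for one denoising step of a learned diffusion model.
    return (s**2 / (s**2 + sigma**2)) * y

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                   # toy Gaussian "image"
y = simulate_gaussian_channel(x, sigma=0.5, rng=rng)
x_hat = denoise(y, sigma=0.5)                 # smaller sigma means higher rate and better fidelity
```

The noise level sigma plays the role of the rate knob: it controls both the cost of simulating the channel and how much the denoiser must hallucinate.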
Speaker: Prof. Shirin Jalali.
Title: Compression Codes: Bridging Theory and Algorithms in Signal Processing and Learning.
Abstract: In the realm of signal processing and machine learning, a foundational challenge lies in developing robust theoretical frameworks that guide the analysis and design of effective solutions. This talk explores the power of compression codes as a unifying framework for these tasks. By leveraging the principles of data compression, we can derive insightful theoretical perspectives that enhance our understanding of inference and learning problems. This framework not only provides a novel lens for theoretical analysis but also informs the creation of practically sound and theoretically grounded algorithms. We will examine how compression codes bridge this gap, paving the way for advances in signal processing and machine learning research.