Learn to Compress & Compress to Learn

Workshop at the International Symposium on Information Theory (ISIT) 2025

The exponential growth of global data has intensified the demand for efficient data compression, with deep learning techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models, and implicit neural representations reshaping traditional approaches to source coding. Learning-based neural compression methods have demonstrated the potential to outperform traditional codecs across data modalities such as image, video, and audio. However, challenges remain in improving their computational efficiency and memory requirements, in understanding the theoretical limits of neural compression and of compression without quantization, and in addressing the difficulties that arise in distributed settings.

In parallel, compression has emerged as a powerful proxy task for advancing broader learning objectives, including representation learning and model efficiency. Recent research explores how compression can enhance the training and generalization of large-scale foundation models for vision, language, and multi-modal applications. Techniques such as knowledge distillation, model pruning, and quantization share common challenges with compression, highlighting the symbiotic relationship between these seemingly distant concepts. The intersection of learning, compression, and information theory offers exciting new avenues for advancing both practical compression techniques and our understanding of deep learning dynamics.

This workshop aims to bring together experts from machine learning, computer science, and information theory to explore the dual themes of learning-based compression and compression as a tool for learning. We are excited to feature industry experts who will share valuable practical insights with ISIT attendees.

The workshop will be held in person on Thursday, June 26, 2025. We look forward to seeing you in Ann Arbor!

Accepted Posters

Spotlight papers (in alphabetical order):
  • Discretized Approximate Ancestral Sampling
    Alfredo De la Fuente, Saurabh Singh and Jona Ballé

  • DeCompress: Denoising via Neural Compression
    Ali Zafari, Xi Chen and Shirin Jalali

  • Online Conformal Compression for Zero-Delay Communication with Distortion Guarantees
    Unnikrishnan Kunnath Ganesan, Giuseppe Durisi, Matteo Zecchin, Petar Popovski and Osvaldo Simeone

Important Dates

  • Paper submission deadline: March 28, 2025 (11:59 PM, anywhere in the world!), extended from March 14.
  • Decision notification: April 18, 2025
  • Camera-ready paper deadline: May 1, 2025
  • Workshop date: June 26, 2025

Invited Speakers

  • Prof. Shirin Saeedi Bidokhti (University of Pennsylvania)

  • Prof. Ferenc Huszár (University of Cambridge)

  • Dr. Pulkit Tandon (Granica.ai)

  • Dr. Kedar Tatwawadi (Apple)

  • Prof. Chao Tian (Texas A&M University)

  • Dr. Yibo Yang (Chan Zuckerberg Initiative)

Organizers

  • Gergely Flamich (University of Cambridge / Imperial College London)

  • Ezgi Ozyilkan (New York University)

  • Prof. Deniz Gündüz (Imperial College London)

  • Prof. Elza Erkip (New York University)

Questions

Contact us at learn.to.compress.workshop@gmail.com.