Call for Papers

The exponential growth of global data has intensified the demand for efficient data compression, with deep learning techniques such as variational autoencoders, generative adversarial networks (GANs), diffusion models, and implicit neural representations reshaping traditional approaches to source coding. Learning-based neural compression methods have demonstrated the potential to outperform traditional codecs across various data modalities, including images, video, and audio. However, challenges remain in reducing their computational and memory requirements, understanding the theoretical limits of neural compression and of compression without quantization, and addressing the difficulties that arise in distributed settings.

In parallel, compression has emerged as a powerful proxy task for advancing broader learning objectives, including representation learning and model efficiency. Recent research explores how compression can enhance the training and generalization of large-scale foundation models for vision, language, and multi-modal applications. Techniques such as knowledge distillation, model pruning, and quantization share common challenges with compression, highlighting the symbiotic relationship between these seemingly distant concepts. The intersection of learning, compression, and information theory offers exciting new avenues for advancing both practical compression techniques and our understanding of deep learning dynamics.

This workshop aims to unite experts from machine learning, computer science, and information theory to delve into the dual themes of learning-based compression and of using compression as a tool for learning.

Topics of interest include, but are not limited to:

  • “Learn to Compress” – Advancing Compression with Learning
    • Learning-Based Data Compression: New techniques for compressing data (e.g., images, video, audio), model weights, and emerging modalities (e.g., 3D content and AR/VR applications).
    • Efficiency for Large-Scale Foundation Models: Accelerating training and inference, particularly in distributed and resource-constrained settings.
    • Theoretical Foundations of Neural Compression: Fundamental limits (e.g., rate-distortion bounds), distortion/perceptual/realism metrics, distributed compression, compression without quantization (e.g., channel simulation, relative entropy coding), and stochastic/probabilistic coding techniques.
  • “Compress to Learn” – Leveraging Principles of Compression to Improve Learning
    • Compression as a Tool for Learning: Leveraging principles of compression and source coding to understand and improve learning and generalization.
    • Compression as a Proxy for Learning: Understanding the information-theoretic role of compression in tasks like unsupervised learning, representation learning, and semantic understanding.
    • Interplay of Algorithmic Information Theory and Source Coding: Exploring connections between Algorithmic Information Theory concepts (e.g., Kolmogorov complexity, Solomonoff induction) and emerging source coding methods.

All accepted papers will be presented as posters during the poster session, and some will also be selected for spotlight presentations. Submission requirements are detailed below.

Important Dates

  • Paper submission deadline (extended): March 28, 2025 (11:59 PM, anywhere in the world).
  • Decision notification: April 18, 2025
  • Camera-ready paper deadline: May 1, 2025
  • Workshop date: June 26, 2025

Submission Details

All submitted papers must be prepared in the ISIT 2025 paper format. Information for authors, including the paper format, template, and an example, is available at this link.

All submissions should be made through our venue home page via this EDAS link. Note: The EDAS submission form is currently titled “Register a paper for 2025 IEEE International Symposium on Information Theory (ISIT)”, without reference to the workshop. Please disregard this; you can safely register your paper(s) through the form, and they will be correctly assigned to the workshop.

Each paper will go through a rigorous review process. The workshop will follow a single-blind reviewing policy, aligned with ISIT 2025, which means that all submitted manuscripts should include author names and affiliations. Authors may post their papers on arXiv if they wish to do so.

We will offer authors the choice to publish their accepted papers on IEEE Xplore.

We welcome all relevant submissions that have been presented, published, or are currently under review elsewhere, provided the authors decide not to publish the full paper on IEEE Xplore.

An author of an accepted paper must register for the workshop and present a poster; selected papers will additionally be given spotlight presentations. To maintain the interactive nature of the workshop, we kindly request that all presentations be given in person.

Only accepted papers that are presented at the workshop will be published on IEEE Xplore. Poster requirements will be communicated together with the acceptance notification.