Music2Latent: Consistency Autoencoders for Latent Audio Compression
Marco Pasini¹, Stefan Lattner², George Fazekas¹
- ¹ Queen Mary University of London
- ² Sony Computer Science Laboratories Paris
Abstract
Efficient audio waveform representations in a compressed continuous latent space are critical for generative audio modeling and Music Information Retrieval (MIR) tasks. However, some existing audio autoencoders have limitations, such as multi-stage training procedures, slow iterative sampling, or low reconstruction quality. We introduce Music2Latent, an audio autoencoder that overcomes these limitations by leveraging consistency models. Music2Latent encodes samples into a compressed continuous latent space in a single end-to-end training process while enabling high-fidelity single-step reconstruction. Key innovations include conditioning the consistency model on upsampled encoder outputs at all levels through cross connections, using frequency-wise self-attention to capture long-range frequency dependencies with fixed memory, and employing frequency-wise learned scaling to handle varying value distributions across frequencies at different noise levels. We demonstrate that Music2Latent outperforms existing continuous audio autoencoders in sound quality and reconstruction accuracy on standard metrics while achieving competitive performance on downstream MIR tasks using its latent representations. To our knowledge, this represents the first successful attempt at training an end-to-end consistency autoencoder model.
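The frequency-wise self-attention mentioned above can be illustrated with a minimal sketch: attention is computed across the frequency axis independently for each time frame, so the attention matrix is (freq × freq) and memory cost does not grow with the length of the audio sequence. All shapes, names, and dimensions below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def freq_wise_self_attention(x, Wq, Wk, Wv):
    """Self-attention over the frequency axis of a (time, freq, channels) tensor.

    Each time frame attends only across its own frequency bins, so the
    attention matrix is (freq, freq) regardless of sequence length.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv                           # (T, F, C) each
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])   # (T, F, F)
    scores -= scores.max(axis=-1, keepdims=True)               # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                                            # (T, F, C)

# Hypothetical sizes: 8 time frames, 64 frequency bins, 16 channels.
T, F, C = 8, 64, 16
rng = np.random.default_rng(0)
x = rng.standard_normal((T, F, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
out = freq_wise_self_attention(x, Wq, Wk, Wv)
print(out.shape)  # → (8, 64, 16)
```

Note that doubling the number of time frames doubles only the batch of independent (F × F) attention maps; no attention weight ever spans two time frames, which is what gives the fixed per-frame memory footprint.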
Architecture
The input sample is first encoded into a sequence of latent vectors. These latents are then upsampled by a decoder model. The consistency model is trained via consistency training, with additional information leaking in from the encoder through the cross connections.
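The single-step reconstruction rests on the standard boundary-preserving parameterization used in consistency training, which can be sketched as follows. The scaling functions below follow the usual consistency-model formulation; the `sigma_data`/`sigma_min` values, the `latents` argument (standing in for the upsampled encoder outputs injected through the cross connections), and `F_theta` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def c_skip(sigma, sigma_data=0.5, sigma_min=0.002):
    """Weight on the noisy input; equals 1 at sigma == sigma_min."""
    return sigma_data**2 / ((sigma - sigma_min)**2 + sigma_data**2)

def c_out(sigma, sigma_data=0.5, sigma_min=0.002):
    """Weight on the network output; equals 0 at sigma == sigma_min."""
    return sigma_data * (sigma - sigma_min) / np.sqrt(sigma_data**2 + sigma**2)

def consistency_decoder(x_noisy, sigma, latents, F_theta):
    """f_theta(x, sigma | z) = c_skip(sigma) * x + c_out(sigma) * F_theta(x, sigma, z).

    At sigma == sigma_min the model reduces to the identity on its input,
    which is the boundary condition consistency training requires.
    F_theta is the raw network (hypothetical interface).
    """
    return c_skip(sigma) * x_noisy + c_out(sigma) * F_theta(x_noisy, sigma, latents)

# Sanity check of the boundary condition:
print(c_skip(0.002), c_out(0.002))  # → 1.0 0.0
```

Because the decoder satisfies this boundary condition, a single forward pass at any noise level maps directly to a clean reconstruction, which is what removes the slow iterative sampling of ordinary diffusion decoders.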
Audio Examples
We compare the reconstructions of Music2Latent against baselines on MusicCaps evaluation samples. We also include reconstructions from Descript Audio Codec (DAC): although not directly comparable, since DAC encodes audio into discrete tokens rather than continuous embeddings and operates at a much higher sampling rate, we include it as a useful point of reference between the two approaches.
[Each evaluation sample includes audio players for: Original | Music2Latent | Musika | LatMusic | Mousaiv2 | Mousaiv3 | DAC]