Make Some Noise: Towards LLM audio reasoning and generation using sound tokens

2025 International Conference on Acoustics, Speech, and Signal Processing

Publication

Integrating audio comprehension and generation into large language models (LLMs) remains challenging due to the continuous nature of audio and the resulting high sampling rates. Here, we introduce a novel approach that combines Variational Quantization with Conditional Flow Matching to convert audio into ultra-low-bitrate discrete tokens at 0.23 kbps, allowing for seamless integration with text tokens in LLMs. We fine-tuned a pretrained text-based LLM using Low-Rank Adaptation (LoRA) to assess its effectiveness in achieving true multimodal capabilities, i.e., audio comprehension and generation. Our tokenizer outperforms a traditional VQ-VAE across various datasets with diverse acoustic events. Despite the substantial loss of fine-grained detail through audio tokenization, our multimodal LLM trained with discrete tokens achieves results in audio comprehension competitive with state-of-the-art methods, though audio generation remains poor. Our results highlight the need for larger, more diverse datasets and improved evaluation metrics to advance multimodal LLM performance.
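To make the 0.23 kbps figure concrete, the sketch below shows the core vector-quantization step (nearest-neighbour codebook lookup) and the token rate such a budget implies. The codebook size, latent dimension, and frame rate here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Hypothetical settings (not from the paper): a 1024-entry codebook
# means each discrete token carries log2(1024) = 10 bits.
CODEBOOK_SIZE = 1024
BITS_PER_TOKEN = int(np.log2(CODEBOOK_SIZE))

rng = np.random.default_rng(0)
codebook = rng.standard_normal((CODEBOOK_SIZE, 8))  # 8-dim latent codewords

def quantize(latents):
    """Map continuous latent frames to discrete token ids by
    nearest-neighbour lookup in the codebook (the VQ step)."""
    # (T, 1, D) - (1, K, D) -> (T, K) squared distances, then argmin over K
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# A 0.23 kbps budget at 10 bits per token allows roughly
# 230 / 10 = 23 tokens per second of audio.
tokens_per_second = 230 / BITS_PER_TOKEN
latents = rng.standard_normal((23, 8))  # one second of latent frames
tokens = quantize(latents)
print(tokens.shape, tokens_per_second)  # -> (23,) 23.0
```

The point of the arithmetic is that an ultra-low bitrate forces a very coarse token stream, which is consistent with the abstract's observation that fine-grained detail is lost in tokenization.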

Architecture of the audio tokenizer: a frozen autoencoder followed by a causal encoder and a conditional flow matching-based decoder with a Diffusion Transformer to reconstruct representations from quantised vectors.
