arXiv:2406.19388v1 Announce Type: cross
Abstract: Generating ambient sounds and effects is a challenging problem due to data scarcity and often insufficient caption quality, making it difficult to employ large-scale generative models for the task. In this work, we tackle the problem by introducing two new models. First, we propose AutoCap, a high-quality and efficient automatic audio captioning model. We show that by leveraging metadata available with the audio modality, we can substantially improve the quality of captions. AutoCap reaches a CIDEr score of 83.2, marking a 3.2% improvement over the best available captioning model at four times faster inference speed. We then use AutoCap to caption clips from existing datasets, obtaining 761,000 audio clips with high-quality captions, forming the largest available audio-text dataset. Second, we propose GenAu, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters and train with our new dataset. When compared to state-of-the-art audio generators, GenAu obtains significant improvements of 15.7% in FAD score, 22.7% in IS, and 13.5% in CLAP score, indicating significantly improved quality of generated audio compared to previous works. This shows that the quality of data is often as important as its quantity. Moreover, since AutoCap is fully automatic, new audio samples can be added to the training dataset, unlocking the training of even larger generative models for audio synthesis.
Improving Ambient Sound Generation with New Models
Ambient sound generation is a challenging task because of limited data availability and often poor caption quality. This article presents two models that address these issues. The first, AutoCap, is an automatic audio captioning model that leverages metadata accompanying the audio to improve caption quality. AutoCap achieves a CIDEr score of 83.2, a 3.2% improvement over the best existing captioning model, while running at four times the inference speed.
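To make the metadata idea concrete, the sketch below shows one plausible way that fields such as a clip's title, tags, and description could be serialized into a text prefix that conditions a caption decoder alongside the encoded audio. The field names and serialization format are assumptions for illustration, not AutoCap's actual implementation.

```python
# Illustrative sketch only: AutoCap conditions its captioner on metadata that
# ships with the audio. The fields and formatting below are assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AudioMetadata:
    title: str = ""
    tags: List[str] = field(default_factory=list)
    description: str = ""


def build_conditioning_text(meta: AudioMetadata) -> str:
    """Serialize metadata into a text prefix for a caption decoder."""
    parts = []
    if meta.title:
        parts.append(f"title: {meta.title}")
    if meta.tags:
        parts.append("tags: " + ", ".join(meta.tags))
    if meta.description:
        parts.append(f"description: {meta.description}")
    return " | ".join(parts)


meta = AudioMetadata(title="rainy street ambience",
                     tags=["rain", "traffic", "city"],
                     description="field recording, evening")
prefix = build_conditioning_text(meta)
# The prefix would be tokenized and fed to the decoder together with the
# encoded audio features, steering the generated caption.
print(prefix)
```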
AutoCap is then used to caption clips from existing datasets, yielding 761,000 audio clips with high-quality captions and forming the largest available audio-text dataset. With its accurate, descriptive captions, this dataset is a valuable resource for future research in ambient sound generation.
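A minimal sketch of how such a corpus could be assembled with an automatic captioner is shown below. The `AutoCapModel` object and its `.caption()` method are hypothetical placeholders; the actual pipeline, file formats, and any filtering steps are not described in the abstract.

```python
# Hypothetical batch-captioning pipeline for building an audio-text dataset.
import json
from pathlib import Path


def caption_corpus(audio_dir: str, out_path: str, captioner) -> int:
    """Caption every clip under audio_dir and write (path, caption) records."""
    records = []
    for wav in sorted(Path(audio_dir).glob("**/*.wav")):
        caption = captioner.caption(str(wav))  # hypothetical inference call
        records.append({"audio": str(wav), "caption": caption})
    Path(out_path).write_text(
        "\n".join(json.dumps(r) for r in records), encoding="utf-8"
    )
    return len(records)


# Usage (assuming a trained captioner object exposing .caption()):
# n = caption_corpus("raw_audio/", "captions.jsonl", AutoCapModel.load("ckpt"))
# print(f"captioned {n} clips")
```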
The second model, GenAu, is a scalable transformer-based audio generation architecture scaled up to 1.25B parameters. Trained on the newly created audio-text dataset, GenAu surpasses state-of-the-art audio generators, with improvements of 15.7% in FAD score, 22.7% in IS, and 13.5% in CLAP score, indicating a substantial gain in the quality of generated audio over previous works.
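For a sense of what 1.25B parameters means in transformer terms, the back-of-envelope estimate below uses the common ~12 × n_layers × d_model² approximation for a stack of standard blocks. The configuration shown is not GenAu's published architecture; it is only an example that lands near the reported scale.

```python
# Rough parameter count for a standard transformer stack (attention + MLP,
# ignoring embeddings and norms). The config is hypothetical, not GenAu's.

def transformer_param_estimate(n_layers: int, d_model: int) -> int:
    attn = 4 * d_model * d_model          # Q, K, V and output projections
    mlp = 2 * d_model * (4 * d_model)     # up- and down-projection, 4x width
    return n_layers * (attn + mlp)


cfg = {"n_layers": 32, "d_model": 1792}   # hypothetical configuration
print(f"{transformer_param_estimate(**cfg) / 1e9:.2f}B parameters")
# -> roughly 1.23B, the same order of magnitude as the 1.25B-parameter GenAu
```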
These results highlight that, in multimedia information systems, the quality of training data matters as much as its quantity. Because AutoCap is fully automatic, new audio samples can be captioned and added to the training dataset, unlocking the training of even larger generative models for audio synthesis and further improving the realism of generated ambient sounds.
This research spans several disciplines within multimedia information systems, including audio processing, natural language processing, and artificial intelligence. The integration of metadata with the audio modality in AutoCap reflects the multi-disciplinary nature of the approach, while GenAu's transformer-based architecture shows how advances in deep learning can be leveraged for audio generation.