arXiv:2412.15220v1
Abstract: Video and audio are closely correlated modalities that humans naturally perceive together. While recent advancements have enabled the generation of audio or video from text, producing both modalities simultaneously still typically relies on either a cascaded process or multi-modal contrastive encoders. These approaches, however, often lead to suboptimal results due to inherent information losses during inference and conditioning. In this paper, we introduce SyncFlow, a system that is capable of simultaneously generating temporally synchronized audio and video from text. The core of SyncFlow is the proposed dual-diffusion-transformer (d-DiT) architecture, which enables joint video and audio modelling with proper information fusion. To efficiently manage the computational cost of joint audio and video modelling, SyncFlow utilizes a multi-stage training strategy that separates video and audio learning before joint fine-tuning. Our empirical evaluations demonstrate that SyncFlow produces audio and video outputs that are more correlated than baseline methods with significantly enhanced audio quality and audio-visual correspondence. Moreover, we demonstrate strong zero-shot capabilities of SyncFlow, including zero-shot video-to-audio generation and adaptation to novel video resolutions without further training.
SyncFlow: Simultaneously Generating Audio and Video from Text
Generating both audio and video from a single text prompt has long been a challenging task in multimedia information systems. While substantial progress has been made in generating either audio or video on its own, producing both modalities together has typically relied on cascaded pipelines or multi-modal contrastive encoders, which suffer from information losses during inference and conditioning and therefore yield suboptimal results. In this study, the authors introduce SyncFlow, a system that generates temporally synchronized audio and video directly from text.
The core of SyncFlow is the proposed dual-diffusion-transformer (d-DiT) architecture, which models video and audio jointly while fusing information between the two modalities. By generating both streams within a single model rather than chaining separate ones, SyncFlow avoids the information losses of cascaded approaches and produces audio and video that are more strongly correlated than the outputs of baseline systems. Empirical evaluations confirm this, showing significantly improved audio quality and audio-visual correspondence.
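The announcement does not spell out how the two diffusion-transformer streams exchange information, so the following is a minimal PyTorch sketch of one plausible dual-stream block: each modality runs its own attention and feed-forward path, and the streams are fused with bidirectional cross-attention. The class name `DualDiTBlock`, the token shapes, and the cross-attention fusion are illustrative assumptions, not the paper's exact d-DiT layout (which would also carry timestep and text conditioning).

```python
import torch
import torch.nn as nn

class DualDiTBlock(nn.Module):
    """Illustrative dual-stream block: one transformer stream per modality,
    fused with bidirectional cross-attention. Module names and sizes are
    assumptions, not the paper's exact d-DiT specification."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.video_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v_from_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.a_from_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.video_mlp = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.audio_mlp = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, v_tok, a_tok):
        # Per-modality self-attention over video / audio latent tokens.
        v = v_tok + self.video_self(v_tok, v_tok, v_tok)[0]
        a = a_tok + self.audio_self(a_tok, a_tok, a_tok)[0]
        # Cross-modal fusion: each stream attends to the other stream's tokens.
        v = v + self.v_from_a(v, a, a)[0]
        a = a + self.a_from_v(a, v, v)[0]
        # Per-modality feed-forward networks with residual connections.
        return v + self.video_mlp(v), a + self.audio_mlp(a)


if __name__ == "__main__":
    block = DualDiTBlock()
    video_tokens = torch.randn(2, 256, 512)  # (batch, video latent tokens, dim)
    audio_tokens = torch.randn(2, 128, 512)  # (batch, audio latent tokens, dim)
    v_out, a_out = block(video_tokens, audio_tokens)
    print(v_out.shape, a_out.shape)
```

A block like this can be stacked to form two parallel transformers that stay synchronized through the fusion attention, which is the general idea behind joint audio-video modelling with "proper information fusion."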
SyncFlow also addresses the computational cost of joint audio and video modeling with a multi-stage training strategy: video and audio generation are first learned separately, and the two branches are then fine-tuned jointly. This keeps the expensive joint stage comparatively short and makes training the combined model tractable.
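The announcement does not describe the stages in detail, so here is a minimal sketch of how such a schedule could be organized: modality-specific pretraining followed by joint fine-tuning, implemented by toggling which parameter groups are trainable. The stage names and the `video_branch` / `audio_branch` / `fusion_layers` split are hypothetical.

```python
import torch
import torch.nn as nn

class ToyDualModel(nn.Module):
    """Stand-in model exposing the three component groups assumed by this sketch."""
    def __init__(self, dim=64):
        super().__init__()
        self.video_branch = nn.Linear(dim, dim)
        self.audio_branch = nn.Linear(dim, dim)
        self.fusion_layers = nn.Linear(dim, dim)

def set_trainable(module, flag):
    """Freeze or unfreeze every parameter of a module."""
    for p in module.parameters():
        p.requires_grad = flag

def configure_stage(model, stage):
    """Select which components learn in each stage. The paper only states that
    video and audio learning are separated before joint fine-tuning; the exact
    schedule and module split used here are assumptions."""
    if stage == "video_pretrain":
        set_trainable(model.video_branch, True)
        set_trainable(model.audio_branch, False)
        set_trainable(model.fusion_layers, False)
    elif stage == "audio_pretrain":
        set_trainable(model.video_branch, False)
        set_trainable(model.audio_branch, True)
        set_trainable(model.fusion_layers, False)
    elif stage == "joint_finetune":
        # Final stage: everything is updated together on paired text-video-audio data.
        set_trainable(model.video_branch, True)
        set_trainable(model.audio_branch, True)
        set_trainable(model.fusion_layers, True)
    else:
        raise ValueError(f"unknown stage: {stage}")
    return [p for p in model.parameters() if p.requires_grad]

# Rebuild the optimizer whenever the stage changes so it only sees trainable parameters.
model = ToyDualModel()
params = configure_stage(model, "joint_finetune")
optimizer = torch.optim.AdamW(params, lr=1e-5)
```

The practical benefit of staging like this is that the costly joint pass over paired data only has to refine components that already generate each modality reasonably well on their own.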
The authors further highlight SyncFlow's strong zero-shot capabilities. One is zero-shot video-to-audio generation: given an existing video, the model can produce matching audio even though it was never explicitly trained on the video-to-audio task. Another is adaptation to novel video resolutions without any further training, showcasing the model's flexibility across different multimedia settings.
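How SyncFlow realizes zero-shot video-to-audio is not detailed in the announcement. The sketch below shows one common way a jointly trained model can be repurposed for the task: the video latent stream is held at a noised copy of the encoded input video while only the audio stream is denoised. The sampler interface, latent shapes, and replacement-style conditioning are all assumptions for illustration, not SyncFlow's documented procedure.

```python
import torch

@torch.no_grad()
def zero_shot_video_to_audio(d_dit, video_latent, text_emb, audio_shape, steps=50):
    """Illustrative zero-shot video-to-audio sampling with a jointly trained
    text-to-audio-and-video model. `d_dit` is assumed to return per-modality
    velocity estimates suitable for an Euler-style sampler."""
    audio = torch.randn(audio_shape, device=video_latent.device)  # start from noise
    ts = torch.linspace(1.0, 0.0, steps + 1)
    for i in range(steps):
        t, t_next = ts[i].item(), ts[i + 1].item()
        # Pin the video stream to the given video, mixed with noise at the
        # current level so it resembles the model's training-time input.
        noisy_video = (1 - t) * video_latent + t * torch.randn_like(video_latent)
        _v_vel, a_vel = d_dit(noisy_video, audio, text_emb, t)
        # Euler update on the audio latent only; the video itself is not re-generated.
        audio = audio + (t_next - t) * a_vel
    return audio

# Dummy model matching the assumed interface, just to show the call pattern.
def dummy_d_dit(video, audio, text, t):
    return torch.zeros_like(video), torch.zeros_like(audio)

video_latent = torch.randn(1, 256, 512)  # encoded input video (shape assumed)
text_emb = torch.randn(1, 77, 512)       # text conditioning (shape assumed)
audio = zero_shot_video_to_audio(dummy_d_dit, video_latent, text_emb, (1, 128, 512))
print(audio.shape)
```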
From a multi-disciplinary standpoint, SyncFlow draws on multimedia information systems, animation, and augmented and virtual reality. By enabling the simultaneous generation of audio and video, it improves the user experience across a range of multimedia applications and bridges the gap between text-based content and immersive multimedia experiences, opening up new possibilities for interactive storytelling, virtual simulations, and entertainment platforms.
In conclusion, SyncFlow marks a significant advance in multimedia information systems by introducing a novel architecture for generating synchronized audio and video from text. Its high-quality outputs, efficient multi-stage training, and strong zero-shot capabilities make it a promising tool for a wide range of applications in multimedia content creation and consumption.