Visually-Guided Audio Spatialization in Video with Geometry-Aware Multi-task Learning
Article written by: Garg, Rishabh; Gao, Ruohan; Grauman, Kristen
Abstract: Binaural audio provides human listeners with an immersive spatial sound experience, but most existing videos lack binaural audio recordings. We propose an audio spatialization method that draws on visual information in videos to convert their monaural (single-channel) audio to binaural audio. Whereas existing approaches leverage visual features extracted directly from video frames, our approach explicitly disentangles the geometric cues present in the visual stream to guide the learning process. In particular, we develop a multi-task framework that learns geometry-aware features for binaural audio generation by accounting for the underlying room impulse response, the visual stream's coherence with the positions of the sound source(s), and the consistency in geometry of the sounding objects over time. Furthermore, we introduce two new large video datasets: one with realistic binaural audio simulated for real-world scanned environments, and the other with pseudo-binaural audio obtained from ambisonic sounds in YouTube \(360^{\circ}\) videos. On three datasets, we demonstrate the efficacy of our method, which achieves state-of-the-art results.
Language: English
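To make the described setup more concrete, below is a minimal illustrative sketch (not the authors' implementation) of how a mono-to-binaural network conditioned on visual features, with auxiliary heads loosely mirroring the three geometry-aware tasks named in the abstract (room impulse response, source-position coherence, and temporal geometry consistency), might be wired. The PyTorch framing, module names, layer sizes, and head outputs are all assumptions for illustration only.

```python
# Hypothetical sketch, not the paper's code: a visually conditioned
# mono-to-binaural network with auxiliary multi-task heads.
import torch
import torch.nn as nn

class MonoToBinaural(nn.Module):
    def __init__(self, visual_dim=512, feat_dim=128):
        super().__init__()
        # Encoder over the mono spectrogram (real + imaginary channels).
        self.audio_enc = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Project pooled visual features (e.g. from a frozen image CNN)
        # so they can condition the audio decoder.
        self.visual_proj = nn.Linear(visual_dim, feat_dim)
        # Decoder predicts a complex mask for the left-right difference signal.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1),
        )
        # Auxiliary heads standing in for the geometry-aware multi-task objectives.
        self.rir_head = nn.Linear(feat_dim, 256)              # coarse RIR embedding
        self.position_head = nn.Linear(feat_dim, 2)           # on-screen source location
        self.consistency_head = nn.Linear(feat_dim, feat_dim) # temporal geometry code

    def forward(self, mono_spec, visual_feat):
        a = self.audio_enc(mono_spec)                  # (B, feat_dim, H', W')
        v = self.visual_proj(visual_feat)              # (B, feat_dim)
        v = v[:, :, None, None].expand(-1, -1, a.size(2), a.size(3))
        fused = torch.cat([a, v], dim=1)
        diff_mask = self.decoder(fused)                # predicted L-R difference mask
        pooled = a.mean(dim=(2, 3))
        return (diff_mask,
                self.rir_head(pooled),
                self.position_head(pooled),
                self.consistency_head(pooled))

# Example with a 2-channel (real/imag) mono spectrogram and one visual vector per clip.
model = MonoToBinaural()
spec = torch.randn(4, 2, 256, 64)
vis = torch.randn(4, 512)
mask, rir, pos, geom = model(spec, vis)
```

In such a setup, the difference-mask output would be trained against the ground-truth binaural channels, while the auxiliary heads would receive separate supervision (or self-supervision) corresponding to the geometric cues; the exact losses and supervision signals used in the paper are not specified here.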