
Detailed record

Automatic Generation of 3D Scene Animation Based on Dynamic Knowledge Graphs and Contextual Encoding

Article written by: Li, Shuai; Guo, Yuting; Zhang, Xinyu; Hao, Aimin; Qin, Hong; Song, Wenfeng

Abstract: Although novel 3D animation techniques could be boosted by a large variety of deep learning methods, flexible automatic 3D applications (involving animated figures such as humans and low-life animals) are still rarely studied in 3D computer vision. This is due to the lack of arbitrary 3D data acquisition environments, especially those involving human-populated scenes. Given a single image, 3D animation aided by contextual inference is still plagued by limited reconstruction clues in the absence of prior knowledge about the identified figures/objects and/or their possible relationships w.r.t. the environment. To alleviate this difficulty in time-varying 3D animation, we devise a dynamic scene creation framework built on a dynamic knowledge graph (DKG). The DKG encodes both temporal and spatial contextual clues to enable and facilitate human interactions with the affordance environment. Furthermore, we construct the DKG-driven variational auto-encoder (DVAE) upon animation kinematics knowledge conveyed by meta-motion sequences, which are disentangled from videos of prior scenes. It is then possible to use the DKG to induce animations in particular scenes; thus, we can automatically generate physically plausible 3D animations that afford vivid interactions among humans and low-life animals in the environment. Extensive experimental results and comprehensive evaluations confirm the representation and modeling power of our DKGs towards new animation production in 3D graphics and vision applications.

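The record only provides the abstract, which describes a scene-graph context feeding a conditional variational auto-encoder over motion sequences. As an illustrative aid only, the minimal Python/PyTorch sketch below shows what such a graph-conditioned motion VAE could look like in outline; the class names (SceneGraphEncoder, MotionVAE), tensor shapes, and pooling choice are assumptions for the example and are not the authors' published implementation.

```python
# Illustrative sketch only: the paper's code is not given in this record.
# SceneGraphEncoder, MotionVAE, and all dimensions are assumed for the example.
import torch
import torch.nn as nn


class SceneGraphEncoder(nn.Module):
    """Pools node features of a (dynamic) scene graph into one context vector."""

    def __init__(self, node_dim: int, ctx_dim: int):
        super().__init__()
        self.proj = nn.Linear(node_dim, ctx_dim)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_dim) -> (ctx_dim,) via mean pooling
        return self.proj(node_feats).mean(dim=0)


class MotionVAE(nn.Module):
    """VAE over a flattened motion frame, conditioned on the scene-graph
    context (a stand-in for the abstract's DKG-driven VAE idea)."""

    def __init__(self, motion_dim: int, ctx_dim: int, latent_dim: int = 32):
        super().__init__()
        self.enc = nn.Linear(motion_dim + ctx_dim, 2 * latent_dim)
        self.dec = nn.Linear(latent_dim + ctx_dim, motion_dim)

    def forward(self, motion: torch.Tensor, ctx: torch.Tensor):
        h = self.enc(torch.cat([motion, ctx], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(torch.cat([z, ctx], dim=-1))
        return recon, mu, logvar


if __name__ == "__main__":
    graph_enc = SceneGraphEncoder(node_dim=16, ctx_dim=8)
    vae = MotionVAE(motion_dim=60, ctx_dim=8)
    ctx = graph_enc(torch.randn(5, 16))            # 5 scene-graph nodes
    recon, mu, logvar = vae(torch.randn(60), ctx)  # one flattened motion frame
    print(recon.shape)  # torch.Size([60])
```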

Language: English
Subject: Computer Science

Keywords:
3D scene animation
Dynamic knowledge graphs
Contextual encoding
Automatic animation generation


Contents