Animation is a challenging and time-consuming process in which animators must manipulate hundreds of controls over space and time to create compelling motions. Recent advances in motion diffusion models have shown impressive results for general motion generation and hold the potential to reduce the number of controls animators must manipulate to achieve high-quality results. However, these models are limited by their inability to match sparse constraints precisely, preventing the frame-level joint control required by artists. Additionally, recent models are trained for specific characters, preventing reuse, and are incompatible with characters for which only small datasets are available. To tackle these shortcomings, we propose a novel factorization of motion into a character-agnostic Bézier Motion Model (BMM), which can be trained on a large motion dataset, followed by a character-specific posing model, trainable on a much smaller pose dataset, that enables reuse across many characters. BMM meets sparse joint-level constraints accurately by working in a reduced space of Bézier curves that better aligns the conditioning signal with the model's prediction space. Additionally, the Bézier curves offer animators an intuitive interface compatible with existing authoring software. Through quantitative and qualitative comparisons, we show the effectiveness of our factorization and parametric subspace, enabling user control with higher fidelity.
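To make the Bézier subspace idea concrete, the sketch below evaluates a cubic Bézier curve for one joint channel via the de Casteljau algorithm. This is purely illustrative and not the paper's implementation: the control-point layout, channel dimensionality, and function names here are assumptions for the example.

```python
import numpy as np

def bezier_eval(control_points: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Evaluate a cubic Bezier curve with de Casteljau's algorithm.

    control_points: (4, d) array -- four control points for one joint channel
                    (d = number of degrees of freedom, e.g. 3 for translation).
    t: (n,) array of curve parameters in [0, 1] (e.g. normalized frame times).
    Returns: (n, d) array of sampled joint values along the curve.
    """
    n, d = len(t), control_points.shape[-1]
    # One copy of the control polygon per sample time.
    p = np.broadcast_to(control_points, (n, 4, d)).copy()
    t = t[:, None, None]
    # Three linear-interpolation reduction steps collapse a cubic to a point.
    for _ in range(3):
        p = (1 - t) * p[:, :-1] + t * p[:, 1:]
    return p[:, 0]

# Hypothetical joint trajectory: 2-DOF channel sampled at 5 frames.
cp = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
samples = bezier_eval(cp, np.linspace(0.0, 1.0, 5))
```

Because the curve interpolates its endpoints, pinning a control point pins the motion exactly at that frame, which is one intuition for why a Bézier parameterization makes sparse keyframe constraints easy to satisfy.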
Publication Link: studios.disney...