filename : Ser24a.pdf
entry : inproceedings
conference : ACM/Eurographics Symposium on Computer Animation 2024
pages : 1-11
year : 2024
month : August
title : VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters
subtitle :
author : Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, and Moritz Bächer
booktitle : Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation
ISSN/ISBN : 1467-8659
editor :
publisher : Eurographics Association
publ.place : Montreal, Quebec, Canada
volume : 43
issue : 8
language : English
keywords : Learning from demonstrations; Learning latent representations; Reinforcement learning; Physical simulation; Animation; Control methods
abstract : Recent progress in physics-based character control has made it possible to learn policies from unstructured motion data. However, it remains challenging to train a single control policy that works with diverse and unseen motions, and can be deployed to real-world physical robots. In this paper, we propose a two-stage technique that enables the control of a character with a full-body kinematic motion reference, with a focus on imitation accuracy. In a first stage, we extract a latent space encoding by training a variational autoencoder, taking short windows of motion from unstructured data as input. We then use the embedding from the time-varying latent code to train a conditional policy in a second stage, providing a mapping from kinematic input to dynamics-aware output. By keeping the two stages separate, we benefit from self-supervised methods to get better latent codes and explicit imitation rewards to avoid mode collapse. We demonstrate the efficiency and robustness of our method in simulation, with unseen user-specified motions, and on a bipedal robot, where we bring dynamic motions to the real world.