filename : Ser24c.pdf
entry : inproceedings
conference : SIGGRAPH Asia 2024, Tokyo, Japan, 3-6 December, 2024
pages :
year : 2024
month : December
title : Robot Motion Diffusion Model: Motion Generation for Robotic Characters
subtitle :
author : Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, and Moritz Bächer
booktitle : ACM Transactions on Graphics (TOG) - SIGGRAPH Asia 2024 Conference Proceedings
ISSN/ISBN : 0730-0301
editor :
publisher : Association for Computing Machinery
publ.place : New York, NY, USA
volume :
issue :
language : English
keywords : physics-based characters, robotics, motion synthesis, motion control, reinforcement learning, animation
abstract : Recent advancements in generative motion models have achieved remarkable results, enabling the synthesis of lifelike human motions from textual descriptions. These kinematic approaches, while visually appealing, often produce motions that fail to adhere to physical constraints, resulting in artifacts that impede real-world deployment. To address this issue, we introduce a novel method that integrates kinematic generative models with physics-based character control. Our approach begins by training a reward surrogate to predict the performance of the downstream non-differentiable control task, offering an efficient and differentiable loss function. This reward model is then employed to fine-tune a baseline generative model, ensuring that the generated motions are not only diverse but also physically plausible for real-world scenarios. The outcome of this process is the Robot Motion Diffusion Model (RobotMDM), a text-conditioned kinematic diffusion model that interfaces with a reinforcement learning-based tracking controller. We demonstrate the effectiveness of this method on a challenging humanoid robot, confirming its practical utility and robustness in dynamic environments.
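The abstract outlines a concrete training recipe: learn a differentiable reward surrogate for the non-differentiable tracking task, then use it as an auxiliary loss while fine-tuning the kinematic diffusion model. Below is a minimal PyTorch sketch of that recipe. It is not the paper's implementation; the network sizes, the toy linear noise schedule, the loss weighting, and names such as `finetune_step` and `reward_weight` are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RewardSurrogate(nn.Module):
    """Differentiable stand-in for the non-differentiable tracking reward (assumed architecture)."""
    def __init__(self, motion_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar predicted reward per motion clip
        )

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (batch, frames, dof) -> flatten time and joints
        return self.net(motion.flatten(start_dim=1))

def add_noise(x0, noise, t, num_steps=1000):
    # Toy linear schedule for illustration; real diffusion models use a tuned scheduler.
    alpha = (1.0 - t.float() / num_steps).view(-1, 1, 1)
    return alpha.sqrt() * x0 + (1 - alpha).sqrt() * noise

def finetune_step(denoiser, surrogate, motions, reward_weight=0.1):
    """Standard denoising loss plus the negated surrogate reward on the model's
    predicted clean motion (surrogate parameters assumed frozen)."""
    t = torch.randint(0, 1000, (motions.size(0),), device=motions.device)
    noise = torch.randn_like(motions)
    x0_pred = denoiser(add_noise(motions, noise, t), t)
    denoise_loss = nn.functional.mse_loss(x0_pred, motions)
    reward_loss = -surrogate(x0_pred).mean()  # push toward trackable motions
    return denoise_loss + reward_weight * reward_loss

# Toy usage with a stand-in denoiser that ignores the timestep.
if __name__ == "__main__":
    frames, dof = 60, 34
    denoiser_core = nn.Linear(frames * dof, frames * dof)
    denoiser = lambda x, t: denoiser_core(x.flatten(1)).view(-1, frames, dof)
    surrogate = RewardSurrogate(frames * dof)
    for p in surrogate.parameters():
        p.requires_grad_(False)  # reward model stays fixed during fine-tuning
    motions = torch.randn(8, frames, dof)
    loss = finetune_step(denoiser, surrogate, motions)
    loss.backward()
    print(f"fine-tuning loss: {loss.item():.4f}")
```

The key design point the abstract emphasizes is that the surrogate makes the downstream controller's performance differentiable, so gradients from the (frozen) reward model can flow into the diffusion model's weights alongside the usual denoising objective.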