This seminar covers advanced topics in digital humans with a focus on the latest research results. Topics include estimating human pose and motion from images, human motion synthesis, learning-based human avatar creation, learning neural implicit representations for humans, modeling and animation, artificial intelligence for digital characters, and related areas. The seminar is based on a curated selection of research papers.
Every participant has to present one of the papers from the list introduced in the first session. In addition, all participants are required to read the papers presented in class beforehand and to take part in the discussion during the seminar. An assistant provides support with preparing the slides and with any technical questions that arise.
The goal is to get an overview of current research topics in the field of digital humans and to improve presentation and critical analysis skills.
Attendance is mandatory to pass the seminar. If a student cannot attend a session, a reason (e.g., a medical certificate) has to be provided before the session and accepted by one of the organizers. Missing more than three seminar sessions results in failing the class. The presentation dates cannot be moved.
The presentation of the selected paper contributes 80% to the final grade. In addition, each student is required to lead the discussion of another paper, which accounts for the remaining 20%.
| | |
| --- | --- |
| Number | 263-5702-00L |
| Lecturers | M. Gross, B. Solenthaler, S. Tang, R. Wampfler |
| Location | HG E 22 |
| Time | Thursdays, 16:15-18:00 |
| Date | Paper | Presenter | Supervisor | Discussion |
| --- | --- | --- | --- | --- |
| 2.10.25 | UniPhys: Unified Planner and Controller with Diffusion for Flexible Physics-Based Character Control | Beo Laumanns | Yan Wu | Matan Davidi | 
| 2.10.25 | Joker: Conditional 3D Head Synthesis with Extreme Facial Expressions | Deniz Kaan Isik | Malte Prinzler | |
| 9.10.25 | RoHM: Robust Human Motion Reconstruction via Diffusion | Tom Schott | Siwei Zhang | Stefan Bjelajac | 
| 9.10.25 | DIMOS: Synthesizing Diverse Human Motions in 3D Indoor Scenes | Matan Davidi | Kaifeng Zhao | Joël Vögtlin | 
| 16.10.25 | 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting | Xinyuan Li | Zhiyin Qian | Jiayi Sun | 
| 16.10.25 | Identity Preserving 3D Head Stylization with Multiview Score Distillation | Madeleine Sandri | Bahri Bilecen | Beo Laumanns | 
| 23.10.25 | DartControl: A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control | Jiayi Sun | Kaifeng Zhao | Xinyuan Li | 
| 23.10.25 | VoluMe: Authentic 3D Video Calls from Live Gaussian Splat Prediction | Egor Gubenko | Jackson Stanhope | Tom Schott | 
| 30.10.25 | Rendering with Style: Combining Traditional and Neural Approaches for High-Quality Face Rendering | Simon Peter | Yingyan Xu | Mahan Ahmadvand | 
| 6.11.25 | Large-Scale 3D Infant Face Model | Joël Vögtlin | Till Schnabel | Egor Gubenko | 
| 13.11.25 | EmoSpaceTime: Decoupling Emotion and Content through Contrastive Learning for Expressive 3D Speech Animation | Stefan Bjelajac | Philine Witzig | Kevin Barbieri | 
| 20.11.25 | DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance | Maxim Litvak | Xinya Ji | Deniz Kaan Isik, Madeleine Sandri | 
| 27.11.25 | Text2Human: Text-Driven Controllable Human Image Generation | Mahan Ahmadvand | Lucas Relic | Maxim Litvak |