Computer Graphics Laboratory ETH Zurich


Accurate Markerless Jaw Tracking for Facial Performance Capture

G. Zoss, T. Beeler, M. Gross, D. Bradley

Proceedings of ACM SIGGRAPH (Los Angeles, USA, July 28 - August 1, 2019), ACM Transactions on Graphics, vol. 38, no. 4, pp. 50:1-50:8

Abstract

We present the first method to accurately track the invisible jaw based solely on the visible skin surface, without the need for any markers or augmentation of the actor. As such, the method can readily be integrated with off-the-shelf facial performance capture systems. The core idea is to learn a non-linear mapping from the skin deformation to the underlying jaw motion on a dataset where ground-truth jaw poses have been acquired, and then to retarget the mapping to new subjects. Solving for the jaw pose plays a central role in visual effects pipelines, since accurate jaw motion is required when retargeting to fantasy characters and for physical simulation. Currently, this task is performed mostly manually to achieve the desired level of accuracy, and the presented method has the potential to fully automate this labour-intensive and error-prone process.
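As a rough illustration of the core idea only (not the authors' implementation), the sketch below fits a generic non-linear regressor from per-frame skin-deformation features to a rigid jaw pose, mirroring the "learn a mapping on data with ground-truth jaw poses, then predict for new frames" setup described in the abstract. The feature and pose parameterisations (`skin_features`, `jaw_poses`, 6-DoF pose) are hypothetical placeholders, and the random data merely stands in for captured training material.

```python
# Minimal sketch of a non-linear skin-to-jaw mapping (illustrative only;
# the actual paper's features, model, and retargeting step are not shown).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data for one subject with ground-truth jaw poses:
#   skin_features: per-frame skin deformation descriptors, shape (n_frames, n_features)
#   jaw_poses:     per-frame rigid jaw pose, e.g. 3 rotation + 3 translation values
rng = np.random.default_rng(0)
skin_features = rng.normal(size=(500, 120))  # stand-in for real capture data
jaw_poses = rng.normal(size=(500, 6))

# Non-linear regression from skin deformation to jaw pose.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=2000, random_state=0),
)
model.fit(skin_features, jaw_poses)

# At runtime, predict the (invisible) jaw pose from the tracked skin surface.
new_frame_features = rng.normal(size=(1, 120))
predicted_jaw_pose = model.predict(new_frame_features)
print(predicted_jaw_pose.shape)  # (1, 6)
```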


Downloads

Download Paper
[PDF]
Download Video
[Video]
Download BibTeX
[BibTeX]