Computer Graphics Laboratory ETH Zurich


Coupled 3D Reconstruction of Sparse Facial Hair and Skin

T. Beeler, B. Bickel, G. Noris, S. Marschner, P. Beardsley, R. Sumner, M. Gross

Proceedings of ACM SIGGRAPH (Los Angeles, USA, August 5-9, 2012), ACM Transactions on Graphics, vol. 31, no. 4, pp. 117:1-117:10


Although facial hair plays an important role in individual expression, facial-hair reconstruction is not addressed by current face-capture systems. Our research addresses this limitation with an algorithm that treats hair and skin surface capture together in a coupled fashion so that a high-quality representation of hair fibers as well as the underlying skin surface can be reconstructed. We propose a passive, camera-based system that is robust against arbitrary motion since all data is acquired within the time period of a single exposure. Our reconstruction algorithm detects and traces hairs in the captured images and reconstructs them in 3D using a multi-view stereo approach. Our coupled skin-reconstruction algorithm uses information about the detected hairs to deliver a skin surface that lies underneath all hairs irrespective of occlusions. In dense regions like eyebrows, we employ a hair-synthesis method to create hair fibers that plausibly match the image data. We demonstrate our scanning system on a number of individuals and show that it can successfully reconstruct a variety of facial-hair styles together with the underlying skin surface.


@article{beeler2012coupled,
  author = {Beeler, Thabo and Bickel, Bernd and Noris, Gioacchino and Marschner, Steve and Beardsley, Paul and Sumner, Robert W. and Gross, Markus},
  title = {Coupled 3D Reconstruction of Sparse Facial Hair and Skin},
  journal = {ACM Trans. Graph.},
  issue_date = {July 2012},
  volume = {31},
  number = {4},
  month = {August},
  year = {2012},
  pages = {117:1--117:10},
  articleno = {117},
  numpages = {10},
  doi = {10.1145/2185520.2185613},
  acmid = {1964970},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {hair reconstruction, multi-view stereo, facial hair},
}

Sample Dataset

We provide a sample dataset for research purposes.

What the archive contains
The archive contains the captured image data, the calibrated cameras, and the reconstructed facial geometry (episurface) and facial hair. The facial hair is stored both in a custom format (see below) and explicitly as mesh geometry (ply & obj).
The file structure is:
  • cameras
    • ...
  • data
    • cam1
      • take_001.cr2
      • take_001.jpg
    • cam2
      • ...
    • ...
  • INFO.txt
  • citation.bibtex
  • result
    • face.ply
    • hairs.ply
    • hairs.obj
    • hairTexture.png
    • hairs.hcn

The file formats

Images

The images are provided in both jpg and cr2 (Canon RAW) format. These are the original images used by our algorithm.

Facial Geometry (Episurface)

The format of the reconstructed facial episurface is ply. The meshes can be read and converted with Meshlab.
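Since the episurface is a standard PLY mesh, it can also be loaded programmatically. The following is a minimal sketch of extracting vertex positions from an ASCII PLY file; the function name and the simplified header handling are illustrative, and a robust loader should honor the full property list declared in the header (or use an existing PLY library).

```cpp
#include <array>
#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Reads vertex positions from an ASCII PLY file. Assumes x, y, z are the
// first three per-vertex properties; any further properties on a vertex
// line are simply ignored.
std::vector<std::array<float, 3>> readPlyVertices(const std::string &path)
{
    std::ifstream in(path);
    std::string line;
    std::size_t nVertices = 0;

    // Parse the header up to "end_header", capturing the vertex count
    // from the "element vertex N" declaration.
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tok;
        ls >> tok;
        if (tok == "element") {
            std::string what;
            ls >> what;
            if (what == "vertex") ls >> nVertices;
        } else if (tok == "end_header") {
            break;
        }
    }

    // Read one vertex per line.
    std::vector<std::array<float, 3>> verts;
    verts.reserve(nVertices);
    for (std::size_t i = 0; i < nVertices && std::getline(in, line); ++i) {
        std::istringstream ls(line);
        std::array<float, 3> v{};
        ls >> v[0] >> v[1] >> v[2];
        verts.push_back(v);
    }
    return verts;
}
```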

Facial Hair
The facial hair is provided both explicitly as mesh data with texture and implicitly as polylines in our own .hcn format.
The meshes are provided in both ply and obj format and can be read and converted with Meshlab. The texture is provided as .png. For some applications, the Y-axis of the texture must be flipped for correct display.
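The Y-flip amounts to reversing the row order of the decoded pixel buffer (or, equivalently, replacing each texture coordinate v with 1 - v). A minimal sketch of the in-memory row flip, assuming a tightly packed interleaved pixel buffer:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Reverses the row order of a tightly packed pixel buffer in place.
// `width` and `height` are in pixels; `channels` is the number of color
// components per pixel (e.g. 4 for RGBA).
void flipImageY(std::vector<std::uint8_t> &pixels, int width, int height, int channels)
{
    const int rowBytes = width * channels;
    for (int y = 0; y < height / 2; ++y) {
        // Swap row y with its mirror row from the bottom.
        std::swap_ranges(pixels.begin() + y * rowBytes,
                         pixels.begin() + (y + 1) * rowBytes,
                         pixels.begin() + (height - 1 - y) * rowBytes);
    }
}
```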

Our own .hcn format is a binary format. The following two functions demonstrate how to read the file (operator>> here denotes raw binary extraction through a stream class with overloaded operators; std::ifstream's operator>> performs formatted text input and cannot be used directly):

bool HairCollection::deSerialize(ifstream &stream)
{
    stream >> m_lastID;
    int nhairs;
    stream >> nhairs;
    for( int i=0; i<nhairs; i++ ) {
        Hair h;
        if( !h.deSerialize(stream) ) return false;
    }
    return true;
}

bool Hair::deSerialize(ifstream &stream)
{
    uint8 version;
    stream >> version;
    stream >> m_id;
    stream >> m_valid;
    int len; // number of segments in this hair
    stream >> len;
    Vector3 startPoint;
    Vector3 endPoint;
    for( int i=0; i<len; i++ ) {
        float x,y,z;
        float weight;
        float thickness;
        Color c;
        stream >> x;
        stream >> y;
        stream >> z;
        startPoint = Vector3(x,y,z);
        stream >> x;
        stream >> y;
        stream >> z;
        endPoint = Vector3(x,y,z);
        stream >> weight;
        stream >> thickness;
        stream >> c.x();
        stream >> c.y();
        stream >> c.z();
        appendSegment(startPoint, endPoint, weight, c, thickness);
    }
    return true;
}

The camera format is our own. It consists of two lines: a header and the actual parameters. The header describes the content of the parameters; possible values are:
  • Name of the camera
  • Focal length
  • Principal point
  • Image size
  • Distortion parameters as described by Bouguet
  • Extrinsic translation
  • Extrinsic rotation (given in Rodrigues notation)
  • Near and far planes of the working volume
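Rodrigues notation encodes a rotation as a 3-vector whose direction is the rotation axis and whose magnitude is the angle in radians. A minimal sketch of expanding such a vector into a 3x3 rotation matrix via the Rodrigues formula R = I + sin(t)K + (1 - cos(t))K^2, where K is the cross-product matrix of the unit axis (the function name and matrix type are illustrative):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Converts a Rodrigues vector (rx, ry, rz) into a 3x3 rotation matrix.
Mat3 rodriguesToMatrix(double rx, double ry, double rz)
{
    const double theta = std::sqrt(rx * rx + ry * ry + rz * rz);
    if (theta < 1e-12)  // near-zero rotation: identity
        return {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};

    const double kx = rx / theta, ky = ry / theta, kz = rz / theta;
    const double c = std::cos(theta), s = std::sin(theta), v = 1.0 - c;

    // Direct expansion of I + s*K + v*K^2 for the unit axis k.
    return {{{c + kx * kx * v, kx * ky * v - kz * s, kx * kz * v + ky * s},
             {ky * kx * v + kz * s, c + ky * ky * v, ky * kz * v - kx * s},
             {kz * kx * v - ky * s, kz * ky * v + kx * s, c + kz * kz * v}}};
}
```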

Obtaining the data

The human face is very personal, and we have therefore decided not to publish the data online. On the other hand, high-quality reconstruction data is very valuable to many researchers. As a compromise, we offer to send the data directly to approved researchers. To request the data, please send an email to dbeeler at inf dot ethz dot ch stating
  1. your name, title or position, and institution or affiliation
  2. your intended use of the images and/or reconstructed geometry
  3. a statement saying that you accept the following terms of licensing (please copy the licensing text into your email):

    The rights to copy, distribute, and use the 3D computer models and image data (henceforth called "data") you are being given access to are under the control of Markus Gross, director of the Computer Graphics Lab, ETH Zurich. You are hereby given permission to copy this data in electronic or hardcopy form for your own scientific use and to distribute it for scientific use to colleagues within your research group. Inclusion of rendered images or video made from this data in a scholarly publication (printed or electronic) is also permitted. In this case, credit must be given to the publication: *Coupled 3D Reconstruction of Sparse Facial Hair and Skin*. However, the data may not be included in the electronic version of a publication, nor placed on the Internet. These restrictions apply to any representations (other than images or video) derived from the data, including but not limited to simplifications, remeshings, and the fitting of smooth surfaces. The making of physical replicas of this data is also prohibited, and the data may not be distributed to students, not even in connection with a class. For any other use, including distribution outside your research group, written permission is required from Markus Gross. Any commercial use also requires written permission from Markus Gross. Commercial use includes but is not limited to sale of the data, derivatives, replicas, images, or video, inclusion in a product for sale, or inclusion in advertisements (printed or electronic), on commercially-oriented web sites, or in trade shows.

Inappropriate use

Please remember that faces are of a very personal nature. Keep your renderings and other uses of the data in good taste. Don't put the faces in degrading or tasteless contexts, and don't simulate unpleasant things happening to them (like breaking, exploding, melting, etc.). Choose another model for these sorts of experiments. Also, exercise reasonable caution to prevent the data from wandering beyond your research group.

Commercial use

Please contact dbeeler at inf dot ethz dot ch if you are interested in using the data and/or the system/algorithms commercially.


Download Paper
Download Video