A Robust Multilinear Model Learning Framework for 3D Faces

CVPR 2016

Timo Bolkart, Saarland University, MMCI
Stefanie Wuhrer, INRIA Grenoble Rhône-Alpes

Abstract
Multilinear models are widely used to represent the statistical variations of 3D human faces as they decouple shape changes due to identity and expression. Existing methods to learn a multilinear face model degrade if not every person is captured in every expression, if face scans are noisy or partially occluded, if expressions are erroneously labeled, or if the vertex correspondence is inaccurate. These limitations impose requirements on the training data that disqualify large amounts of available 3D face data from being usable to learn a multilinear model. To overcome this, we introduce the first framework to robustly learn a multilinear model from 3D face databases with missing data, corrupt data, wrong semantic correspondence, and inaccurate vertex correspondence. To achieve this robustness to erroneous training data, our framework jointly learns a multilinear model and fixes the data. We evaluate our framework on two publicly available 3D face databases, and show that our framework achieves a data completion accuracy that is comparable to state-of-the-art tensor completion methods. Our method reconstructs corrupt data more accurately than state-of-the-art methods, and improves the quality of the learned model significantly for erroneously labeled expressions.
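As background, the following is a minimal sketch of how a multilinear face model decouples identity and expression: a face shape is generated by contracting a learned model tensor with an identity coefficient vector and an expression coefficient vector. All dimensions, variable names, and the random placeholder data are illustrative assumptions, not the parameters of the model described in the paper.

    # Minimal sketch of a multilinear (Tucker-style) face model:
    # f = f_mean + M x2 w_id x3 w_exp, where M is the learned model tensor.
    # Dimensions and data below are placeholders, not the released model.
    import numpy as np

    num_vertices = 5000          # registered template resolution (assumed)
    d_id, d_exp = 30, 10         # truncated identity / expression dimensions (assumed)

    f_mean = np.zeros(3 * num_vertices)                 # mean face, stacked (x, y, z)
    M = np.random.randn(3 * num_vertices, d_id, d_exp)  # model tensor (random placeholder)

    def reconstruct(w_id, w_exp):
        """Contract the model tensor with identity and expression coefficients."""
        shape = np.tensordot(M, w_id, axes=([1], [0]))   # identity mode -> (3V, d_exp)
        shape = shape @ w_exp                            # expression mode -> (3V,)
        return f_mean + shape

    face = reconstruct(np.random.randn(d_id), np.random.randn(d_exp))
    print(face.shape)  # (15000,)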

Files
Paper: PDF (6 MB)
Video: AVI (10 MB)

Downloads
To facilitate the use of our framework on new databases, we make our code available. Further, we publish a multilinear model for non-commercial research purposes, learned with our robust multilinear model learning framework (RMM) from the combination of all 100 identities in 7 expressions of the BU-3DFE database1 and all 105 identities in 23 expressions of the Bosphorus database2. Because the two databases cover different expression sets and the Bosphorus database lacks some scans, a large portion of the joint database is missing (2205 of 4715 shapes). RMM successfully learns a model from these data by estimating the missing shapes. We ask that you respect the conditions of use, which are detailed in the readme.pdf file provided with the model.
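For concreteness, the stated counts correspond to a joint data tensor of (100 + 105) identities in 23 expressions. The small calculation below reproduces them; the split of the 2205 missing shapes between the two causes is derived under the assumption that the 7 BU-3DFE expressions are contained in the joint expression set, and is not a figure from the paper.

    # Hedged back-of-the-envelope check of the stated counts.
    bu3dfe_ids, bu3dfe_exprs = 100, 7
    bosphorus_ids, joint_exprs = 105, 23   # joint expression set (assumed to contain the 7 BU-3DFE expressions)

    total_shapes = (bu3dfe_ids + bosphorus_ids) * joint_exprs              # 205 * 23 = 4715
    missing_by_expression_set = bu3dfe_ids * (joint_exprs - bu3dfe_exprs)  # 100 * 16 = 1600
    missing_bosphorus_scans = 2205 - missing_by_expression_set             # 605 (derived, not stated)

    print(total_shapes)                                          # 4715 shapes in the joint tensor
    print(missing_by_expression_set + missing_bosphorus_scans)   # 2205 missing shapes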
We provide
  • the multilinear face models from the joint BU-3DFE and Bosphorus databases and an example framework that shows how to use a multilinear face model to reconstruct unregistered face scans here (a fitting sketch follows this list), and
  • the source code of the robust multilinear model learning framework here.
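As a rough illustration of what such an example framework does, the sketch below fits identity and expression coefficients of a multilinear model to an already registered scan by alternating least squares. This is a simplified sketch under stated assumptions, not the released code: the actual framework handles unregistered scans and includes steps (correspondence, regularization) not shown here, and all names and dimensions are hypothetical.

    # Hedged sketch: fit identity/expression coefficients of a multilinear face
    # model to a registered target shape by alternating least squares (ALS).
    # Illustrates the general technique only, not the released framework.
    import numpy as np

    def fit_coefficients(M, f_mean, scan, n_iters=20):
        """M: (3V, d_id, d_exp) model tensor; f_mean, scan: (3V,) stacked vertices."""
        d_id, d_exp = M.shape[1], M.shape[2]
        w_id = np.full(d_id, 1.0 / d_id)      # simple initialization (assumed)
        w_exp = np.full(d_exp, 1.0 / d_exp)
        target = scan - f_mean
        for _ in range(n_iters):
            # Fix w_exp and solve a linear least-squares problem for w_id ...
            A_id = np.tensordot(M, w_exp, axes=([2], [0]))   # (3V, d_id)
            w_id = np.linalg.lstsq(A_id, target, rcond=None)[0]
            # ... then fix w_id and solve for w_exp.
            A_exp = np.tensordot(M, w_id, axes=([1], [0]))   # (3V, d_exp)
            w_exp = np.linalg.lstsq(A_exp, target, rcond=None)[0]
        return w_id, w_exp

    # Toy usage with random placeholder data (dimensions are assumptions):
    V, d_id, d_exp = 100, 5, 3
    M = np.random.randn(3 * V, d_id, d_exp)
    f_mean = np.random.randn(3 * V)
    scan = f_mean + np.tensordot(M, np.random.randn(d_id), axes=([1], [0])) @ np.random.randn(d_exp)
    w_id, w_exp = fit_coefficients(M, f_mean, scan)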

Utilized Face Data
1 This face model was computed using the BU-3DFE face database. We wish to thank Lijun Yin for making this database available and for giving us permission to make the statistical models available for non-commercial research purposes. If you use this statistical model in your publications, please also reference the following work.
  • L. Yin, X. Wei, Y. Sun, J. Wang, M. Rosato
    A 3D Facial Expression Database For Facial Behavior Research
    International Conference on Automatic Face and Gesture Recognition, 2006, pages 211-216
2 This face model was computed using the Bosphorus face database. If you use this statistical model in your publications, please also reference the following work.
  • A. Savran, N. Alyüz, H. Dibeklioğlu, O. Çeliktutan, B. Gökberk, B. Sankur, and L. Akarun
    Bosphorus Database for 3D Face Analysis
    The First COST 2101 Workshop on Biometrics and Identity Management (BIOID), 2008

Acknowledgments
This work has been partially funded by the German Research Foundation (WU 786/1-1, Cluster of Excellence MMCI, Saarbrücken Graduate School of Computer Science).