The realistic generation of virtual doubles of real-world actors has been a focus of computer graphics research for many years. However, several problems remain unsolved: generating character animations with the traditional skeleton-based pipeline is time-consuming, passive performance capture of human actors wearing arbitrary everyday apparel is still challenging, and only a limited number of techniques exist for processing and modifying mesh animations, in contrast to the wealth of skeleton-based techniques.

In this work, we propose algorithmic solutions to each of these problems. First, we present two efficient mesh-based alternatives that simplify the overall character animation process. Although they abandon the concept of a kinematic skeleton, both techniques can be directly integrated into the traditional pipeline and generate animations with realistic body deformations. Thereafter, we present three passive performance capture methods that employ a deformable model as the underlying scene representation. These techniques jointly reconstruct spatio-temporally coherent time-varying geometry, motion, and textural surface appearance of subjects wearing loose everyday apparel. Moreover, the acquired high-quality reconstructions enable us to render realistic 3D videos. Finally, we describe two novel algorithms for processing mesh animations: the first enables the fully-automatic conversion of a mesh animation into a skeleton-based animation, and the second automatically converts a mesh animation into an animation collage, a new artistic style for rendering animations.

The methods described in this book can be regarded as solutions to specific problems or as important building blocks for a larger application. As a whole, they form a powerful system to accurately capture, manipulate, and realistically render real-world human performances, exceeding the capabilities of many related capture techniques. By this means, we are able to correctly capture the motion, the time-varying details, and the texture information of a performing human actor, and either transform them into a fully-rigged character animation that can be directly used by an animator, or use them to realistically display the actor from arbitrary viewpoints.
Cognitive Systems Monographs, Volume 5
Editors: Rüdiger Dillmann · Yoshihiko Nakamura · Stefan Schaal · David Vernon
Edilson de Aguiar
Animation and Performance Capture Using Digitized Models
Rüdiger Dillmann, University of Karlsruhe, Faculty of Informatics, Institute of Anthropomatics, Humanoids and Intelligence Systems Laboratories, Kaiserstr. 12, 76131 Karlsruhe, Germany
Yoshihiko Nakamura, Tokyo University, Faculty of Engineering, Dept. of Mechano-Informatics, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
Stefan Schaal, University of Southern California, Department of Computer Science, Computational Learning & Motor Control Lab., Los Angeles, CA 90089-2905, USA
David Vernon, Khalifa University, Department of Computer Engineering, PO Box 573, Sharjah, United Arab Emirates
Author
Dr.-Ing. Edilson de Aguiar
Carnegie Mellon University
Disney Research Pittsburgh
4615 Forbes Avenue
Pittsburgh, PA 15213, USA
E-mail: [email protected]
ISBN 978-3-642-10315-5
e-ISBN 978-3-642-10316-2
DOI 10.1007/978-3-642-10316-2
Cognitive Systems Monographs ISSN 1867-4925
Library of Congress Control Number: 2009940444
© 2010 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, and reuse of illustrations.