Now that we’ve seen the Kinect’s ability to detect facial expressions, can that data be mapped onto other images to control digital faces? The answer is yes: through Kinect Facial Recognition and Transfer, the depth camera becomes a far more versatile device. With facial tracking and recognition, the Kinect becomes a viable tool for developers, artists, and producers to have virtual avatars respond cleanly to a user’s facial expressions. This video by YouTube user haoli81 showcases this feature and why it could be a game-changer in the virtual industry. In the video, the user manipulates a virtual image of another person, as well as Slimer, using the Kinect and his own face; the virtual avatars accurately imitate the user’s facial movements.
Here is a description by the developers:
“SIGGRAPH 2011 Paper Video: This paper presents a system for performance-based character animation that enables any user to control the facial expressions of a digital avatar in realtime. The user is recorded in a natural environment using a non-intrusive, commercially available 3D sensor. The simplicity of this acquisition device comes at the cost of high noise levels in the acquired data. To effectively map low-quality 2D images and 3D depth maps to realistic facial expressions, we introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization. Formulated as a maximum a posteriori estimation in a reduced parameter space, our method implicitly exploits temporal coherence to stabilize the tracking. We demonstrate that accurate 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware. This makes our system easy to deploy and facilitates a range of new applications, e.g. in digital gameplay or social interactions.”
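The core idea the abstract describes, maximum a posteriori estimation in a reduced parameter space with temporal coherence, can be illustrated with a toy sketch. Here the reduced space is a small set of blendshape weights, the data term fits noisy depth observations, and a simple Gaussian prior pulls the solution toward the previous frame. All names, sizes, and the particular prior are illustrative assumptions, not the paper’s actual formulation:

```python
import numpy as np

# Toy MAP blendshape tracking sketch (assumed simplification):
# fit weights w so the reconstructed face B @ w matches the noisy
# depth observation d, while a Gaussian temporal prior keeps w
# close to the previous frame's weights for stability.

rng = np.random.default_rng(0)
n_vertices, n_blendshapes = 30, 4

B = rng.normal(size=(n_vertices, n_blendshapes))   # blendshape basis (reduced space)
w_true = np.array([0.8, 0.1, 0.0, 0.5])            # ground-truth expression weights
d = B @ w_true + rng.normal(scale=0.05, size=n_vertices)  # noisy "depth" data
w_prev = np.array([0.7, 0.2, 0.1, 0.4])            # last frame's solution
lam = 1.0                                           # prior strength

# MAP with a Gaussian likelihood and Gaussian temporal prior minimizes
#   ||B w - d||^2 + lam * ||w - w_prev||^2
# which has the closed-form solution:
#   w = (B^T B + lam * I)^-1 (B^T d + lam * w_prev)
A = B.T @ B + lam * np.eye(n_blendshapes)
w = np.linalg.solve(A, B.T @ d + lam * w_prev)
```

Increasing `lam` makes the tracking smoother but less responsive; the real system balances analogous terms inside a single optimization over both geometry and texture.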