Web video conferences have long been limited to the classic webcam-to-webcam setup. The Kinect, however, introduces dynamics that make video conferencing more in-depth and personalized. Lining Yao, Anthony DeVincenzi, Ramesh Raskar, and Hiroshi Ishii from the MIT Media Lab sent us this great Kinect project of theirs, which capitalizes on the Kinect's depth camera to add focus and interactive options to the standard video conference. They call it the Kinected Conference.

The team developed software that detects a person's facial movements while they speak, bringing that person into focus and blurring the other people in the conference to highlight the designated speaker. A timer also tracks how long each speaker has been talking. Another feature is the system's ability to read the depth of each object in view and replace a designated object with a specific image. This is handy for presentations, helping users elaborate their points while keeping the conference interesting and visual.
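The speaker-focus effect described above can be approximated with the depth map alone: keep pixels near the talking person's depth sharp and blur everything else. Here is a minimal NumPy sketch of that idea; the function names, the simple box blur standing in for a proper lens blur, and the depth tolerance are all our own assumptions, not the team's actual implementation.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur (a crude stand-in for a real Gaussian/lens blur)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def talking_to_focus(frame, depth, speaker_depth, tolerance=0.2):
    """Keep pixels near the speaker's depth sharp; blur the rest.

    `frame` is a 2-D grayscale image, `depth` a per-pixel depth map in
    metres (as a Kinect-style depth camera would supply), and
    `speaker_depth` the depth of the person currently detected as
    talking. All parameter names here are illustrative assumptions.
    """
    blurred = box_blur(frame)
    in_focus = np.abs(depth - speaker_depth) < tolerance
    return np.where(in_focus, frame, blurred)
```

In the real system the speaker would be found by audio and face tracking; this sketch only shows the compositing step once that depth is known.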
Here is a brief description of the project by the developers:
“What can we do if the screen in videoconference rooms can turn into an interactive display? With Kinect camera and sound sensors, we explore how expanding a system’s understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location. Four features are implemented, which are “Talking to Focus”, “Freezing Former Frames”, “Privacy Zone” and “Spacial Augmenting Reality”.”
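The depth-keyed object replacement the post and the developers mention amounts to masking out a depth band and compositing another image into it. A short sketch of that idea, again with assumed names and NumPy arrays standing in for the real Kinect video pipeline:

```python
import numpy as np

def replace_by_depth(frame, depth, overlay, near, far):
    """Swap every pixel whose depth falls in [near, far] for the
    corresponding pixel of `overlay`.

    A rough illustration of depth-keyed image replacement; the
    parameter names and the hard depth band are our assumptions,
    not the Kinected Conference implementation.
    """
    mask = (depth >= near) & (depth <= far)
    return np.where(mask, overlay, frame)
```

The same masking idea would also cover a "privacy zone": instead of an overlay image, the masked band could be filled with a blurred or blanked version of the frame.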
For more information about the Kinected Conference, visit the project’s website.