Content and relevance will never lose importance, even in a gesture-based Kinect world. The Kinect Back-Space relies on the content’s relevance to the selected topic to create a NUI for learning. This video by Raschin Fatemi demonstrates the concept of making gestures and the setting depend on the value of the content. In the video, users are surrounded by links and keywords they can reach out and grab in order to browse Wikipedia. Placement reflects relevance: the less significant a link is to the current article, the farther away it sits and the harder it is for the user to grab.

Here is a description by the developers:

In Back-Space, participants stand or move in front of a large projection screen. Outgoing links from a Wikipedia article are projected in 3D, and each link is placed at a distance from the user according to its semantic relevance to the main article. The more relevant a link is, the closer it sits to the user. The participant’s body is captured using a Kinect camera, and they can interact with and grab links, using their body to navigate Wikipedia. The effort needed to reach a link is proportional to the relevance of the link. The content of each article they visit appears beneath the link cloud as a block of text. By navigating inside Back-Space, a city-like structure emerges from the history of navigation, and the user can bend down to look through the city they’ve just created.
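To make the placement idea concrete, here is a minimal sketch of how relevance scores might be mapped to 3D positions so that more relevant links land closer to the user and less relevant ones require more effort to reach. This is not the project's actual code; the function, parameters, and example links are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: place outgoing links in 3D so that distance from the
# user is inversely related to semantic relevance (more relevant = closer).
import math
import random

def place_links(relevance, min_dist=0.5, max_dist=4.0):
    """Map each link's relevance score (0..1) to an (x, y, z) position.

    High-relevance links land near min_dist; low-relevance links land near
    max_dist, so grabbing them takes more physical effort.
    """
    positions = {}
    for title, score in relevance.items():
        # Invert relevance: high score -> small distance.
        dist = max_dist - score * (max_dist - min_dist)
        # Scatter links at random angles in the half-space facing the user.
        azimuth = random.uniform(-math.pi / 3, math.pi / 3)    # left/right
        elevation = random.uniform(-math.pi / 6, math.pi / 6)  # up/down
        x = dist * math.cos(elevation) * math.sin(azimuth)
        y = dist * math.sin(elevation)
        z = dist * math.cos(elevation) * math.cos(azimuth)     # depth toward screen
        positions[title] = (x, y, z)
    return positions

if __name__ == "__main__":
    # Made-up relevance scores for links out of a Wikipedia article.
    links = {"Natural user interface": 0.9, "Kinect": 0.8, "Projection screen": 0.3}
    for title, pos in place_links(links).items():
        print(f"{title}: ({pos[0]:.2f}, {pos[1]:.2f}, {pos[2]:.2f})")
```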

For more information about the Kinect Back-Space, visit the project’s website.
