Object tracking can be as accessible as a web browser! Kinect Intrael brings computer vision to the web, enabling object tracking and recognition. This video by Yannis Gravezas demonstrates the foundation of Intrael and how it can relay valuable object information across the web. In the video, the Kinect tracks the users' hands and virtual shapes respond. But this is just a showcase of the basic foundation of Intrael in the browser. According to the developers, Intrael is meant to recognize various objects and identify them. The program looks like a Google of objects: unknown objects can be tracked, stored, and even identified through a growing database.

Here is a description by the developers:

“Intrael is a portable app server that processes depth data from a Kinect and identifies objects using several configurable selection criteria.

It is based on an algorithm for run-based component labeling described by He, Chao, Suzuki and Itoh which provides excellent performance.

Found objects are analyzed and detailed 3D bounding boxes are measured for each of them. 24+ objects can be tracked at fluid frame rates.

The collected data are made available to network clients through polling as raw delimited text or as JSONP HTTP responses for use in browsers.

The programmer just has to include a JavaScript file on their pages to enable handling of the stream from the server running locally on the client.”
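To make that last point concrete, here is a minimal sketch of how a page might poll a locally running Intrael server via JSONP. The port number, the callback parameter, and the response layout are assumptions for illustration, not the confirmed Intrael API; check the project documentation for the actual endpoint and format.

    // Minimal sketch: poll a local Intrael server via JSONP.
    // The port (6661) and the "callback" query parameter are assumptions,
    // not the confirmed Intrael API.
    function handleFrame(objects) {
      // Each entry is assumed to hold the measured points for one object.
      for (var i = 0; i < objects.length; i++) {
        console.log('object', i, objects[i]);
      }
      poll(); // request the next frame
    }

    function poll() {
      var s = document.createElement('script');
      s.src = 'http://localhost:6661/?callback=handleFrame';
      s.onload = function () { document.body.removeChild(s); };
      document.body.appendChild(s);
    }

    poll();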

For more information about the Kinect Intrael, visit the project’s website or download the code here.

2 COMMENTS

  1. Thanks for the praise man. Just so people don’t get confused, Intrael provides just seven 3D points per object: the extremes along the X, Y, and Z axes plus the center of mass. These are not adequate for the software to act as the Google of objects (neat concept though ;). They are enough for a developer to implement tracking and labelling of blobs, as you can see here: http://www.intrael.com/track . With the correct assumptions for the actual installation it could distinguish objects. For instance, in a top-down scenario where the Kinect is mounted on the ceiling and scans a table from above, objects on the table could be distinguished from user hands based on the assumption that the hands will always have their extreme in one axis outside the table's boundaries. The rest of the objects on the table could then be matched against a DB based on the vector of the angles formed between their seven points, so they can be further classified (see the sketch after the comments). All the logic is left to be implemented by the page's author in JavaScript and gets instantly delivered through the web. It’s a bit complex as it was designed to allow for maximum creativity, and it works especially well in large public installations. If, like me, you’re tired of this dull world, now we can easily make it interactive. Let’s bring it on.

    • Thanks for the clarification Yannis! Though it might not be a Google of objects yet, it does evoke that concept!
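Here is a rough JavaScript sketch of the angle-vector idea from the first comment: given the seven points reported per object (six axis extremes plus the center of mass), compute the angles between the vectors running from the center of mass to each extreme, yielding a 15-value signature that could be matched against a database. The point layout assumed here is for illustration only, not Intrael's confirmed output format.

    // Hypothetical sketch: build an angle-vector signature from the seven
    // points Intrael reports per object. We assume points[0..5] are the six
    // axis extremes and points[6] is the center of mass, each as [x, y, z].
    function signature(points) {
      var c = points[6];
      var dirs = points.slice(0, 6).map(function (p) {
        return [p[0] - c[0], p[1] - c[1], p[2] - c[2]];
      });
      var angles = [];
      for (var i = 0; i < dirs.length; i++) {
        for (var j = i + 1; j < dirs.length; j++) {
          angles.push(angleBetween(dirs[i], dirs[j]));
        }
      }
      return angles; // 15 pairwise angles, usable as a lookup key
    }

    function angleBetween(a, b) {
      var dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
      var na = Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
      var nb = Math.sqrt(b[0] * b[0] + b[1] * b[1] + b[2] * b[2]);
      // Clamp to avoid NaN from floating-point drift.
      return Math.acos(Math.min(1, Math.max(-1, dot / (na * nb))));
    }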
