
SigmaNIL is, according to the developer, “the most powerful vision framework for natural user interfaces.” The tool comes with advanced features such as finger-level precision hand shape recognition, hand gesture recognition, and hand skeleton tracking.

For those looking for more, the SigmaNIL framework also provides customization tools for each of these advanced features. SigmaNIL is designed to support all depth sensor devices and base libraries such as OpenNI and KinectSDK, and it is also designed so that developers can enhance it by adding powerful new modules.

The tool is composed of the following parts:

  • SigmaNIL Core (source code included)
  • SigmaNIL Modules (HandSegmentation, HandSkeleton, HandShape and HandGesture)
  • SigmaNIL Tools (Mainly training tools to customize the modules by creating relevant data files. Current distribution includes HandShapeTrainingTool only)
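
We don’t have SigmaNIL’s actual API in front of us, but to give a rough feel for how these modules fit together, here is a minimal Python sketch of a hand-tracking pipeline in that style. Every class and method name below is invented for illustration and does not come from the SDK.

```python
# Purely hypothetical sketch of a SigmaNIL-style modular pipeline; none of
# these names come from the actual SDK. It only shows how segmentation,
# skeleton, shape and gesture modules might be chained on a depth feed.

class HandSegmentation:
    def segment(self, depth_frame):
        return depth_frame                    # would return a mask of hand pixels

class HandSkeleton:
    def fit(self, hand_mask):
        # Would return finger-level joint positions.
        return {"index_tip": (0, 0, 0), "thumb_tip": (0, 0, 0)}

class HandShape:
    def classify(self, hand_mask):
        return "open_palm"                    # would match trained shape data files

class HandGesture:
    def detect(self, skeleton_history):
        return "swipe_right"                  # would recognize motion over frames

def process(depth_frame, skeleton_history):
    mask = HandSegmentation().segment(depth_frame)
    joints = HandSkeleton().fit(mask)
    skeleton_history.append(joints)
    shape = HandShape().classify(mask)
    gesture = HandGesture().detect(skeleton_history)
    print("shape:", shape, "gesture:", gesture)

# Feed it a dummy 480x640 depth map just to show the flow.
process([[0] * 640 for _ in range(480)], skeleton_history=[])
```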

For those interested in trying out SigmaNIL, you can find documentation and tutorials on their website. Be sure to click the handy-dandy link below!


Visit Project Website


Turn your surroundings into your desktop with Ubi Displays



Posted on 12/19/2012




Today’s featured app is a tool which, according to the developer, “makes building interactive projected displays quick and easy.” It requires a Kinect and a projector; you can then drag and drop web content into the world around you, transforming practically any surface around you into your desktop. As the developer says, “think of it as a programming environment for physical spaces!”
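
The post doesn’t go into how Ubi Displays does its mapping, but the usual way to put interactive content onto a flat surface with a Kinect and a projector is to calibrate a planar homography between the camera image and the projector image. Below is a minimal sketch of that general technique using OpenCV and NumPy; the point values and function names are ours, not the project’s.

```python
# Minimal sketch: map a point seen in the Kinect's image onto projector
# pixels via a planar homography. Assumes OpenCV and NumPy, plus four
# point correspondences collected by hand; nothing here is taken from
# the Ubi Displays code base.
import numpy as np
import cv2

# Pixel positions of four markers as seen in the Kinect image ...
kinect_pts = np.array([[102, 80], [540, 74], [548, 410], [110, 418]], dtype=np.float32)
# ... and the projector pixels that lit up those same markers.
projector_pts = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=np.float32)

# 3x3 homography from Kinect image coordinates to projector coordinates.
H, _ = cv2.findHomography(kinect_pts, projector_pts)

def to_projector(x, y):
    """Map a Kinect image point onto the projected surface."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Example: where should the projector draw content under a touch at (320, 240)?
print(to_projector(320, 240))
```

With a mapping like that in place, anything the Kinect tracks on the surface can be translated directly into projector coordinates, which is what makes dragging web content onto a wall or table feel like an ordinary desktop.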

The video shows some possible real-world applications for such a setup, which could, like other Kinect-based computer interfaces, change how we interact with computers in the future.

The developer says his research is still ongoing, and part of his focus is working out the major problems facing this kind of technology, as well as finding uses for the tool in situations that wouldn’t normally be considered.

Be sure to check out his video link and provide feedback on how you think this technology could progress. You can also download the code for the app using the link below.


Visit Project Website


ReconstructMe SDK is a real-time 3D reconstruction tool



Posted on 12/17/2012




Looking for a new real-time 3D reconstruction tool to try out? Not happy with the one you’re currently using? Then you may want to learn more about ReconstructMe SDK.

According to developer Christoph Heindl, ReconstructMe is a real-time 3D reconstruction system that “allows everyone to control the reconstruction process the way they want it to be.” Heindl adds that his tool supports “simple applications as well as multi-sensor reconstruction processes.” Furthermore, ReconstructMe is very user friendly; Heindl claims that it comes fully documented and has a consistent API which “allows you to grasp the concepts quickly and develop your first reconstruction application within minutes.”
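
ReconstructMe’s own calls aren’t reproduced here, but the core idea behind real-time reconstruction systems of this kind is volumetric depth fusion: every incoming depth frame is merged into a truncated signed distance field (TSDF). The sketch below illustrates that general technique in NumPy; it is not the SDK’s API, and the intrinsics and grid settings are made up.

```python
# Illustrative TSDF (truncated signed distance function) fusion step, the
# technique behind KinectFusion-style real-time reconstruction. Generic
# sketch only; camera pose tracking is omitted and nothing here reflects
# ReconstructMe's actual API.
import numpy as np

RES, TRUNC = 64, 0.05                      # voxel grid resolution, truncation (m)
tsdf = np.zeros((RES, RES, RES), dtype=np.float32)
weight = np.zeros_like(tsdf)

def integrate(depth, fx, fy, cx, cy, voxel_size=0.01):
    """Fuse one depth frame (meters) into the global TSDF volume.

    Assumes the camera sits at the origin looking down +Z and the volume
    starts at the origin; a real system would also track the camera pose.
    """
    idx = np.indices((RES, RES, RES)).astype(np.float32)
    x, y, z = (idx + 0.5) * voxel_size     # world coordinates of voxel centers

    # Project every voxel center into the depth image.
    u = np.round(fx * x / z + cx).astype(int)
    v = np.round(fy * y / z + cy).astype(int)
    valid = (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])

    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - z                            # signed distance along the viewing ray
    update = valid & (d > 0) & (sdf > -TRUNC)

    # Running weighted average: the standard TSDF fusion rule.
    new = np.clip(sdf / TRUNC, -1.0, 1.0)
    tsdf[update] = (tsdf[update] * weight[update] + new[update]) / (weight[update] + 1)
    weight[update] += 1

# Example: fuse a synthetic flat wall 0.4 m in front of the sensor.
integrate(np.full((480, 640), 0.4, dtype=np.float32), fx=525, fy=525, cx=319.5, cy=239.5)
```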

If this sounds right up your alley, be sure to check out the demo video and the project website using the link we’ve included below.


Visit Project Website

Augmented Reality Telepresence via Kinect



Posted on 12/14/2012




When Google Glass was announced, people were blown away by the tech giant’s plans for augmented reality technology. Not to be left behind, the developers behind today’s featured hack have created their own version, one focused on the telepresence aspects of the technology.

As with Google Glass, users can teleconference using optical see-through head-mounted displays, though one thing users may take issue with is that these displays are currently bulky. What sets this hack apart from Google Glass is that remote users appear fully 3D, allowing you to look and walk around them, and are properly merged into the local environment, so they can occlude and be occluded by local objects. Here are some specifics shared by our submitter:

“For example, a remote user could appear to be seated at the other end of a local table — the local user would see the remote user from the correct tracked perspective, but wouldn’t see the remote user’s legs if they appear behind the table. Other scenarios are supported as well, such as the remote user’s environment appearing as an extension of the local environment, or the local user being completely immersed in the remote scene. The 3D appearances and proper local and remote scene merger are accomplished with Kinect-based 3D scanning on both sides and projector based lighting control, which causes real objects to be illuminated only if they are not occluded by remote virtual objects.”
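
The occlusion handling described above comes down to a per-pixel depth comparison: remote content is shown (and the corresponding real surface lit by the projector) only where the remote geometry is closer to the viewer than the local geometry. Here is a toy sketch of that test in NumPy, assuming both depth maps have already been rendered from the local user’s tracked viewpoint; all names and numbers are illustrative.

```python
# Toy per-pixel occlusion test for merging a remote 3D scene into a local
# one. Assumes the remote depth map has been re-rendered from the local
# viewer's perspective so the two images align pixel-for-pixel; zeros
# mean "no data".
import numpy as np

def composite(local_depth, remote_depth, local_rgb, remote_rgb):
    """Show remote pixels only where the remote surface is nearer."""
    remote_valid = remote_depth > 0
    local_valid = local_depth > 0
    # Remote content wins where it exists and is closer than local geometry,
    # e.g. a remote person in front of the local table but not behind it.
    remote_in_front = remote_valid & (~local_valid | (remote_depth < local_depth))
    out = local_rgb.copy()
    out[remote_in_front] = remote_rgb[remote_in_front]
    # The same mask tells the projector which real surfaces to leave unlit.
    return out, remote_in_front

# Example with tiny synthetic 2x2 scenes (depths in meters).
local_d = np.array([[1.0, 1.0], [0.8, 0.8]])
remote_d = np.array([[0.9, 0.0], [1.2, 0.7]])
rgb_local = np.zeros((2, 2, 3), dtype=np.uint8)
rgb_remote = np.full((2, 2, 3), 255, dtype=np.uint8)
img, mask = composite(local_d, remote_d, rgb_local, rgb_remote)
print(mask)
```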

This is one development we’re all really excited about, and one that we hope gets more focus from the Kinect hacking community. Once perfected, it could change the way we socialize and make distance even less of a barrier.

Visual Space: Kinect-powered, motion-based mixing



Posted on 12/12/2012




Visual Space is a project which uses a Microsoft Xbox Kinect to control video clips, visual effects and sound via hand, head and shoulder movement.

According to the developers, tracking data is extracted in Processing using the SimpleOpenNI library and sent over to Max/MSP Jitter as OSC messages via Maxlink. In Max/MSP, the joint position data is used to calculate a manipulation range for each limb, allowing gradual control of colors, sound, live video input and various other effects.
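
To make the manipulation-range idea concrete, the sketch below maps a tracked joint coordinate into a normalized 0-to-1 control value and sends it to Max/MSP as an OSC message. The original project does this in Processing with SimpleOpenNI and Maxlink; here python-osc stands in purely for illustration, and the range values and OSC address are invented.

```python
# Rough sketch of the mapping step described above: turn a joint position
# into a 0..1 control value and ship it to Max/MSP over OSC. The real
# project uses Processing + SimpleOpenNI + Maxlink; python-osc is only a
# stand-in here, and the numbers and address are made up.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)   # Max/MSP listening via [udpreceive 7400]

def to_control(value, lo, hi):
    """Clamp a joint coordinate into its calibrated range and normalize."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def send_hand_height(hand_y_mm):
    # Map right-hand height (roughly hip to overhead, in sensor millimeters)
    # onto a gradual 0..1 parameter, e.g. a video effect amount.
    amount = to_control(hand_y_mm, lo=-300.0, hi=600.0)
    client.send_message("/visualspace/right_hand/fx", amount)

send_hand_height(150.0)   # sends roughly 0.5
```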

Here are some additional details from the hack’s creator:

“The program I wrote also specifies positions for different joints in which new footage is being triggered. For example, moving the right hand or the right shoulder triggers pre-recorded clips of a lady reaching forward and turning, which imitate the user’s actions. The application also features a customizable interface that allows loading in videos and sound files as well as to select effects. Clips and visual effects can be deselected, changed or linked to different joints at any time. Therefore, Visual Space facilitates motion controlled video mixing in real time, allows for numerous modes of interaction and can be easily customized.”

The project was developed as part of the creator’s MSc in Creative Technologies at De Montfort University, Leicester.