The Kinect keeps inspiring helpful innovations that make life easier for people who need them. The latest group to benefit is the Deaf community and others who use sign language. Developer Zahoor Zarfulla released this video of their Kinect American Sign Language recognition system at work. By tracking the user's skeletal frame, the software recognizes hand gestures and translates them into text. The video walks through the tests and test cases in detail, and the results are very promising. Recognition accuracy is high, and with a user-made program running on an affordable gadget, people who want to learn American Sign Language will be delighted with this alternative.
The tools used to develop the Kinect American Sign Language recognizer are as follows: the Copycat Game, the Georgia Tech Gesture Toolkit, the Hidden Markov Model Toolkit, the OpenNI Framework, and PrimeSense NITE middleware.
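To give a feel for how a recognizer in this family works: the project above uses the Hidden Markov Model Toolkit over skeletal features, training one HMM per sign and picking the sign whose model best explains the observed motion. Below is a minimal, hypothetical sketch of that idea. It quantizes a tracked hand trajectory into four direction symbols and scores it with the discrete forward algorithm; the sign names ("wave", "sweep"), the direction alphabet, and every model parameter are invented for illustration and are not from the actual project.

```python
def quantize(trajectory):
    """Turn successive (x, y) hand positions from the tracked skeleton
    into coarse direction symbols: 0=right, 1=up, 2=left, 3=down."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            symbols.append(0 if dx >= 0 else 2)
        else:
            symbols.append(1 if dy >= 0 else 3)
    return symbols

def forward(symbols, pi, A, B):
    """Probability of a symbol sequence under a discrete HMM
    (the forward algorithm): pi = initial state probabilities,
    A = state transition matrix, B = per-state emission probabilities."""
    n = len(pi)
    alpha = [pi[i] * B[i][symbols[0]] for i in range(n)]
    for s in symbols[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][s]
                 for j in range(n)]
    return sum(alpha)

# Two toy two-state, left-to-right sign models (all parameters made up):
# "wave" expects up-then-down motion, "sweep" expects rightward motion.
MODELS = {
    "wave": dict(pi=[1.0, 0.0],
                 A=[[0.6, 0.4], [0.0, 1.0]],
                 B=[[0.05, 0.85, 0.05, 0.05],    # state 0 emits mostly "up"
                    [0.05, 0.05, 0.05, 0.85]]),  # state 1 emits mostly "down"
    "sweep": dict(pi=[1.0, 0.0],
                  A=[[0.6, 0.4], [0.0, 1.0]],
                  B=[[0.85, 0.05, 0.05, 0.05],   # both states emit
                     [0.85, 0.05, 0.05, 0.05]]), # mostly "right"
}

def classify(trajectory):
    """Pick the sign whose HMM gives the trajectory the highest likelihood."""
    symbols = quantize(trajectory)
    return max(MODELS, key=lambda name: forward(symbols, **MODELS[name]))
```

So a hand that rises and falls, e.g. `classify([(0, 0), (0, 1), (0, 2), (0, 1), (0, 0)])`, scores far higher under the "wave" model than under "sweep". A real system like the one in the video works the same way in outline, but over continuous joint-position features with many more states per sign.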
For more information about the project, visit the American Sign Language YouTube page.
I originally sent this idea to a friend who writes games for the Kinect, and he found your post for me. I'm not sure whether the Kinect is sensitive enough for a wide vocabulary, since it doesn't track fingers or hand shapes. But maybe you are already figuring that out! Here is my e-mail to him:
Kayle and I were listening to a Focus on the Family broadcast about reaching out to the Deaf community. In it, there were some surprising statistics about families with deaf children. About 90% of deaf kids are born to hearing parents, and, sadly, less than 8% of those parents and only 2% of dads ever learn sign language.

I thought, wouldn't it be cool if the Kinect could help translate between deaf people and their hearing relatives and friends? There are already whole "dictionaries" of videos of individual signs and their meanings online (www.aslpro.com is the best I know of and has a main dictionary, fingerspelling, and tons of religious signs). I was thinking that the chat mode over Kinect could feature someone signing with text running below for the recipient to read. The deaf person could also see their relative speak with subtitles running along the bottom. Since dialects exist within ASL, it would be cool to have the option to make a sign several times/ways and type in a new meaning if not recognized by the Kinect.

I don't know how "marketable" this idea would be, but it sure could do a lot of good in different situations, like counseling sessions or online school courses for the deaf or aiding a deaf person in the workplace. What do you think?
Angie