Imagine a world where the barriers between the deaf and the hearing are gone. A world where conversations flow fluidly between those who hear sound and those who don't. Could that world become real in the near future?

Back in June, a group of marketing students at Berghs School of Communication developed a concept video for a fictional product called Google Gesture. The product consisted of a wearable wristband and a mobile app: the wristband used electromyography to read the muscle movements of a deaf person signing, and passed those signals to the app, which translated them into a digitized voice that spoke for the person.

Watching the fake marketing video was pretty exciting. So exciting, in fact, that major tech sites like Mashable, Slashgear and others originally announced the product as real, then later had to retract the claim and apologize for their mistake. It was an odd error, considering that there really is a Google product called Google Gesture Search [No Longer Available], which lets you draw gestures onto your mobile screen to search contacts, bookmarks and more.

Regardless, the Berghs students' concept presented a remarkable idea, which raises the question: could such an app actually be built?

Is Translating Sign Language Possible?

Dissecting the marketing concept on a technical level reveals that using electromyography (EMG) to "sense" sign language isn't far-fetched at all. As far back as 2009, researchers from the University of Washington were able to use multiple EMG sensors to decode muscle movements and convert them into actual arm and hand gestures.

http://www.youtube.com/watch?feature=player_embedded&v=6_7BzUED39A

The researchers actually built a complete "gesture recognition library", identifying which muscle signals represented which gesture. The research shows that this kind of technology is available and ready to implement in the sort of application the Berghs students envisioned.
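
To make the idea concrete, here's a minimal sketch of how such a gesture library might work in principle: feature vectors extracted from the EMG channels are matched against labeled examples of known gestures. Everything here (the synthetic sensor readings, the gesture vocabulary, the fake_emg_features helper) is a hypothetical illustration, not the University of Washington team's actual code.

```python
# A hypothetical sketch of an EMG "gesture library": feature vectors from
# muscle sensors are matched against labeled examples of known gestures.
# The sensor readings are synthetic stand-ins for real EMG hardware output.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
GESTURES = ["hello", "thank_you", "yes"]  # made-up vocabulary

def fake_emg_features(gesture_id, n_samples=50, n_channels=8):
    """Simulate per-channel signal amplitudes for one gesture class."""
    center = gesture_id + np.linspace(0.2, 1.0, n_channels)
    return center + rng.normal(0, 0.05, size=(n_samples, n_channels))

# Build the "gesture library": labeled feature vectors for each gesture.
X = np.vstack([fake_emg_features(i) for i in range(len(GESTURES))])
y = np.repeat(np.arange(len(GESTURES)), 50)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Classify a new sensor reading against the library.
new_reading = fake_emg_features(1, n_samples=1)
print(GESTURES[clf.predict(new_reading)[0]])  # -> "thank_you"
```

A real system would stream windows of raw EMG data from the hardware and extract features such as per-channel signal amplitude before classifying, but the matching step is conceptually the same.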

So has anyone actually accomplished this yet? If not, why not, when it could give deaf and hearing-impaired people the ability to communicate with anyone in the world through sign language?

The Future of Real-Time Translation Technology

As it turns out, someone is already working on exactly this kind of real-time sign language translation technology.

There is a company called SpeechTrans [Broken URL Removed] that has been pushing the limits of translation technology in recent years. SpeechTrans works with technology partners to produce some of the most remarkable real-time language translation services on the market today: services that translate text chat to text chat, voice to text, and even provide real-time voice-to-voice language translation through mobile phone and desktop applications.

To explore whether sign language translation technology could become a reality in the near future, MakeUseOf sat down with SpeechTrans CEO John Frei and COO Yan Auerbach to discuss this groundbreaking new translation technology and just how far in the future it might be.

Developing a Sign Language Translation App

MUO: Is it possible to do the sort of sign-language-to-speech translation that the Berghs students portrayed in this Google Gesture concept video?

John Frei - SpeechTrans CEO

John: I would say that the technology is available to develop that. We're currently working with Microsoft and Intel, exploring some of the technologies they're coming out with in terms of hardware and software. We envision being able to use that technology to recognize sign language and then convert it into speech and audio output.

MUO: You're actively working on developing that technology right now?

Yan: So, there was a customer who was using our software and thought it would be wonderful if we could modify it so that hearing-impaired people could use it to make telephone calls from our app, and to communicate in person without the need for sign language, or without the need for a TTY-type service for telephone calls. We developed that product, and with funding from Microsoft and Intel, we launched SpeechTrans for the hearing impaired on Windows 8.1, which removes the need for sign language.

MUO: How does the "in-person" app work?

Yan: There's a listen mode and there's an input mode. So, when someone is speaking to you, you switch on listen mode and it types out anything they're saying as text on the screen. Then, when you respond, you type it out, and it speaks out loud what you type. With the telephone, you just dial any person's phone number, and when they answer the phone, it becomes like an instant message. Whatever they speak, you get as an IM. Then, whatever you type is spoken out loud through the telephone. That's the first phase.
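
As a rough sketch of what that two-mode loop looks like in principle, the snippet below wires together two off-the-shelf open-source libraries, speech_recognition for transcription and pyttsx3 for text-to-speech. It illustrates the general pattern Yan describes, not SpeechTrans's actual implementation.

```python
# A rough sketch of the listen-mode / input-mode loop using open-source
# libraries; this is an illustration, not SpeechTrans's own technology.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def listen_mode():
    """Transcribe the other person's speech and show it as text."""
    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source)
    try:
        print("They said:", recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("(couldn't understand the audio)")

def input_mode():
    """Speak the user's typed response out loud."""
    reply = input("Type your reply: ")
    tts.say(reply)
    tts.runAndWait()

# Alternate between the two modes, one conversational turn at a time.
while True:
    listen_mode()
    input_mode()
```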

Near Future Sign Language Translation

MUO: What technologies can people expect to see in the near future? What's the next phase?

John: We envision that someone who only has the ability to use sign language will be able to do that in front of a device such as a phone, PC, or laptop. Intel has a new camera system that does gesture recognition, and Microsoft does as well with the Kinect.

MUO: How is this better than the arm band concept put forth by the Berghs students?

Yan: Basically, ours is going to work in a way that doesn't require you to put anything on your arm. It'll recognize 64 points on your hand. We're using the beta version of Intel's RealSense camera. So not only will we be able to recognize all of the sign language, and multiple different dialects of sign language, but we'll also be able to recognize emotions, facial expressions, and other small nuances, and then convert that into spoken words as well.

That won't require you to wear any gloves or anything. We are focusing on that market specifically because we just like to help people in general. We don't only want to help people who want to speak in many different languages; we also want to help people who can't speak in any language. The technology exists, and there's no reason their quality of life should be any different from ours.
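
SpeechTrans hasn't published how its RealSense-based recognizer works, but the general camera-based approach Yan describes can be sketched as follows: each video frame yields a set of hand keypoints, which are matched against stored templates of known signs. The sign templates, the synthetic 64-point frame, and the match_sign threshold below are all stand-in assumptions for illustration.

```python
# An illustrative sketch (not SpeechTrans's code) of camera-based sign
# matching: hand keypoints from a video frame are compared against
# stored templates of known signs.
import numpy as np

N_POINTS = 64  # hand keypoints per frame, per Yan's description

# Hypothetical sign templates: one canonical (x, y, z) pose per sign.
rng = np.random.default_rng(7)
SIGN_TEMPLATES = {
    "hello": rng.random((N_POINTS, 3)),
    "thanks": rng.random((N_POINTS, 3)),
}

def match_sign(landmarks, templates, threshold=2.0):
    """Return the closest known sign, or None if nothing is near enough."""
    best_sign, best_dist = None, float("inf")
    for sign, template in templates.items():
        dist = np.linalg.norm(landmarks - template)
        if dist < best_dist:
            best_sign, best_dist = sign, dist
    return best_sign if best_dist < threshold else None

# Pretend the camera produced a frame close to the "hello" pose.
frame = SIGN_TEMPLATES["hello"] + rng.normal(0, 0.01, (N_POINTS, 3))
print(match_sign(frame, SIGN_TEMPLATES))  # -> "hello"
```

A production system would track poses over time rather than frame by frame, and would fold in the facial-expression and emotion cues Yan mentions, but the core idea of matching camera-derived keypoints against a sign vocabulary is the same.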

John: It's a feasible concept. It's just a different input and output. In some of the conversations we've had with Microsoft about future technologies, there's the idea that someone could actually implant a microchip in their neck, and it would directly read their brainwave patterns. So if you're helping people who can't communicate, we would actually see something like that as a phase-three deployment.

Last year, a group of students came out with a working prototype of a glove that plugged into a computer. The gestures you made with the glove would then be recognized and translated from sign language into text. So the capability already exists with current technology.

http://www.youtube.com/watch?v=DpcI5h1EuqI

Our vision is that we don't really want to create accessories like arm bands that you have to put on. It should just be natural and free-flowing for them [people doing sign language] to do it in front of their video camera. 90% of what people say in communication is actually in their body language. With access to facial recognition, gestures, and emotions, we'll be able to use that data as well to make sure that the translation and what they're trying to convey are expressed in the right form.

MUO: Is Intel within the 5-year range of having that sign language recognition technology?

John: Intel definitely has a lot of resources available. Their technology and their software are coming along at a pace where we can make this happen fairly quickly.

MUO: Do you have a time frame for when you're hoping to get this technology into the market?

John: For the sign language, it's about 12 to 18 months.

MUO: Does anyone else out there have anything like this for sign language translation?

John: I saw a YouTube video of IBM doing a prototype where someone was doing sign language, but it was a proof of concept, and it only recognized three or four words. We're planning to roll ours out with over five hundred thousand words, so it's going to take it to a whole other level.

Conclusion

While the vision that the students at Berghs dreamed up may not become the sign language translation app of the future, that doesn't mean such a concept won't happen. The work being done at SpeechTrans, Intel and Microsoft shows that sign language translation is almost certain to become a real technology within just a few years. It most likely won't involve cumbersome arm bands, but instead nothing more than a special video camera and a mobile app.

Through the magic of gesture and facial recognition, this sign language translation app of the future promises to completely revolutionize interpersonal communication for hundreds of thousands of hearing-impaired and hard-of-hearing individuals all around the world.