Researchers from the Chinese Academy of Sciences (CAS) have used
a Microsoft Kinect to live-translate Chinese sign language into
text.
The work, a collaboration between the CAS Institute of Computing
Technology and Microsoft Research Asia, could be vital to helping
deaf and non-deaf people communicate with each other.
"We ultimately hope this work can provide a daily interaction
tool to bridge the gap between the hearing and the deaf and hard of
hearing in the near future," said Guobin Wu of Microsoft Research
Asia,
in a blog post about the research.
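To give a rough sense of how camera-based sign recognition can work in principle, the sketch below matches a recorded 3D hand trajectory against stored sign templates using dynamic time warping. The trajectories, the sign labels and the matching approach are all assumptions chosen for illustration; this is not the algorithm published by the CAS and Microsoft Research Asia team.

```python
# Illustrative sketch only: compare a hand trajectory (frames x 3 coordinates,
# the kind of data a depth sensor such as the Kinect can report) against stored
# sign templates using dynamic time warping (DTW). All data here is synthetic.
import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two trajectories of shape (frames, 3)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])


def recognise(observed: np.ndarray, templates: dict) -> str:
    """Return the label of the stored template closest to the observed trajectory."""
    return min(templates, key=lambda label: dtw_distance(observed, templates[label]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical templates: each sign is a short sequence of 3D hand positions.
    templates = {
        "hello": np.cumsum(rng.normal(size=(30, 3)), axis=0),
        "thanks": np.cumsum(rng.normal(size=(40, 3)), axis=0),
    }
    # A noisy repetition of the "hello" trajectory stands in for a live capture.
    observed = templates["hello"] + rng.normal(scale=0.1, size=(30, 3))
    print(recognise(observed, templates))  # expected output: "hello"
```

In practice a real system would also segment continuous signing into individual gestures and handle hand shape as well as movement, which is far harder than this toy matching step suggests.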
Sign language is not merely a mirror of spoken language: it has its own sentence structure and grammar, which can be quite different from those of the surrounding spoken language.
For that reason, typing and writing in English, for example, isn't straightforward for deaf and hard-of-hearing people. For those who have been deaf their whole lives, it's akin to learning a new language.
This means that deaf and hard-of-hearing people currently can't communicate with each other in their native language using computers. Essentially, they have to communicate in a foreign language whenever they want to send a message over the internet.
"Technology that enhances communication between non-deaf people and deaf people is to be encouraged," a spokesperson for the British Deaf Association told Wired.co.uk. "Many non-deaf people do not possess the skills of sign language and this hinders deaf people from fully participating in wider society and having equal rights".
A Scottish company, Technabling, is also working to address this issue. It has developed software similar to the Kinect solution which, the company says, works on any camera-enabled device, including laptops, tablets and even mobile phones.
"The main goal is to bridge the gap [between deaf and non-deaf people] in both directions," says Technabling operations director Jacques-Yves Silvia.
The software, which will be free for individuals, will allow a deaf and a non-deaf person to overcome the communication barrier between them and chat freely. Typed text will be translated into sign language, as existing tools already do, but sign language will also be translated into text, allowing two-way conversation.
It will require a device to relay the conversation, so it will obviously not be as seamless as if the non-deaf person put the effort into learning sign language.
Silvia says that the software was finished in late June, and will be launched later this year after further testing.
In 2012, Spanish computer systems engineer Daniel Martinez
Capilla also developed sign-language translation software for the
Kinect, but for American Sign Language.