From Fast Company:
The voice assistant speaker revolution of Google Home and Amazon Alexa has left the deaf community behind. It’s a two-fold problem. These devices never learned to decipher the spoken voices of people with an extreme hearing impairment. At the same time, anything Home or Alexa says in response can’t be heard by the user. Adding a screen to display information on a device like the Echo Show might help, but it can only get someone so far if they want to have a natural conversation with a machine.
Now, one creative coder has built a solution. Abhishek Singh–who you may recognize for building Super Mario Bros. in augmented reality–built a web app that reads sign language through a camera, then says those words aloud to an Amazon Echo. When the Echo speaks its response, the app hears that, then types it out. The app allows deaf people to “talk” to a voice interface using nothing but their hands and their eyes.
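The round trip described above — camera reads signs, the app speaks the words aloud to the Echo, then transcribes the Echo's spoken reply — can be sketched roughly as follows. Every function name here is a hypothetical stand-in; the actual web app uses its own webcam sign-language model and the browser's speech synthesis and recognition capabilities.

```python
# Minimal sketch of the sign-to-speech-to-text round trip, with
# hypothetical stand-ins for the app's real components.

def recognize_sign(gesture: str) -> str:
    """Hypothetical: map a recognized hand gesture to an English word.
    The real app runs a vision model on webcam frames."""
    vocabulary = {"point-up": "Alexa", "w-shape": "weather"}
    return vocabulary.get(gesture, "")

def speak_to_echo(words: list) -> str:
    """Hypothetical: join the words into one utterance that a
    text-to-speech engine would say aloud for the Echo to hear."""
    return " ".join(w for w in words if w)

def transcribe_echo_reply(audio_text: str) -> str:
    """Hypothetical: speech-to-text on the Echo's spoken reply,
    shown on screen so the user can read it."""
    return audio_text

def sign_to_echo_round_trip(gestures, echo_reply):
    """One full exchange: signs in, readable text out."""
    utterance = speak_to_echo([recognize_sign(g) for g in gestures])
    return utterance, transcribe_echo_reply(echo_reply)

spoken, shown = sign_to_echo_round_trip(
    ["point-up", "w-shape"], "Right now it's 72 degrees."
)
print(spoken)  # → Alexa weather
print(shown)   # → Right now it's 72 degrees.
```

The key design point is that the app never talks to Alexa's APIs directly: it speaks and listens over the air, just as a hearing user would.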
Link to the rest at Fast Company
The webcam-based sign language to spoken English translator *is* really cool… but why even bring Alexa into it?
Because Alexa is the clear leader in home automation and hundreds of devices rely on it?
The deaf might want to automate their homes, too.
Also, Amazon pays developers who make Alexa apps and skills:
https://developer.amazon.com/blogs/alexa/post/48fdb9ac-70a9-4b6f-a2e0-19f4ff817216/developers-earn-money-for-eligible-skills-that-customers-engage-with-most
Having proven the system works with Alexa, the developer now has a range of options for commercialization that go beyond Alexa: adapting it to Kinect or other computer vision systems, seeking venture funding to build his own company or gadgets, or selling the tech to a bigger company.
Alexa just happens to provide a robust, free-to-use natural voice recognition system that is convenient for indie developers. Lots of reasons.
Cool in the extreme.
This tech needs to go on phones.