How does a deaf or mute person use a smart speaker’s voice assistant? This concept tries to build a more inclusive smart speaker

Here’s a question that probably doesn’t get asked often enough… how do deaf and mute people communicate with voice assistants? Or, more specifically, with smart speakers? It’s a question that Jinni, a sign-language-based smart assistant, hopes to answer.

While the most obvious use for a smart speaker is listening to music and podcasts, the ubiquitous little gadget does far more, letting users ask questions, get alerts and weather updates, and, most importantly, control parts of the smart home like the lights, thermostat, and security cameras. But because the smart speaker works almost solely on voice commands, its interface practically alienates an entire group of people with special needs who can’t rely on voice at all.

Designed with a camera that can read sign-language inputs and a large screen that communicates back to its user, Jinni brings the power of virtual assistants to a group of people who are often sidelined when mainstream tech is designed. Relying on visual cues instead of audio ones, Jinni can easily interface with anyone fluent in sign language, offering them a far more natural input method. Responses appear on Jinni’s large circular screen, taking audio entirely out of the equation. Just as smart speakers have become fixtures in homes everywhere, Jinni hopes to earn the same place in the homes of the deaf and mute communities, giving them access to the same life-changing tech. The speaker concept runs on a battery (so it can be carried from room to room) and even comes with a charging dock/mat to juice it up after a day’s use.
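For those curious about how such a device might work under the hood, here’s a minimal, purely illustrative sketch of a pipeline in the spirit of Jinni: camera frames come in, a recognizer turns signs into intents, and the response is rendered on the screen rather than spoken aloud. The gesture classifier, intent table, and function names below are all assumptions made for illustration; the concept doesn’t disclose an actual software stack.

```python
# Illustrative sketch only: a hypothetical camera-to-screen pipeline.
# The "recognizer" here is a stand-in lookup, not a real sign-language model.

from dataclasses import dataclass


@dataclass
class Frame:
    """Stand-in for one camera frame; real code would hold pixel data."""
    detected_sign: str  # pretend an upstream model already labeled the sign


# Hypothetical mapping from recognized signs to on-screen responses.
SIGN_TO_RESPONSE = {
    "LIGHTS_ON": "Turning the living room lights on.",
    "WEATHER": "Today: 22°C, partly cloudy.",
    "THERMOSTAT_UP": "Raising the thermostat by 1°C.",
}


def recognize_sign(frame: Frame) -> str | None:
    """Placeholder for sign-language recognition on a camera frame."""
    return frame.detected_sign if frame.detected_sign in SIGN_TO_RESPONSE else None


def render_on_screen(message: str) -> None:
    """Jinni would draw this on its circular display; here we just print it."""
    print(f"[Jinni screen] {message}")


def handle_frames(frames: list[Frame]) -> None:
    """Visual input in, visual output back: no audio anywhere in the loop."""
    for frame in frames:
        sign = recognize_sign(frame)
        if sign is not None:
            render_on_screen(SIGN_TO_RESPONSE[sign])


if __name__ == "__main__":
    # Simulated camera input: the user signs "lights on", then asks for the weather.
    handle_frames([Frame("LIGHTS_ON"), Frame("WEATHER")])
```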

The Jinni is a winner of the Red Dot Design Concept Award for 2021.

Designer: Zhong Zuozheng