It’s estimated there are around 9 million people in the UK who are deaf or hard of hearing, yet this area of medicine is chronically underfunded.
In 2014, less than one per cent of the total public and charity investment in medical research was spent on hearing research.
This could pose a major issue in the future – with the World Health Organisation estimating that by 2050, there will be over 900 million people worldwide with hearing loss.
Empowering people through hearing apps
Live Transcribe began as a personal project. A Google research scientist, Dimitri Kanevsky, who has been deaf since childhood, and one of his teammates, Chet Gnegy, decided to hack together a product that would make their casual conversations easier.
Kanevsky and Gnegy were inspired by a service Kanevsky was using for meetings, called CART, which allows a captioner to virtually join a gathering and create a transcription of spoken dialogue. But it wasn’t suited to more informal chats, so they used this experience to inform the creation of Live Transcribe, an app which uses automatic speech recognition (ASR) to display spoken words on a screen.
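Live Transcribe’s exact pipeline isn’t public, but streaming ASR engines typically emit revisable “partial” hypotheses followed by committed “final” results, and a live-caption display has to update accordingly. A minimal sketch of that display logic (the class and method names here are hypothetical, for illustration only):

```python
# Hypothetical sketch of a live-caption display buffer. A streaming ASR
# engine sends revisable partial hypotheses, then a final result; the
# screen shows committed text plus the current in-flight guess.
class CaptionBuffer:
    def __init__(self):
        self.committed = []   # finalised text, no longer revised
        self.partial = ""     # current in-flight hypothesis

    def on_result(self, text, is_final):
        if is_final:
            self.committed.append(text)  # lock this hypothesis in
            self.partial = ""
        else:
            self.partial = text          # overwrite the previous guess

    def render(self):
        # what the user would see on screen at this moment
        return " ".join(self.committed + ([self.partial] if self.partial else []))

buf = CaptionBuffer()
buf.on_result("hello", is_final=False)
buf.on_result("hello there", is_final=False)   # partial revised in place
buf.on_result("hello there", is_final=True)    # committed
buf.on_result("how are", is_final=False)
print(buf.render())  # → hello there how are
```

The point of the partial/final split is that captions appear with low latency while the engine is still allowed to correct itself mid-utterance.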
Something like Live Transcribe is possible now thanks to a combination of ASR and cloud computing, according to Brian Kemler, product lead at Google’s Android Accessibility team.
“ASR is a tech that’s been around for a long time, but thanks to cloud computing, its cost has dropped tremendously so we can take something like Live Transcribe and give it to users for free and have it be part of the Android operating system,” Kemler told the Standard.
This is important because Android is the world’s biggest mobile operating system, with 2.3 billion people using it across various devices. Creating apps like this, Kemler said, is a way to use a phone’s capability, alongside artificial intelligence and machine learning, “to make the real world more accessible.”
Then there’s also Sound Amplifier. Plug in your headphones and bring up the Sound Amplifier app on an Android device, and you’re able to adjust the live sound environment: make the audio clearer to hear, make quiet sounds louder, and reduce loud noises. Users can tune the settings in the app to create the right sound environment for them.
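Sound Amplifier’s actual processing is proprietary, but the behaviour described above (boosting quiet sounds while taming loud ones) is characteristic of dynamic range compression. A minimal, illustrative sketch on raw audio samples, with made-up threshold, ratio, and gain values:

```python
# Minimal dynamic-range-compression sketch (illustrative only, not
# Sound Amplifier's actual algorithm). Sample levels above a threshold
# are attenuated by a ratio, then make-up gain lifts the whole signal,
# so quiet sounds end up louder and loud peaks are reined in.
def compress(samples, threshold=0.4, ratio=4.0, makeup_gain=1.5):
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # attenuate only the portion of the level above the threshold
            level = threshold + (level - threshold) / ratio
        out.append(makeup_gain * level * (1 if s >= 0 else -1))
    return out

quiet, loud = 0.1, 0.9
processed = compress([quiet, loud])
print(processed)  # the quiet sample is boosted, the loud one reduced
```

Real implementations work on a running estimate of loudness with attack and release smoothing rather than per-sample magnitude, but the core quiet-up, loud-down trade is the same.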
“It’s definitely not meant to be a hearing aid, or replace a hearing aid, but it does have some analogous attributes, so you can tune into the characteristics of the audio that work best for you, get it to a state that you like and you should be able to hear more clearly,” said Kemler.
The work Kemler and his team do for Android Accessibility helps to inform the wider strategy at Google around tech and accessibility. “There’s so much Google technology that we can take advantage of to make Android better for users and vice versa,” he said. “We hope to push the envelope on Android and challenge the rest of the company to be better and more inclusive and we welcome that challenge back.”
Using tech to help children learn how to read
Many of the world’s 32 million deaf children struggle to read, which is why Chinese tech giant Huawei launched StorySign, an app that translates selected children’s books into sign language in real time, with the help of the avatar “Star”.
“Many deaf children struggle to learn to read because they can’t match words with sound, hear their parents read them a bedtime story, or hear a teacher repeating sentences – all key milestones for a hearing child who is learning to read,” Huawei’s Western Europe chief marketing officer, Andrew Garrihy, told the Standard. “What’s more, for a deaf child the written word does not translate directly to sign, meaning they are learning to read in a language that is not their primary language.”
With the help of Star translating books, children and parents are able to learn to read and sign together. By holding up a phone to the words on the page, Star will start signing the story as the printed words are highlighted. This is made possible through Huawei’s AI technology, something the company has been investing in for a while.