Dr. Wim Melis from the University of Greenwich is working on deconstructing and reconstructing audio signals with extremely high accuracy.

Audio is captured and converted into a spiking signal, the type the brain uses. This is then fed into the brain, where it is reconstructed as a 90-100 percent replica of the original sound.
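To give a flavour of what such a conversion involves, here is a minimal sketch of a common level-crossing (delta modulation) encoder that turns audio samples into timed spikes. The article does not describe Dr. Melis's encoder, so the scheme and the threshold value below are illustrative assumptions only.

```python
import numpy as np

def level_crossing_spikes(audio, threshold=0.05):
    """Emit a +1/-1 spike each time the signal moves `threshold` away
    from the last spiked level; return (sample_index, polarity) pairs."""
    spikes = []
    level = audio[0]
    for i, x in enumerate(audio):
        while x - level >= threshold:    # upward crossing -> positive spike
            level += threshold
            spikes.append((i, +1))
        while level - x >= threshold:    # downward crossing -> negative spike
            level -= threshold
            spikes.append((i, -1))
    return spikes

# Example: encode 10 ms of a 440 Hz tone sampled at 16 kHz.
t = np.arange(0, 0.01, 1 / 16000)
tone = np.sin(2 * np.pi * 440 * t)
print(len(level_crossing_spikes(tone)), "spikes for", len(tone), "samples")
```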

Current technologies, known as cochlear implants, only achieve a fraction of this. They do the work of the damaged parts of the inner ear (the cochlea) to provide sound signals to the brain, whereas hearing aids simply make sounds louder.

Wim says: “The signals created by current hearing implants sound very metallic to the user because they provide only part of the full audio wave to the brain. This prevents a full reconstruction of the original signal.

“We developed a method that breaks down the input signal into its analogue components while storing multiple versions of them. This means we can reconstruct the signal with very high accuracy, even if part of the system drops out.

“To put it simply, imagine a line of buckets, which you walk along pouring water into. The water goes into the bucket underneath where you are pouring. If that bucket is broken, the water is lost. The current hearing implant technology operates on this basis: it looks at the amount of water being poured at a point in time, not at its other parameters, such as volume, phase and frequency.

“Our system is more advanced. Using the same analogy, you would have a row of buckets with partly perforated funnels above them. This means that, while water goes through to the bucket underneath, some goes into the adjacent buckets.”
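The bucket analogy maps naturally onto a bank of overlapping frequency channels. The following toy sketch is our assumption of the general idea, not the team's published design: each “bucket” is a Gaussian frequency channel, the “perforated funnels” are the overlaps between neighbouring channels, and the channel gains sum to one. Because each frequency leaks into adjacent channels, losing one channel degrades the reconstruction only partially rather than deleting a whole band.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
signal = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

# Overlapping Gaussian channels along the frequency axis.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
centres = np.linspace(0, 4000, 9)
width = 700   # wide enough that adjacent channels overlap ("perforation")
gains = np.stack([np.exp(-0.5 * ((freqs - c) / width) ** 2) for c in centres])
gains /= gains.sum(axis=0)   # gains sum to one, so the full bank is lossless

spectrum = np.fft.rfft(signal)
channels = [np.fft.irfft(g * spectrum, len(t)) for g in gains]

full = sum(channels)                    # all buckets intact
damaged = full - channels[3]            # one bucket "broken"

print("max error, all channels kept:", float(np.max(np.abs(full - signal))))
print("max error, one channel lost :", float(np.max(np.abs(damaged - signal))))
```

Running this shows a near-zero error with all channels present and only a modest error when one channel is removed, since its neighbours carry overlapping copies of the same band.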

“What we’re working on will be very low power, making it ideal for biomedical use. While the current technology could be improved to provide better outputs, such devices would be more difficult to make, as well as being bigger and using more power.

“We envisage that our system, which could be available commercially within about six years, will be the same size as current hearing implants, or possibly even smaller.”

Current users experience a metallic sound, which means a significant period of training is needed for the brain to interpret these signals appropriately.

Wim adds: “The training is necessary for the brain to learn to extract useful information from the noisy signal being received. Once the brain is trained, it receives a clearer signal. But it will still be rather metallic, as there is limited information about the audio signal being fed into the brain.

“So, while people can have a conversation, they struggle to filter out background noise in busy environments, such as crowds, heavy traffic or parties. Live music sounds horrible, like a crash.”

This study forms the basis of further work in which Dr. Melis aims to develop hardware that mimics true human intelligence using analogue computing.

His team are also exploring the possibility of using this method to compress audio with very high fidelity. This could replace current audio compression and storage formats, such as those used by streaming services and for audio storage on phones.
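As a rough illustration of the underlying idea (a generic transform-coding toy, not the team's codec), keeping only the strongest frequency components of a simple tone mixture shows how few numbers are needed to reproduce it almost exactly:

```python
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
audio = 0.6 * np.sin(2 * np.pi * 330 * t) + 0.2 * np.sin(2 * np.pi * 990 * t)

spectrum = np.fft.rfft(audio)
keep = 16                                    # number of coefficients retained
idx = np.argsort(np.abs(spectrum))[-keep:]   # the `keep` strongest bins
sparse = np.zeros_like(spectrum)
sparse[idx] = spectrum[idx]
restored = np.fft.irfft(sparse, len(audio))

print("kept", keep, "of", len(spectrum), "coefficients")
print("max error:", float(np.max(np.abs(restored - audio))))
```

For this simple two-tone signal the error is negligible, even though only a handful of the eight thousand or so coefficients are stored; real audio is far less sparse, which is where more sophisticated decompositions come in.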