- Researchers are working on deep-learning algorithms that will allow headphone users to select which sounds they hear.
- Users will be able to choose from 20 classes of sounds, including sirens, baby cries, bird chirps, and more.
- The researchers plan to create a commercial version of the technology.
Noise cancellation on headphones is great when you want to block out all the noise around you. But what about when you want to hear certain sounds? Modes like Ambient Sound on Sony’s WF-1000XM5 let you hear your surroundings, but they also let everything in. A new technology designed for headphones could soon allow you to pick which sounds in your environment you hear.
Researchers at the University of Washington are currently working on deep-learning algorithms that will allow headphone users to select which sounds they hear in real time, according to Tech Xplore. Dubbed “semantic hearing,” the headphone technology will capture audio and send it to the connected phone, which cancels out all environmental sounds except the ones you picked.
It appears the feature will work either through voice commands or a smartphone app. When activated, users will be able to choose from 20 classes of sounds, including baby cries, sirens, speech, and bird chirps.
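Conceptually, this kind of class-based filtering keeps only the audio the user opted into and silences the rest. The sketch below is a toy illustration of that idea, not the researchers’ actual system: the class names, frame format, and `semantic_filter` function are all hypothetical, and a real implementation would use a neural network rather than pre-labeled frames.

```python
# Toy sketch of "semantic hearing": pass through audio frames whose
# predicted sound class is in the user's selection, silence the rest.
# Class labels and the per-frame classification are illustrative only.

SOUND_CLASSES = ["siren", "baby_cry", "speech", "bird_chirp"]  # 4 of the ~20 classes

def semantic_filter(frames, selected):
    """frames: list of (samples, predicted_class) pairs.
    Returns the frames with non-selected classes replaced by silence."""
    output = []
    for samples, predicted in frames:
        if predicted in selected:
            output.append(samples)               # keep the chosen sound
        else:
            output.append([0.0] * len(samples))  # cancel everything else
    return output

# Example: the user chose to hear sirens only.
frames = [([0.2, 0.3], "siren"), ([0.5, 0.1], "speech")]
print(semantic_filter(frames, {"siren"}))  # [[0.2, 0.3], [0.0, 0.0]]
```

In practice, the hard part is the classifier itself; the filtering step shown here is trivial once each slice of audio has a reliable label.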
Creating an AI that can sort out these sounds quickly and accurately is not easy. As Shyam Gollakota, senior author and professor in UW’s Paul G. Allen School of Computer Science and Engineering, explains:
The challenge is that the sounds headphone wearers hear need to sync with their visual senses. You can’t be hearing someone’s voice two seconds after they talk to you. This means the neural algorithms must process sounds in under a hundredth of a second.
The speed at which this processing needs to occur also means that semantic hearing can’t be done through the cloud. If the feature is to work as intended, the processing has to happen on-device, such as on the connected phone. The outlet also points out that because sounds reach your ears at different times, the technology needs to account for those delays.
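To make the “under a hundredth of a second” constraint concrete, here is a back-of-the-envelope latency budget. The sample rate and frame size are illustrative numbers chosen for the sketch, not figures from the research: even buffering a short audio frame eats much of the 10 ms budget, leaving only a few milliseconds for the neural network, which is why a cloud round trip is out of the question.

```python
# Back-of-the-envelope latency budget for real-time audio filtering.
# All numbers are illustrative assumptions, not from the paper.

SAMPLE_RATE = 44_100   # audio samples per second
FRAME_SIZE = 256       # samples buffered before each processing step

frame_duration_ms = FRAME_SIZE / SAMPLE_RATE * 1000  # time to collect one frame
budget_ms = 10.0       # "under a hundredth of a second"

# Whatever is left after buffering is all the time the model gets:
inference_budget_ms = budget_ms - frame_duration_ms
print(f"frame: {frame_duration_ms:.1f} ms, inference budget: {inference_budget_ms:.1f} ms")
```

A typical round trip to a cloud server alone can take tens of milliseconds, so the entire budget would be blown on networking before any processing happened.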
So far, semantic hearing has been tested in offices, streets, and parks. Overall, the feature has been a success, but it has reportedly struggled with sounds that share certain properties. For example, the AI had difficulty separating vocal music from speech. However, more training on real-world data could improve this.
The researchers have presented their findings and plan to create a commercial version of the feature in the future. However, it appears there’s no timeline for when that day will come. What do you think of semantic hearing possibly coming to future ANC headphones? Let us know in the comments below.
The post Future ANC headphones may let you pick which real world sounds to filter first appeared on www.androidauthority.com