
Before splurging on a pair of premium headphones or earbuds, people often have concerns over two key factors – noise isolation and transparency. Active Noise Cancellation essentially tries to block all external noise so that you only hear what the audio drivers are blasting inside your ear canal, while transparency allows limited passthrough of ambient sound while still listening to music. So far, the likes of Apple and Sony have done a terrific job at it, but no audio gear lets you focus on a single source of external sound.

What if you only want to hear what a single person is saying in a room full of other people? Researchers at the University of Washington have developed an AI-driven kit for headphones that lets you look at a person for three to five seconds, using that gaze as a directional cue, after which the headphones allow only that person's voice to pass through. The team calls it "Target Speech Hearing," and it keeps working even if the listener moves around and is no longer facing the speaker.

“In this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking,” says Professor Shyam Gollakota from the Paul G. Allen School of Computer Science & Engineering.

Ripe for tinkering by AI enthusiasts

“In addition to the noise canceling, the innovation is also in the neural network and AI algorithms we design,” Gollakota, a Moore Inventor Fellow and Thomas J. Cable Endowed Professor, tells SlashGear. 

So, here's how it all works. An individual wearing commercially available headphones fitted with microphones presses a physical control while facing a speaker. The targeted speaker's voice then reaches the microphones on both sides of the headphones simultaneously, which is the directional cue the system uses to pick them out. The headphones send this signal to an embedded computer, where the team's machine learning software learns the desired speaker's vocal patterns. As the speaker keeps talking, the extra audio gives the system more to learn from, so its ability to isolate the target voice improves. The AI then locks onto that voice and feeds it into the listener's ear canal, even as they move around.
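The team's real system uses trained neural networks (the code is on GitHub), but the enroll-then-extract flow described above can be illustrated with a deliberately simple toy sketch. Everything here is invented for illustration: the `enroll`/`extract` function names, and the use of a normalized average spectrum as a stand-in "voice embedding" with a spectral soft mask in place of a neural separator.

```python
import numpy as np

SR = 16_000  # sample rate in Hz

def spectral_profile(signal: np.ndarray) -> np.ndarray:
    """Normalized average magnitude spectrum -- a toy stand-in for a voice embedding."""
    spec = np.abs(np.fft.rfft(signal))
    return spec / (np.linalg.norm(spec) + 1e-12)

def enroll(target_snippet: np.ndarray) -> np.ndarray:
    """The 'look at the speaker for a few seconds' step: learn their signature."""
    return spectral_profile(target_snippet)

def extract(mixture: np.ndarray, embedding: np.ndarray) -> np.ndarray:
    """Softly mask the mixture's spectrum toward the enrolled signature."""
    spec = np.fft.rfft(mixture)
    mask = embedding / (embedding.max() + 1e-12)  # emphasize the target's bins
    return np.fft.irfft(spec * mask, n=len(mixture))

def band_energy(x: np.ndarray, f: float, width: float = 20.0) -> float:
    """Total spectral energy within +/- width Hz of frequency f."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return float(spec[(freqs > f - width) & (freqs < f + width)].sum())

# Toy demo: two 'talkers' modeled as tones at different pitches.
t = np.arange(SR) / SR
target = np.sin(2 * np.pi * 220 * t)      # the person you looked at
interferer = np.sin(2 * np.pi * 700 * t)  # someone else in the room

emb = enroll(target)                       # enrollment from a short clean snippet
cleaned = extract(target + interferer, emb)
```

After extraction, almost all remaining energy sits in the target's frequency band; the interferer is strongly suppressed. A real system replaces the spectral mask with a neural network so it generalizes to overlapping, broadband voices, which tones cannot capture.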

As for the hardware, there’s no secret sauce. “Yes, we use off-the-shelf hardware components that are fully available for DIY enthusiasts. We use a Sony WH-1000XM4 noise-cancelling headset, a pair of binaural microphones (Sonic Presence SP15C) and an Orange Pi, all of which are available for folks to tinker with,” Gollakota tells SlashGear. Given the hardware at hand, and the work that brands like Apple and Sony have recently done with customized audio chips, it won’t be too difficult for them to miniaturize or integrate this tech with their audio gear and deliver a next-generation transparency and noise cancellation experience.

AI or human, it’s all going to be focused

The AI headphones are not a commercial product you can buy, but they aren't exotic bespoke hardware either. The entire code driving the proof-of-concept device has been published on GitHub, so any enthusiast is welcome to tinker with it and build their own version. And the best part: the handful of extra items required to make these AI headphones are readily available on the market.

“The AI accelerator hardware is getting very cheap, with, I think, it being much less than 10 dollars per unit at scale. So I don’t think it would significantly increase the cost to include AI into headphones once it gets incorporated at scale,” Gollakota tells SlashGear. “Our work is showing the applications and capabilities and technologies that can benefit from AI into headphones, which is inevitable.” There is already some precedent for that. London-based startup Nothing recently added an exclusive feature to its wireless earbuds that allows users to summon ChatGPT with their voice.

The timing is also quite perfect. Within a span of two days earlier this month, Google and OpenAI showcased their latest audio-visual enhancements for Gemini and ChatGPT, which allow users to have natural spoken conversations with the AI chatbots. In the case of the AI headphones developed by Gollakota's team, the real magic is in the neural networks and the AI algorithms, which are openly accessible for enthusiasts to experiment with.


The post These Superhuman Headphones Could Change Your Mind On AI – SlashGear first appeared on www.slashgear.com
