# Smart headphones may solve the 'cocktail party problem'

You are free to share this article under the Attribution 4.0 International license.

Researchers have developed smart headphones that proactively isolate all the wearer's conversation partners in a noisy soundscape.

Holding a conversation in a crowded room often leads to the frustrating "cocktail party problem": the challenge of separating the voices of conversation partners from the surrounding hubbub. It's a mentally taxing situation that can be exacerbated by hearing impairment.

The new headphones are powered by an AI model that detects the cadence of a conversation and a second model that mutes any voices that don't follow that pattern, along with other unwanted background noise. The prototype uses off-the-shelf hardware and can identify conversation partners from just two to four seconds of audio.

The system's developers think the technology could one day help users of [hearing aids](https://www.futurity.org/algorithm-hearing-aids-3279312/), earbuds, and smart glasses filter their soundscapes without having to manually direct the AI's "attention."

The team presented the technology at the Conference on Empirical Methods in Natural Language Processing in Suzhou, China. The underlying code is open source and [available for download](https://github.com/guilinhu/proactive_hearing_assistant).

"Existing approaches to identifying who the wearer is listening to predominantly involve electrodes implanted in the brain to track attention," says senior author Shyam Gollakota, a University of Washington professor in the Paul G. Allen School of Computer Science & Engineering.

"Our insight is that when we're conversing with a specific group of people, our speech naturally follows a turn-taking rhythm. And we can train AI to predict and track those rhythms using only audio, without the need for implanting electrodes."

The prototype system, dubbed "proactive hearing assistants," activates when the person wearing the headphones begins speaking. From there, one AI model tracks conversation participants by performing a "who spoke when" analysis and looking for the low overlap characteristic of turn-taking exchanges. The system then forwards the result to a second model, which isolates the participants and plays the cleaned-up audio for the wearer. The system is fast enough to avoid distracting audio lag, and can currently juggle one to four conversation partners in addition to the wearer.

The team tested the headphones with 11 participants, who rated qualities like noise suppression and comprehension with and without the AI filtering. Overall, the group rated the filtered audio more than twice as favorably as the baseline.

Gollakota's team has been experimenting with AI-powered hearing assistants for the past few years. They developed one smart headphone prototype that can pick a person's audio out of a crowd when the wearer looks at them, and another that creates a "sound bubble" by muting all sounds beyond a set distance from the wearer.

"Everything we've done previously requires the user to manually select a specific speaker or a distance within which to listen, which is not great for user experience," says lead author Guilin Hu, a doctoral student in the Allen School. "What we've demonstrated is a technology that's proactive—something that infers human intent noninvasively and automatically."

Plenty of work remains to refine the experience. The more dynamic a conversation gets, the more the system is likely to struggle, as participants talk over one another or speak in longer monologues. Participants entering and leaving a conversation present another hurdle, though Gollakota was surprised by how well the current prototype handled these more complicated scenarios. The authors also note that the models were tested on English, Mandarin, and Japanese dialogue, and that the rhythms of other languages might require further fine-tuning.

The current prototype uses commercial over-the-ear headphones, microphones, and circuitry. Eventually, Gollakota expects to shrink the system to run on a tiny chip inside an earbud or a hearing aid. In concurrent work that appeared at MobiCom 2025, the authors demonstrated that it is possible to run AI models on tiny hearing aid devices.

This research was funded by the Moore Inventor Fellows program.

Source: [University of Washington](https://www.washington.edu/news/2025/12/09/ai-headphones-smart-noise-cancellation-proactive-listening/)
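To make the "who spoke when" idea concrete: a minimal sketch of how turn-taking could be detected from diarization output is shown below. This is an illustrative toy, not the authors' system (their actual code is in the linked GitHub repository); the function names, segment format, and the 20% overlap threshold are all hypothetical assumptions. The intuition it captures is the one the article describes: a conversation partner's speech alternates with the wearer's and rarely overlaps it, while background talkers speak on their own schedule.

```python
# Toy illustration (NOT the authors' code): given diarization segments
# (start, end) in seconds for the wearer and a candidate speaker, treat
# the candidate as a conversation partner when their speech rarely
# overlaps the wearer's, i.e. it follows a turn-taking rhythm instead
# of running independently in the background.

def overlap_seconds(a, b):
    """Total time (s) during which two segment lists sound simultaneously."""
    total = 0.0
    for a_start, a_end in a:
        for b_start, b_end in b:
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

def is_conversation_partner(wearer, candidate, max_overlap_frac=0.2):
    """Flag a candidate whose speech overlaps the wearer's for less than
    max_overlap_frac of the candidate's own talk time. The threshold is
    an illustrative guess, not a value from the paper."""
    talk_time = sum(end - start for start, end in candidate)
    if talk_time == 0.0:
        return False
    return overlap_seconds(wearer, candidate) / talk_time < max_overlap_frac

# Wearer speaks during 0-2 s and 4-6 s.
wearer = [(0.0, 2.0), (4.0, 6.0)]
partner = [(2.1, 3.9)]    # replies in the wearer's pauses -> low overlap
bystander = [(0.5, 5.5)]  # talks straight through the wearer -> high overlap

print(is_conversation_partner(wearer, partner))    # True
print(is_conversation_partner(wearer, bystander))  # False
```

In the real system this decision is made online by a neural model from a few seconds of audio, and the result is handed to a second model that extracts only the flagged speakers; the sketch only shows why low speech overlap is a usable signal for "these people are talking with me."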