James Bond in your ears: this is what the new AI-powered headphones will soon allow

Atlantico: What are the specific features of these new AI-powered headphones? Are they worthy of spy films?

Gilles Dounès: A small team of young researchers at the University of Washington in Seattle has developed an artificial-intelligence device, coupled with a pair of over-ear headphones, that lets the wearer isolate a particular conversation in a noisy environment.

In this device, two microphones are embedded in the headset. When the wearer looks toward a person who is speaking, the artificial-intelligence algorithm isolates the voice print of the "targeted" speaker after four to five seconds. From then on, you no longer need to look at the person you want to listen to, and you can continue the conversation side by side. There is, however, a 16° margin of error in the selection of the relevant source.
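The team's actual system is a neural network running on the device, and its code and architecture are not reproduced here. But the role of the two microphones and of the "look toward the speaker" gesture can be illustrated with a classic delay-and-sum beamformer, a generic textbook technique rather than the team's method; the signals and numbers below are purely illustrative:

```python
import numpy as np

def delay_and_sum(mic_a, mic_b, delay_samples):
    """Align mic_b for the assumed look direction, then average the channels.
    A source in the look direction adds coherently; off-axis sources stay
    misaligned between the two channels and partially cancel."""
    aligned_b = np.roll(mic_b, -delay_samples)
    return 0.5 * (mic_a + aligned_b)

# Toy demo: a 500 Hz "voice" straight ahead (zero inter-mic delay) plus a
# 1000 Hz interferer that reaches the second mic 8 samples later -- exactly
# half its period at 16 kHz, so averaging the two channels cancels it.
sr = 16000
t = np.arange(1600) / sr
target = np.sin(2 * np.pi * 500 * t)
interferer = np.sin(2 * np.pi * 1000 * t)
mic_a = target + interferer
mic_b = target + np.roll(interferer, 8)
enhanced = delay_and_sum(mic_a, mic_b, delay_samples=0)
```

In a real headset the look direction sets `delay_samples`, and the 16° margin of error mentioned above reflects how sharp (or not) this kind of spatial selectivity can be with only two microphones.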

It is more of a demonstrator, which was also the subject of a scientific paper, than a gadget out of Q's laboratory, ready to accompany James Bond into the field. The device was presented on May 14 in Honolulu, during the ACM CHI Conference on Human Factors in Computing Systems. The source code was also released to the scientific community on that occasion: it is not impossible that we will see other projects appear based on this same concept… unless a state intelligence service, or a private company in the MILIPOL sector, decides to take it over.

"Target Speech Hearing" was tested on 21 people, and its effectiveness increases the more it is used, once the "target" has been chosen and calibrated.

It is already quite possible to obtain this kind of result with a shotgun microphone, possibly paired with a parabolic dish, but that means deliberately tracking the "target" constantly through its possible movements. Here, it is the artificial intelligence that locks onto its target, by isolating the relevant frequencies from the surrounding noise, and it runs "locally" on a simple electronic device.
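The idea of "locking onto" a speaker by isolating their characteristic frequencies can be sketched very crudely. The real system learns a neural embedding of the target's voice; the toy version below, an assumption-laden stand-in, builds a simple averaged-spectrum "voice print" from a few seconds of enrollment audio and then re-weights the mixture toward that profile:

```python
import numpy as np

def spectral_profile(audio, n_fft=512):
    """Average magnitude spectrum over short frames: a crude 'voice print'."""
    win = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] for i in range(0, len(audio) - n_fft, n_fft // 2)]
    mags = [np.abs(np.fft.rfft(f * win)) for f in frames]
    profile = np.mean(mags, axis=0)
    return profile / (np.linalg.norm(profile) + 1e-9)

def extract_target(mixture, target_profile, n_fft=512, strength=4.0):
    """Emphasize frequencies that dominate the enrolled profile (a fixed,
    Wiener-like mask) and overlap-add the filtered frames back together."""
    win = np.hanning(n_fft)
    mask = target_profile ** strength
    mask = mask / (mask.max() + 1e-9)
    out = np.zeros_like(mixture)
    for i in range(0, len(mixture) - n_fft, n_fft // 2):
        spec = np.fft.rfft(mixture[i:i + n_fft] * win)
        out[i:i + n_fft] += np.fft.irfft(spec * mask, n_fft)
    return out

# Toy demo: enroll on a 437.5 Hz 'voice', then clean a mixture that also
# contains a 3125 Hz interferer (both aligned to FFT bins at 16 kHz).
sr = 16000
t = np.arange(sr) / sr
enroll = np.sin(2 * np.pi * 437.5 * t)
mixture = np.sin(2 * np.pi * 437.5 * t) + np.sin(2 * np.pi * 3125.0 * t)
cleaned = extract_target(mixture, spectral_profile(enroll))
spec = np.abs(np.fft.rfft(cleaned))
freqs = np.fft.rfftfreq(len(cleaned), 1.0 / sr)
voice = spec[np.argmin(np.abs(freqs - 437.5))]
noise = spec[np.argmin(np.abs(freqs - 3125.0))]
```

Real voices overlap heavily in frequency, which is precisely why a fixed spectral mask fails on speech and a trained neural separator, as in "Target Speech Hearing", is needed.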

This is probably a so-called "narrow" artificial intelligence, trained to isolate a certain number of relevant elements in a noisy set, in the same way that Chinese researchers several years ago trained an artificial intelligence to detect lung cancer at a very early stage in medical images. Here, the artificial intelligence, or rather the algorithm, is trained simply to isolate whatever appears to it to be both distinct and reasonably consistent.

Could this innovation revolutionize tomorrow's earphones and headsets? The latest edition of Apple's AirPods Pro, for example, can automatically adjust sound levels by detecting when the user is in conversation.

The concept is extremely interesting and should give others ideas, especially since the artificial-intelligence system used is light enough to run locally on an electronic device. The next challenge will be to develop a dedicated chip and miniaturize it enough to fit into sufficiently discreet in-ear headphones. Two options are open to the small team, which has also founded a start-up: either attempt a market launch someday, once the concept is more mature, or be bought by a heavyweight in the sector (or one that aspires to become one). The five researchers have likely already received emissaries sent by Cupertino… and not only from there!

The "Target Speech Hearing" project also works in the opposite, and therefore complementary, way to current AirPods, which isolate the user inside an active noise-reduction bubble. But the principle is the same: controlling the user's sound environment in order to improve their experience.

Either we present more information, as with augmented-reality glasses, for example.

Or we isolate the relevant information and present it in a magnified way, isolating it to some extent from its context.

In the present case we can speak of augmented reality, even though a good deal of isolation from the rest of the environment is involved: we might call it an auditory zoom.

In the case of the AirPods we have a sort of "circuit breaker" which proactively "cuts off" a violent sound before it reaches the ear, and which relies on the H1 or H2 chip specially developed by Apple for this purpose. The system designed at the Paul G. Allen School of Computer Science & Engineering is much more artisanal, a research project carried out by a handful of doctoral students, but, like Apple, the team is working to miniaturize its device, moving to the in-ear headphone format with the hearing-aid market in view.
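The "circuit breaker" image can be made concrete with the classic feed-forward LMS canceller that underlies most textbook treatments of active noise reduction. To be clear, Apple's actual H1/H2 implementation is proprietary and far more sophisticated; this is only a minimal sketch of the general principle, with invented signals:

```python
import numpy as np

def lms_anc(reference, at_ear, n_taps=16, mu=0.01):
    """Toy feed-forward LMS noise canceller.

    `reference` is what the outward-facing microphone picks up; `at_ear` is
    the same noise after travelling to the ear. An adaptive FIR filter learns
    that acoustic path and emits anti-noise, so the residual heard at the ear
    shrinks as the weights converge."""
    w = np.zeros(n_taps)
    residual = np.zeros(len(reference))
    for n in range(n_taps, len(reference)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        anti = w @ x                       # anti-noise played by the driver
        e = at_ear[n] - anti               # what the ear still hears
        w += mu * e * x                    # LMS weight update
        residual[n] = e
    return residual

# Toy demo: white noise reaching the ear attenuated and delayed by 3 samples.
rng = np.random.default_rng(0)
ref = rng.standard_normal(5000)
at_ear = np.zeros(5000)
at_ear[3:] = 0.5 * ref[:-3]
res = lms_anc(ref, at_ear)
```

The contrast with "Target Speech Hearing" is then easy to state: noise cancellation subtracts everything correlated with the outside reference, whereas target-speech hearing subtracts everything except one enrolled voice.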

Could AI, through such sound-selection technology, modify the auditory perception of anyone wearing headphones, based on their preferences? Could this even help improve hearing aids?

It is this double perspective, augmented reality on one side and hearing assistance on the other, that makes this research advance so interesting. The team, moreover, initially worked on birdsong. But it is in hearing aids that the prospects seem the most promising, given the system's ability to concentrate on at least one interlocutor.

This loss of the ability to discriminate the main speaker in a noisy environment is indeed found both in the gradual loss of hearing acuity linked to age, for example, or following an infection, and in the total or partial loss of hearing in one ear. The resulting reduction in stereo perception is also a nuisance, forcing the listener, during a conversation, to systematically place the interlocutor on the side of their "good" ear. This kind of advance, especially if integrated into in-ear headphones, could represent significant progress and welcome autonomy for a growing share of the population.

