DayFR Euro

Meta allows itself to use data from Ray-Ban “smart” glasses

After keeping things vague, Meta has confirmed that it allows itself to use data from North American users of Ray-Ban “smart” glasses to train its AI, as soon as they activate the glasses’ AI features. For European users, some gray areas remain, but as long as Meta declines to deploy its multimodal models in the area covered by the GDPR, its glasses should not offer this functionality there.

At the end of September, during Meta Connect 2024, Meta presented its new products around headsets and glasses. The company explained that it was bringing several new features to the glasses developed in partnership with Ray-Ban, with, as in almost any current digital product, a good dose of AI: live language translation, real-time video processing, reminder management, QR code recognition, integration with Amazon Music, iHeartRadio and Audible…

As we mentioned, image analysis allows the user to ask questions about what they see.

North American user data used

But, as with all commercial uses of deep learning (including large language models used in generative AI), a question quickly arises: does the company in return use its users’ data to train its AI?

Concerning North American users, the answer is now clear following an inquiry by our colleagues at TechCrunch: the American outlet sums it up as “in short, any image you share with Meta AI can be used to train its AI”.

Meta replied to them by email that “in countries where Multimodal AI is available (currently the United States and Canada), images and videos shared with Meta AI may be used to improve it in accordance with our Privacy Policy”.

The company explained in another email that photos and videos captured with the Ray-Ban Meta are not used unless the user submits them to the AI.

As our colleagues point out, “the only way to opt out is not to use Meta’s multimodal AI features”. And they insist on the problem:

“The implications are concerning, as Ray-Ban Meta users may not understand that they are giving Meta tons of images – for example showing the interior of their home, their loved ones or their personal files – to train its new AI models.”

The company also points our colleagues to the Terms of Service for Meta AI: “once shared, you agree that Meta will analyze these images, including facial features, using AI”.

Blocked by the GDPR in Europe

Regarding European Ray-Ban Meta users, we remain in the dark. The same Meta AI Terms of Service state, a little earlier, that “depending on your location, you may be able to share images with AIs”.

The text is only specific on this subject for images of individuals residing in two US states: “you also agree not to upload images to Meta AI that you know include individuals who reside in Illinois or Texas, unless you are their legally authorized representative and consent on their behalf”.

But, as Axios explained last July, Meta has decided not to use or distribute its multimodal models in Europe for the moment. The outlet specified that the AI used by Meta in the Ray-Bans would notably be based on these models.

The company cited “the unpredictable nature of the European regulatory environment” to explain its decision. In doing so, it was targeting the GDPR, a text adopted eight years ago now, rather than more recent texts such as the AI Act. It was indeed the GDPR that blocked Meta’s use of Facebook and Instagram user data to train its AI. Meta is therefore putting pressure on Europe.

Moreover, even if Meta does not plan to activate AI on the Ray-Ban glasses in Europe for now, it has already updated the French (fr-fr) version of its “voice command privacy notice” for the “Ray-Ban Meta smart glasses”.

In it, the company already states that “depending on your settings, we may also use transcripts and stored recordings of your voice interactions and data associated therewith to improve Meta’s products. When storing voice interactions to improve products is enabled, we use machine learning and trained reviewers to process the information to improve, troubleshoot, and train Meta’s products”.

The notice also states that “storing your voice interactions allows Meta’s products to better process your requests and respond to a wide range of voice samples, expressions, local dialects and accents. For example, if people speak a regional dialect and enable storage of their voice interactions, this will allow Meta’s products to better understand and respond to the requests of people speaking that dialect more accurately”.

Contacted by us, the company has not yet responded to our request for further clarification.
