Switzerland is not sufficiently armed against manipulated videos – rts.ch

It is difficult for the average person to spot a video faked using artificial intelligence. TA-Swiss, the Swiss Foundation for Technology Assessment, is sounding the alarm: it believes Switzerland is poorly prepared to deal with deepfakes.

“Over the years, it has become easier and easier to create deepfakes,” Murat Karaboga of the Fraunhofer Institute, who participated in the study (and its French-language summary), told the media in Bern on Tuesday. Many programs now make it possible to create or manipulate sound recordings, as well as images and videos, using artificial intelligence (AI), he added.

It is thus possible to create content in which a person does or says something that they have never done or said. Such content is used in particular by criminals who copy the voices of private individuals for shock and blackmail calls.

Identity theft, reputational damage, the dissemination of false information and pornographic scenes produced without the consent of the person concerned are further risks linked to deepfakes, according to the report of more than 400 pages.

Almost impossible detection

According to the report, individuals are hardly able to spot deepfake videos (read box). In an experiment, participants failed to recognize falsified videos, even after being given instructions on how to spot them. Only people with an affinity for new media performed better.

Detection programs, in which research teams place high hopes for spotting deepfakes, fared no better: “We have come to the conclusion that detection programs are neither reliable nor accessible,” notes Murat Karaboga.

How, then, can such falsified videos be recognized? There are a few visual tells, such as hair that often looks unnatural, or movements that seem slightly rough. But with these technologies constantly improving, such clues will probably soon disappear.

To recognize deepfakes, “above all, you need a certain flair,” according to Murat Karaboga: “You have to ask yourself: ‘Is it really possible that this content is authentic?’”

Necessary measures

To combat the harmful effects of deepfakes, the researchers are proposing a series of measures. Some legal bases exist, but they are sometimes difficult to apply, because the large online platforms on which deepfakes circulate are based abroad and those responsible for the offences are not always identifiable, said Nula Frei of the university institute UniDistance Switzerland.

>> Read also: Faced with the proliferation of deepfakes, Meta sets up a labeling system and UN chief calls for ‘united’, ‘comprehensive’ response to dangers of AI

TA-Swiss believes that the state should require online platforms to remove harmful deepfakes; the scientists also recommend setting up counseling centers for victims. Continuing training in media and information literacy and the use of advanced authentication procedures are further possible measures.

The report further recommends self-regulation for the public relations and advertising sector. Journalists are also concerned: the research team recommends promoting robust verification methods in newsrooms.

Opportunities for deepfakes

The authors of the study, however, believe that there is no question of banning this technology, because deepfakes also offer many opportunities.

In the entertainment industry, deepfakes open up many exciting possibilities, the report says, for example for dubbed versions of films: with the support of AI, the lip movements of actresses and actors can be matched to the language spoken.

For their part, the police and criminal prosecution authorities hope that deepfakes will offer new possibilities in the fight against crime, for example for the reconstruction of crime scenes or the sequence of events.

TA-Swiss’ mission is to assess the viability of new technologies in collaboration with Swiss universities and institutions, which carry out studies on technological developments in order to inform Parliament, the Federal Council, the administration and the public.

>> Read also: AI response engines, a technological revolution with reliability challenges

sjaq and ats
