In video – AlgorithmWatch sends a petition to the Federal Council to call for the regulation of artificial intelligence

Published on September 24, 2024 at 5:47 p.m. / Modified on September 24, 2024 at 9:15 p.m.

“It is now up to the Federal Council to assume its responsibilities.” After launching an appeal last June, the AlgorithmWatch association and its director Angela Müller submitted a petition to the Federal Council this Tuesday calling for stronger regulation of artificial intelligence. In its crosshairs: cases of algorithmic discrimination, a subject that is essential “from a fundamental rights point of view, but also from a social, ethical and economic point of view,” the association explains in a press release.

The timing is right: this winter, the Federal Council will examine the outlines of potential legislation on the subject. The Federal Department of the Environment, Transport, Energy and Communications has been tasked with drafting a report before discussions on specific regulations begin, possibly as early as 2025.

AlgorithmWatch’s previous call: Pressure mounts on Switzerland to regulate artificial intelligence

Supported by 45 organizations, including UNICEF and the Fédération romande des consommateurs, the petition was also signed by 70 Swiss public figures and nearly 3,000 citizens. Among the signatories is Fabian Luetz, a doctoral student at the Faculty of Law of the University of Lausanne. According to him, algorithms cannot continue to be used without greater regulation: “If there is no legal framework, there is no obligation to inform. If I apply to a company tomorrow, I don’t even know whether an algorithm is being used or not.”

The explanation

Where do the identified biases of generative AI come from?

Published on July 17, 2024 at 12:22 p.m. / Modified on August 29, 2024 at 3:36 p.m.

Algorithmic biases in generative AI are the result of two distinct issues:

  • The people who design these technologies embed their own ideas into them at the design stage. There is a tendency to underestimate the human component of these tools. More diversity at the base of AI design can only be positive, whatever the type of discrimination.
  • The quality of the data obviously plays a role: when it was collected, how it was processed, and so on. Many human choices are hidden behind this work. There is also the question of the models themselves, that is, the parameters that are chosen, and finally of how these systems are used. Even a technologically “perfect” system can have discriminatory consequences if it is misused.

Estelle Pannatier, Policy and Advocacy Officer at AlgorithmWatch CH
From this interview: Sexism in AI: “We underestimate the human component of these tools”
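
To make the data-quality point concrete, here is a minimal Python sketch. It is entirely synthetic: the groups, scores and hiring thresholds are invented for illustration and do not come from AlgorithmWatch or any real system. It shows how a system that learns from historically skewed decisions simply reproduces the skew.

```python
import random

random.seed(0)

# Hypothetical, synthetic history: in past human decisions, equally
# qualified candidates from group "B" needed a higher score than
# candidates from group "A" to be hired.
def historical_decision(group: str, score: float) -> bool:
    threshold = 0.5 if group == "A" else 0.7
    return score > threshold

data = [(group, random.random()) for group in ("A", "B") for _ in range(10_000)]
labels = [historical_decision(group, score) for group, score in data]

# A system "trained" on this data converges to the per-group hiring rates
# it saw; here we read them off directly instead of fitting a classifier.
for group in ("A", "B"):
    outcomes = [hired for (g, _), hired in zip(data, labels) if g == group]
    print(f"group {group}: learned hiring rate = {sum(outcomes) / len(outcomes):.2f}")

# Prints roughly 0.50 for A and 0.30 for B: the historical discrimination
# is now encoded in the system's behavior.
```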

For AlgorithmWatch, strengthening protection against discrimination involves, in particular:

  • Extending protection so that it also applies to discrimination by private actors. “In Switzerland, we are currently not protected, or only very poorly protected, against discrimination when it is the work of private actors,” explains Estelle Pannatier, head of advocacy at AlgorithmWatch.
  • Establishing class action procedures.
  • Obliging entities that develop or use algorithmic systems to carry out impact analyses (one possible metric for such an analysis is sketched after this list).
    Read also: In Switzerland, the future supervision of artificial intelligence divides opinion
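
As an illustration of the impact-analysis item above, here is a hedged Python sketch of one metric such an analysis might compute: the disparate impact ratio. The decision data is invented, and the 0.8 cutoff is the US “four-fifths rule,” chosen only as a familiar example; the petition does not prescribe any particular metric.

```python
# A minimal sketch of one check an algorithmic impact analysis could include:
# the disparate impact ratio (selection rate of the worse-off group divided
# by that of the better-off group).

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of candidates the system selected."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Lower selection rate divided by higher selection rate (in [0, 1])."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical outcomes from a screening algorithm for two groups.
group_a = [True] * 50 + [False] * 50   # 50% selected
group_b = [True] * 30 + [False] * 70   # 30% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.60
if ratio < 0.8:
    print("below the four-fifths threshold: flag the system for review")
```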

How can all the issues of discrimination by algorithms be resolved? A legal framework put in place by the Federal Council will, on its own, struggle to resolve discrimination by algorithms designed in the United States or elsewhere, says Fabian Luetz. For the doctoral student at the Faculty of Law of the University of Lausanne, better international cooperation is needed, particularly between “the different agencies that deal with the regulation of AI, otherwise we will end up with continents with different issues.” Protection against biased algorithms can also be built in at the design stage and then checked before release: “We do not see drugs on the market that have not been tested, because Swissmedic carries out checks upstream,” says Fabian Luetz.

All of these solutions imply greater regulation of AI, which is far from commanding unanimous support. In Switzerland, the business umbrella organization Economiesuisse published a report on the subject, stating that we should not “give in to political activism.” According to the organization, “in most cases, there is no need to revise a law in depth.”
