The RIA is the culmination of a long effort to regulate the placing on the market of products based on artificial intelligence.
The proposed text for this regulation was first published in 2021 by the European Commission, with the aim of giving Europeans a framework in which they can use AI with complete confidence. This regulatory framework also aims to make the European artificial intelligence ecosystem efficient and competitive at the global level. In this sense, it complements the Coordinated Plan on AI, initiated in 2018 and revised in 2021. That plan seeks to implement a coordinated strategy between the Commission and the Member States: on the one hand, to accelerate investment in artificial intelligence technologies and to be proactive on the timing of programs dedicated to AI, with the ambition of being first adopters and reaping the advantages that brings; on the other hand, to align European policies on AI, above all to avoid fragmentation of each country's market and ecosystem around this technology, which would directly weaken the Old Continent on everything related to artificial intelligence.
Let’s come back to the RIA, adopted in June of this year. What are its objectives? Beyond ensuring that AI-based products placed on the market respect fundamental rights, several objectives are pursued.
First, fundamental rights are not the only red lines. EU values, the rule of law, and environmental sustainability also act as safeguards when assessing whether a given AI model can be authorized. On the environmental aspect, many research players now highlight (see an example study here) that AI systems are extremely energy-intensive, and will become increasingly so. In the text of the RIA, the sustainability aspect refers to ensuring “that AI systems are developed and used in a sustainable and environmentally friendly way, but also in a way that all humans benefit from them, while monitoring and evaluating the long-term effects on the individual, society and democracy.”
Next, the framework emphasizes trust and human-centered AI. Trust is a sensitive subject in artificial intelligence, and it often goes hand in hand with explainability: how can we trust an artificial intelligence if we are not capable of understanding how it works? As for the need to center AI on humans and their “well-being”, the objective remains vague, summarized in the text as follows: “It is essential that AI be a human-centered technology. It should serve as a tool for people, with the ultimate goal of increasing human well-being.” An AI serves humans if it accomplishes tasks for them, whether complex, unfulfilling, or repetitive; but does it still serve humans if it puts them out of work?
As mentioned above, the regulation also aims to guarantee a standardized legal framework, with the aim of promoting the security of players in the sector and their ability to innovate and invest.
In the same vein, the text reinforces governance and the effective enforcement of existing legislation on safety requirements applicable to AI systems and on fundamental rights.
Finally, the text must prevent market fragmentation, which appears more necessary than ever given global competition on AI-based products, an area in which Europe is already lagging behind, not to mention the incessant technological disruptions in the AI sector.