What does the adoption of the AI Act change?


The AI Act aims to become a global framework and a lever for European territorial marketing by establishing operating conditions grounded in ethical principles. Companies that integrate or deploy AI must already prepare for the changes the European Regulation introduces.

“This landmark law, the first of its kind in the world, addresses a global technological challenge and creates opportunities for our societies and economies. With the AI Act, Europe underlines the importance of trust, transparency and accountability in dealing with new technologies, while ensuring that this fast-evolving technology can thrive and drive European innovation,” said Mathieu Michel, Belgian Secretary of State for Digitalisation, Administrative Simplification and the Protection of Privacy, following the adoption of the text by the Council of the European Union.

The regulation establishing harmonised rules on artificial intelligence took three years to be formally adopted by the Council. Once signed by the Presidents of the European Parliament and of the Council, the legislative act will be published in the Official Journal of the EU in the coming days and will enter into force twenty days after publication.

The new regulation will apply, and this is an innovation, six months after its entry into force for certain uses of AI presenting an unacceptable risk. “It is important to anticipate the application of prohibitions in order to take into account unacceptable risks and to have an effect on other procedures, for example in civil law,” the Regulation specifies.

The provisions on notified bodies and the governance structure, as well as the obligations for providers of general-purpose AI models, will apply twelve months after entry into force. Codes of practice should be ready no later than nine months after entry into force. The remaining measures will apply two years after entry into force.
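The staggered timeline above can be sketched as a small calculation. The entry-into-force date below is purely hypothetical, chosen for illustration; the milestones are the month offsets described in the article.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to stay valid)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

# Month offsets from entry into force, as described in the article.
milestones = {
    "prohibitions on unacceptable-risk AI": add_months(entry_into_force, 6),
    "codes of practice ready": add_months(entry_into_force, 9),
    "governance, notified bodies, GPAI obligations": add_months(entry_into_force, 12),
    "full application of remaining measures": add_months(entry_into_force, 24),
}

for name, due in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{due.isoformat()}  {name}")
```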

The Regulation is part of a regulatory package aimed at stimulating innovation. It does not apply to systems used exclusively for military, defence or national security purposes, or for research purposes.

A risk-based assessment approach

The text establishes a risk-based approach and draws on the ethics guidelines of the High-Level Expert Group on Artificial Intelligence (AI HLEG). Several ethical principles guide the new regulation: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

Classification of AI systems

The AI Act categorises AI systems into four risk tiers: unacceptable, high, limited, and minimal or no risk. Systems posing an unacceptable risk include those used for social scoring or profiling; those that induce dangerous behaviour through interactive devices or subliminal techniques beyond a person’s consciousness in order to materially distort their behaviour and cause physical or psychological harm; and “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with certain exceptions).

High-risk systems typically concern critical or vitally important sectors: AI used in transport that could endanger citizens’ lives and health; AI used in education or employment; AI used in product safety components; essential private and public services, such as credit risk assessment; AI used in law enforcement; AI used in migration, asylum and border control management (for example, verifying the authenticity of travel documents); and AI used in the administration of justice and democratic processes.

Limited-risk systems include chatbots and decision-support tools. Minimal-risk systems include, for example, AI-enabled video games and spam filters.
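The four-tier classification can be pictured as a simple lookup. The mapping below uses only the example use cases named in the article and is purely illustrative; the actual classification follows the detailed criteria and annexes of the Regulation.

```python
# Illustrative mapping of the article's example use cases to risk tiers.
RISK_TIERS = {
    "social scoring": "unacceptable",
    "subliminal manipulation": "unacceptable",
    "credit risk assessment": "high",
    "border-control document checks": "high",
    "chatbot": "limited",
    "spam filter": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known use case, else 'unclassified'."""
    return RISK_TIERS.get(use_case, "unclassified")

print(risk_tier("chatbot"))      # limited
print(risk_tier("spam filter"))  # minimal
```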

The text also lays down transparency requirements: “AI systems are developed and used in a manner that allows for appropriate traceability and explainability, ensuring that individuals are aware that they are communicating or interacting with an AI system, that deployers are adequately informed of the capabilities and limitations of that AI system, and that data subjects are informed of their rights,” in order to prevent risks and give users a choice.

To avoid stifling AI initiatives, the text establishes an innovation-friendly legal framework and aims to promote evidence-based regulatory learning. It provides for AI regulatory “sandboxes”: controlled environments for developing, testing and validating innovative AI systems, which should also allow such systems to be tested in real-world conditions.

Several bodies are created to facilitate regulation

The text creates several structures. A European Artificial Intelligence Office (AI Office), within the Commission, will be responsible for enforcing the common rules across the EU while facilitating compliance and reducing the administrative burden on AI deployers. A Scientific Panel of independent experts will support monitoring activities. A European Artificial Intelligence Board, composed of representatives from the Member States, is created to advise and assist the Commission and the Member States. Finally, an Advisory Forum of stakeholders will provide technical expertise.

Heavy sanctions

Violations of the AI Act will be penalised as a percentage of the offending company’s worldwide annual turnover or a predetermined amount, whichever is higher. “Failure to comply with the prohibition of the AI practices referred to in Article 5 is subject to administrative fines of up to €35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover in the preceding financial year, whichever is the higher.” Other criminal or administrative sanctions may apply in addition. SMEs and start-ups will be subject to proportionate administrative fines.
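The “whichever is higher” rule for Article 5 violations is a simple maximum of the fixed ceiling and the turnover-based ceiling, as this sketch shows (the turnover figures are invented for illustration):

```python
def article5_fine_ceiling(turnover_eur: float,
                          fixed_cap_eur: float = 35_000_000,
                          pct_cap: float = 0.07) -> float:
    """Ceiling for Article 5 violations: the higher of the fixed amount
    and the percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# A company with €1bn turnover: 7% (€70m) exceeds the €35m fixed cap.
print(article5_fine_ceiling(1_000_000_000))  # 70000000.0
# A company with €100m turnover: 7% (€7m) is below it, so €35m applies.
print(article5_fine_ceiling(100_000_000))    # 35000000
```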

Before certain entities providing public services deploy a high-risk AI system, its impact on fundamental rights will have to be assessed. High-risk AI systems will also have to be registered in an EU database accessible to all.

Patrice Remeur


