
No, but we need safeguards

Artificial intelligence represents an opportunity to focus on activities with high added value. But this requires rigorous human control to guarantee respect for ethical principles and the quality of analyses, while remaining vigilant against long-term risks such as intellectual laziness, which could emerge from the uncritical use of AI. Explanations from Stéphanie Corbière, Group Head of Legal & Compliance at Aramis Group and co-founder of the AFJE AI scientific group.

A refocus on strategic missions

AI tools make it easier to automate repetitive tasks, such as document research, drafting standardized documents or analyzing contracts, freeing up time for lawyers to focus on more complex missions, such as:

  • legal risk management;
  • contractual strategy;
  • complex negotiations.

While numerous studies, such as Goldman Sachs's 2023 report, highlight the risk that several professions could disappear with the rise of AI, complex missions require essential human skills. MIT CSAIL has clarified, however, that automation has its limits and that not all functions are ready to be automated. The July 2024 PwC Law Society study indicated that the more nuanced areas of legal advice, client interaction and complex litigation will continue to rely heavily on human skills.


The importance of human control: ethics and algorithmic bias

Even though AI is able to quickly process large volumes of data, it can reproduce biases present in the training data. This is why it is essential that lawyers maintain control of the process. Their role is necessary in order to:

  • verify that the decisions taken or the analyses proposed by the AI respect ethics and regulatory compliance;
  • control algorithmic biases that could lead to unfair or biased decisions.

Lawyers must therefore play a watchdog role, constantly monitoring AI decisions and ensuring their compliance.

As such, the July 2024 PwC study indicates that 48% of the companies surveyed have already started thinking about responsible AI, which involves in particular establishing solid governance and a responsible ethical approach. The role of “Digital Trust Officer” could become increasingly common.

An increase in skills for the new generation

Contrary to the idea that AI could lead to a deskilling of young lawyers, it allows them to develop new skills and master modern technological tools. The lawyers of tomorrow may no longer spend as much time on simple tasks, such as summarizing contracts, but they will be trained in innovative use cases, which could breathe new life into the profession.

Thomson Reuters’ May 2023 study on the training of legal professionals shows that AI can help young professionals develop technological skills, enabling them to better respond to contemporary legal challenges through several types of training: continuous learning, blended learning, gamification, and training in non-technical skills such as critical thinking, problem-solving and creativity.

The need for long-term vigilance to preserve expertise

A potential danger of excessive task automation is the gradual loss of critical know-how and skills. If AI takes over certain complex tasks, lawyers’ expertise could be depleted, particularly when it comes to complex legal reasoning.

Long-term risks include:

  • the decline in the quality of legal services: too much reliance on AI could standardize responses, reducing the ability of lawyers to provide tailored and nuanced advice;
  • loss of innovation: automation could limit creativity and legal innovation, by encouraging an overly mechanized approach to legal solutions;
  • intellectual laziness: uncritical use of technologies could lead to a weakening of analytical skills and judgment.

To prevent these risks, it is essential not to rely entirely on AI. Lawyers must continue to play an active role in developing solutions, systematically validating the results produced by AI.

Remaining critical of AI: errors and hallucinations

Although AI is effective, it is not infallible. Algorithmic errors and hallucinations are still frequent, hence the importance of rigorous control by lawyers. Two examples illustrate this situation:

  • DPD had to deactivate part of its chatbot in 2024 after it started insulting customers instead of solving their problems: faced with a simple question about a lost package that it could not resolve, the AI responded with insults and even wrote a poem harshly criticizing the company itself;
  • In 2018, Amazon had to abandon its recruitment AI project after discovering that the system systematically discriminated against women.

The AI Act aims to impose strict regulations on high-risk AI systems. This legislation highlights the importance of transparency, human oversight, and bias management to prevent potential errors and discrimination. As users or regulators, we must therefore play an active role in monitoring these technologies, ensuring that they meet these high ethical and legal standards.

This analysis illustrates how AI can optimize processes within legal departments, while requiring careful oversight to ensure ethics and regulatory compliance. These discussions align with the objectives of the AFJE AI Scientific Group, which aims to equip lawyers with the skills necessary to wisely integrate AI into their practices. This initiative helps prepare lawyers to use AI in a responsible and informed manner, taking into account the legal and ethical implications of these technologies.
