While AI advances, its regulation continues to lag behind

On June 16, 2022, the Canadian government followed in the footsteps of the European Union and, by tabling Bill C-27, became one of the first states in the world to propose a legislative framework for artificial intelligence (AI). Two years later, its adoption is still pending.

In the eyes of many observers, the unveiling of the first version of the chatbot ChatGPT, a few months after the bill was tabled, propelled the industry forward. For the first time, the general public saw the potential of this enigmatic technology.

Since then, we have witnessed a frantic race to develop AI, marked at times by notable advances and at times by worrying excesses.

While Ottawa hopes that its Artificial Intelligence and Data Act (LIAD) will instill a climate of confidence in AI, the bill in which it is embedded is struggling to clear the steps needed for its adoption.

Two years later, where are we?

Bill C-27 has three parts. The first two aim to protect privacy and personal information, while the third proposes the creation of the LIAD.

Since the spring of 2023, the bill has been under examination in committee.

After around thirty meetings, the Standing Committee on Industry and Technology has only reached the second of the bill’s 225 clauses.

“It’s not moving fast enough,” says the Minister of Innovation, Science and Industry, François-Philippe Champagne, unequivocally. According to him, the opposition parties are trying to slow down the parliamentary process.


The Minister of Innovation, Science and Industry, François-Philippe Champagne, would like his bill on artificial intelligence to move forward more quickly. (Archive photo)

Photo: Radio-Canada / Ivanoh Demers

NDP MP Brian Masse, who sits on the committee, has a completely different opinion. He believes the bill has been a complete fiasco from the beginning.

“There are hundreds of amendments [to be studied]. I think there are more amendments than pages in the bill because it was so poorly written,” he argues in an interview with Radio-Canada.

The story is the same from the vice-chair of the committee, Conservative Rick Perkins, who maintains in writing that the bill is broken and deficient.

While everyone seems to agree on the need for a law on AI, the bill’s three-part structure has irritated the opposition ever since it was tabled.

Parliamentarians have proposed many times to split the bill in two in order to isolate the LIAD, which the government has refused to do.

Still far from having completed its study of the first two parts of C-27, the committee will have to wait at least until its work resumes in the fall before examining the LIAD.

Mr. Masse hardly expects the process to speed up once it reaches the part on AI. “There are still all kinds of amendments on the table, and I don’t know how many other [changes] the Liberals will throw at us at the last minute,” he laments.

I have never seen anything like this in my 22 years in Parliament.

A quote from Brian Masse, NDP MP, member of the Standing Committee on Industry and Technology

A bill that becomes clearer along the way

In the first version of the bill, the government deliberately left several provisions of the LIAD vague so the legislation could adapt to future innovations, which earned it the label of “empty shell” among many experts.

“Everything was delegated to upcoming government regulations, posing a risk to both the industry and Canadians,” explains Florian Martin-Bariteau, professor of law at the University of Ottawa and holder of the University Research Chair in Technology and Society.


Florian Martin-Bariteau believes that the structure of Bill C-27 is one of the reasons it is progressing slowly in Parliament.

Photo: Radio-Canada

Faced with criticism, Minister Champagne submitted amendments to the LIAD last fall in order to define certain categories of high-impact AI, the cornerstone of the law.

It is these systems deemed high risk that the federal government primarily wishes to regulate. If the LIAD is adopted, companies that develop and operate such systems would be required, among other things, to assess and mitigate the dangers of their AI, in addition to ensuring human oversight.

Violators would be subject to a fine of up to $25 million, or 5% of the company’s overall revenue, whichever is greater.

For the moment, the Canadian government has identified seven sectors deemed to be more at risk, a list which is expected to grow over time.

The seven classes of high-impact AI systems

Far from being hypothetical, the excesses of AI in these sectors are already a source of much controversy, experts point out.

For example, Amazon’s intelligent hiring software discriminated against applications submitted by women for months, while the use of AI in the management of Microsoft’s news portal helped the spread of several false news stories.

There has been a series of scandals or issues that may have awakened the population to the risks [of AI] and the need to regulate it.

A quote from Florian Martin-Bariteau, professor of law at the University of Ottawa

Aware of the growing fears of its population, the European Union has worked hard in recent months to adopt the world’s first legislation on AI. Its provisions will come into full force in 2026.

François-Philippe Champagne is eager to follow suit.

“There is a time to chat. There is a time to debate, but there is also a time to act, and I think the time to act has arrived,” he insisted in an interview on the sidelines of an automobile industry conference in Vaughan, Ontario.

Holes in the Canadian bill, argues an expert

In the eyes of Minister Champagne, the LIAD has nothing to envy of European legislation.

“We took a different way of regulating artificial intelligence […] but our approach is entirely valid,” he maintains.


The European Union was the first governing body in the world to adopt comprehensive regulations on artificial intelligence. (Archive image)

Photo: Getty Images

Like its Canadian counterpart, the EU legislation primarily targets high-risk AI systems. For the moment, however, it offers a much more exhaustive definition than the LIAD.

Furthermore, European law completely prohibits the use of AI in certain contexts, notably for mass surveillance and social scoring of citizens.

So far, the Canadian bill, whose provisions would only apply to private companies, does not include such prohibitions.

For several months, various groups have denounced, among other things, the absence of safeguards regarding facial recognition in the bill.

“I find that Canada perhaps lacks ambition here,” judges Céline Castets-Renard.

The holder of the University of Ottawa Research Chair in Responsible Artificial Intelligence on a Global Scale also fears that the LIAD does not provide sufficiently strict oversight for certain AI systems that appear harmless but can lead to significant harms.


Céline Castets-Renard believes that Bill C-27 must move forward, even if it remains imperfect. (Archive image)

Photo: Radio-Canada

The professor cites chatbots as an example, which, beyond everyday uses, can be called upon to respond to ambiguous requests, such as those of a person looking for solutions to mental health problems.

Deepfake tools can also cause enormous harm, she observes, particularly when AI is used to produce fake pornographic images or to facilitate disinformation campaigns.

On this subject, the federal government has proposed amendments to the LIAD to ensure that Canadians can identify AI-generated content, a goal that some experts warn remains ambitious.

Without the LIAD, are we protected?

While he considers it necessary to impose a legislative framework on AI, Mr. Martin-Bariteau recalls that in the meantime, numerous laws continue to protect Canadians from the excesses of the technology.

In my opinion, there is no urgency.

A quote from Florian Martin-Bariteau, professor of law at the University of Ottawa and holder of the University Research Chair in Technology and Society

Companies that develop and operate AI systems must, among other things, comply with existing laws on privacy protection and discrimination, the professor points out.

The advantage of adopting the LIAD, however, is that it would prevent risks upstream rather than simply respond to harms after the fact.

“What is missing […] is transparency obligations, so that we can understand these systems and assess their risks,” believes Mr. Martin-Bariteau.

“If we want to move from fear to opportunity, we have to build people’s trust. The best way to do this is by regulating this industry,” affirms Minister François-Philippe Champagne for his part.

He himself admits that the voluntary code of conduct unveiled last September, which to date has around thirty signatory companies, will not be enough to address Canadians’ concerns. “There is always a limit to what companies can do voluntarily,” he acknowledges.

Opposition parties, however, refuse to hastily adopt a bill which, according to them, still requires a lot of work.

“Imagine the damage we could do to our democracy and our economy,” argues NDP MP Brian Masse.

At the same time, he suggests that Minister Champagne rethink his approach over the summer if he wants his bill to have any chance of being adopted before the next election.
