Canada will attempt to build a wall of security around artificial intelligence (AI), a technology being refined at high speed, expanding not only its beneficial potential but also the risks and dangers it poses to society.
Posted at 5:09 p.m.
Pierre Saint-Arnaud
The Canadian Press
The federal Minister of Innovation, Science and Industry, François-Philippe Champagne, has launched the Canadian Artificial Intelligence Safety Institute (ICSIA), which had been promised in his government's most recent budget.
“It’s rare in life that we have the chance to witness, and to contribute to, progress that protects humanity,” he said at the offices of Mila, the Quebec artificial intelligence institute, alongside its founder, Professor Yoshua Bengio.
“Already in worrying territory”
The Institute will have a budget of $50 million over five years and will bring together high-level researchers to look at ways of installing safeguards around a technology that already presents problems, explained Professor Bengio. “There is already misinformation. The challenge is that artificial intelligence could scale up tools of influence and persuasion. We already see it with deepfakes.”
Experiments with AI tools, he said, have demonstrated “that the machine is capable of influencing better than humans. We are already in worrying territory and we need to deal with it.” He also argued that “we want to make sure that AI is not used in dangerous ways by authoritarian regimes and that it does not blow up in our faces.”
The Canadian institute joins two other organizations of its kind, one in the United States and the other in the United Kingdom, and the announcement of its creation comes on the eve of an international summit on artificial intelligence safety, to be held on November 20 and 21 in San Francisco, in the United States.
Fears in the business community
“Canada is taking the lead to ensure that we will have technology that serves humanity, with the appropriate rules and frameworks,” said Mr. Champagne.
Canada is lagging somewhat in integrating artificial intelligence into various economic sectors, and the fears it arouses are not unrelated to this delay, argued Stephen Toope, president and CEO of the Canadian Institute for Advanced Research (CIFAR). “Several business leaders have told us that, to feel confident in adopting AI, they need assurances around safety and a strong regulatory environment.” CIFAR will be responsible for the Institute’s research component.
“If we want to move from fear to opportunity, we must build trust,” argued Minister Champagne. The repercussions of that trust will be felt in more and more aspects of Canadians’ everyday lives, as he illustrated: “We don’t care if artificial intelligence helps you choose the best pizza on a Thursday evening with your family, but we are concerned about the artificial intelligence that will decide whether you get a loan, whether you get an insurance policy, or even whether you are offered a job, because that is where there can be abuses, and that’s what we want to prevent.”
Working upstream
Yoshua Bengio acknowledges that the task will be difficult. “We must work with (AI) companies to assess these risks and mitigate them. I am thinking of the problems of alignment, that is, how we ensure that the AI behaves in a way that corresponds to our intentions and our laws, and of control, so that it acts in the direction of what we want, for example in the context of cybersecurity and disinformation.”
The expert believes there are avenues for intervention directly at the stage where AI systems are built. “If the AI system is built with safeguards that prevent it from producing content that is dangerous for democracy or toxic [he gives the example of child pornography], there are things we can do technically upstream.” The designers of these systems should therefore make them more difficult to use “for countries that want to use them against us. It’s an important safety issue, it’s a design issue for these systems,” he says.
Platforms evading responsibility
Another part of the problem lies not in the hands of the designers of these systems, but in those of the web giants who let everything through, laments Mr. Bengio. “The platforms should have a responsibility. Today they are in a bit of a gray area. […] It’s so easy to create an account on one of these platforms, and to do it anonymously. Clearly, this is an open door for groups who want to destabilize our democracies.”
Part of the solution, he suggests, could well lie with AI itself. “On the technological side, researchers are trying to see how we could use AI to detect that content is false, misleading, or violates certain standards.”
Another problem presented by AI is its use of copyrighted content. AI tools like ChatGPT are trained, for example, by reading newspapers and books of all kinds. This question is on everyone’s lips internationally, Yoshua Bengio acknowledges. “There are lawsuits underway. This is not an easy question. We hope it converges as quickly as possible in a way that both enables innovation and protects those who create content.”
And in Canada? Minister Champagne is proceeding with the greatest caution on this question. “There is often, in these models, intellectual property that is used, and at that point the question is: how do we remunerate those who hold the rights to this intellectual property? Artificial intelligence is subject to copyright. It’s a new technology, but it doesn’t do away with the basic principles that have always existed. We are currently holding a public consultation in Canada precisely to address this issue,” explains Mr. Champagne.