Processing of asylum applications | Growing use of AI raises doubts

A growing number of countries are using sophisticated information processing systems, including artificial intelligence (AI), to optimize and accelerate the analysis of immigration and asylum applications.


“Governments are pushing these technologies without always ensuring that they are compatible with the fundamental rights” of the people concerned, warns Petra Molnar, an immigration lawyer affiliated with York University in Toronto.

The situation is all the more problematic, she says, as these initiatives are generally launched without public consultation.

“We often learn of their existence only after they are in place, through a legal proceeding launched by a lawyer or a document obtained by a journalist,” notes Ms. Molnar, who documented several problematic cases in a recent book on the subject.

Analyzing accents

In particular, she says, Germany uses voice analysis software to study the accents of asylum seekers.

The results are used to detect contradictions in their testimony, even though accents can vary considerably from one individual to another, the researcher points out.

In 2019, Greece, Hungary and Latvia experimented with AI-based software that analyzed the facial expressions of people interacting with border agents in an attempt to determine whether they were telling the truth.

The experiment, partly funded by the European Union, sparked controversy and was ended after significant errors were reported. The software’s designers publicly acknowledged that it risked “undermining fundamental rights”.

The Netherlands, for its part, has begun using AI to analyze asylum seekers’ accounts, looking for commonalities with past applications in order to speed up processing.

A recent study notes that this comparative approach is risky, since it can lead the officers responsible to conclude that an account that is too similar has been copied, or that one that is too different is implausible.

The British government has been accused of using an algorithm that “automatically” identifies migrants who should be targeted with repressive measures.

Cutting challenges short

The organization Privacy International, which has filed several access-to-information requests on the subject, maintains that the system was designed so that staff cannot challenge its recommendations.

Madeleine Sumption, who heads the Migration Observatory at Oxford University, notes that the British government provides little information on its initiatives in this area, making it difficult to assess their impact.

The use of technology such as AI in processing immigration applications is not inherently unacceptable, she argues.

“What matters is the way in which it is used and the guidelines in place to oversee it,” says the researcher, who considers it important to keep a human being at the heart of the decision-making process on sensitive matters.

Let’s make an analogy with medicine. If you are asked whether your doctor should be replaced by a machine, you will say no. But if you are asked whether you are in favor of your doctor having access to a tool that helps them make a better decision, you will say yes.

Madeleine Sumption, director of the Migration Observatory at Oxford University

The use of artificial intelligence remains potentially problematic, according to Ms. Sumption, since it can be difficult to understand what a given recommendation is based on, which complicates any appeal process.

Bias reproduced by the machine

The data used to train the AI may also contain biases that the system is then likely to reproduce, she notes.

Ms. Molnar believes a moratorium should be placed on this type of use until safeguards are in place to prevent human rights abuses.

In a report produced with colleagues at the University of Toronto in 2018, the researcher expressed alarm that the Canadian government had undertaken “experiments” to automate part of the decision-making process normally carried out by immigration agents.

The Immigration and Refugee Board (IRB) said it had adopted several “innovative approaches” using “sophisticated analytical systems” to manage the “high volume” of applications to be processed and to improve its services.

The organization revealed, in a document updated in 2023, that it uses them in particular to triage visa applications and to extract data that can facilitate decision-making.

The IRB said it “has not established absolute limits on the circumstances in which automation and AI can be used” but is “taking a cautious approach” by ensuring, among other things, that its staff maintain control over the decision-making process.
