Health Insurance monitors the poorest and harasses precarious mothers – La Quadrature du Net


Since 2021, through our campaign on algorithmic control, we have been documenting the social-control algorithms used by our social administrations. In that context, we have paid particular attention to scoring algorithms. After revealing that the algorithm used by the CAF specifically targets the most precarious, we now demonstrate, by publishing its code, that Health Insurance uses a similar algorithm that directly targets women in precarious situations.

Since 2018, an algorithm developed by Health Insurance (CNAM) has assigned a suspicion score to each household receiving free Supplementary Health Solidarity coverage (C2SG), i.e. 6 million of the poorest people in France. This score is used to select the households to be investigated: the higher it is, the greater the probability that a household will be audited. Thanks to an error by the CNAM, we gained access to the source code of this algorithm, which we are publishing with this article. The findings are damning.

The algorithm deliberately targets precarious mothers. Openly described in official documents by CNAM officials as the people "most at risk of anomalies and fraud", they receive a higher suspicion score than other insured persons and, in consequence, undergo a greater number of checks. Note that the (too rare) testimonies available to us show that these checks can lead to abusive suspensions of health coverage, disrupting access to care with particularly serious consequences for every member of the household, including children.

Stigmatizing precarious women

"The primary applicant is a woman over 25 years old, with more than one adult and at least one minor in the household." Here, word for word, is how a PowerPoint slide describes what CNAM officials call the "typical fraudster profile". It is this "typical profile" that the algorithm is tasked with detecting among the insured. The closer a person matches this profile, the higher their suspicion score and the higher their probability of being audited.

Analysis of the algorithm's code confirms this description. Among the variables used by the algorithm that increase the suspicion score, we find in particular: being a woman, having minor children, and being over 25 years old.
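As an illustration only (this is not the CNAM's code, which we publish separately; the variable names and weights below are invented), a scoring model of this kind amounts to a weighted sum of binary characteristics, after which the highest-scored households are selected for audit:

```python
# Hypothetical sketch of a suspicion-scoring model of the kind described
# above. Variable names and weights are invented for illustration; the
# real model and its coefficients are in the published source code.

def suspicion_score(insured: dict) -> int:
    # Each binary characteristic contributes a (hypothetical) weight.
    weights = {
        "is_woman": 2,
        "has_minor_children": 2,
        "over_25": 1,
    }
    return sum(w for var, w in weights.items() if insured.get(var))

# A household matching the "typical profile" gets the maximum score.
household = {"is_woman": True, "has_minor_children": True, "over_25": True}
print(suspicion_score(household))  # 5
```

Ranking households by such a score is enough to reproduce the targeting described in the documents: anyone matching the "typical profile" rises to the top of the audit list.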

If this algorithm does not directly use criteria linked to economic precariousness, it is quite simply because that criterion is already built into the definition of the population analyzed. As a C2SG beneficiary, this "woman over 25 years old" is one of the 6 million poorest people in France, the majority of whom receive the RSA and/or are deprived of employment.

Towards targeting sick or disabled people?

In addition to the code of the algorithm in use since 2018, we obtained that of an experimental model developed with future deployments in mind. On top of targeting precarious mothers, this model adds to the criteria that increase an insured person's suspicion score: being disabled ("receiving a disability pension"), being sick (being a "consumer of care" or having "received daily sickness allowances", i.e. having been on sick leave), or even... being "in contact with Health Insurance".

A clarification is necessary. The fact that this experimental model was never rolled out owes nothing to a burst of decency at the CNAM. On the contrary, its "efficiency" was praised in documents circulated when it was presented to the "Fraud Management Committee" at the beginning of 2020. The only problem, the CNAM's teams of statisticians explain, is that its use would not be legal, because this new model would require an "unauthorized data cross-referencing". To put it in place, the teams are seeking to win over the CNAM's leadership in order to obtain the regulatory change needed to allow this cross-referencing.

Opacity and indecency

If the documents we are publishing show one crucial thing, it is that the CNAM's leaders are perfectly aware of the violence of the tools they have approved. One need not be an expert in statistics to understand the descriptions quoted above of the "typical fraudster profile" that the algorithm is tasked with targeting.

But rather than oppose it, the CNAM's leaders preferred to exploit the opacity surrounding its operation. A technique "at the cutting edge of technology", "artificial intelligence" enabling "proactive detection" of fraud, a predictive tool "à la Minority Report": this is how, in official reports and public statements, this type of tool is praised. The lack of transparency towards the general public about the algorithm's targeting criteria masks the reality of audit policies. That situation in turn allows Health Insurance managers to burnish their managerial skills and capacity for innovation on the backs of the most precarious.

To the indecency of such a presentation, let us add that it is also misleading. Contrary to how it is presented, the algorithm is not built to detect only fraud. The technical documentation shows that it is trained to predict whether a file presents what Health Insurance calls an "anomaly", i.e. the fact that an insured person's income exceeds the C2S income ceiling. Yet only some of these "anomalies", those where the gap between income and the ceiling exceeds a certain amount, are classified as fraud by Health Insurance. Everything suggests that the majority of the "anomalies" detected by the algorithm result above all from unintentional errors, linked to the complexity of the C2SG eligibility criteria, which count all of the household's available income, including gifts and family donations.
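The distinction described above, between an "anomaly" and what Health Insurance chooses to classify as fraud, can be sketched as follows; the threshold amount is hypothetical, since the actual value used is not stated in the documents:

```python
# Hypothetical sketch of the anomaly/fraud distinction described above.
# The fraud threshold is invented for illustration.

FRAUD_THRESHOLD = 1000  # euros; hypothetical value

def classify(income: float, ceiling: float) -> str:
    gap = income - ceiling
    if gap <= 0:
        return "compliant"  # income is under the C2S ceiling
    if gap > FRAUD_THRESHOLD:
        return "fraud"      # large gap: classified as fraud
    return "anomaly"        # small gap: likely an involuntary error

print(classify(9000, 10000))   # compliant
print(classify(10500, 10000))  # anomaly
print(classify(12000, 10000))  # fraud
```

The point is that the algorithm is trained on the whole "anomaly" class, most of which, as noted above, consists of involuntary errors rather than fraud.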

This communication must finally be put in perspective against the financial stakes. In 2022, the director of Health Insurance announced that fraud across the entire C2S was estimated at 1% of its cost, i.e. 25 million out of more than 2.5 billion euros. Meanwhile, the non-take-up rate of this benefit was estimated at more than 30%, i.e. a "saving" of around... one billion euros for the CNAM. These figures underline the hypocrisy of the political discourse on the importance of fighting C2SG fraud, and on the need for tools powered by artificial intelligence, and show that the use of such tools is above all a matter of image and communication in the service of the institution's leaders.
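A rough back-of-the-envelope check of these orders of magnitude (assuming the 30% non-take-up applies to the eligible population and that costs scale linearly, which the article's figures imply but do not state):

```python
# Sanity check of the figures cited above (2022, C2S).
total_cost = 2.5e9    # euros currently paid out
fraud_rate = 0.01     # fraud estimated at 1% of cost
non_take_up = 0.30    # share of eligible people not claiming the benefit

fraud_cost = total_cost * fraud_rate
# If current spending covers only 70% of the eligible population, full
# take-up would cost total_cost / (1 - non_take_up); the difference is
# what non-take-up "saves" the CNAM.
saved = total_cost / (1 - non_take_up) - total_cost

print(f"fraud: {fraud_cost / 1e6:.0f} M EUR")          # 25 M
print(f"non-take-up saving: {saved / 1e9:.2f} bn EUR")  # ~1.07 bn
```

Under those assumptions, the amount "saved" through non-take-up is roughly forty times the estimated cost of fraud.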

Technology and dehumanization

There is one last thing the documents we are publishing highlight. Written by the CNAM's teams of statisticians, they offer a particularly harsh glimpse of the flagrant absence of ethical reflection among the technical teams who develop digital tools of social control. Nowhere in these documents does the slightest comment appear on the human consequences of their algorithms. Their construction is approached solely through technical considerations, and the models are compared only against the yardstick of the sacrosanct criterion of efficiency.

We can then see the risk posed by the digitization of audit policies: the weight it gives to teams of data scientists cut off from realities on the ground (they will never face the reality of an audit and its consequences for access to care) and nourished by a purely mathematical vision of the world.

We can also see the appeal of such an approach for the heads of social administrations. They no longer have to deal with possible reluctance from the teams of inspectors when defining audit policies. They do not even have to explain to those teams how the policies were constructed: the inspectors are simply asked to audit the worst-scored files, as ranked by a black-box algorithm.

The problem is not technical but political

For two years now, we have been documenting the spread of scoring algorithms used for audit purposes across our social system. As with the CNAM, we have shown that they are used today at the Caisse Nationale des Allocations Familiales (CNAF), at Old-Age Insurance and at the Mutualité Sociale Agricole, and that they have been tested at France Travail.

For two years, we have been warning about the risks associated with the rise of these techniques, in terms of digital surveillance, discrimination and institutional violence alike. Above all, we have repeated again and again that, whatever the institution, these algorithms serve only one objective: facilitating policies of harassment and repression of the most precarious, thanks to the opacity and scientific veneer they offer the heads of social administrations.

This has now been proven for two administrations. For the CNAM, with this article. But also for the CNAF, whose scoring algorithm, fed with the personal data of more than 30 million people, we published just a year ago, and which we challenged before the Council of State last October alongside 14 other organizations over its targeting of precarious people, disabled people and single mothers.

We hope that this article, together with those published on the CNAF, will finally demonstrate that there is no need to access the code of each of these algorithms to know their social consequences. Because the problem is not technical but political.

Sold in the name of the so-called "fight against social fraud", these algorithms are in reality designed to detect overpayments (undue payments) which, all studies show, are concentrated among precarious people in very serious difficulty. Indeed, these overpayments largely result from unintentional declarative errors stemming from two main factors: the complexity of the rules for granting minimum social benefits (RSA, AAH, C2SG, etc.) and the great instability of personal situations (family, professional or administrative). A former CNAF official explained that "overpayments are explained [...] by the complexity of the benefits, the large amount of information used to determine rights, and the increased instability of beneficiaries' professional situations", which is above all the case for "benefits linked to precariousness [...] highly dependent on beneficiaries' family, financial and professional situations".

In other words, these algorithms cannot be fixed, because they are merely the technical translation of a policy aimed at harassing and repressing the most precarious among us.

Fighting back

The hypocrisy and violence of these practices, and of the policies that underlie them, must be denounced, and these algorithms abandoned. As for the officials who call for them, approve them and promote them, they must answer for their responsibility.

To help us continue documenting these abuses, you can make a donation. We also call on all those, C2SG beneficiaries or not, who wish to act against this algorithm and, more broadly, against the CNAM's audit policies. Insured persons, collectives, unions, CNAM employees: you can contact us at [email protected] to think collectively about the follow-up to this publication.
