Towards a fairer and more respectful AI: federated learning to combat bias

Several scandals have erupted in recent years implicating AI-based decision support systems that produce racist or sexist results. This was the case, for example, of Amazon’s recruitment tool, which exhibited bias against women, or of the care-management system used in an American hospital, which systematically favored white patients over black patients. In response to the problem of bias in AI and machine learning algorithms, legislation has been proposed, such as the AI Act in the European Union and the National AI Initiative Act in the United States.

A widely repeated argument about the presence of bias in AI and machine learning models is that they simply reflect a ground truth: the bias is already present in the real data. For example, data from patients with a disease that specifically affects men will yield an AI that is biased with respect to women, without that AI being incorrect.

When AI bias betrays social inequalities

While this argument is valid in certain cases, there are many others where the data was collected incompletely and does not reflect the diversity of reality on the ground, or where it includes statistically rare cases that end up under-represented, or even not represented at all, in the resulting machine learning models. This is the case of Amazon’s recruitment tool, which exhibited a bias against women: because women working in that sector are statistically few in number, the resulting AI simply rejected applications from women.

What if, rather than reflecting or even exacerbating a dysfunctional current reality, AI could be virtuous and serve to correct societal biases, for a more inclusive society? This is what researchers are proposing with a new approach: “federated learning”.

Virtuous AI: objective or utopia?

AI-based decision support systems are data-driven. Indeed, in traditional machine learning approaches, data from several sources must first be transmitted to a repository (for example, a server in the cloud) that centralizes it, before a machine learning algorithm is executed on this centralized data.

However, this raises data protection questions. Under current legislation, a hospital does not have the right to outsource its patients’ sensitive medical data, and a bank does not have the right to outsource private information about its clients’ banking transactions.

Consequently, to better preserve data confidentiality in AI systems, researchers are developing approaches based on so-called “distributed” AI, where the data remains on the sites that own it and where machine learning algorithms execute in a distributed manner across these different sites. This is also called “federated learning”.

Towards a decentralized and data-friendly AI

Concretely, each data owner (a participant in the federated learning) trains a local model on its own data, then transmits the parameters of this local model to a third party, which aggregates the parameters of all the local models (for example, via an average weighted by each participant’s data volume). This entity then produces a global model, which the different participants use to make their predictions.
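To make that aggregation step more concrete, here is a minimal sketch in Python of a weighted parameter average of the kind described above (in the spirit of the classic FedAvg scheme). The names used (`aggregate`, `local_params`, `n_samples`) are illustrative assumptions, not part of any particular framework:

```python
# Minimal sketch of the aggregation step described above, in the spirit of a
# FedAvg-style weighted average. Names are illustrative assumptions.
import numpy as np

def aggregate(local_params, n_samples):
    """Average local parameter vectors, weighted by each participant's data volume."""
    weights = np.array(n_samples, dtype=float)
    weights /= weights.sum()                  # normalize weights to sum to 1
    stacked = np.stack(local_params)          # shape: (participants, n_parameters)
    return (weights[:, None] * stacked).sum(axis=0)

# Three participants with very different data volumes send their local parameters
global_params = aggregate(
    local_params=[np.array([0.2, 1.0]), np.array([0.3, 0.9]), np.array([0.8, 0.1])],
    n_samples=[5000, 4000, 100],
)
```

Each participant’s weight is simply its share of the total data volume, which is exactly what gives participants with large datasets more influence on the global model, a point we return to below.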

In this way, it is possible to build global knowledge from everyone’s data, without revealing one’s own data and without accessing the data of others. For example, patients’ medical data remain in each hospital center that owns them; it is the federated learning algorithms that execute and coordinate across these different sites.

Build a global and effective model without centralizing data

With such an approach, it becomes possible for a small hospital center in a geographical area less populated than the large metropolises, which therefore has less medical data than large hospitals and, consequently, an a priori less well trained AI, to benefit from an AI reflecting global knowledge, trained in a decentralized manner on data from different hospital centers.

Other similar application cases can be mentioned, involving several banks to build a global fraud detection AI, several smart buildings to determine appropriate energy management, etc.

Biases in decentralized AI are more complex to understand

Compared to the traditional centralized machine learning approach, decentralized AI and its federated learning algorithms can, on the one hand, further exacerbate bias, and on the other hand, make dealing with bias more difficult.

Indeed, the local data of participants in a federated learning system can have very heterogeneous statistical distributions (different data volumes, different representations of certain demographic groups, etc.). A participant contributing a large volume of data will have more influence on the global model than a participant with a small volume. If the latter is located in a geographical area that represents a particular social group, that group will unfortunately be reflected little, or not at all, in the global model.
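As a hypothetical numerical illustration, continuing the aggregation sketch above: a participant holding only 100 of 9,100 total samples receives an aggregation weight of roughly 1%, so its local parameters barely move the global model.

```python
# Hypothetical figures, continuing the aggregation sketch above
n_samples = [5000, 4000, 100]
share = n_samples[2] / sum(n_samples)
print(f"influence of the smallest participant: {share:.1%}")  # about 1.1%
```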

Furthermore, the presence of bias in the data of one participant in a federated learning system can cause that bias to propagate to the other participants via the global model. Even if a participant has taken care to have unbiased local data, it will inherit the bias present in the others’ data.

And more difficult to correct

Furthermore, the techniques classically used to prevent and correct bias in the centralized case cannot be applied directly to federated learning. Indeed, the classic bias-correction approach mainly consists of preprocessing the data before machine learning so that it acquires certain statistical properties and is therefore no longer biased.
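As an illustration of that classic centralized approach, here is a minimal sketch of “reweighing”-style preprocessing (one common technique among several), which weights each example so that the sensitive attribute becomes statistically independent of the label. The column names (`gender`, `hired`) are hypothetical, and the sketch assumes full access to the centralized data, which is precisely what federated learning rules out:

```python
# Sketch of a centralized "reweighing" preprocessing step: each example gets a
# weight so that the sensitive attribute and the label become independent.
import pandas as pd

def reweigh(df, sensitive, label):
    p_s = df[sensitive].value_counts(normalize=True)          # P(S)
    p_y = df[label].value_counts(normalize=True)               # P(Y)
    p_sy = df.groupby([sensitive, label]).size() / len(df)     # P(S, Y)
    # Weight = expected joint probability under independence / observed one
    return df.apply(
        lambda r: p_s[r[sensitive]] * p_y[r[label]] / p_sy[(r[sensitive], r[label])],
        axis=1,
    )

df = pd.DataFrame({"gender": ["F", "F", "M", "M", "M", "M"],
                   "hired":  [0,    1,   1,   1,   1,   0]})
df["weight"] = reweigh(df, "gender", "hired")   # weights to use during training
```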

However, in the case of decentralized AI and federated learning, it is not possible to access participants’ data, nor to know the overall statistics of the decentralized data.

How, then, can bias be dealt with in decentralized AI systems?

Measuring AI bias without access to decentralized data

A first step is to be able to measure the biases of decentralized data among participants in federated learning, without having direct access to their data.

Together with my colleagues, we designed a new method to measure and quantify bias in federated learning systems, based on the analysis of the parameters of participants’ local models. This method has the advantage of being compatible with the protection of participants’ data, while allowing several bias metrics to be measured.
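The method itself analyses the parameters of participants’ local models, and its details are beyond the scope of this article. Purely as an illustration of the general idea of measuring bias without centralizing data, the sketch below shows a simpler (and weaker) alternative: each participant computes a standard fairness metric, here the statistical parity difference, on its own data and shares only that aggregate figure, never the raw records. All names and data are hypothetical:

```python
# Illustration only: each participant reports a locally computed bias metric,
# not its raw data. This is NOT the parameter-analysis method described above.
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """P(Y_hat = 1 | S = 1) - P(Y_hat = 1 | S = 0)."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()

# Each hospital evaluates the global model on its own patients and reports
# only the resulting metric and its sample size to the aggregator.
local_reports = [
    {"spd": statistical_parity_difference([1, 0, 1, 1], [1, 1, 0, 0]), "n": 4},
    {"spd": statistical_parity_difference([0, 0, 1, 0], [1, 0, 0, 1]), "n": 4},
]
# A volume-weighted average of the local figures gives an approximate overall
# picture (it is not exactly the metric that would be computed on pooled data).
global_spd = (sum(r["spd"] * r["n"] for r in local_reports)
              / sum(r["n"] for r in local_reports))
```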

Capturing the interdependence between multiple types of bias, and correcting them in decentralized AI

But there can also be several types of demographic bias, varying according to different sensitive attributes (gender, race, age, etc.), and we have demonstrated that mitigating a single type of bias can have the collateral effect of increasing another type of bias. It would then be a shame if a solution for mitigating racial bias, for example, caused gender bias to worsen.

We then proposed a multi-objective method for the comprehensive measurement of biases and the joint, coherent treatment of the several types of bias occurring in federated learning systems.
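Again purely as an illustration (this is not the proposed multi-objective method), the sketch below computes the same fairness metric for several sensitive attributes at once, which is the kind of joint monitoring needed to check that mitigating one bias does not worsen another. It reuses `statistical_parity_difference` from the previous sketch; the attribute names and data are hypothetical:

```python
# Reuses statistical_parity_difference from the previous sketch.
predictions = [1, 0, 1, 1, 0, 1]
attributes = {                      # hypothetical sensitive attributes
    "gender":      [1, 1, 0, 0, 1, 0],
    "age_over_50": [0, 1, 1, 0, 0, 1],
}
bias_report = {name: statistical_parity_difference(predictions, s)
               for name, s in attributes.items()}
# A multi-objective mitigation strategy would aim to keep every entry of
# bias_report small simultaneously, rather than optimizing one metric at the
# expense of the others.
```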

About the author: Sara Bouchenak. Professor of Computer Science, INSA Lyon – University of Lyon.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
