Supervising the use of generative AI in business


Two years after the launch of ChatGPT, Nicolas Blanc, CFE-CGC national secretary for economic transition, analyzes how the technology is being used in companies, both by employees and by staff representatives.

Launched in November 2022, the artificial intelligence tool ChatGPT has just celebrated its second anniversary. What conclusions can we draw from its deployment?

ChatGPT quickly outgrew its initial role as a simple demonstration project; we can no longer ignore it. Designed by the US company OpenAI to explore how users would interact with a tool based on the GPT-3.5 language model, ChatGPT saw massive adoption that surprised even its creators: in just two months, it amassed a base of 100 million users! This success demonstrates the potential of large language models (LLMs), which rely on the transformer architecture introduced by Google in 2017. Thanks to self-attention mechanisms, these models process and generate text in a near-human manner and excel at tasks such as natural language understanding.
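[For reference, the self-attention mechanism mentioned here is the scaled dot-product attention from that 2017 Google paper. In the paper's standard notation, with queries Q, keys K, values V and key dimension d_k:

Attention(Q, K, V) = softmax(QKᵀ / √d_k) V

The softmax turns each token's similarity scores against all other tokens into weights, so every word draws context from the entire sequence at once rather than reading it word by word.]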

And in the professional world?

We are already seeing many uses emerge at work, but we now need to regulate these practices, which will develop further with the arrival of Microsoft Copilot, a ChatGPT-type tool integrated into Microsoft's office software suite.

More and more employees are “discreetly” using ChatGPT for their work. What do you think?

This phenomenon of “Shadow AI” refers to the use of generative AI solutions such as ChatGPT or Google Gemini without company approval: for example, using AI on an employee's personal devices to correct a text, generate content or write an email to a client. Such use cases keep multiplying despite the policies organizations have put in place. For all its undeniable advantages, this “clandestine” use raises serious concerns about data security and confidentiality. Some of the GAFAM companies (Microsoft, Apple, Amazon) have even banned their employees from using ChatGPT for work. That is not trivial!

How does this constitute a risk for businesses?

Regardless of the rules put in place, Shadow AI poses very real risks, because AI solutions ingest large amounts of data in order to function. Without control and oversight, sensitive company data may be exposed or used inappropriately. Shadow AI can also introduce vulnerabilities into company systems, because the solutions used are not subject to the same security controls as those approved by the IT department. Security and confidentiality of company data are therefore the two main risks, and employees must not put themselves in jeopardy.

How can we regulate these practices?

The idea is not to prohibit these practices but to support them. To do this, companies must first discuss the subject with staff representatives in order to regain control and define the terms under which employees may use this type of tool. This can involve negotiating a charter in which employees undertake to respect rules agreed upon by both sides. It is of course necessary to pair this with an employee awareness and training policy to make the approach coherent.

It is also essential to give employees adequate training on the risks associated with the use of AI and on the best practices to adopt. This raises awareness of data security and privacy issues, as well as of the limitations and appropriate uses of generative AI. Some companies have already taken the lead by deploying their own secure, in-house ChatGPT to counter these unsanctioned practices. In all cases, we recommend that companies go through an information-consultation procedure to involve employees and their representatives on the social and economic committees (CSE). This allows for a transparent, smooth deployment for all employees.

Staff representatives and unions can also use generative AI. What good practices should they follow?

Of course, we can imagine such tools helping them prepare summaries of CSE meetings, write union leaflets or prepare the negotiation of agreements. Here again, caution is required. Without seeking to prohibit these uses, we must nevertheless regulate them and recall good practices in terms of confidentiality and data security: documents provided as part of an information-consultation procedure may be confidential, and the same goes for the negotiation of an agreement.

Furthermore, using a paid solution, ChatGPT for example, does not guarantee absolute security, as OpenAI itself points out in its FAQ: you must not include anything confidential in the exchanges you have with the tool, what are known as “prompts”. But this is rarely understood… Overall, it is therefore appropriate for stakeholders to adopt the position of a data controller within the meaning of the GDPR, with the commitments that entails.

How does the CFE-CGC work on all these subjects?

We have been working on AI for a long time, for example through the European SecoIADeal project on social dialogue and AI. We are going to produce an artificial intelligence guide reminding our members and activists of good practices. It will also be an opportunity to raise awareness of the environmental footprint of these systems since, according to various studies, a single ChatGPT query consumes about ten times more energy than a Google search.

At the same time, our network of AI referents in the CFE-CGC federations ensures these rules are disseminated as close as possible to our members and activists. We are also working on qualifying a number of AI and training tools that meet high security and confidentiality requirements, so that our members can benefit from the advantages of generative AI with peace of mind.

Comments collected by Mathieu Bahuet
