“The autonomous car is much more complicated than some people claim”

On June 13, 2024, in Fullerton, California, a Tesla crashed head-on into a police car parked with its lights flashing at an emergency scene. The vehicle was in Autopilot mode and its driver was busy on his phone. The video posted by the police on their Instagram account is striking.

Not long ago, this news would have made headlines. But it is clear that the subject of autonomous cars has taken a back seat in artificial intelligence (AI) coverage for some time now. Luc Julia, Renault's scientific director since April 2021, in charge of overseeing the company's AI R&D, explains why.

Sciences et Avenir: Even if we hear less about it at the moment, the autonomous car remains the emblematic example of the promises of artificial intelligence. Why is that?

Luc Julia: Because driving is the most cognitively demanding activity. It is complicated to drive: you have to pay attention to many things at the same time, several of our senses are involved (sight, hearing), you need reflexes… So in the popular imagination, it corresponds to an intelligent activity, in the sense that all these different human faculties must be mobilized.

Would an artificial intelligence that could drive a car amount to what we call general AI, or AGI?

AGI refers rather to an intelligence that masters every domain and would be more intelligent than we humans are at everything. That is still a step above autonomous driving!

Read also: Artificial intelligence: can we trust it?

“John Krafcik estimated that the fully autonomous car will never exist”

Why are self-driving car projects getting less attention? Is it just an effect of a news cycle that favors generative AI, or have these projects disappointed?

It's a bit of both. We realize that Elon Musk lied to us, once again (laughs)! Since 2014 he has been announcing the autonomous car for tomorrow, and people have started to tire of it. And those who really work on the subject have realized that it is much more complicated than some people claim.

Waymo is certainly the most advanced company in autonomous cars and the one that has been working on them the longest. In 2018, its then-CEO John Krafcik estimated that the level 5 autonomous car, that is to say a completely autonomous car capable of driving in all contexts, in all weather, all year round, would never exist, and that the focus should be on level 4. That was already a reaction to Elon Musk's assertions.

Waymo still operates the only robotaxis in the United States (its competitor Cruise, after one of its cars seriously injured a pedestrian in San Francisco in October 2023, resumed testing this year in Phoenix without carrying passengers, editor's note). But it does so in constrained environments, in specific geographic areas, and with remote supervision so that real people can intervene when the cars get into difficulty. It is therefore not deployable on a large scale.

Luc Julia during an interview for the Economic, Social and Environmental Council (CESE). Credit: CESE

Read also: Self-driving cars come close to road accidents due to lack of social intelligence

“The problem is not the context change, it’s the context itself”

Is it the need for the same car to change context, for example from a residential area to a motorway and then to a country road, that poses a particular problem?

It is one of the most complicated aspects, but not the only one. It is true that cars that are somewhat autonomous (we have reached level 3 today) drive more often on motorways than in town. On the motorway, a priori, there are only cars, sometimes motorcycles, but no bicycles, no pedestrians, no dogs… In the city, you have everything. The problem is not the context change, it's the context itself.

Is this pendulum swing between excitement and waning enthusiasm specific to AI?

It is really specific to AI because of its very name: the word "intelligence." That creates a sort of anthropomorphic relationship that makes us believe the machine can be like us. We immediately want to make it do things that we would do.

But in the sixty-eight years that this field of research has existed, every time a new type of AI appears, as is the case with today's generative AI, excitement peaks and the arrival of general AI is announced; then, after a few years, we realize that it will not happen. People calm down, the new AI technique in question gets applied to very specific areas where it works very well, and the generic applications, which will never work, are forgotten.
