Honestly, I don't think so. Gradually, especially for the general public? Most likely. I think that over the years, x86 will be confined more and more to professional niches with heavy computing-power requirements. The general public has already demonstrated that it doesn't need much computing power, given the predominance of smartphones, tablets, and laptops.
I have a MacBook with an M3 Pro and frankly it's great, but it's really not comparable in computing power to my Threadripper 7970X workstation (x86, that is). Nor in expansion options: I can fit seven PCIe 5.0 x16 cards, and I have three expansion cards that bring the machine up to 14 NVMe (PCIe) slots, all of them in use. That's possible thanks to the large number of PCIe lanes this platform provides.
Even my 7950X/4090-based machine absolutely crushes an M3 Max (fortunately). The reality is that very few people need that kind of power (it's mainly useful to creators: 3D, VFX, film post-production, scientific computing), but for those of us who do, ARM chips still have a long, long way to go before matching x86.
ARM chips have an undeniable advantage: lower power consumption, which means longer battery life in laptops and far less heat. But laptops are not an option when you need the most powerful machine possible (and their screens are far too small, and not of high enough quality for certain uses). Saving 5 s of rendering time per frame on a 200-frame scene is huge in a professional workflow.
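To put a rough number on it, using those same figures: 5 s saved per frame x 200 frames = 1,000 s, so about 17 minutes saved on every full render of that scene. And since a scene typically gets re-rendered many times before it's final, that saving compounds across the whole production.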