In September, Elon Musk's teams completed the construction of Colossus, the new supercomputer from his artificial intelligence company xAI. It is used to train the Grok AI and contains 100,000 Nvidia Hopper graphics chips. According to Nvidia, it is the largest supercomputer dedicated to AI. In October, during a videoconference published on YouTube, Elon Musk announced, among other things, that he wanted to double the number of GPUs to 200,000.
The billionaire has evidently decided that simple excess is not enough. According to the Financial Times, rather than doubling the capacity of Colossus, he has decided to multiply it by 10 and equip it with a total of one million Nvidia Hopper graphics chips!
Discover the interior of xAI's Colossus supercomputer and its 100,000 Nvidia Hopper chips. The video is in English; enable automatic subtitle translation if needed. © ServeTheHome
An investment worth several tens of billions of dollars
Grok is highly controversial, Elon Musk having wanted to make it an “anti-woke” chatbot while accusing his rivals of training their AIs to lie. It is ultimately much less advanced than its main competitors, such as ChatGPT or Google Gemini, and has far fewer users. Musk seems to want to catch up by increasing the power of the supercomputer on which it is trained. However, the cost is likely to be extremely high.
Given the price of the chips, a supercomputer with a million Nvidia Hopper GPUs could cost several tens of billions of dollars. For the world's richest man, whose fortune is currently valued at more than $350 billion, this may not be such a large outlay. However, increasing computing power alone will not be enough to improve Grok. It will allow developers to train the model faster and test different versions, but it will not solve the problem of hallucinations and errors that remain very common in large language models.
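As a rough illustration of where the "tens of billions" figure comes from, here is a hypothetical back-of-envelope sketch. The article gives no per-unit price; the $25,000–$40,000 range per Hopper-class GPU is an assumption for illustration only, and a real build-out would also pay for networking, power, cooling and facilities.

```python
# Hypothetical back-of-envelope estimate of GPU cost alone.
# The per-unit prices below are assumptions, not figures from the article;
# interconnect, power, cooling and facilities would add substantially more.

GPU_COUNT = 1_000_000  # planned number of Nvidia Hopper GPUs

for unit_price in (25_000, 30_000, 40_000):  # assumed USD per GPU
    total = GPU_COUNT * unit_price
    print(f"At ${unit_price:,}/GPU: ${total / 1e9:.0f} billion")

# Output:
# At $25,000/GPU: $25 billion
# At $30,000/GPU: $30 billion
# At $40,000/GPU: $40 billion
```

Even under these conservative assumptions, the GPUs alone land squarely in the tens of billions of dollars.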