Elon Musk plans supercomputer to power next-gen Grok AI chatbot
Elon Musk, the American entrepreneur and founder of artificial intelligence startup xAI, has unveiled plans to construct a supercomputer. This "gigafactory of compute" is projected to be operational by fall 2025. The supercomputer will power an enhanced version of the company's AI chatbot, Grok. The project is expected to cost billions of dollars and rely on tens of thousands of NVIDIA H100 GPUs.
Musk's ambitious GPU requirements for Grok's third iteration
Musk has indicated that the third iteration of Grok will require at least 100,000 NVIDIA H100 GPUs, a fivefold increase over the roughly 20,000 GPUs used to train Grok 2. The planned GPU cluster would be at least four times larger than any currently used by xAI's competitors. The current version, Grok-1.5, released in April, can process visual information in addition to text.
Potential partnership with Oracle for supercomputer development
Reports suggest that xAI could collaborate with Oracle to develop the colossal computer system, though neither xAI nor Oracle has confirmed these speculations. The NVIDIA H100 GPUs, which dominate the data center chip market for AI, are often hard to acquire due to high demand. Musk's proposed supercomputer would house interconnected clusters of these chips on a scale exceeding any GPU cluster in operation today.