To Support OpenAI, Microsoft Taps Oracle for More Cloud Power
Frenemies Oracle and Microsoft are combining their respective cloud platforms to give generative AI wunderkind OpenAI more room to innovate.
Historically rivals, particularly in the database space, Microsoft and Oracle have spent the better part of the last decade in détente. Last week, their relationship took another turn: Oracle announced it has agreed to lend Microsoft cloud capacity to support Microsoft's partnership with OpenAI.
Microsoft is famously a major partner of OpenAI, which primarily uses Microsoft's Azure cloud to power its various AI models and solutions, including ChatGPT. However, with more than 100 million monthly users, ChatGPT places significant capacity demands on that infrastructure.
The new agreement will effectively extend Azure's capacity by tapping into the Oracle Cloud Infrastructure (OCI) platform.
"OCI will extend Azure's platform and enable OpenAI to continue to scale," said OpenAI CEO Sam Altman in a prepared statement.
Under the hood, OCI's infrastructure is purpose-built for training AI models. Per Oracle's announcement:
For training large language models (LLMs), OCI Supercluster can scale up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage. OCI Compute virtual machines and OCI's bare metal NVIDIA GPU instances can power applications for generative AI, computer vision, natural language processing, recommendation systems, and more.
The following AI companies also use OCI: Adept, Modal, MosaicML, Nvidia, Reka, Suno, Together AI, Twelve Labs and xAI.
"The race to build the world's greatest large language model is on, and it is fueling unlimited demand for Oracle's Gen2 AI infrastructure," said Oracle CTO Larry Ellison. "Leaders like OpenAI are choosing OCI because it is the world's fastest and most cost-effective AI infrastructure."