News

Microsoft Brings More AI-Optimized VMs to Azure

Microsoft recently announced a pair of new Azure virtual machine (VM) options designed to accommodate the intense compute demands of modern -- specifically, AI -- workloads.

The new Azure ND H200 v5 VMs are now generally available, and are pre-integrated into Azure Batch, Azure Kubernetes Service, Azure OpenAI Service and Azure Machine Learning. Meanwhile, the FXv2-series VMs are now in public preview for select Azure regions.

Azure ND H200 v5
Microsoft launched the ND H200 v5 VMs this week, promoting them as a way for its cloud customers to run AI supercomputing clusters on Azure.

"These VMs...have been tailored to handle the growing complexity of advanced Al workloads, from foundational model training to generative inferencing," Microsoft said in its announcement Wednesday.

Microsoft's ND family of VMs is suited for workloads with large datasets that require fast computation -- for instance, deep learning and AI simulations.

The ND H200 v5 VMs have eight Nvidia H200 Tensor Core GPUs. Compared to their predecessor, the ND H100 v5 VMs, they "deliver a 76% increase in High Bandwidth Memory (HBM) to 141 GB and a 43% increase in HBM Bandwidth to 4.8 TB/s." These specs enable lower latency, higher throughput and more efficient use of GPUs, according to Microsoft.

More information on these VMs is available here.
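Availability of the new sizes varies by Azure region. As a rough sketch of how a customer might check whether they have reached a given region -- assuming the azure-identity and azure-mgmt-compute Python packages and an existing Azure login, and assuming (not confirmed by Microsoft's post) that the new sizes carry "H200" in their names -- the region's VM size list can be enumerated and filtered:

    # Sketch: list VM sizes in a region and look for ND H200 v5 sizes.
    # Assumes azure-identity and azure-mgmt-compute are installed and you are
    # logged in; the "H200" substring match is an assumption about size naming.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
    REGION = "eastus"                            # region you plan to deploy in

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    for size in client.virtual_machine_sizes.list(location=REGION):
        if "H200" in size.name.upper():
            print(f"{size.name}: {size.number_of_cores} vCPUs, "
                  f"{size.memory_in_mb / 1024:.0f} GiB RAM")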

Azure FXv2-Series
The public preview of the FXmsv2-series and FXmdsv2-series VMs can be accessed here with sign-up.

These releases are part of Microsoft's FX family of VMs, which is designed for compute-intensive workloads like modeling and simulations for finance and science. These tasks require extensive data analytics capabilities and rapid computation.

The new FXv2-series VMs are especially suited for SQL Server workloads, as well as AI, with built-in technology from Intel enabling "higher inference and training performance," according to Microsoft's late-September blog post announcing the preview.

Microsoft lists the following improvements in the new VMs compared to the earlier FXv1-series:

  • up to 1.5x CPU performance
  • 2x vCPUs, with 96 vCPUs as the largest VM size
  • 1.5x+ network bandwidth, with up to 70 Gbps
  • up to 2x local storage (read) IOPS, with up to 5,280 GiB of local SSD capacity
  • up to 2x IOPS and up to 5x throughput for remote storage performance
  • up to 400K IOPS and up to 11 GBps throughput with Premium v2/Ultra Disk support
  • up to 1,800 GiB of memory

Information on the FXmsv2-series VMs is available here and the FXmdsv2-series VMs here.
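The same size-listing approach works for the FXv2 preview. The sketch below (same assumed Python packages; the "Standard_FX" name prefix is an assumption about how the FX-family sizes are surfaced) prints each FX size's vCPU count, memory and local disk, which can be compared against the 96 vCPU, 1,800 GiB memory and 5,280 GiB local SSD maximums listed above:

    # Sketch: enumerate FX-family sizes in a region and print vCPUs, memory and
    # local (resource) disk size. The "Standard_FX" prefix is an assumption.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
    REGION = "westus2"                           # any region included in the preview

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    for size in client.virtual_machine_sizes.list(location=REGION):
        if size.name.startswith("Standard_FX"):
            print(f"{size.name}: {size.number_of_cores} vCPUs, "
                  f"{size.memory_in_mb / 1024:.0f} GiB RAM, "
                  f"{size.resource_disk_size_in_mb / 1024:.0f} GiB local disk")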

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
