Qualcomm Back in Datacenter Fray with AI Chip

The chip maker joins a crowded field of vendors that are designing silicon for processing AI inference workloads in the datacenter.

There's more than one way to skin a cat -- and, apparently, to compete in the datacenter silicon market.

Qualcomm Technologies -- which seemed to be turning away from that market last year, when it abandoned its Centriq line of Arm-based server CPUs and its datacenter technology chief, Anand Chandrasekher, departed -- just unveiled a new line of artificial intelligence (AI) processors aimed at the datacenter: the Qualcomm Cloud AI 100 platform for inference acceleration.

Qualcomm Cloud AI 100 is an all-new, highly efficient family of chips designed specifically for processing AI inference workloads. "Inference processing" refers to the ability of a trained neural network to draw conclusions from new data inputs on which it was not specifically trained.
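
To make that distinction concrete, here is a minimal PyTorch sketch of inference (PyTorch is one of the stacks Qualcomm says the platform will support). The pretrained model and random input tensor are purely illustrative assumptions and have no connection to Qualcomm's hardware or SDK:

```python
import torch
import torchvision.models as models

# A trained network drawing a conclusion from an input it has never seen.
# Model choice and input are stand-ins, not anything Qualcomm-specific.
model = models.resnet18(pretrained=True)
model.eval()  # inference mode: disables dropout, freezes batch-norm stats

new_input = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image

with torch.no_grad():  # no gradients needed; this is inference, not training
    logits = model(new_input)
    prediction = logits.argmax(dim=1)

print(prediction.item())
```

Training tunes the network's weights; inference, as above, simply runs new data through the already-trained network, which is the workload the Cloud AI 100 is built to accelerate.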

Datacenter demand for AI accelerators -- and inference acceleration in particular -- has been nothing short of explosive. Many analysts predict that the market for inference processing will outstrip the market for neural network training, currently one of the top AI investments in the datacenter.

Cloud AI 100 was "built from the ground up" to meet the demand for AI inference processing in the cloud, Qualcomm said in a statement. It leverages Qualcomm's "heritage in advanced signal processing and power efficiency" to "facilitate distributed intelligence from the cloud to the client edge and all points in between."

The new line of inference accelerators was announced last week at Qualcomm's AI Day event. Keith Kressin, senior vice president in Qualcomm Technologies' product management group, promised that the new accelerator "will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs and/or FPGAs [field programmable gate arrays] used in today's datacenters."

He added: "Qualcomm Technologies is now well-positioned to support complete cloud-to-edge AI solutions all connected with high-speed and low-latency 5G connectivity."

The company is promising a lot with this release: greater than 10x performance per watt compared with the industry's most advanced AI inference solutions deployed today, a 7nm process node to provide performance and power advantages, and support for industry-leading software stacks, including PyTorch, Glow, TensorFlow, Keras and ONNX.
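
Qualcomm did not detail its toolchain at the announcement, but framework support of this kind typically means ingesting models exported to a standard interchange format. As a hedged illustration, here is how a trained PyTorch model is exported to ONNX today; the model, file name and tensor shape are assumptions for the sketch, and nothing in it is Qualcomm-specific:

```python
import torch
import torchvision.models as models

# Export a trained PyTorch model to the ONNX interchange format --
# the kind of artifact an accelerator vendor's compiler or runtime
# would typically consume. Illustrative only.
model = models.resnet18(pretrained=True)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # fixes the traced input shape

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",        # file a vendor toolchain could ingest
    input_names=["input"],
    output_names=["logits"],
)
```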

Qualcomm also plans to support the new product line with a full stack of tools and frameworks, and to foster an ecosystem around its distributed AI model for everything from natural language processing and translation in personal assistants to advanced image search and personalized content recommendations.

Qualcomm faces plenty of competition in this space, where many vendors are designing silicon for inference processing in the datacenter. NVIDIA and Xilinx, for example, have been in the market for months with solid partnerships in place: NVIDIA unveiled its Turing-based Tesla T4 GPUs for inference processing last year, and AMD and Xilinx joined forces last year to set an AI inference processing record of 30,000 images per second. Amazon Web Services (AWS), meanwhile, is marketing its Inferentia solution as a machine learning inference chip designed to deliver high performance at low cost.

The Qualcomm Cloud AI 100 is expected to begin sampling to customers in the second half of 2019.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
