Google, IBM Adopt New Intel Xeon Chips for the Cloud
Google Cloud is rolling out VMs optimized for compute and memory.
April 4, 2019
Major cloud providers are adopting the latest generation of Intel Xeon server processors, introduced this week, to bolster their cloud environments.
Google Cloud Platform is leveraging the “Cascade Lake” chips to create compute- and memory-optimized virtual machines in the public cloud. In addition, IBM Cloud said the new processors are now available on bare-metal servers in the cloud provider’s data centers around the world.
In a sweeping product release at an event in San Francisco that showcased the chip maker’s deep data center reach and broad portfolio, Intel officials rolled out the company’s newest Xeon processors, now officially called Second Generation Xeon Scalable processors. The 14-nanometer chips come armed with a range of capabilities aimed at the compute-intensive workloads of the data-centric era, from artificial intelligence (AI) and machine learning to data analytics.
Driving these workloads are such trends as cloud computing, big data, the proliferation of mobile devices and the fast-growing internet of things (IoT). Beyond boosts in performance and efficiency, the new Xeons include such new Intel technologies as Deep Learning Boost (DL Boost) aimed at AI inference workloads like image recognition and object detection from the data center to the edge, and support for Intel’s Optane DC persistent memory.
In conjunction with Intel’s Cascade Lake chips announcement, Google Cloud said it would leverage the new Xeons in compute-optimized and memory-optimized virtual machines (VMs) in the company’s Compute Engine VM lineup. The new VMs target an array of workloads, according to Brad Calder, vice president of engineering for Google Cloud, and Bart Sano, vice president of platforms.
The chips also will be available in the cloud provider’s general-purpose VMs.
“Whether you’re running compute-bound applications for HPC [high-performance computing] or large, in-memory database applications like SAP HANA, you need the right mix of compute resources for the job, while also keeping an eye on price performance,” Calder and Sano wrote in a blog. “The vast majority of enterprise workloads run successfully on Google Cloud Platform using our general-purpose VMs; however, as you port more workloads to the cloud, you may need VMs that are optimized for specific types of workloads.”
The family of compute-optimized VMs (C2) is new to Google Cloud, offering high per-thread performance and memory speeds for highly compute-intensive workloads, the executives wrote. They’re designed for workloads such as HPC, electronic design automation (EDA) and gaming. According to Calder and Sano, they provide 40 percent better performance than current Google Cloud VMs. Customers can run compute-optimized VMs with up to 60 virtual CPUs, 240GB of memory and 3TB of local storage. They’re available in alpha now.
The M2 Memory-Optimized VMs are aimed at such applications as SAP’s HANA in-memory offerings and in-memory data analytics. The first of the M2 VMs were announced last July and came with up to 4TB of memory. The newest VMs deliver up to 12TB and 416 vCPUs. They will be available to early-access customers this quarter.
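For readers mapping these announcements to practice, the VM shapes described above would be requested through Compute Engine’s machine-type selection. The following is an illustrative sketch only: the machine type names (`c2-standard-60`, `m2-ultramem-416`), instance names and zone are assumptions chosen to match the vCPU and memory figures reported here, not details confirmed by Google in this article.

```shell
# Hypothetical sketch of provisioning the VM families described above
# via the gcloud CLI. Names and zone are illustrative assumptions.

# Compute-optimized C2 VM: the article cites up to 60 vCPUs and 240GB
# of memory for HPC, EDA and gaming workloads.
gcloud compute instances create hpc-node-1 \
    --zone=us-central1-a \
    --machine-type=c2-standard-60

# Memory-optimized M2 VM: aimed at in-memory databases such as SAP HANA,
# scaling toward the 416-vCPU, multi-terabyte shapes cited above.
gcloud compute instances create hana-node-1 \
    --zone=us-central1-a \
    --machine-type=m2-ultramem-416
```

In practice, availability of these machine types depends on region and on the alpha/early-access programs the article describes.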
The moves toward optimized VMs make sense for both Google Cloud and its customers, according to industry analysts.
“At one level, it’s all about offering customers more and better choices,” Charles King, principal analyst with Pund-IT, told Channel Futures. “The significant uptick in both performance and memory capacity (as well as support for Optane DC persistent memory modules) in Intel’s Cascade Lake means that Google can slice/dice systems/capacity in interesting new ways.”
Also, Google and other cloud providers are “hyper-aware of and sensitive to system power consumption issues, and the company has a history … of throwing its support behind platforms that successfully balance system performance and electrical power requirements,” King said.
Rob Enderle, principal analyst with The Enderle Group, told Channel Futures that the industry constantly has a need “for higher performance and greater value, [and] this is what is driving these additional choices and the need for resources so that the related decisions are made wisely. For VMs, customization allows the system to be optimized for the load you’ll put it under. This can mean better performance and better value, both of which are typically high-valued customer benefits.”
It also can be a boon for channel partners, Enderle said.
“The more complexity you put into a decision, the greater the need for help to understand the options,” he said. “Channel partners, who are often closer to their clients due to relative size, can provide the needed advice and service help to assure the options chosen are the best for the customer.”
According to Gregg Bennett, senior offering manager of computer hardware for IBM Cloud, using the new Xeon processors in bare-metal cloud servers will make it easier for enterprises to move their “performance-heavy” workloads to the cloud.
“Historically, key enterprise applications have mostly been implemented on-premises; but increasingly, companies are looking to ‘burst up’ to the cloud as a way to manage IT capacity,” Bennett wrote in a blog, adding that the new servers with the Cascade Lake Xeons “will help our clients address their growing performance needs.”
IBM is looking to offer more Second Generation Xeon Scalable processor capabilities in areas such as IBM Cloud Virtual Servers and SAP-certified infrastructure in the coming months.
Pund-IT’s King said he expects Amazon Web Services (AWS) – which has a site dedicated to its partnership with Intel – and Microsoft Azure to follow suit with offerings enabled by Cascade Lake processors.