Nvidia, Google Cloud Help Startups with Gen AI
The companies aim to make the hot tech affordable for new entrants. Look for cloud credits and more.
GOOGLE CLOUD NEXT — Nvidia and Google Cloud are furthering their partnership, this time with generative AI for startups in mind.
The companies made the announcement Tuesday at Google Cloud Next ’24.
Here’s how the new initiative between Nvidia and Google Cloud will work: Nvidia will bring its Inception program for startups to the table alongside the Google for Startups Cloud Program. As a result, members of Nvidia Inception, which supports more than 18,000 startups, will get an “accelerated path” to using Google Cloud infrastructure, Nvidia said. That means they’ll have the chance to use up to $350,000 worth of Google Cloud credits focused on AI.
In return, members of the Google for Startups Cloud Program may join Nvidia Inception. That will give them access to technological expertise, Nvidia Deep Learning Institute course credits, and Nvidia hardware and software. They also may take part in Nvidia Inception Capital Connect, a platform where startups get exposure to venture capital firms keen on AI.
On top of that, Nvidia said, high-growth software makers in both programs may take advantage of fast-tracked onboarding to Google Cloud Marketplace, plus co-marketing and product acceleration support.
Nvidia and Google Cloud say their efforts aim to help ease the costs and barriers associated with developing generative AI applications for enterprises of all sizes. Of course, those expenses tend to hinder startups the most, which is why Nvidia and Google Cloud are targeting that demographic.
Of note is that Nvidia has optimized its AI platforms for Gemma, Google’s family of open AI models. Nvidia says that, too, helps reduce end-user costs and speed up innovation. For its part, Google Cloud has made tweaks across its platform to make it easier for developers to deploy Nvidia frameworks.
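For readers who want a concrete sense of what running Gemma on Nvidia hardware looks like, here is a minimal sketch using the Hugging Face transformers library. The article doesn’t specify a particular software stack, so the model ID and settings below are assumptions, not part of the announcement.

```python
# Minimal sketch (not from the announcement): loading a Gemma checkpoint
# onto an Nvidia GPU with Hugging Face transformers. The model ID and
# generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"  # assumed Gemma variant; others work similarly

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 is well supported on H100-class GPUs
    device_map="auto",           # place weights on the available Nvidia GPU(s)
)

prompt = "Explain generative AI in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that Gemma checkpoints on Hugging Face are gated, so this assumes you’ve accepted Google’s license terms and authenticated with an access token.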
More AI News from Nvidia and Google Cloud
Also of interest to those deep in the generative AI space: Google Cloud says it’ll make A3 Mega instances generally available next month, widening the availability of Nvidia-accelerated generative AI computing, the companies said. The instances expand the A3 virtual machine family, which is powered by Nvidia H100 Tensor Core GPUs, and double the GPU-to-GPU network bandwidth of existing A3 virtual machines, Nvidia and Google Cloud said.
Next, Google Cloud will bring confidential computing to A3 through new Confidential virtual machines. These will help customers protect sensitive data and secure applications and AI workloads during training and inference, with no code changes required to access H100 GPU acceleration, Google Cloud said. They’ll be available in preview this year.
Finally, as we cover in our article previewing Google Cloud CEO Thomas Kurian’s keynote speech, Google Cloud will add the latest Nvidia Blackwell platform in early 2025. That will comprise two variations: the HGX B200 and the GB200 NVL72.
“The potential for gen AI to drive rapid transformation for every business, government and user is only as powerful as the infrastructure that underpins it,” Kurian said.
The HGX B200 handles the most demanding AI, data analytics and high-performance computing workloads, Nvidia said, while the GB200 NVL72 supports massive-scale, trillion-parameter model training and real-time inferencing.
Relatedly, Nvidia DGX Cloud, an AI platform for enterprise developers that’s optimized for generative AI, is now generally available on A3 VMs powered by H100 GPUs, as Nvidia recently announced. DGX Cloud with the GB200 NVL72 will be available on Google Cloud in 2025.