Nvidia to Ease Generative AI Development in Snowflake Data Cloud
Snowflake and Nvidia are partnering to bring generative AI to the Snowflake Data Cloud.
SNOWFLAKE SUMMIT — Snowflake and Nvidia are partnering to make it easier for developers and data scientists to build custom applications enabled with generative AI via the Snowflake Data Cloud. The just-announced partnership kicked off this week’s Snowflake Summit 2023 in Las Vegas. It will let partners and customers use Nvidia’s NeMo platform for building large language models (LLMs) and its GPUs for accelerated computing.
Nvidia’s NeMo is a cloud-native enterprise workflow framework for building, training and deploying generative AI models. The partnership will allow organizations that store their data in the Snowflake Data Cloud to create LLMs for chatbots, advanced search and summarization using Nvidia’s NeMo.
Snowflake chairman Frank Slootman and Nvidia CEO and founder Jensen Huang revealed the partnership during the opening keynote at the Snowflake Summit.
Nvidia’s Jensen Huang
“We’re going to bring the world’s best compute engine to the world’s most valuable data,” Huang said.
The Snowflake Data Cloud, available in AWS, Microsoft Azure and Google Cloud, is a managed platform based on a multi-cluster shared data architecture that Snowflake boasts can scale to the limits of the cloud providers. It enables data access and sharing. Furthermore, it can run multiple workloads, including data warehousing, data lakes, data engineering and data science. It also serves as a repository for application development. Many large enterprises now use the cloud-native platform to consolidate and connect their data into a single version.
‘Not Just Pie in the Sky’
Snowflake didn’t announce any deliverable products, but at the conference it is demonstrating various generative AI models that the partnership makes possible.
Snowflake’s Frank Slootman
“This is not just pie in the sky; this is literally happening as we sit here,” Slootman said. “And that’s what’s so exciting. The ‘deployability’ of the services is quite high.”
Manuvir Das, Nvidia’s VP of enterprise computing, said Snowflake is widely adopted across large enterprises.
“Our engineering teams are working together to insert Nvidia NeMo in the Snowflake Data Cloud so all of the customers of Snowflake can take foundation models, train them, fine-tune them with the data that they have installed on Snowflake Data Cloud, or they can just train a model from the ground up,” Das said.
Das added that AI developers and data scientists working on behalf of these companies can do this training to produce custom models.
“As you pull the data out of the repository and the Snowflake Data Cloud, you can curate the data, prepare it, you can do the training, customize the models, and then you can host the models right there,” he said.
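The workflow Das describes maps to a familiar pull-curate-train-host loop. The sketch below is purely illustrative and is not the announced NeMo integration: it uses the Snowflake Python connector and Hugging Face Transformers as stand-ins for the pieces Das mentions, and the connection parameters, table name and base model are hypothetical.

```python
# Illustrative sketch only -- not the Snowflake/Nvidia NeMo integration.
# Assumes snowflake-connector-python, datasets and transformers are installed;
# account, table and model names below are hypothetical.
import snowflake.connector
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# 1. Pull training text out of a Snowflake table.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="ML_WH", database="DOCS_DB", schema="PUBLIC",
)
rows = conn.cursor().execute("SELECT doc_text FROM support_articles").fetchall()
texts = [r[0] for r in rows]

# 2. Curate and prepare: tokenize the corpus into fixed-length examples.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
ds = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            max_length=512, padding="max_length"),
    batched=True,
)
ds = ds.map(lambda batch: {"labels": batch["input_ids"]}, batched=True)

# 3. Fine-tune a small foundation model on the prepared data.
model = AutoModelForCausalLM.from_pretrained("gpt2")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-llm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds,
)
trainer.train()

# 4. Save the customized model so it can be hosted for inference.
trainer.save_model("custom-llm")
```

In the announced offering, the equivalent steps would run against NeMo foundation models on Nvidia GPUs inside the Snowflake Data Cloud, rather than exporting data to an external training environment.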
Nvidia’s ServiceNow Partnership
The Snowflake alliance comes a month after Nvidia announced a partnership with ServiceNow. Announced at the ServiceNow Knowledge 2023 conference, that deal has ServiceNow creating custom LLMs trained for the Now Platform. Das explained that through that partnership, ServiceNow is using NeMo to train custom models for each customer.
“In that engagement, ServiceNow is building the models for each of those end customers,” Das said. “In the case of Snowflake, it’s more of a general-purpose infrastructure data platform, so each of the customers of Snowflake is actually doing the model making. It’s not Snowflake doing the model making. Snowflake is just providing a platform for doing the model making.”
Want to contact the author directly about this story? Have ideas for a follow-up article? Email Jeffrey Schwartz or connect with him on LinkedIn.