Data Cloud company Snowflake and Nvidia have announced that they are partnering to provide businesses of all sizes with an accelerated path to create customised generative AI applications, all securely within the Snowflake Data Cloud.
With Nvidia’s NeMo platform for developing large language models (LLMs) and its GPU-accelerated computing, Snowflake will enable enterprises to use data in their Snowflake accounts to build custom LLMs for advanced generative AI services, including chatbots, search and summarisation. Announced at Snowflake Summit 2023, the ability to customise LLMs without moving data means proprietary information remains fully secured and governed within the Snowflake platform.
“Snowflake’s partnership with Nvidia will bring high-performance machine learning and artificial intelligence to our vast volumes of proprietary and structured enterprise data, a new frontier to bringing unprecedented insights, predictions and prescriptions to the global world of business,” said Frank Slootman, Snowflake’s Chairman and CEO.
The Nvidia and Snowflake announcement represents a new opportunity for enterprises
Nvidia and Snowflake’s collaboration represents a new opportunity for enterprises. It will enable them to use their proprietary data — which can range from hundreds of terabytes to petabytes of raw and curated business information — to create and fine-tune custom LLMs that power business-specific applications and services.
“Data is essential to creating generative AI applications that understand the complex operations and unique voice of every company,” said Jensen Huang, Founder and CEO of Nvidia. “Together, NVIDIA and Snowflake will create an AI factory that helps enterprises turn their own valuable data into custom generative AI models to power groundbreaking new applications — right from the cloud platform that they use to run their businesses.”
By integrating AI technology from Snowflake and Nvidia, customers will be able to quickly and easily build, deploy and manage customised applications that bring the power of generative AI to all parts of their business, across a variety of use cases. In addition, expanding AI capabilities in the Data Cloud enables these customers to create generative AI applications where their governed data already resides, a benefit that significantly reduces cost and latency while maintaining the security of their data.
“More enterprises than we expected are training or at least fine-tuning their own AI models, as they increasingly appreciate the value of their own data assets,” said Alexander Harrowell, Principal Analyst for Advanced Computing for AI at technology research group Omdia. “Similarly, enterprises are beginning to operate more diverse fleets of AI models for business-specific applications. Supporting them in this trend is one of the biggest open opportunities in the sector.”