Nvidia Blackwell Aims to Continue Powering AI Acceleration
Founded in 1993, Nvidia invented the graphics processing unit (GPU) in 1999, sparking the growth of the PC gaming market and redefining computer graphics. Today, the company’s solutions are igniting the era of modern AI, with chips such as its H100 responsible for training large language models including OpenAI’s GPT-4.
The company experienced huge AI and generative AI (Gen AI) breakthroughs in 2023, with Founder and CEO Jensen Huang announcing that it expected to become the world’s first trillion-dollar semiconductor stock.
Now, at the company’s annual GTC conference, Nvidia has announced its latest-generation Blackwell platform, which it says enables organisations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x lower cost and energy consumption than its predecessor.
According to Nvidia, the Blackwell GPU architecture features six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing and Gen AI.
At its conference, Nvidia also announced its latest HGX B200 and HGX B100 systems to propel data centres into a new era of accelerated computing and Gen AI. As a premier accelerated scale-up platform with up to 15x more inference performance than the previous generation, Blackwell-based HGX systems are designed for the most demanding generative AI, data analytics and HPC workloads.
“For three decades we’ve pursued accelerated computing, with the goal of enabling transformative breakthroughs like deep learning and AI,” Huang said. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realise the promise of AI for every industry.”
Nvidia Blackwell to be used by the world’s largest technology companies, including AWS, Google, Microsoft and Meta
Named in honour of David Harold Blackwell – a mathematician who specialised in game theory and statistics, and the first Black scholar inducted into the National Academy of Sciences – Nvidia’s new architecture succeeds the Hopper architecture, launched two years ago.
Among the many organisations expected to adopt Blackwell are Amazon Web Services (AWS), Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.
At GTC, Google Cloud and Nvidia announced a deepened partnership to equip the machine learning (ML) community with technology that accelerates its efforts to easily build, scale and manage Gen AI applications.
“Scaling services like Search and Gmail to billions of users has taught us a lot about managing compute infrastructure,” comments Sundar Pichai, CEO of Alphabet and Google. “As we enter the AI platform shift, we continue to invest deeply in infrastructure for our own products and services, and for our Cloud customers.
“We are fortunate to have a longstanding partnership with Nvidia, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google, including Google DeepMind, to accelerate future discoveries.”
AWS will offer the Nvidia GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, extending the companies’ long-standing strategic collaboration to deliver the most secure and advanced infrastructure, software, and services to help customers unlock new Gen AI capabilities.
“Our deep collaboration with Nvidia goes back more than 13 years, when we launched the world’s first GPU cloud instance on AWS,” says Andy Jassy, President and CEO of Amazon.
“Today we offer the widest range of GPU solutions available anywhere in the cloud, supporting the world’s most technologically advanced accelerated workloads. It's why the new Nvidia Blackwell GPU will run so well on AWS and the reason that Nvidia chose AWS to co-develop Project Ceiba, combining Nvidia’s next-generation Grace Blackwell Superchips with the AWS Nitro System's advanced virtualisation and ultra-fast Elastic Fabric Adapter networking, for Nvidia's own AI research and development. Through this joint effort between AWS and Nvidia engineers, we're continuing to innovate together to make AWS the best place for anyone to run Nvidia GPUs in the cloud.”
At GTC, Dell Technologies announced a set of complete Nvidia-powered solutions, including an AI Factory to help global enterprises accelerate AI adoption.
The Dell AI Factory with Nvidia integrates Dell’s leading compute, storage, networking, workstations and laptops with Nvidia’s advanced AI infrastructure and Nvidia AI Enterprise software.
“Gen AI is critical to creating smarter, more reliable and efficient systems,” comments Michael Dell, Founder and CEO of Dell Technologies. “Dell Technologies and Nvidia are working together to shape the future of technology. With the launch of Blackwell, we will continue to deliver the next-generation of accelerated products and services to our customers, providing them with the tools they need to drive innovation across industries.”
Demis Hassabis, Cofounder and CEO of Google DeepMind comments: “The transformative potential of AI is incredible, and it will help us solve some of the world’s most important scientific problems. Blackwell’s breakthrough technological capabilities will provide the critical compute needed to help the world’s brightest minds chart new scientific discoveries.”
According to Mark Zuckerberg, Founder and CEO of Meta, the company will be using Nvidia’s Blackwell platform to help it train its LLMs. “AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it's only going to get more important in the future. We're looking forward to using Nvidia's Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products.”
Meanwhile, Microsoft and Nvidia announced an expansion of their longstanding collaboration with powerful new integrations that leverage the latest Nvidia Gen AI and Omniverse technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.
“We are committed to offering our customers the most advanced infrastructure to power their AI workloads,” says Satya Nadella, Executive Chairman and CEO of Microsoft. “By bringing the GB200 Grace Blackwell processor to our data centres globally, we are building on our long-standing history of optimising Nvidia GPUs for our cloud, as we make the promise of AI real for organisations everywhere.”
Sam Altman, CEO of OpenAI, adds: “Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with Nvidia to enhance AI compute.”
Elsewhere, Oracle and Nvidia announced an expanded collaboration to deliver sovereign AI solutions to customers around the world. “Oracle’s close collaboration with Nvidia will enable qualitative and quantitative breakthroughs in AI, machine learning and data analytics,” explains Larry Ellison, Chairman and CTO of Oracle. “In order for customers to uncover more actionable insights, an even more powerful engine like Blackwell is needed, which is purpose-built for accelerated computing and Gen AI.”