Meta Ramps Up AI Efforts, Building Massive Compute Capacity
Meta plans to build out massive compute infrastructure to support its generative AI (Gen AI) ambitions, including the latest version of its open-source Llama LLM, according to CEO Mark Zuckerberg.
In a statement on Meta's Instagram and Threads platforms, Zuckerberg said that the company was bringing its AI research team ‘closer together’ and that it was building out its compute infrastructure to support its future roadmap, which includes a further push into AI and – like OpenAI – a move towards artificial general intelligence.
To support this roadmap, Meta plans to have approximately 350,000 H100 GPUs from chip designer Nvidia by the end of 2024, Zuckerberg said.
Combined with equivalent chips from other suppliers, that would give Meta around 600,000 H100-equivalents of compute by the end of the year, he said – among the largest GPU fleets in the technology industry.
“Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said.
“We're currently training our next-gen model Llama 3, and we're building massive compute infrastructure to support our future roadmap, including 350,000 H100s by the end of this year – and overall almost 600,000 H100s equivalents of compute if you include other GPUs.”
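Read literally, those figures imply the rough split below – an illustrative back-of-the-envelope reading of Zuckerberg's numbers, not an official Meta breakdown – with non-Nvidia and other chips counted in 'H100-equivalent' units of compute.

```latex
% Illustrative arithmetic only (an assumed reading of the stated figures,
% not Meta's own accounting):
\underbrace{600{,}000}_{\text{total H100-equivalents}}
  \;-\; \underbrace{350{,}000}_{\text{Nvidia H100s}}
  \;\approx\; \underbrace{250{,}000}_{\text{H100-equivalents from other GPUs}}
```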
Meta's announcement comes after extensive semiconductor shortages driven by increased AI demand
Meta’s ambition to expand its compute capabilities comes after a recent spell of supply chain issues in the semiconductor industry. In 2023, TSMC Chairman Mark Liu suggested that supply constraints on AI chips could take about 18 months to ease, due to limited capacity in advanced chip packaging services. The company – the world’s largest contract chipmaker – is the sole manufacturer of Nvidia's H100 and A100 AI processors, which power AI tools like ChatGPT and Meta’s models.
A rapid rise in demand for AI models has led to a global shortage of AI chips – which are used to train the latest LLMs – prompting tech giants such as Amazon, Meta and Microsoft to develop their own silicon.
Nvidia itself announced an update to its H100 GPU – the H200 – in November, which is set to launch in the second quarter of 2024. The company describes the H200 Tensor Core GPU as ‘the world’s most powerful GPU for supercharging AI and HPC workloads’, saying it will deliver “game-changing performance and memory capabilities” for Gen AI and high-performance computing (HPC) workloads.
In July 2023, Meta released the second iteration of its Llama AI model, Llama 2, aiming to further promote the responsible and safe use of AI and LLMs within the industry.
Meta’s first Llama model, released earlier in 2023, likewise aimed to give researchers who lack the substantial infrastructure such models require the ability to study them, democratising access to the rapidly advancing field of AI.