Meta Ramps up AI Efforts, Building Massive Compute Capacity

Meta has announced plans to build out its compute infrastructure to support its future roadmap, with AI advancements central to its ambitions

Meta is to build out massive compute infrastructure to support its generative AI (Gen AI) ambitions, including the latest version of its open-source Llama LLM, according to CEO Mark Zuckerberg.

In a statement on Meta's Instagram and Threads platforms, Zuckerberg said that the company was bringing its AI research team ‘closer together’ and that it was building out its compute infrastructure to support its future roadmap, which includes a further push into AI and – like OpenAI – a move towards artificial general intelligence.

To meet this demand, Meta plans to have approximately 350,000 H100 GPUs from chip designer Nvidia by the end of 2024, Zuckerberg said. 

Combined with equivalent chips from other suppliers, this will give Meta around 600,000 GPUs in total by the end of the year, he said, putting its compute capacity among the largest in the technology industry.

“Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said. 

“We're currently training our next-gen model Llama 3, and we're building massive compute infrastructure to support our future roadmap, including 350,000 H100s by the end of this year - and overall almost 600,000 H100s equivalents of compute if you include other GPUs.”

Meta's announcement comes after extensive semiconductor shortages, driven by increased AI demand

Meta’s ambition to expand its compute capabilities comes after a recent spell of supply chain issues in the semiconductor industry. In 2023, TSMC Chairman Mark Liu suggested that supply constraints on AI chips could take about 18 months to ease, due to limited capacity in advanced chip packaging services. The company - the world’s largest contract chipmaker - is the sole manufacturer of Nvidia's H100 and A100 AI processors, which power AI tools such as ChatGPT and Meta’s models.

A rapid rise in demand for AI models has led to a global shortage of AI chips - which are used to train the latest LLMs - prompting tech giants such as Amazon, Meta and Microsoft to develop their own silicon.

Nvidia itself announced an update to its H100 GPU – the H200 – in November, set to launch in the second quarter of 2024. Described by the company as ‘the world’s most powerful GPU for supercharging AI and HPC workloads’, the H200 Tensor Core GPU is designed to accelerate Gen AI and high-performance computing (HPC) workloads with “game-changing performance and memory capabilities”.

In July 2023 Meta released the second iteration of its Llama AI model, aiming to further promote responsible and safe use of AI and LLMs across the industry.

Meta’s original Llama model, released earlier in 2023, was similarly intended to give researchers without access to substantial infrastructure the ability to study such models, democratising access to the rapidly advancing field of AI.
