Why is OpenAI exploring making its own AI chips?
OpenAI is exploring making its own AI chips amid ongoing supply chain challenges, according to a report by Reuters.
According to the report, the ChatGPT creator has considered a number of options, from building its own AI chip to working more closely with chipmakers including Nvidia, as well as diversifying its suppliers beyond Nvidia.
OpenAI’s release of ChatGPT last year has rapidly accelerated interest in generative AI, with the tool capable of interacting conversationally, answering follow-up questions, admitting its mistakes, challenging incorrect premises, and rejecting inappropriate requests.
Following previous investments in 2019 and 2021, Microsoft announced a multibillion-dollar investment in OpenAI in January, intended to accelerate AI breakthroughs. Reuters suggests the move could signal further distancing between the two companies, with Microsoft working on its own custom AI chip.
Rapid rise in demand for AI has led to a global chip shortage
A rapid rise in demand following the launch of ChatGPT has led to a global shortage of AI chips - which are used to train the latest large language models - leading tech giants such as Amazon, Meta and Microsoft to develop their own silicon.
As Alex White, GM EMEA at Softbank-backed AI startup SambaNova Systems, told Technology Magazine, a shortage of Nvidia’s AI chips is undoubtedly hurting OpenAI’s scaling ambitions.
“OpenAI’s potential move into hardware and building its own AI chips comes as no surprise. OpenAI is trying to reinvent itself as an enterprise business, and that requires the ability to be able to fine tune or build bespoke large language models - and we all know that training models requires vastly more compute power than running the models.”
Earlier this year, TSMC Chairman Mark Liu suggested that supply constraints on AI chips could take about 18 months to ease, due to limited capacity in advanced chip packaging services. The company - the world’s largest contract chipmaker - is the sole manufacturer of Nvidia's H100 and A100 AI processors, which power AI tools like ChatGPT.
“There’s a clear advantage to owning the whole stack from hardware to software - including the models that run on top,” White comments. “But designing and manufacturing chips doesn't happen overnight. It requires huge levels of expertise, and resources that are in increasingly short supply. It took OpenAI over five years to develop GPT-4, which may be too long to wait for customers. I wouldn’t be surprised if hardware took a similar amount of time.”