May 17, 2020

Startup spotlight: Habana Labs’ AI chips

William Smith
2 min
Tel Aviv’s Habana Labs professes a narrow focus on providing artificially intelligent processors for data centres

AI is perhaps the most overused buzzword of all, bandied about so much as to dilute its power. Tel Aviv’s Habana Labs, however, resists that by professing a narrow focus on providing artificially intelligent processors for data centres.

Founded in 2016, the Israeli company has just received a huge vote of confidence from US-based semiconductor giant Intel, which has acquired it for a cool $2bn. Explaining the purchase, Intel cited the fast-growing market for AI silicon, which it expects to be worth more than $25bn by 2024.

Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel said: “This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center. More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”


Intel has bolstered its own AI capabilities with the purchase. One of the company’s data centre offerings, the Xeon Scalable processor line, already features “Deep Learning Boost” technology, which it says speeds up inference on data. Habana’s Goya technology is purpose-built for the same task and is capable, the company says, of processing 15,453 images per second. Its other product, Gaudi, is said to offer a 4x increase in training throughput compared with equivalent GPU-based systems.
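
For context on how a figure like 15,453 images per second is typically derived, the sketch below times batched inference and divides the number of images processed by the elapsed wall-clock time. It is a minimal, hypothetical illustration: the run_inference function is a stand-in for a real model call, not Habana's or Intel's API, and the batch sizes are assumptions.

```python
# Hypothetical throughput benchmark: how "images per second" figures are
# commonly calculated. run_inference is a placeholder, not a real accelerator
# or vendor API call.
import time

def run_inference(batch):
    # Stand-in for a real model forward pass on an accelerator (assumption).
    return [x * 0.5 for x in batch]

def images_per_second(batch_size=64, num_batches=100):
    batch = [1.0] * batch_size                      # dummy "images"
    start = time.perf_counter()
    for _ in range(num_batches):
        run_inference(batch)
    elapsed = time.perf_counter() - start
    return (batch_size * num_batches) / elapsed     # images per wall-clock second

if __name__ == "__main__":
    print(f"Throughput: {images_per_second():,.0f} images/sec")
```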

“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”


Jun 11, 2021

Google AI Designs Next-Gen Chips In Under 6 Hours

3 min
Google AI’s deep reinforcement learning algorithms can optimise chip floor plans far faster than their human counterparts

In a paper published in Nature on Wednesday, Google announced that its AI can design chip floor plans in less than six hours, a task that takes human engineers months of designing and laying out intricate chip wiring. Although the tech giant has been working quietly on the technology for years, this is the first time AI-optimised chips have hit the mainstream, and the first time the company will sell the result as a commercial product.

“Our method has been used in production to design the next generation of Google TPU (tensor processing unit) chips”, wrote the paper’s lead authors, Azalia Mirhoseini and Anna Goldie. The resulting TPU v4 chips are the fastest systems Google has ever launched. “If you’re trying to train a large AI/ML system, and you’re using Google’s TensorFlow, this will be a big deal”, said Jack Gold, President and Principal Analyst at J.Gold Associates.

Training the Algorithm 

In a process called reinforcement learning, Google engineers used a set of 10,000 chip floor plans to train the AI. Each example was assigned a score based on its efficiency and power usage, which the algorithm then used to distinguish between “good” and “bad” layouts. The more layouts it examined, the better it became at generating its own.
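
To make that loop concrete, here is a deliberately simplified sketch of reward-driven layout search. Google’s actual system uses deep reinforcement learning over real netlists; the toy hill-climbing search below, with its made-up grid, block count, and scoring function, only illustrates the core idea of proposing candidate floor plans, scoring them, and keeping the ones the reward prefers.

```python
# A toy, hypothetical stand-in for Google's deep-RL floorplanner: propose
# candidate layouts, score each one, and keep whichever the reward prefers.
# The grid size, block count, and scoring function are all assumptions.
import random

GRID = 8          # toy canvas: 8 x 8 placement slots
NUM_BLOCKS = 6    # hypothetical macro blocks to place

def score(layout):
    """Higher is better: penalise total Manhattan distance between
    consecutive blocks, a crude proxy for wirelength and power."""
    return -sum(abs(layout[i][0] - layout[i + 1][0]) +
                abs(layout[i][1] - layout[i + 1][1])
                for i in range(len(layout) - 1))

def random_layout():
    slots = [(x, y) for x in range(GRID) for y in range(GRID)]
    return random.sample(slots, NUM_BLOCKS)

def mutate(layout):
    """Propose a new candidate by moving one block to a free slot."""
    new = list(layout)
    i = random.randrange(NUM_BLOCKS)
    free = [(x, y) for x in range(GRID) for y in range(GRID)
            if (x, y) not in new]
    new[i] = random.choice(free)
    return new

best = random_layout()
for _ in range(2000):
    candidate = mutate(best)
    if score(candidate) > score(best):   # keep layouts the reward prefers
        best = candidate

print("best layout:", best, "score:", score(best))
```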


Designing floor plans, or the optimal layouts for a chip’s sub-systems, takes intense human effort. Yet floorplanning is similar to an elaborate game: it has rules, patterns, and logic. In fact, just like chess or Go, it’s an ideal task for machine learning. Machines, after all, don’t follow the same constraints or in-built conditions that humans do; they follow logic, not preconceptions of what a chip should look like. And this has allowed AI to optimise the latest chips in a way we never could.
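
To make the chess-and-Go analogy concrete, the hypothetical toy class below frames floorplanning as a sequential game: a state (the blocks placed so far), a set of legal moves (the free slots), and a score at the end. It is not Google’s environment; the class name, grid size, block count, and scoring objective are all assumptions for illustration.

```python
# Hypothetical game-style formulation of floorplanning: states, legal moves,
# and a final score, mirroring how board games are set up for machine learning.
import random
from dataclasses import dataclass, field

GRID = 8          # toy canvas: 8 x 8 placement slots

@dataclass
class FloorplanGame:
    placements: list = field(default_factory=list)  # (x, y) of each placed block
    blocks_to_place: int = 6

    def legal_moves(self):
        """Rule: a block may occupy any slot not already taken."""
        taken = set(self.placements)
        return [(x, y) for x in range(GRID) for y in range(GRID)
                if (x, y) not in taken]

    def play(self, move):
        self.placements.append(move)

    def finished(self):
        return len(self.placements) == self.blocks_to_place

    def score(self):
        """Toy objective: shorter chained distance between blocks is better."""
        p = self.placements
        return -sum(abs(p[i][0] - p[i + 1][0]) + abs(p[i][1] - p[i + 1][1])
                    for i in range(len(p) - 1))

# A random "player" completing one game; a learned policy would instead pick
# moves that lead to higher final scores.
game = FloorplanGame()
while not game.finished():
    game.play(random.choice(game.legal_moves()))
print("final score:", game.score())
```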


As a result, AI-generated layouts look quite different from what a human would design. Instead of being neat and ordered, they appear somewhat haphazard: blurred photos of the carefully guarded chip designs show a more chaotic wiring layout, but no one is questioning its efficiency. In fact, Google is starting to evaluate how it could use AI in architecture exploration and other cognitively intense tasks.

Major Implications for the Semiconductor Sector 

Part of what’s impressive about Google’s breakthrough is that it could throw Moore’s Law, the axiom that the number of transistors on a chip doubles roughly every two years, out the window. The physical difficulty of squeezing more CPUs, GPUs, and memory onto a tiny silicon die will still exist, but AI optimisation may help speed up chip performance.

Any chance that AI can help speed up current chip production is welcome news. Though the U.S. Senate recently passed a US$52bn bill to supercharge domestic semiconductor supply chains, the country’s largest tech firms remain far behind. According to Holger Mueller, principal analyst at Constellation Research, “the faster and cheaper AI will win in business and government, including with the military”.

All in all, AI chip optimisation could allow Google to pull ahead of competitors such as AWS and Microsoft. And if we can speed up workflows, design better chips, and free humans to solve more complex, fluid, wicked problems, that’s a win for the tech world and for society.
