Expert Insight: Google, IBM, SAP at the 2019 AI World Congress
There are just over 52 days to go until the opening of the AI World Congress 2019. Held on October 14-15 at the Kensington Conference and Event Centre, the Congress will be a meeting of the top minds in artificial intelligence. Twenty-five industry thought leaders from the world’s largest and most influential tech companies will speak over the course of the two-day congress.
This year’s speakers have a wealth of expertise, with emphasis on bringing AI to the Internet of Things and consumer (as well as enterprise) smart devices. Here’s our breakdown of three of the event’s key speakers:
Dvir has spent five years at Google, having previously served as a Director of Business Transformation at Microsoft and a Senior Product Manager at Skype. He works with organisations looking to change and transform by adopting a lean, agile and modern way of working, powered by Google’s Cloud and Apps infrastructure and productivity suite.
While at Skype, Dvir reportedly led the company’s digital transformation, which enabled it “to reach its peak connected users, 10X mobile engagements, 50% more revenue and a significant uptick in NPS and customer experience.”
As Offering Leader for the Assistant for Connected Vehicles, an IBM Watson-powered in-vehicle AI assistant aimed at enterprise customers, Gyimesi works at the intersection of AI, the Internet of Things, and smart consumer and enterprise devices.
He has worked for more than 13 years on automotive, mobility and transportation solutions, with a speciality in sales and fostering client relationships. His recent article focuses on the advent of enterprise-grade AI assistants and lauds recent developments in natural language processing and in-vehicle AI. “These assistants will be embedded in every type of connected device, including, of course, our cars. Ultimately, I believe that the market for enterprise AI assistants will be four times as large as the one for consumers,” he writes.
With a career in technology reaching back to the mid-nineties, Candish’s current role sees him head up the global business for SAP IoT Connect 365 for the SAP Digital Interconnect organisation with the aim of facilitating enterprise-wide IoT integration and adoption for the biggest companies in the world.
Candish’s latest article focuses on the necessity of security in IoT-enabled systems. “The need for IoT to be securely connected has some serious implications. For the novelty appliances of the early days, if connectivity failed or a security device was breached, the implications were limited. For the vast majority of IoT devices today, however, this is no longer the case. A failure in connectivity or security has big and costly implications.”
Google AI Designs Next-Gen Chips In Under 6 Hours
In a paper published in Nature on Wednesday, Google announced that its AI can design chips in under six hours. Human engineers currently take months to design and lay out the intricate chip wiring. Although the tech giant has been working quietly on the technology for years, this is the first time AI-optimised chips have hit the mainstream, and the first time the company will sell the result as a commercial product.
“Our method has been used in production to design the next generation of Google TPU (tensor processing unit) chips,” wrote the paper’s authors, Azalia Mirhoseini and Anna Goldie. The TPU v4 chips are the fastest Google system ever launched. “If you’re trying to train a large AI/ML system, and you’re using Google’s TensorFlow, this will be a big deal,” said Jack Gold, President and Principal Analyst at J.Gold Associates.
Training the Algorithm
Using a process called reinforcement learning, Google engineers trained the AI on a set of 10,000 chip floor plans. Each example chip was assigned a score based on its efficiency and power usage, which the algorithm then used to distinguish “good” layouts from “bad” ones. The more layouts it examines, the better it becomes at generating versions of its own.
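As a rough illustration of the idea described above, not of Google’s actual method, the training signal boils down to scoring candidate layouts and preferring higher-scoring ones. The metrics, weights, and function names below are invented for this sketch; the real system trains a policy network against a far richer objective:

```python
import random

def layout_reward(wirelength, power):
    """Toy reward: shorter wiring and lower power score higher.
    (Illustrative weights only; the real objective also includes
    congestion and density terms.)"""
    return -(wirelength + 0.5 * power)

def random_layout(rng):
    """Stand-in for a generated floor plan: just random metrics here."""
    return {"wirelength": rng.uniform(10, 100), "power": rng.uniform(1, 10)}

def best_of(n, seed=0):
    """Evaluate n candidate layouts and keep the highest-reward one --
    a crude stand-in for a policy improving as it sees more layouts."""
    rng = random.Random(seed)
    best, best_reward = None, float("-inf")
    for _ in range(n):
        candidate = random_layout(rng)
        reward = layout_reward(candidate["wirelength"], candidate["power"])
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best, best_reward

layout, reward = best_of(10_000)
```

The key property the sketch preserves is that the scorer, not a human’s notion of a tidy layout, decides what counts as “good” — which is exactly why the resulting designs can look unconventional.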
Designing floor plans, the optimal layouts for a chip’s sub-systems, takes intense human effort. Yet floorplanning resembles an elaborate game: it has rules, patterns, and logic. Like chess or Go, it is an ideal task for machine learning. Machines, after all, don’t share the constraints or in-built assumptions that humans do; they follow logic, not preconceptions of what a chip should look like. That has allowed AI to optimise the latest chips in ways humans never could.
As a result, AI-generated layouts look quite different from what a human would design. Instead of being neat and ordered, they appear slightly haphazard. Blurred photos of the carefully guarded chip designs show a more chaotic wiring layout, but no one is questioning its efficiency. In fact, Google is already evaluating how it could use AI in architecture exploration and other cognitively intense tasks.
Major Implications for the Semiconductor Sector
Part of what’s impressive about Google’s breakthrough is that it could throw Moore’s Law, the axiom that the number of transistors on a chip doubles roughly every two years, out the window. The physical difficulty of squeezing more CPUs, GPUs, and memory onto a tiny silicon die will remain, but AI optimisation may help speed up chip performance.
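For context on the doubling claim, the arithmetic behind Moore’s Law is a simple exponential; the starting count below is a hypothetical figure chosen for illustration:

```python
def transistors_after(start_count, years, doubling_period=2.0):
    """Project transistor count under Moore's Law-style growth:
    a doubling every `doubling_period` years (classically ~2)."""
    return start_count * 2 ** (years / doubling_period)

# From a hypothetical 10 billion transistors, a decade of
# two-year doublings yields a 32x increase (2 ** 5).
projected = transistors_after(10e9, 10)
```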
Any chance that AI can help speed up current chip production is welcome news. Though the U.S. Senate recently passed a US$52bn bill to supercharge domestic semiconductor supply chains, its largest tech firms remain far behind. According to Holger Mueller, principal analyst at Constellation Research, “the faster and cheaper AI will win in business and government, including with the military”.
All in all, AI chip optimisation could allow Google to pull ahead of its competitors such as AWS and Microsoft. And if we can speed up workflows, design better chips, and use humans to solve more complex, fluid, wicked problems, that’s a win—for the tech world and for society.