Cisco survey has revealed that three-quarters of IoT projects are failing
A report by Cisco has revealed that around three-quarters of all IoT projects are failing, with 60% stalling at the proof-of-concept stage.
The IDC has predicted that the installed base of Internet of Things endpoints will grow from 14.9 billion at the end of 2016 to more than 82 billion within eight years.
A survey of 1,845 IT and business decision-makers across the globe found that, despite the industry's forward momentum, a third of all completed projects were deemed a failure.
Moreover, only 26% of companies feel they have delivered an IoT initiative they consider a success.
This is partly because an IoT project can look good in planning, but by the time the proof-of-concept stage comes around it often proves more difficult than predicted.
Underestimating how long the project would take to complete and limited internal expertise on the subject matter were two of the main challenges across all stages of implementation.
Rowan Trollope, Senior Vice President and General Manager of IoT applications at Cisco, says that it's "not for the lack of trying."
"We want to make sure we can get projects out of the pilot phase and onto being a complete success, that's what we're aiming for."
The report found that although IoT is perceived to be all about technology, the human factor matters. Three of the top four factors behind a successful IoT programme included a top-down focus on technological culture and collaboration between the business and IT sides of the company, with the latter cited as the number one factor by 54% of participants.
In addition, organisations with the most successful IoT initiatives leveraged ecosystem partnerships most widely, using partners at every phase from strategic planning to data analytics after rollout.
Data has proven to be a pivotal component of successful IoT deployments, with 73% of respondents using data to improve and develop their systems further. The main benefits cited were improved satisfaction, operational efficiencies and an overall improvement in the product and service provided.
Ultimately, it is the companies with the resilience to bounce back that produce successful IoT projects, with 64% agreeing that learning from previous failed attempts helped to accelerate their organisation's investment in, and dedication to, the programme.
Google AI Designs Next-Gen Chips In Under 6 Hours
In a paper published in Nature on Wednesday, Google announced that its AI can design chips in under six hours, a task that currently takes human engineers months of intricate design and layout work. Although the tech giant has been working quietly on the technology for years, this is the first time AI-optimised chips have hit the mainstream, and the first time the company will sell the result as a commercial product.
“Our method has been used in production to design the next generation of Google TPU (tensor processing unit) chips”, wrote the paper’s authors, Azalia Mirhoseini and Anna Goldie. The TPU v4 chips are the fastest Google system ever launched. “If you’re trying to train a large AI/ML system, and you’re using Google’s TensorFlow, this will be a big deal”, said Jack Gold, President and Principal Analyst at J.Gold Associates.
Training the Algorithm
In a process called reinforcement learning, Google engineers used a set of 10,000 chip floor plans to train the AI. Each example chip was assigned a score of sorts based on its efficiency and power usage, which the algorithm then used to distinguish between “good” and “bad” layouts. The more layouts it examined, the better it became at generating versions of its own.
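The score-and-improve loop described above can be illustrated with a deliberately simplified sketch. This is not Google's actual method: the grid placement model, the wirelength-based reward, and the hill-climbing update below are all illustrative assumptions standing in for the real reinforcement-learning system.

```python
import random

def reward(placement, nets):
    # Toy reward: negative total Manhattan wirelength between
    # connected blocks. Shorter wiring scores higher (closer to 0).
    total = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return -total

def improve(placement, nets, grid=8, steps=2000, seed=0):
    # Toy "learning" loop: propose a random move for a random block
    # and keep it only if the reward improves (hill climbing, a
    # stand-in for the policy updates in real RL).
    rng = random.Random(seed)
    best = dict(placement)
    best_r = reward(best, nets)
    for _ in range(steps):
        cand = dict(best)
        block = rng.choice(list(cand))
        cand[block] = (rng.randrange(grid), rng.randrange(grid))
        r = reward(cand, nets)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

# Three connected blocks scattered on an 8x8 grid.
nets = [("cpu", "cache"), ("cache", "mem")]
start = {"cpu": (0, 0), "cache": (7, 7), "mem": (0, 7)}
final, score = improve(start, nets)
print(score)  # higher (less negative) than the starting reward of -21
```

The real system replaces the random-move heuristic with a learned policy network, and the reward combines wirelength, congestion, and density rather than wirelength alone, but the feedback structure is the same: score a layout, then use the score to steer the next proposal.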
Designing floor plans, or the optimal layouts for a chip’s sub-systems, takes intense human effort. Yet floorplanning is similar to an elaborate game: it has rules, patterns, and logic. In fact, just like chess or Go, it’s an ideal task for machine learning. Machines, after all, don’t carry the same constraints or in-built conditions that humans do; they follow logic, not preconceptions of what a chip should look like. And this has allowed AI to optimise the latest chips in a way we never could.
As a result, AI-generated layouts look quite different to what a human would design. Instead of being neat and ordered, they look slightly more haphazard. Blurred photos of the carefully guarded chip designs show a slightly more chaotic wiring layout—but no one is questioning its efficiency. In fact, Google is starting to evaluate how it could use AI in architecture exploration and other cognitively intense tasks.
Major Implications for the Semiconductor Sector
Part of what’s impressive about Google’s breakthrough is that it could throw Moore’s Law, the axiom that the number of transistors on a chip doubles roughly every two years, out the window. The physical difficulty of squeezing more CPUs, GPUs, and memory onto a tiny silicon die will still exist, but AI optimisation may help speed up chip performance.
Any chance that AI can help speed up current chip production is welcome news. Though the U.S. Senate recently passed a US$52bn bill to supercharge domestic semiconductor supply chains, its largest tech firms remain far behind. According to Holger Mueller, principal analyst at Constellation Research, “the faster and cheaper AI will win in business and government, including with the military”.
All in all, AI chip optimisation could allow Google to pull ahead of its competitors such as AWS and Microsoft. And if we can speed up workflows, design better chips, and use humans to solve more complex, fluid, wicked problems, that’s a win—for the tech world and for society.