What 2018 holds for AI and deep learning
2018 is set to be an exciting year for businesses seeking to harness the power of deep learning on their journey towards the intelligent enterprise. We have taken a look at some of the challenges still to overcome, alongside predictions from experts in the field, who envision the technology becoming more practical and useful: automating some jobs, augmenting many others, and combining machine learning with big data to deliver fresh, actionable insights.
A deep learning system is, in short, a multi-layered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images of human faces, it learns to recognise them through a hierarchy of simpler building blocks: straight and curved lines at the lowest level; then eyes, mouths, and noses; then entire faces; and finally, specific faces.
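The layered structure described above can be sketched in a few lines of code. The following is a toy, untrained example (the layer sizes and random weights are illustrative assumptions, not a real face-recognition model): each matrix maps one level of the hierarchy to the next, from raw pixels up to candidate identities.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0.0, x)

# Toy three-layer stack. After training, each layer would respond to
# progressively more abstract features: edges, then face parts, then faces.
# The random weights here are placeholders; only the shapes matter.
layers = [
    rng.standard_normal((64 * 64, 256)) * 0.01,  # pixels -> edge-like features
    rng.standard_normal((256, 64)) * 0.01,       # edges  -> part-like features
    rng.standard_normal((64, 10)) * 0.01,        # parts  -> candidate faces
]

def forward(image):
    """Run one 64x64 grayscale image through the layer stack."""
    h = image.reshape(-1)  # flatten to a vector of 4096 pixel values
    for w in layers:
        h = relu(h @ w)
    return h  # one score per candidate identity

scores = forward(rng.random((64, 64)))
print(scores.shape)  # (10,)
```

Training (not shown) would adjust the weight matrices so that each layer's outputs come to encode the building blocks the article describes.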
Besides image recognition, deep learning offers the potential to approach complex challenges such as speech comprehension, human-machine conversation, language translation, and vehicle navigation, amongst others. How can we expect this technology to be implemented in the coming year?
Demystifying Neural Nets
“Deep neural networks, which mimic the human brain, have demonstrated their ability to ‘learn’ from image, audio, and text data,” says Anand Rao, Innovation Lead in PwC’s Analytics Group. However, he notes that even after more than a decade of use, there is still much we do not understand about deep neural networks, such as how they learn and why they perform so well.
“That may be changing, thanks to a new theory that applies the principle of an information bottleneck to deep learning,” he says. “In essence, it suggests that after an initial fitting phase, a deep neural network will ‘forget’ and compress noisy data – that is, data sets containing a lot of additional meaningless information – while still preserving information about what the data represents. Understanding precisely how deep learning works enables its greater development and use. For example, it can yield insights into optimal network design and architecture choices, while providing increased transparency for safety-critical or regulatory applications.”
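The theory Rao describes, due to Naftali Tishby and colleagues, frames learning as an information-bottleneck trade-off. A compact (and slightly simplified) statement of the objective is:

```latex
% T: the network's internal representation; X: input data; Y: target labels.
% Minimising I(X;T) compresses away input noise ("forgetting"), while the
% -\beta I(T;Y) term rewards preserving what the data represents;
% \beta controls the trade-off between the two.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

On this view, the initial fitting phase increases I(T;Y), and the later compression phase reduces I(X;T), matching the two-stage behaviour described above.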
What, then, can we look out for in 2018? Rao says: “Expect to see more results from the exploration of this theory applied to other types of deep neural networks and deep neural network design.”
Wider application of AI in business
Markus Noga, Head of Machine Learning at SAP, is confident that businesses will be able to develop disruptive business models as machine learning algorithms mature. He explains: “They will force whole industries to realise that digital transformation is not just a trend, but essential to remain competitive. Meanwhile, deep learning is now established as the standard machine learning commodity, but will strive for more efficiency and scalability within the systems.”
For this reason, we can expect a lot in the coming year. Noga continues: “We can await further breakthroughs in reinforcement learning and will see academia further adjust to industrial research to ensure their competitiveness.”
Neural networks on a smartphone
Robinson Piramuthu, Chief Scientist for Computer Vision at eBay, is confident that smartphone applications will very soon be capable of running deep neural networks, bringing AI to handsets at scale. He adds: “Friendly robots will start to emerge as more affordable and rise as the new platform at home. They will start to bridge vision, language and speech in such a way that the users will not be conscious about the difference between these communication modalities.”
Adaptation of technology
With AI here to stay, must technology adapt or die? Nicola Morini Bianzino, Managing Director of AI and Growth & Strategy Lead of Technology at Accenture, seems to think so. He feels organisations have no choice but to get on board and the only question remaining is how to do so.
“AI is going to affect 25% of technology spend going forward. The key topic is how organisations and the human workforce will cope with the changes that AI technologies will bring.”
Artificial intelligence from the lab to the bedside
In the healthcare industry, AI is set to have a more direct impact on the patient. Safwan Halabi, Medical Director of Radiology Informatics at Stanford Children’s Health, Lucile Packard Children’s Hospital, sees AI moving from the research lab right to the patient’s bedside. Halabi explains: “AI in imaging is reaching the peak of the ‘hype curve,’ and we will begin to see AI-enabled tools translate from the research lab to the radiologist workstation and ultimately the patient bedside. The not-so-glamorous use cases (for example, workflow tools, quality/safety, patient triage, etc.) for AI evaluation and implementation will start grabbing the attention of developers, insurance companies, healthcare organisations and institutions.”
However, Halabi is aware of the challenges involved, especially from a regulation perspective. “The FDA (US Food and Drug Administration) will need to find efficient and streamlined methodologies to vet and approve algorithms that will be used to screen, detect and diagnose disease.”
Smarter personal assistants
Already we are using personal assistants with some level of artificial intelligence to help us with daily tasks, and Alejandro Troccoli, Senior Research Scientist at NVIDIA, feels these tools will only become more prevalent and developed.
“Personal assistant AIs will keep getting smarter,” he says. “As our personal assistants learn more about our daily routines, I can imagine the day I need not worry about preparing dinner. My AI knows what I like, what I have in my pantry, which days of the week I like to cook at home, and makes sure that when I get back from work all my groceries are waiting at my doorstep, ready for me to prepare that delicious meal I had been craving.”
Less reliance on cards
Biometrics could also see credit cards and driving licences become archaic methods of identification and payment. This is something Georges Nahon, CEO of Orange Silicon Valley and President of its global research lab, the Orange Institute, sees as a viable part of our near future.
“Thanks to AI, the face will be the new credit card, the new driver’s licence and the new barcode,” he explains. “Facial recognition is already completely transforming security with biometric capabilities being adopted, and seeing how tech and retail are merging, like Amazon is with Whole Foods, I can see a near future where people will no longer need to stand in line at the store.”
Rise of the chatbots
“When it comes to artificial intelligence in 2018, companies will begin to hire individuals who can properly analyse algorithms,” says Timo Elliott, ‘Innovation Evangelist’ at SAP. “We will call these people ‘algorithm whisperers’,” he continues. “Chatbots will be assisting everyone – from being incorporated into mobile phones, to the bricks-and-mortar shopping experience. In the future, all products, services, and business processes will be self-improving.”
Despite the hopes harboured for the game-changing potential of deep learning, Michel Morvan, Co-founder and CEO of infrastructure management specialist Cosmo Tech, offers words of warning for 2018: “The AI debate shifts from ‘is it good or evil’ to ‘is it ever going to be good enough’.
“If 2017 was the year warnings from Elon Musk and Stephen Hawking about the potential evil from AI clashed with predictions from Mark Zuckerberg and Bill Gates on its potential good, 2018 will be the year when the debate shifts to its practical utility. Much like other technologies that were lauded for their world-changing potential and then fizzled as the fog of the hype cleared, early adopters will find themselves disappointed by AI’s obvious limits. The broader public – familiar with Alexa, Siri and Google Home – will be similarly disillusioned as the experts acknowledge that there is only so much AI will be able to do, and for really complex problems, a new paradigm will be needed.”
1993 – Founding
Jensen Huang from AMD, and Chris Malachowsky and Curtis Priem from Sun Microsystems, saw a market to improve graphics performance with dedicated hardware. They sensed that computer games would become a huge market and set out with $40,000 to found Nvidia.
1993 – Funding
Having named the company after a file-naming system they had devised, the trio needed funding, which came in the shape of a $20 million venture capital round led by Sequoia Capital.
1998 – Breakthrough
Nvidia had some early success, but its breakthrough came with the introduction of the RIVA TNT graphics adapter. The following year, the company released the GeForce 256, which featured on-board transform and lighting. The GeForce comfortably outperformed its competitors.
2000s – Success
Nvidia won the contract to develop graphics hardware for Microsoft’s Xbox and would go on to provide similar services to Sony for the PlayStation 3. A slew of acquisitions and awards made Nvidia a household name in graphics.
2020 – Cambridge-1
The benefits of using the awesome power of graphics hardware to process other kinds of data were not lost on Nvidia, which announced plans to build Cambridge-1, the UK’s most powerful computer. The company’s future in AI hardware development looks secure.