AI and the expectation gap

By Frank Palermo

The history of artificial intelligence has always been characterised by an ‘expectation gap’ – the gulf between what we’ve expected of it and what it’s capable of delivering. This has hindered AI’s overall progress as scepticism has set in, leading to cuts in funding and lower adoption of the technology.

AI isn’t new in itself – the idea goes back to Alan Turing’s famous test, which judged a machine’s intelligence by its ability to pass as human. Since then, we have consistently gauged technology against our own humanity – think of how we’ve romanticised friendly robots in the Star Wars films, or feared their menacing counterparts in the likes of The Terminator.

But as a society, we’ve never had a way of benchmarking AI’s potential capabilities against its reality. Yet if we want to avoid the scepticism that has surrounded it and make real progress, then we need one, and fast.

Winter of discontent

AI’s history has followed a ‘boom and bust’ pattern – hype cycles that end in disappointment, or even outright criticism of the technology, and see its funding dry up. These downturns are known as ‘AI winters’.

The first dates back to the 1950s, when a machine translated several sentences from Russian into English as part of the Georgetown experiment. Thrilled with this achievement, observers quickly came to expect that commercial translation services built on this model would be available within a few short years. Yet after a decade of research failed to produce results, the funding froze. Indeed, it wasn’t until Google launched its Neural Machine Translation system in 2016 that those initial expectations were realised.

Since the Georgetown experiment, we’ve seen two more major AI winters, both caused by underestimating the difficulty of building intelligent machines and by misunderstanding the limits of the technology available at the time. Essentially, this created a mismatch between expectation and reality.


An autonomous future

Fast forward to 2018, and the risk of another AI winter has cropped up again. Earlier this year, a fatal crash involving an autonomous car in California raised serious questions about the technology’s safety. While this was undoubtedly a tragedy, the car’s on-board autopilot did give the driver visual and audio indicators to help avoid the concrete divider, and its software includes built-in warnings that sound whenever a driver’s hands leave the wheel for six seconds or more.

On that basis, what should our reaction be when a driver hasn’t followed critical safety guidelines? Were a driver to die because they hadn’t worn a seatbelt, would we blame the car manufacturer? And if one driver jumps a red light, or makes some other fatal error, should we ban all vehicles?

It’s vital that drivers understand the capabilities of the systems in autonomous vehicles. The SAE International standard J3016 defines six levels of automation, allowing manufacturers, suppliers and authorities to judge a system’s overall sophistication. Most of today’s autonomous vehicles sit at level two, ‘partial automation’, at which manufacturers still expect drivers to keep their hands on the wheel.

Yet there’s quite a jump to level three – ‘conditional automation’, where the car looks after most of the actual driving, including monitoring its environment. Whenever the system comes up against a scenario it can’t manage, it prompts the driver to intervene. Audi has released the world’s first car that fits in this category; importantly, it has confirmed that it assumes full responsibility in the event of an accident.
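To make the jump between these levels concrete, here is a minimal sketch in Python that models the six J3016 levels as a simple data type. The enum labels are paraphrases of the standard’s terminology, and the driver_must_supervise helper is a hypothetical illustration rather than anything defined by SAE:

    from enum import IntEnum

    class SAELevel(IntEnum):
        """SAE J3016 driving-automation levels (labels paraphrased)."""
        NO_AUTOMATION = 0           # human performs all driving tasks
        DRIVER_ASSISTANCE = 1       # one assist feature, e.g. adaptive cruise control
        PARTIAL_AUTOMATION = 2      # combined steering/speed assist; hands stay on the wheel
        CONDITIONAL_AUTOMATION = 3  # car drives and monitors, but the human must take over on request
        HIGH_AUTOMATION = 4         # no human fallback needed within a defined operating domain
        FULL_AUTOMATION = 5         # no human driver needed under any conditions

    def driver_must_supervise(level: SAELevel) -> bool:
        """At levels 0-2, the human remains responsible for monitoring the road."""
        return level <= SAELevel.PARTIAL_AUTOMATION

On this reading, a level-two vehicle still returns True – the driver remains part of the control loop, however advanced the ‘autopilot’ branding may sound.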

While progress is being made, then, it’s clear that we’re still some way off full autonomy. The car industry is now having to manage expectations – Ford, for example, has had to scale back its prediction of a level-four vehicle by 2021, one with no accelerator, no steering wheel, and no need for a passenger ever to take control.

Collaboration

All of this shows that car manufacturers need to be clearer about the current capabilities of autonomous vehicles, and about how quickly we’ll really start to see vehicles at levels three and four of the J3016 standard. The industry needs to be careful not to feed the hype cycles that invariably lead to an expectation gap and, by extension, another AI winter. Importantly, it needs to stop implying that, in this case, humans are the enemy of technological advancement.

Instead, the industry should consider the idea of parallel autonomy, in which the technology acts as a guardian angel, stepping in to prevent human drivers from having accidents. As Ford’s recent climbdown suggests, full autonomy may not be a realistic short-term goal, and R&D budgets might be better spent on assistive technologies.

Frank Palermo, Senior Vice President, Technical Solutions Group, Virtusa
