May 17, 2020

AI and the expectation gap

The history of artificial intelligence has always been characterised by an ‘expectation gap’ – the gulf between what we’ve expected of it and what it’s capable of delivering. This has hindered AI’s overall progress as scepticism has set in, leading to cuts in funding and lower adoption of the technology.

AI isn’t necessarily new – ideas around it go back to Alan Turing’s famous test of whether a machine’s behaviour could pass for a human’s. Since then, we have consistently judged the technology by its humanity – think of how we’ve romanticised friendly robots in the Star Wars films, or feared their menacing counterparts in The Terminator.

But as a society, we’ve never had a way of benchmarking AI’s potential capabilities against its reality. Yet if we want to avoid the scepticism that has surrounded it and make real progress, then we need one, and fast.

Winter of discontent

AI’s history can be seen as ‘boom and bust’ – initial hype cycles that lead to eventual disappointment, or even criticism of the technology that sees funding for it dry up. This is what’s known as an ‘AI winter’.

The first dates as far back as the 1950s, when an intelligent machine translated several sentences from Russian into English as part of the Georgetown experiment. Thrilled with this achievement, observers quickly began to expect that commercial translation services built on this model would be available within a few short years. Yet after a decade of research failed to produce results, the funding froze. Indeed, it wasn’t until Google launched its Neural Machine Translation service in 2016 that those initial expectations were realised.

Since the Georgetown experiment, we’ve seen two more major AI winters, both of which were caused by underestimating the difficulties in building intelligent machines, and by a misunderstanding of the limits of the technology that was available then. Essentially, this created a mismatch between expectation and reality.

An autonomous future

Fast forward to 2018, and the risk of another AI winter cropped up again. That year, a fatal crash involving a semi-autonomous car in California raised serious questions about the safety of self-driving technology. While this was undoubtedly a tragedy, the car’s on-board autopilot did give the driver visual and audio warnings to help avoid the concrete divider, and its software included built-in alerts that sound whenever the driver’s hands leave the wheel for six seconds or more.

On that basis, what should our reaction be when a driver hasn’t followed critical safety guidelines? Were a driver to die because they weren’t wearing a seatbelt, would we blame the car manufacturer? And if one driver runs a red light, or makes any other fatal error, should we ban all vehicles?

It’s vital that drivers understand the system capabilities that autonomous vehicles have. According to the SAE International standard J3016, there are six levels of automation for manufacturers, suppliers, and authorities to judge a system’s overall sophistication. Most of today’s autonomous vehicles sit at level two, or ‘partial automation’, in which manufacturers expect that drivers will need to keep their hands on the wheel.

Yet there’s quite a jump to level three – ‘conditional automation’, where the car looks after most of the actual driving, including monitoring its environment. Whenever the system comes up against a scenario it can’t manage, it prompts the driver to intervene. Audi has released the world’s first car that fits in this category; importantly, it has confirmed that it assumes full responsibility in the event of an accident.
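To make the gap between these levels concrete, here is a minimal, purely illustrative sketch – not drawn from any manufacturer’s software or the standard’s own text – of the J3016 taxonomy and the supervision question that separates level two from level three:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE J3016 levels of driving automation (0-5)."""
    NO_AUTOMATION = 0           # the human does all of the driving
    DRIVER_ASSISTANCE = 1       # a single assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering and speed combined, but the driver supervises
    CONDITIONAL_AUTOMATION = 3  # the system drives and monitors, the driver takes over on request
    HIGH_AUTOMATION = 4         # no driver input needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

def driver_must_supervise(level: SAELevel) -> bool:
    """Up to level 2 the human remains the fallback at all times;
    from level 3 the system itself handles the driving task."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))      # True: hands on the wheel
print(driver_must_supervise(SAELevel.CONDITIONAL_AUTOMATION))  # False: intervene only when prompted
```

The step from True to False in that last check is exactly where responsibility shifts from driver to system – which is why Audi’s acceptance of liability at level three matters so much.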

While progress is therefore being made, it’s clear that we’re still some way off full autonomy. The car industry is now having to manage expectations – Ford, for example, has had to scale back from its prediction of a level four vehicle by 2021, which would feature no accelerator, no steering wheel, and no need for the passenger to ever take control.

Collaboration

All of this shows that car manufacturers need to be clearer on the current capabilities of autonomous vehicles, and how quickly we’ll really start to see vehicles at levels three or four of the J3016 standard. The car industry needs to be careful not to encourage the hype cycles that invariably lead to an expectation gap, and by extension, another AI winter. Importantly, it needs to stop implying that humans are the enemy of technological advancement in this case.

Instead, the industry should consider the idea of parallel autonomy, where the role of technology is akin to a guardian angel, preventing human drivers from having accidents. As Ford’s recent climbdown suggests, perhaps full autonomy isn’t a realistic short-term goal, and R&D budgets might be better served going into assistive technologies.
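As a purely hypothetical sketch of what parallel autonomy means in practice – the thresholds and inputs here are invented for illustration, not taken from any real vehicle – the system stays out of the way until a crash looks imminent:

```python
def guardian_angel(driver_throttle: float, driver_steering: float,
                   time_to_collision_s: float) -> tuple[float, float]:
    """Parallel-autonomy sketch: the human drives, and the system only
    overrides when a collision looks imminent. A simple time-to-collision
    check stands in for real perception and planning."""
    EMERGENCY_TTC_S = 1.5  # illustrative threshold, not a real calibration
    if time_to_collision_s < EMERGENCY_TTC_S:
        return -1.0, driver_steering   # apply full braking, keep the driver's line
    return driver_throttle, driver_steering

# Normal driving: the driver's commands pass through untouched.
print(guardian_angel(0.4, 0.1, time_to_collision_s=8.0))   # (0.4, 0.1)
# Obstacle ahead: the system brakes on the driver's behalf.
print(guardian_angel(0.4, 0.1, time_to_collision_s=0.9))   # (-1.0, 0.1)
```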

Frank Palermo, Senior Vice President, Technical Solutions Group, Virtusa

Jul 14, 2021

Discord buys Sentropy to fight against hate and abuse online

Sentropy is joining Discord to continue fighting against hate and abuse on the internet

Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abusive behaviour, then offers users a way to block problematic people and filter out messages they don’t want to see.
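Sentropy has not published its internals, but as a purely illustrative sketch, the user-facing behaviour described above – block lists plus message filtering – might look something like this, with a toy heuristic standing in for the company’s actual machine-learning classifier:

```python
from dataclasses import dataclass, field

@dataclass
class ModerationFilter:
    """Toy example: drop messages from blocked authors, and drop messages
    whose toxicity score crosses a threshold."""
    blocked_users: set = field(default_factory=set)
    threshold: float = 0.8

    def toxicity(self, text: str) -> float:
        # Placeholder heuristic; a real system would call a trained model.
        flagged = {"idiot", "loser"}
        hits = sum(word.strip(".,!?").lower() in flagged for word in text.split())
        return min(1.0, hits / 2)

    def allow(self, author: str, text: str) -> bool:
        if author in self.blocked_users:
            return False
        return self.toxicity(text) < self.threshold

f = ModerationFilter(blocked_users={"troll42"})
print(f.allow("troll42", "hello"))            # False: author is blocked
print(f.allow("friend", "you idiot loser"))   # False: scores above the threshold
print(f.allow("friend", "good game!"))        # True
```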

First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord already takes a “multilevel” approach to moderation, and as of May 2020 its Trust and Safety (T&S) team – dedicated to protecting users and shaping content moderation policies – made up 15% of its workforce.

“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”

Cleanse platforms of online harassment and abuse

Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”

Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.

“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.
