AI can change retail forever – if we can trust it...
As all good retailers know, the key to selling is understanding. It’s all about how much brands know about their customers and how they use this information to meet existing and future needs.
Typically, the biggest barrier to understanding has been scale. Sure, in the good old days it was easy enough for a one-man convenience store to remember individual customer preferences – how many children a customer had, for example, or how old they were. But for a 50-store chain, that approach no longer works.
Technology provides a fix, but as anyone who has been followed around the internet by banner ads for cheap lawn mowers will attest, broad-brush demographic data often provides as much insight into purchase intention as a random guess (unless you’re looking for cheap lawn mowers, that is).
Enter artificial intelligence. Provided it is fed enough relevant input, machine learning can pool vast amounts of data – transaction history, website and in-store behavior, customer feedback – and combine it with external resources. It can thus make highly accurate predictions about what a shopper may do next, way beyond what the previous generation of algorithms could come up with.
Increasingly, much of this data is collected on the frontline, by the machines themselves. As conversational commerce accelerates, and the use of chatbots becomes more widespread (itself driven by AI), companies are beginning to ask questions not only about automation technology, but also about how human employees and human customers interact with it.
The biggest challenge is trust. As we connect ourselves more meaningfully to the devices around us and the AI behind them, the credibility of machine-driven decision-making comes under increasing scrutiny. For consumers, access to more data often just means more questions. Is your home really at the optimum temperature? Is that really the best deal for holiday insurance? Has your resting heart rate really risen 20 percent?
As consumers, we need to know that the decisions reached on our behalf by machine intelligence are trustworthy. As employees working alongside machine intelligence, we need to know that it can guide us to the right decision – in other words, that it truly augments our own experience and expertise.
As customers - and employees - interact more frequently with machine intelligence, behavioural preferences start to emerge. Customers tend to prefer knowing immediately that the agent with which they are interacting is virtual. What they don’t want is a system that stalls or obfuscates in order to mask its AI identity, especially when it can’t find an immediate solution to a customer problem. Transparency doesn’t necessarily build trust, but it sure does help prevent its disintegration.
Tone is critical, too. Taco Bell’s TacoBot is a chatbot that takes customer orders on a messaging platform. The bot has been programmed with a fun personality, answering questions and dealing with problems with wit and patience. Of course, TacoBot’s chatty sales patter would feel quite out of place in the midst of a complaint resolution. Irritated customers don’t respond well to faux-friendliness. At all. Offering customers an easy-to-access feedback option can help redress the balance, as well as giving them a fast route to a human-oriented solution.
Context is also crucial. The RBS chatbot, built using IBM’s Watson cognitive toolset, has been designed for simple tasks and retains a sober, functional tone, which – according to its makers – may in the future change, for example to accommodate customers that become frustrated during the conversation.
More specialized chatbots – designed with a limited vocabulary to work on specific subject matter – may also appear more trustworthy than generalist machines, and therefore more desirable, simply because in a narrow field of activity their success rate will be higher. Customers’ expectation levels rise when machine intelligence delivers good service, but they also drop, along with trust, when an activity fails.
Long-term trust can only be built through conversational design. That means finding a way to explain decisions rather than simply expecting customers (and employees) to be amenable. When a system tells us on what metrics it has based a decision, preferably in simple, natural language – and what confidence it has in being right (expressed through a percentage, for example) – it will feel much more natural to transact with it.
Such openness won’t always be necessary with a simple recommender algorithm, for example “customers who bought this also bought this”. But for more complex consumer recommendations, for example “your weekly shop could be healthier”, or “these are the right skis for you”, more transparency over how the decision was reached will always be helpful.
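As an illustrative sketch only – the article describes no specific implementation, and every product name, metric, and number below is invented – a recommendation that explains its own reasoning and states its confidence in plain language might look something like this:

```python
# Hypothetical sketch: surfacing a recommendation together with the
# metrics it was based on and a confidence score, rendered as a short
# natural-language explanation. All names and values are invented.

def explain_recommendation(item, metrics, confidence):
    """Return a plain-language explanation of a recommendation."""
    reasons = ", ".join(f"{name} ({value})" for name, value in metrics.items())
    return (f"We suggest: {item}. "
            f"Based on: {reasons}. "
            f"Confidence: {confidence:.0%}.")

message = explain_recommendation(
    "all-mountain skis, 170 cm",
    {"self-reported skill level": "intermediate",
     "height on file": "178 cm",
     "past rentals": "3 all-mountain pairs"},
    0.82,
)
print(message)
```

The point is less the mechanics than the contract: the system names its inputs and quantifies its certainty, rather than presenting the decision as an oracle’s verdict.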
As we contemplate such disruptive change, we shouldn’t forget that the scale of technological transformation is often outweighed by the change management required to make it happen. In call centres, it may well be the case that machine intelligence - faster at retrieving accurate information and trained to detect human emotion - is more proficient at dispute resolution than human operatives. With the overall standard of service raised, the most successful organizations will be those that find different, better roles for humans to play – and provide the tools and training to ensure the transition is successful.
In the world of retail - where human employees can be assisted by machines with real-time, contextual insights at every step of the customer journey - AI will provide a layer of augmented intelligence that goes right to the heart of the retail experience. By guiding interactions, AI will help humans to focus on anticipating customers’ needs, building conversational relationships and deriving entirely new revenue streams, for example from additional products and services.
In effect, the machines can do the grunt work, while humans get back to the business of understanding. Sounds like a really good deal.
Ron Tolido is Global CTO for Capgemini’s Insights & Data organisation
Discord buys Sentropy to fight hate and abuse online
Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently takes a “multilevel” approach to moderation; as of May 2020, its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of the company’s workforce.
“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”
Cleanse platforms of online harassment and abuse
Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”
Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.
“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.