Understanding the risks of adopting AI
There is a new geometry to risk
In today’s network era, interconnectedness is growing. The world’s virtual networks around trading platforms, communication hubs and the internet are increasingly connected to physical networks such as power grids, telecommunications networks and other underlying infrastructure. The word ‘hyperconnectivity’ has now entered the technical language of engineers, economists and risk managers, highlighting our dependence on being always connected. Shocks to these virtual and physical networks can take many forms: earthquakes, business or sovereign defaults, liquidity crises, cyber-security attacks, even solar storms. The interconnected nature of these networks increases the chances of cascades – shocks trigger other shocks, affecting supply chains, customers, investors and counterparties elsewhere. The impact of one of these shocks today is more widespread and costly than it would have been a decade ago. These are alarming systemic risks.
There is a shocking cost to catastrophic failure. Firms suffering serious disruptions see average stock returns roughly 40 percent below those of their peers. Shareholders lose an average of 10 percent of their stock value at the announcement of a disruption, and equity risk rises by around 14 percent in the following year.
They live amongst us
Artificial intelligence (AI) has been around in different forms for decades, from cybernetics and early neural networks in the 1950s, through natural language processing in the 60s, robotics in the 70s, expert systems in the 80s, intelligent agents in the 90s and deep learning and general AI in the 2000s. There is already a lot more AI around than most people realise, and successive generations of AI have found their way into the physical and information technology systems of the power, telecommunications and water grids, the SCADA controls of oil, gas and transport infrastructures, the trading systems in financial and commodity markets and the operational systems of factories. For the last five years or so, the internet of things has brought more and smarter AI into the ‘clouds’, ‘fogs’ and their ‘edges’. And, of course, our homes. We have communities, populations and generations of AI living amongst us, quietly deciding and acting on the increasing volumes of data our instrumented society produces.
The Catch-22 of Artificial Intelligence
AI is swiftly becoming a foundational technology. Organizations are increasingly using AI, with automation, to predict and exploit market opportunities (for example, in financial services and retail) and to increase autonomy and operational performance (in factories and transport). We are already on the second generation of ‘AI-rich’ highly scalable central systems and decentralised agents.
But AI could also destabilize markets and supply chains, particularly those with highly integrated, tightly coupled networks. The increasing use of ever-more powerful AI creates three ‘catch-22’ situations, which could combine to create alarming systemic instability:
- Improvements could result in less value for all, particularly in the finance, energy and insurance industries. There are two effects. First, better discovery of opportunities through AI leads to less opportunity overall – algorithms illuminate more opportunities in an area of the market, which in turn attracts attention so that all opportunities are consumed. Second, more accurate AI could lead to the market becoming smaller, because the increased accuracy of analysis undermines risk pooling. For example, in insurance it could raise some risk premiums, which reduces the number of buyers.
- Smarter AI might lead to more risk. The cleverer the algorithm, the more opaque its reasoning, which could obscure wider unintended impacts. If AI is trained on periods of unrepresentative behaviour, such as periods of low volatility or low load, its actions may increase risk. For example, if a number of organisations are using opaque credit scoring models with similar training data, a shared blind spot could propagate losses across the whole market at once.
- AI supplier concentrations can create single points of failure. The more mature, usable AI software and AI-as-a-Service tends to be offered by a few large technology firms, with the potential for natural monopolies. There could be stability risks if these technology firms have too large a market share, and systemic effects if these large firms were to face a major disruption. What if the AI from Google, Microsoft or Amazon had unforeseen, hackable vulnerabilities that a bad actor could exploit?
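The risk-pooling effect in the first catch-22 can be made concrete with a toy calculation. The sketch below uses entirely invented numbers (group sizes, expected losses, willingness to pay, and the 30% pricing loading are all hypothetical) to show how a perfectly accurate risk model can price the highest-risk buyers out of the market, shrinking it relative to blended, pooled pricing:

```python
# Illustrative sketch (hypothetical numbers): how sharper AI-driven risk
# segmentation can shrink an insurance market by undermining risk pooling.

groups = {
    "low_risk":  {"count": 900, "expected_loss": 100.0, "willing_to_pay": 250.0},
    "high_risk": {"count": 100, "expected_loss": 900.0, "willing_to_pay": 1000.0},
}

def pooled_premium(groups):
    """One blended premium for everyone: the population-average expected loss."""
    total_loss = sum(g["count"] * g["expected_loss"] for g in groups.values())
    total_count = sum(g["count"] for g in groups.values())
    return total_loss / total_count

def market_size(groups, premium_for_group):
    """Buyers purchase only if their premium is within their willingness to pay."""
    return sum(
        g["count"]
        for g in groups.values()
        if premium_for_group(g) <= g["willing_to_pay"]
    )

# Pooled pricing: everyone pays the same blended premium.
blended = pooled_premium(groups)                                  # 180.0
pooled_buyers = market_size(groups, lambda g: blended)            # 1000

# Perfectly segmented pricing (a very accurate model): each group pays
# its own expected loss plus a 30% loading. High-risk buyers now face a
# premium above what they will pay, and exit the market.
segmented_buyers = market_size(groups, lambda g: g["expected_loss"] * 1.3)

print(f"pooled market: {pooled_buyers} buyers, segmented market: {segmented_buyers} buyers")
```

Under pooling, all 1,000 buyers stay in; under perfect segmentation, the 100 high-risk buyers are priced out and the market shrinks, even though the model is strictly more accurate.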
In the world of black swans, where shocks are increasingly hard to predict, it makes sense to understand and mitigate these paradoxes of using AI in hyperconnected networks and markets. Part of the answer is for regulators to map these interconnected networks, track concentrations of AI and introduce tests that an organisation must pass before using AI in specific activities. The rest of the answer lies with the risk management functions of firms themselves; surely the use of AI now merits oversight.
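One established way regulators could "track concentrations of AI" is the Herfindahl–Hirschman Index (HHI), the standard screen antitrust authorities use for market concentration. The supplier market shares below are invented for illustration; the 2,500 threshold is the level US agencies treat as a highly concentrated market:

```python
# Hypothetical sketch: screening AI-supplier concentration with the
# Herfindahl-Hirschman Index (HHI). Market shares are invented.

def hhi(shares_pct):
    """HHI: sum of squared market shares in percent. Ranges from near 0
    (perfect competition) to 10,000 (a single-supplier monopoly)."""
    return sum(share ** 2 for share in shares_pct)

# Five hypothetical AI-as-a-Service suppliers and their % market shares.
ai_service_shares = [35, 30, 20, 10, 5]

score = hhi(ai_service_shares)  # 2650
# US merger guidelines treat HHI above 2500 as highly concentrated.
verdict = "highly concentrated" if score > 2500 else "not highly concentrated"
print(score, verdict)
```

A score of 2,650 would flag this hypothetical AI-services market as concentrated enough to warrant the kind of scrutiny the paragraph above calls for.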
AI - Snatching defeat from the jaws of victory?
There is a certain irony in the unintelligent use of artificial intelligence. AI is often seen as an opportunity to disrupt or diminish the power of incumbents – a smart David could defeat dumb corporate Goliaths. But if hit by a black swan event, the paradoxes of AI’s complex nature and increasing use could cause markets and businesses to break down. The problem we face is that when everyone has AI, AI will be too big for us to allow it to fail.
Bill Murray, Senior Researcher and Advisor, Leading Edge Forum
Discord buys Sentropy to fight against hate and abuse online
Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy’s software monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.
First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently uses a “multilevel” approach to moderation, and a Trust and Safety (T&S) team dedicated to protecting users and shaping content moderation policies comprised 15% of Discord’s workforce as of May 2020.
“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”
Cleanse platforms of online harassment and abuse
Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”
Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.
“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.