Understanding the risks of adopting AI
There is a new geometry to risk
In today’s network era, interconnectedness is growing. The world’s virtual networks around trading platforms, communication hubs and the internet are increasingly connected to physical networks such as power grids, telecommunications networks and other underlying infrastructure. The word ‘hyperconnectivity’ has entered the technical language of engineers, economists and risk managers, highlighting our dependence on being always connected. Shocks to these virtual and physical networks can take many forms: earthquakes, business or sovereign defaults, liquidity crises, cyber-security attacks, even solar storms. The interconnected nature of these networks increases the chance of cascades – one shock triggers others, affecting supply chains, customers, investors and counterparties elsewhere. The impact of such a shock today is more widespread and costly than it was a decade ago. These are alarming systemic risks.
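The cascade mechanism described above can be sketched with a toy simulation (all parameters are hypothetical; this is an illustration of contagion on a random graph, not a model of any real network). A single node is shocked, failure spreads to connected neighbours, and the same shock reaches far more of the network when the network is densely connected:

```python
import random

def cascade_size(n, avg_degree, trials=500, seed=7):
    """Toy contagion model: build a random graph, fail one random node,
    and let the failure spread to every connected neighbour.
    Returns the mean fraction of nodes that end up failed across trials."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)  # edge probability giving the target mean degree
    total = 0.0
    for _ in range(trials):
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].append(j)
                    adj[j].append(i)
        # breadth-first spread from one randomly shocked node
        shocked = rng.randrange(n)
        failed = {shocked}
        frontier = [shocked]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in failed:
                        failed.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(failed) / n
    return total / trials

sparse = cascade_size(60, avg_degree=0.8)  # loosely coupled network
dense = cascade_size(60, avg_degree=4.0)   # hyperconnected network
```

In this sketch, the identical single shock that fades out in the sparse network engulfs most of the dense one – a crude picture of why hyperconnectivity changes the geometry of risk.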
There is a shocking cost to catastrophic failure. Firms suffering serious disruptions see average stock returns roughly 40 percent below those of their peers. Shareholders lose an average of 10 percent of their stock value at the announcement of a disruption, and equity risk increases by 14 percent in the following year.
They live amongst us
Artificial intelligence (AI) has been around in different forms for decades, from cybernetics and early neural networks in the 1950s, through natural language processing in the 60s, robotics in the 70s, expert systems in the 80s, intelligent agents in the 90s and deep learning and general AI in the 2000s. There is already a lot more AI around than most people realise, and successive generations of AI have found their way into the physical and information technology systems of the power, telecommunications and water grids, the SCADA controls of oil, gas and transport infrastructures, the trading systems in financial and commodity markets and the operational systems of factories. For the last five years or so, the internet of things has brought more and smarter AI into the ‘clouds’, ‘fogs’ and their ‘edges’. And, of course, our homes. We have communities, populations and generations of AI living amongst us, quietly deciding and acting on the increasing volumes of data our instrumented society produces.
The Catch-22 of Artificial Intelligence
AI is swiftly becoming a foundational technology. Organizations are increasingly using AI, with automation, to predict and exploit market opportunities (for example, in financial services and retail) and to increase autonomy and operational performance (in factories and transport). We are already on the second generation of ‘AI-rich’, highly scalable central systems and decentralised agents.
But AI could also destabilize markets and supply chains, particularly those with highly integrated, tightly coupled networks. The increasing use of ever-more powerful AI creates three ‘catch-22’ situations, which could combine to create alarming systemic instability:
- Improvements could result in less value for all. (Particularly in the finance, energy and insurance industries.) There are two effects. First, better discovery of opportunities through AI leads to less opportunity overall – algorithms illuminate more opportunities in an area of the market, which attracts attention until those opportunities are consumed. Second, more accurate AI could make the market smaller, because the increased accuracy of analysis undermines risk pooling. For example, in insurance it could raise some risk premiums, which reduces the number of buyers.
- Smarter AI might lead to more risk. The cleverer the algorithm, the more opaque its reasoning, which could obscure wider unintended impacts. If AI is trained on periods of unrepresentative behaviour, such as periods of low volatility or low load, its actions may increase risk. For example, if a number of organisations use similarly opaque credit-scoring algorithms trained on the same benign period, their correlated decisions could amplify losses when conditions change.
- AI supplier concentrations can create single points of failure. The more mature, usable AI software and AI-as-a-Service tends to be offered by a few large technology firms, with the potential for natural monopolies. There could be stability risks if these technology firms have too large a market share, and systemic effects if these large firms were to face a major disruption. Imagine if the AI from Google, Microsoft or Amazon had unforeseen, hackable vulnerabilities and could be manipulated by a bad actor.
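The risk-pooling effect in the first point above can be illustrated with a toy calculation (all numbers, and the crude affordability rule, are hypothetical). Under pooled pricing everyone pays an average premium; under perfectly accurate pricing the high-risk customers face premiums they cannot afford, and the market shrinks:

```python
# Toy risk-pooling illustration (all numbers hypothetical).
# 100 customers: 80 low-risk (expected loss 100) and 20 high-risk (expected loss 600).
low_n, low_loss = 80, 100.0
high_n, high_loss = 20, 600.0
margin = 1.2      # insurer loading on expected loss
budget = 300.0    # crude affordability cap: customers buy if premium <= budget

# Pooled pricing: one premium based on the average expected loss across everyone.
avg_loss = (low_n * low_loss + high_n * high_loss) / (low_n + high_n)  # 200
pooled_premium = margin * avg_loss                                     # 240

# Perfectly accurate AI pricing: each group pays its own expected loss plus margin.
low_premium = margin * low_loss    # 120
high_premium = margin * high_loss  # 720 - priced out of the market

def buys(premium):
    """A customer buys cover only if the premium fits their budget."""
    return premium <= budget

pooled_buyers = low_n * buys(pooled_premium) + high_n * buys(pooled_premium)
accurate_buyers = low_n * buys(low_premium) + high_n * buys(high_premium)
```

With these illustrative numbers, pooling keeps all 100 customers insured, while accurate pricing drops the 20 high-risk customers and shrinks the market to 80 – the paradox of better analysis producing less value overall.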
In the world of black swans, where shocks are increasingly hard to predict, it makes sense to understand and mitigate these paradoxes of using AI in hyperconnected networks and markets. Part of the answer is for regulators to map these interconnected networks, track concentrations of AI and introduce tests that an organisation must pass before it uses AI in specific activities. The rest of the answer lies with the risk management functions of firms themselves; surely the use of AI now merits such oversight.
AI - Snatching defeat from the jaws of victory?
There is a certain irony in the unintelligent use of artificial intelligence. AI is often seen as an opportunity to disrupt or diminish the power of incumbents – a smart David could defeat dumb corporate Goliaths. But if hit by a black swan event, the paradoxes of AI’s complex nature and increasing use could cause markets and businesses to break down. The problem we face is that when everyone has AI, AI will be too big for us to allow it to fail.
Bill Murray, Senior Researcher and Advisor, Leading Edge Forum