Errol Gardner, EY Global Vice Chair – Consulting, recently told us that, while data and analytics are at the core of transformation, “businesses that graduate beyond these foundational technologies to invest in AI are the ones that reap greater rewards”.
“In an era when customers have come to expect services tailored to their needs and preferences, AI can unlock that much sought-after customer-centricity, allowing businesses not just to react to customers’ changing expectations, but also to anticipate and predict them.”
Technology Magazine’s July 2022 cover stars, digital transformation experts and Workday partners Kainos, argue that trustworthy technology is sustainable technology. In their recent report on trustworthy AI, they explore how the risks from misuse of artificial intelligence – much like the impact of humans on our planet’s climate – need to be addressed.
According to Kainos, ‘the move towards environmental sustainability has seen professionalisation, standardisation and mechanisms for disclosure – all to create confidence that the world economy can decarbonise. Artificial intelligence seems to be on a similar trajectory’.
The Belfast-based IT firm argues that governments and corporations are ‘considering ways of enforcing that the technologies are lawful, ethical and robust, so that their benefits are sustainable and can be realised in the long term.’
Campbell argues that, while international standards are coming, “we all need practical standards to follow that are available for us today to allow us to apply and implement”.
Kainos are employing their first-ever Data Ethicist to carry out this kind of work and, crucially, increase levels of trust while helping make AI explainable.
With headlines circulating about harms linked to the deployment of AI, Campbell highlights how seriously the company treats them: “We see AI as the future. It's being described as a revolution, similar to an industrial revolution.
“If we don't act today in a responsible, ethical way in terms of how we develop AI and deploy it, and help users understand what it's doing and not doing for them, then that will limit or prevent the adoption of artificial intelligence over the next few years.
“So we want to act early to mitigate and prevent some of those issues arising,” said Campbell.
Whenever you develop a system, a quality assurance process that tests and retests against the specification is important for understanding what outputs the system is giving you, and whether they match what you expect. “That's no different with an AI model or AI system,” he added.
“It’s important for inputs to any AI system to be properly understood. We call that, in a technical sense, exploratory data analysis, which looks at the statistical variance and gaps in the data set to understand what we can reasonably do with this data and what we cannot.”
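The exploratory data analysis Campbell describes — checking a data set's variance and gaps before trusting a model trained on it — can be sketched in a few lines. The records, field names and thresholds below are entirely hypothetical, and the profiling uses only Python's standard library:

```python
from statistics import pvariance

# Hypothetical records: some fields are missing (None) for certain rows.
records = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},
    {"age": 29, "income": None},
    {"age": 41, "income": 61000},
]

def profile(rows):
    """Report the missing-value rate and variance for each field.

    A high missing rate or near-zero variance signals a gap in the data:
    something the model cannot reasonably be expected to learn from.
    """
    report = {}
    for field in rows[0]:
        values = [r[field] for r in rows]
        present = [v for v in values if v is not None]
        report[field] = {
            "missing_rate": 1 - len(present) / len(values),
            "variance": pvariance(present) if len(present) > 1 else None,
        }
    return report

print(profile(records))
```

In practice this kind of profiling is usually done with dedicated tooling, but the principle is the same: quantify what the data can and cannot support before any model is built on top of it.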
Software should be lawful, ethical and technically robust
“As technology and AI are becoming part of our lives more and more, we have to ensure that people view these tools, and the companies and institutions that utilise them, as lawful, ethically adherent and technically robust,” said Benedetta Cevoli, Data Science Engineer, Speechmatics.
“AI should be honest and, consequently, trustworthy at all levels – from its inception to deployment. Creating such a system requires a deep understanding of its application in the real world and the extent to which individual differences of users, such as age, race, gender, location and language, influence their experience with the tech. Keeping these differences right at the centre of product development with a genuine drive to address these issues, before technology is disseminated throughout society, is a way to build trust with the public,” she said.
Bias takes many shapes and forms, and oftentimes we are not even aware of it, according to Cevoli: “Biases can creep into technologies in numerous ways, which is why monitoring and addressing bias is generally a hard problem to solve. Yet, this cannot be an excuse to use ‘tunnel vision’ when it comes to training and testing. AI and machine learning technologies are only as good as the datasets and the algorithms used to train them. Labelling data is extraordinarily time-consuming and therefore limiting, with the datasets created far too narrow to be entirely representative. Therefore, we cannot expect surveillance technology, for example, to be able to accurately identify cohorts of individuals never seen before”.
For trustworthy AI, Cevoli believes we need it to be “globally representative and to understand where bias will be compounded rather than reduced. Using unlabelled data has the potential to greatly reduce bias as the volume of data that a model can be trained on is increased by orders of magnitude. Testing beyond our own communities is paramount to designing technology that does not intentionally or accidentally favour certain groups in society over others. A wider testing culture is essential to addressing and mitigating bias”.
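The "wider testing culture" Cevoli calls for often starts with something simple: breaking a model's accuracy down by demographic or linguistic group rather than reporting one aggregate number. The sketch below is illustrative only — the group labels, samples and predictions are invented, not Speechmatics data:

```python
from collections import defaultdict

# Hypothetical evaluation set: each item carries a group label (here,
# an English locale), the expected output and the model's prediction.
samples = [
    {"group": "en-GB", "true": "hello", "pred": "hello"},
    {"group": "en-GB", "true": "world", "pred": "world"},
    {"group": "en-NG", "true": "hello", "pred": "hallo"},
    {"group": "en-NG", "true": "world", "pred": "world"},
]

def accuracy_by_group(items):
    """Compute accuracy per group; a large gap between groups flags
    a model that favours some cohorts over others."""
    hits, totals = defaultdict(int), defaultdict(int)
    for s in items:
        totals[s["group"]] += 1
        hits[s["group"]] += s["pred"] == s["true"]
    return {g: hits[g] / totals[g] for g in totals}

print(accuracy_by_group(samples))
```

An aggregate accuracy of 75% here would hide the fact that one group is served markedly worse than the other — exactly the kind of compounded bias that per-group testing is meant to surface.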
Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, part of Zoho Corporation, adds that “AI has been enabling enterprise software to move from just process automation to decision automation.
“Given how AI is automating decisions in mission-critical use cases, it's important to add some accountability to the whole system. This can be achieved by using explainable AI,” he said.
“Most modern-day AI is just a black box where the AI engine doesn't explain why it arrived at a particular decision,” argues Ramamoorthy. “But in an enterprise, usually there are processes built around decisions that involve a hierarchy of people or teams. When a decision is automated, it needs to be documented for future reference and due process has to be followed. An AI model that can explain its decision can help human beings understand and execute processes related to the decision – or even veto it, given how AI models are only 80% accurate on average,” he adds.
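The documented, vetoable decision Ramamoorthy describes can be illustrated with an inherently interpretable model: a linear score that reports each feature's contribution alongside its decision, so a reviewer can see why an alert fired and overrule it. The weights, features and threshold below are invented for illustration and do not reflect any real ManageEngine system:

```python
# Illustrative weights for a linear IT-alerting score (hypothetical).
WEIGHTS = {"cpu_load": 0.5, "error_rate": 2.0, "latency_ms": 0.01}
THRESHOLD = 1.5  # scores above this trigger an automated alert

def decide(features):
    """Return a decision plus a per-feature explanation of the score."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    score = sum(contributions.values())
    return {
        "alert": score > THRESHOLD,
        "score": round(score, 3),
        # The breakdown lets a human audit, document, or veto the call.
        "explanation": contributions,
    }

print(decide({"cpu_load": 0.9, "error_rate": 0.3, "latency_ms": 120}))
```

Real explainable-AI tooling applies the same idea to black-box models — attributing a prediction back to its inputs — but the organisational point is identical: the explanation, not just the decision, is what gets recorded and reviewed.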
ManageEngine has deployed explainable AI wherever possible, and the use of AI features has gone up by 72% since they started adding explanations to their predictions for IT automation.
EY – only 35% of companies have a process in place to evaluate AI risks such as bias and errors
Greg Cudahy is EY Global Technology, Media & Entertainment and Telecommunications (TMT) Sector Leader. He said that, to counter concerns over AI, businesses and governments alike will need to apply “robust risk management principles and governance to ensure that the impact of AI is fair and trustworthy”.
In addition, he suggested that “AI programmes and initiatives should be continuously reviewed for unintended outcomes that could have a negative impact on the business, customers, and society”.
“While deploying AI at speed will no doubt give any organisation competitive advantage, leaders will find that it is by augmenting the intelligence of their people that they will realise their full potential,” said Cudahy.
We are still “scratching the surface” when it comes to AI and its true potential, according to Errol Gardner at EY: “Its use currently is mainly confined to automating manual processes and speeding up the analysis of large data sets to help decision making.”
The true potential of AI, however, lies elsewhere in Gardner’s opinion.
“AI is about more than just augmenting and improving other technologies. It is about augmenting and improving human intelligence to enable better choices. AI will be able to do this by taking a more sophisticated approach to analysis and problem solving through spotting patterns and solutions that augment those of its human creators. The result will be a whole new way of looking at the world,” he argues.
To unlock the power, potential and promise of AI, Gardner adds that, similarly to Kainos, “businesses will need to set the right frameworks in place that will help stakeholders overcome their preconceptions around it, and fully embrace its power to transform systems, operations and services”.