Capgemini: Why do we need ethical AI?
Global consulting and research firm Capgemini has released a new report exploring the views of dozens of industry experts on the ethics of using artificial intelligence (AI). The report, Conversations, gathers responses from experts at Harvard, Oxford University, Bayer, AXA and more, who offer critical insights on the range of ethical questions that the proliferation of AI has raised.
“AI is set to radically change the way organisations manage their businesses, and is a revolutionary technology that will change the world we live in,” commented Jerome Buvat, Global Head of the Capgemini Research Institute. “The interviews with leaders and practitioners for this new report emphasised its far-reaching implications, and how there is a need to infuse ethics into the design of AI algorithms. They also placed immense importance on the need to make AI transparent, and understandable, in order to build greater trust.”
We’ve drawn together some of the report’s key findings, along with commentary from some of the leading experts consulted.
Why do we need ethical AI?
“In a system where a machine makes a decision, we want to make sure that the decision of that system is done in a way that ensures people’s confidence in that decision.” - Daniela Rus, MIT CSAIL
“We need to ensure that AI is acting in such a way that we can hold it accountable and also respond if we determine it is acting in a way that we don’t believe to be consistent with our values and/or laws.” - Ryan Budish, Harvard University
“Trust in new technologies can only be gained by providing an ethical framework for their implementation. Our goal in healthcare is not to let AI take decisions, but to help doctors make better decisions. AI has its strengths – analyzing huge amounts of data and generating insights that a human being wouldn’t have thought of before. It is able to identify certain patterns, such as radiological images, and supports the diagnosis of a doctor. AI is meant to enhance or augment the capabilities of humans.” - Saskia Steinacker, Bayer
“Algorithmic systems are amoral… they do not have a moral compass. Yet, they can make decisions that have pervasive moral consequences.” - Nicolas Economou, H5
The ethics of AI needs to be a shared responsibility.
“In the case of ethics, this is not something where responsibility lies with any particular individual in the company,” says Michael Natusch, global head of AI at Prudential Plc. “It is a shared responsibility for all of us.”
Budish agrees that even when specific roles are created, responsibility is still shared in areas such as privacy. “Everyone in an organisation has an obligation to respect the privacy of customers or to protect their data,” he says. “Certainly, organisations have created positions like chief privacy officer to help ensure that the right policies and systems are in place. But the responsibility itself lies with everyone.”
Humans therefore have a critical role to play in ensuring that, even though AI can make decisions, its lack of an inherent or learned moral conception doesn’t result in unforeseen, even dangerous, consequences.