Ethical and Responsible AI: Navigating Tech’s New Frontier

While the AI revolution holds incredible potential to drive innovation, the dramatic increase in the technology’s use also raises ethical challenges

The rapid development of AI has ushered in an era of new technological capabilities for businesses globally. Today, AI systems are augmenting and automating decision-making across fields from healthcare and education to marketing. But while this AI revolution holds incredible potential to drive innovation and improve lives, the dramatic increase in the technology’s use also raises significant ethical challenges.

As AI algorithms are given increasing autonomy and authority to make judgments and choices that impact human lives, ensuring these systems operate reliably and align with moral principles is essential. Like the humans who train them, large language models can exhibit biases and make costly mistakes if they are not built with the right values and governance.

Furthermore, integrating ethics into AI initiatives is not just a moral imperative but also a business one. Ethical AI practices can help companies avoid reputational damage, legal issues and financial losses. Consumers are also increasingly aware of and concerned about how businesses use AI – Salesforce research shows that three-quarters of people are concerned about the unethical use of the technology – and many prefer to engage with companies that demonstrate a commitment to ethical practices. Organisations must therefore act now to ensure AI is used safely.

Ensuring the ethical and responsible use of AI

For businesses and society to fully benefit from AI opportunities, proper safeguards must be put in place. Highlighting the importance of building ethics into AI development, in March 2024 the European Parliament approved the world’s first comprehensive framework to regulate AI.

The EU AI Act aims to ensure that the technology is developed and used in a way that protects fundamental human rights. It places the EU at the forefront of global attempts to address AI-associated risks in a rapidly changing digital landscape.

With more regulations such as this to come – similar acts are planned in the US, UK and China – Douglas Dick, Head of Emerging Technology Risk at KPMG, explains why organisations must act now to ensure the responsible and ethical use of AI.

While regulation is important to keep in mind, he explains that organisations should already be striving to use technology in an ethical way. “This involves having suitable governance and controls in place for using emerging technologies and mitigating potential risks,” he says. “Not implementing effective governance and control frameworks from the outset can have significant reputational, financial and operational impacts, even at the early stages of AI development.”

Organisations also need to know that any technology they are developing or using will need to be compliant when regulations come into force. Otherwise, this could represent a wasted investment and put the business at risk of being scrutinised by regulators, as well as clients, customers and the media. 

“Businesses are thinking about how AI can complement and enrich the customer journey and experience, increasing empathy and insight, while removing the mundane from people’s jobs,” he adds. “This can have a greater ethical impact on employees and the society in which the organisations operate.”

Building a dedicated AI ethics and governance team 

As AI becomes more ingrained in business operations, creating dedicated teams for AI ethics and governance has become increasingly important. While Google, Twitch and Microsoft are among the technology companies that have previously cut their ethical AI teams, such teams play a crucial role in guiding the ethical use of AI and ensuring AI practices meet both ethical standards and regulatory requirements.

“Hiring a dedicated AI ethics and governance team will be a challenge due to the general lack of AI skills; however, it would significantly benefit from the inclusion of an AI ethicist and upskilling colleagues from the get-go in the three lines of defence risk governance framework,” he says.

“The role of the team should be to continuously monitor and guide AI practices in line with regulatory requirements, define and implement ethical principles, revise the organisation's risk management strategies to account for potential issues related to data and AI, and strengthen privacy impact assessments. As the technology evolves, it is important to regularly review the responsibilities of the team.”

Cultivating an internal culture that embraces ethical AI practices

With KPMG’s 2023 CEO Outlook Survey finding that global CEOs cited ethical challenges as their number one concern in relation to the implementation of Gen AI, building a culture that values ethical AI within an organisation is also vital. As AI systems become more complex and their decisions more impactful, it’s important for every employee to understand and consider ethical aspects like fairness, transparency and privacy in their work with AI.

“Educate employees about how their jobs might change to alleviate their anxieties and start now,” Douglas advises. “By talking about the technology as your ‘new AI colleague’ it can help dispel the myth that AI will replace humans in certain roles.

“If you have the resources, hiring a dedicated person or team to train AI models and monitor for bias is incredibly valuable. Or have critical models independently evaluated if there is any danger of their output having a negative impact on the public.”




Technology Magazine is a BizClik brand

