Gen AI Ethics: Ensuring Responsible Use and Implementation
Generative AI (Gen AI) is not merely a buzzword; it's a game-changer. With its immense potential, it is reshaping how we interact, work, and comprehend our world.
From drug discovery to software development, knowledge retrieval to creative arts, this technology can achieve remarkable feats. With that in mind, it’s no surprise that generative AI is projected to boost global productivity by trillions of dollars.
According to a report by KPMG, 70% of CEOs agree that generative AI remains high on their list of priorities, with most (52%) expecting to see a return on their investment in three to five years.
But despite a willingness to push forward with their investments, ethical challenges remain among the main risks when it comes to implementing generative AI.
“When it comes to generative AI, CEOs are stuck between a rock and a hard place; they are eager to reap the benefits of the technology, yet regulatory and security concerns are holding them back from extracting the most value from it,” Ian West, Head of KPMG’s TMT Practice in the UK, said.
Ethical concerns and challenges with generative AI
As Kunal Purohit, Chief Digital Services Officer at Tech Mahindra explains, generative AI solutions and offerings are reshaping operational, functional, and strategic landscapes across industries. However, generative AI is relatively new and largely unregulated, leading to several potential misuse scenarios and ethics concerns.
“Numerous ethical concerns surround generative AI today,” he says. “These concerns include issues related to copyright or stolen data, hallucinations, inaccuracies, biases in training data, cybersecurity vulnerabilities, and environmental considerations, among others. Concerning the use of generative AI, issues like copyright, data protection and cyber vulnerability are complex. Enterprises will need to have the right governance mechanisms in place, from both a system and a process perspective.”
With 3.5 quintillion bytes of data generated daily, apprehension often arises about the use of AI models heavily reliant on user data. “Data privacy and security concerns emerge, particularly in sectors such as finance and healthcare. Personal and corporate data can inadvertently find its way into generative AI training algorithms, exposing users and organisations to potential data theft, loss, and privacy violations.”
The phenomenon of 'hallucinations' in AI, where models provide baseless or incorrect responses, also poses a unique challenge. “Furthermore, the advanced training of generative AI-powered tools allows them to convincingly manipulate humans through phishing attacks, introducing an unpredictable element to an already volatile cybersecurity landscape.”
Bias in training data and the substantial energy consumption of AI models are other ethical considerations demanding attention. “It becomes a significant ethical concern when AI is used in decision-making processes like hiring, lending, and criminal justice,” Purohit notes. “Furthermore, generative AI models consume vast amounts of energy both during training and while handling user queries. As these models continue to grow in sophistication, their environmental impact is bound to increase unless stringent regulations are enforced.”
Ethical frameworks and guidelines are essential for generative AI
Purohit underlines the need for increased focus on accountability, ethics and fake detection in generative AI. The misuse of generative AI can lead to criminal and fraudulent activities, potentially causing social unrest, he warns.
“Ecosystem players must play a pivotal role in AI governance to ensure its responsible use,” Purohit says. “Regulatory bodies have a vital role to play, and technology creators must introduce interventions to guarantee the safety, security, and suitability of technology for various applications, including addressing copyright-related aspects.”
Generative AI can be harnessed thoughtfully and effectively within organisations when leadership is committed to implementing safeguards to protect both employees and customers from potential technological hazards.
Establishing an ethical framework and guidelines that highlight precautionary measures for using generative AI is essential. “These measures can help organisations prevent harmful biases and misinformation from proliferating, safeguarding customers, their data, proprietary corporate information, the environment, and the rights of creators over their work,” Purohit explains. “Clear guidelines regarding data diversity requirements, fairness measures, and the identification of advantageous and disadvantageous datasets can ensure the consistent and smooth operation of data and delivery processes. This end-to-end traceability and accountability serve as the foundation for real-time auditing, identification, and resolution of issues.”
Furthermore, while enterprises need to train data engineers, data scientists, ML modellers, and operations personnel, it is equally crucial to educate employees on the responsible use of generative AI.
“Those implementing this technology, including companies like Tech Mahindra or other service providers, must understand their roles in safeguarding it. They possess knowledge of how this technology operates and must take steps to ensure its responsible use. For instance, if certain data should not be used for a particular purpose, they must implement technical safeguards to prevent such usage. If the generated output can be harmful or offensive, filtering mechanisms should be in place. In cases of malicious content, proactive blocking measures must be enforced.”
The rising role of the ethics officer
The growing emphasis on ethics in generative AI has led enterprises to create the role of ethics officer. “This dedicated position is responsible for ensuring compliance across all levels, encompassing people, processes, and technology,” Purohit says. “It allows companies to devote more time and effort to identifying problems and finding the best solutions.”
While designing and developing generative AI use cases, businesses should take a 'responsible-first' approach. “It is essential that they adhere to a comprehensive and structured assessment of responsible AI and follow a human-in-the-loop approach when making critical inferences and taking action.”
Purohit concludes by urging a ‘responsible-first’ approach in the development and use of generative AI. He highlights the ongoing evolution of this technology, particularly evident in the rapid advancements from ChatGPT to its successors.
“The era of generative AI is only just beginning,” he says. “Since ChatGPT was rolled out in November 2022, we’ve already seen swathes of updates and fine-tuning; just four months later, GPT-4 arrived with significantly improved capabilities. In other words, just as the full realisation of a technology’s benefits takes time, so does getting the ethical framework right.
“It’s essential to remember that while generative AI can create issues, it can also resolve them. It’s like an antidote. While generative AI can lead to cyberattacks, it can also defend against them. Technology can cause disruption, but it can also provide protection.”
Technology Magazine is a BizClik brand