A Guide to Prevent Ethics-Washing in the Tech Sector

By Inma Pérez Ruiz, Regulatory Lead in the AI Notified Body of BSI
Organisations should develop AI ethical policies grounded in existing laws and regulations
BSI’s Inma Pérez Ruiz warns against tech “ethics-washing”, advocating genuine ethical practice through greater transparency and independent auditing

Artificial Intelligence (AI) users are no longer limited to data scientists and software engineers. As ChatGPT demonstrated, AI applications are now widely accessible to the public. 

With this increased reach, we are seeing major ethical challenges arising from the misuse of AI, from gender and sex bias in AI systems to the spread of fake news. It is, therefore, no surprise that AI ethics is increasingly occupying the public consciousness. 

AI ethics has emerged as a mainstream response to the growing concern about the role AI is playing in our daily lives. In recent years, we have seen a vast proliferation of ethics guidelines, frameworks, and principles put forward by scholars, policymakers, governments, businesses, civil society members, and NGOs (non-governmental organisations). 

The sheer number of ethical guidelines in circulation causes confusion among organisations, which often do not know which ones apply to them. This can encourage a ‘shop around’ strategy, in which organisations select only the principles that benefit them and most enhance their public image.  


Confronting AI ethical dilemmas 

Commercial actors in the tech sector often recognise the growing consumer and regulatory demand for ethical AI practices. In response, they may develop and publicise AI ethics guidelines, ethical codes of conduct, or even establish dedicated ethics committees. 

However, behind these seemingly commendable efforts can lie a reality of shallow commitments and superficial implementations. The primary motivation may not be the genuine integration of ethical principles into the AI system lifecycle, but rather the enhancement of corporate reputation, the appeasement of stakeholders, or the exploitation of market opportunities. 

This instrumentalisation of ethics is known as “ethics-washing” (Wagner, 2018) and amounts to little more than the tech sector’s equivalent of greenwashing. 

A common tactic in ethics-washing is overemphasising the positive aspects of AI while downplaying, or even entirely ignoring, potential risks and negative implications. Organisations may use eloquent rhetoric about the responsible and ethical use of AI yet fail to provide substantive evidence or transparent practices to back up these claims. 

This dissonance between words and actions erodes trust and undermines the very principles of ethical AI that these companies purport to uphold. 

To combat ethics-washing and promote genuine ethical AI practices, organisations must adopt comprehensive strategies that go beyond mere lip service.

Establish an Ethics Board

Organisations can establish independent ethics boards composed of individuals from a range of backgrounds, including ethicists, technologists, lawyers, stakeholders, and representatives of the public, to ensure alignment with ethical principles. 

An interdisciplinary approach can support a more holistic perspective on AI ethics-related matters. It is no secret that the design choices of technical teams can have ethical, legal, and societal implications downstream, and this interrelation must be considered. However, the responsibility for safeguarding users from AI offerings should not rest solely on the shoulders of the technical teams.

At the same time, it is evident that lawyers and ethicists cannot provide adequate AI ethics solutions without input from the technical side. Organisations that adopt an interdisciplinary approach might therefore benefit from richer discussions, leading to more valuable and meaningful outcomes. 

Similarly, taking a multistakeholder approach can be highly valuable. All actors holding an interest in the matter should have a seat at the table, thereby democratising the discussion around AI governance. This is especially true when including members of the public with viewpoints that are independent of the organisation’s interests.

Robust Reporting Structure

Organisations must empower all employees to report ethical concerns related to AI practices without fear of retaliation. This also includes ensuring transparency about the decision-making process for those affected by an AI decision, including the option to request human review. 

By investing in a transparent reporting procedure, organisations can improve the overall quality of the product and enhance collaboration, not only within the organisation but also with consumers.


AI Policy Based on Law with Binding Commitments

Organisations should develop AI ethical policies grounded in existing laws and regulations. It is true that questions and critiques arise about whether companies actually implement these AI ethics policies, given their non-binding nature. 

However, if these ethical policies are already enshrined in existing legislation, compliance is a prerequisite and not a voluntary commitment. 

For example, the upcoming European Artificial Intelligence Act (AI Act) enshrines the ethical principles proposed by the High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI. Among these is the right to non-discrimination. This means that, for example, a medical device app that omits a minority group with relevant differences would be not only unethical, but also illegal under the AI Act. 

In any case, to avoid ethics washing and clear up any doubt, it is important that companies re-emphasise within their policies that their ethics guidelines do not intend to substitute any form of regulation. 

Keep the ethics discussion alive 

It is important for organisations to see ethics broadly in its socio-political context, rather than limited to a particular set of ethical principles. Ethical principles and guidelines are useful because they offer some guidance; however, what is considered ethical can change as this technology keeps evolving (consider what privacy meant before social media). 

Also, this technology is often applied in ways that differ from its original intended purpose. Therefore, organisations should be encouraged to keep the ethics discussion alive and reflect continuously on the ethical basis of technology, as the landscape keeps shifting. 

To prevent these initiatives from becoming merely good intentions, organisations should subject themselves to continuous checks and further scrutiny. This can be done internally, through self-assessment. However, external audits conducted by independent bodies (such as BSI) can counteract the confirmation bias that may prevent internal audits from recognising an area of improvement that an independent auditor would identify. 

Auditing involves a holistic review that covers the technical, ethical, and operational aspects of AI systems, as well as data practices and decision-making processes. It scrutinises the entire lifecycle of the AI system from the design and development to the deployment phase. 

Certification, on the other hand, is a formal recognition by a third-party organisation that an organisation's AI practices meet specific ethical standards and criteria.

Independent auditing and certification processes play a crucial role in verifying ethical AI practices and preventing ethics-washing.

Mitigating bias with clear governance

These processes can help to identify and mitigate potential biases and ethical concerns, as well as ensure compliance with relevant laws, regulations and established ethical standards. This is especially important in fields like healthcare, where AI has the potential to either reinforce or reduce systemic biases. 

For example, ISO/IEC TR 24027:2021 is a technical report that addresses bias in AI systems and AI-aided decision making. It provides an overview of fairness in the context of machine learning models (from human cognitive bias to data- and engineering-related biases), as well as practical metrics to evaluate fairness. 
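To make the idea of a practical fairness metric concrete, here is a minimal sketch of demographic parity difference, one of the group-fairness measures commonly discussed in the bias literature surveyed by such reports. The function names and toy data are illustrative only, not taken from the standard itself.

```python
def selection_rate(predictions, groups, group):
    """Fraction of positive (1) predictions received by members of `group`."""
    member_preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(member_preds) / len(member_preds)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups; 0 means parity."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: binary loan approvals for two demographic groups.
# Group "a" is approved 3 times out of 4; group "b" only once out of 4.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 between the groups’ approval rates would flag this model for further review; an auditor would typically examine several such metrics together, since no single measure captures fairness on its own.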

By commissioning a voluntary, independent audit against ISO/IEC TR 24027, organisations can be reassured that fairness has been integrated into their AI development practices. 

By being certified against ISO/IEC TR 24027, or any other ethical standard, organisations demonstrate a genuine commitment to ethical AI practices, bringing trust to consumers, stakeholders, and regulatory bodies. 

As AI becomes increasingly widespread, it brings forth a host of opportunities, but also challenges and risks. Due to the increasing concerns about the adverse impact that AI systems could have on individuals and society, organisations have proposed AI principles and policies and made related commitments. 

However, organisations are susceptible to falling into the trap of ethics-washing, where genuine action gets replaced by superficial promises. 

To avoid this, organisations should proactively re-evaluate their structures, processes, and modes of AI governance. Organisations should take on the task of rethinking the scope of work of AI developers, lawyers, and ethicists, and see what they can bring to the table. 

Moreover, these processes should facilitate continuous ethical discussions including external participation by the public. 

Finally, the way to ensure that organisations go beyond words and avoid ethics-washing is independent, third-party auditing. 

Such audits provide objective assessments, holding organisations accountable for their ethical claims.

About the Authors:

Inma Pérez Ruiz, Regulatory Lead in the AI Notified Body of BSI

Inma Pérez Ruiz is a Regulatory Lead in the AI Notified Body of BSI, handling communications with authorities and overseeing AI projects. Before joining BSI in 2022, she worked in Brussels as a consultant on the EU AI Act, representing the tech industry, and as an advisor to the Spanish Permanent Representation to the EU. Inma holds a Law degree with a Master’s specialisation in European Law and is admitted to the Spanish Bar.

Alex Tazza, Technical Specialist Team Manager at BSI

Alex Tazza is an AI Technical Specialist Team Manager at BSI, responsible for staying current with AI standards and rigorously assessing the accuracy and regulatory compliance of AI models. Before joining BSI, he advised organizations on AI adoption and led AI model development and data science teams. He holds two M.Sc. degrees, in Business Information Technology and Artificial Intelligence. He has been an active member of the CEN-CENELEC Joint Technical Committee on AI, contributing to shaping AI standards with a focus on innovation and ethics.

Alex Shepherd, AI Client Manager at BSI

Alex Shepherd is an AI Client Manager at BSI with comprehensive experience in data science in both academia and industry (3+ years in the life science and pharma sectors). He pursues his mission of building trust in AI through his work as an AI Client Manager and by hosting the Responsible AI Hours podcast. He is also writing a book to help data professionals and business stakeholders understand and incorporate responsible AI into their AI development practices.


Sources:

Wagner, B. (2018). Ethics as an escape from regulation: From “ethics-washing” to “ethics-shopping”?

Mökander, J. (2023). Auditing of AI: Legal, ethical and technical approaches. Digital Society, 2(3), 49.

Bietti, E. (2021). From ethics washing to ethics bashing: A moral philosophy view on tech ethics. Journal of Social Computing, 2(3), 266–283. doi: 10.23919/JSC.2021.0031


Disclosure: This article is an advertorial and monetary payment was received. It has gone through editorial control and passed the assessment for being informative.


Technology Magazine is a BizClik brand
