IBM's Commitment to Data Governance for Positive AI Impact

Tech giant IBM’s co-created Data Provenance Standards aim to improve transparency across the data ecosystem, laying a safer foundation for AI growth

As businesses seek to deploy AI at a faster rate, data responsibility has never been more important. 

Big technology companies in particular are experiencing an overwhelming demand for data as they develop new AI capabilities. As a result, making sure that data processes are optimised is essential to meeting growing business needs. Cleaner data can result in improved efficiency for data teams, without sacrificing standards.

One such company looking to continually improve its approach to data is IBM, which has faced surging demand for its services in recent years. The tech giant builds AI systems for a huge range of use cases and therefore holds significant volumes of data for training and testing its models. Now, the company is developing its data governance processes to strengthen trust moving forward. 

Part of this process is co-creating the Data Provenance Standards, the first cross-industry standards for data provenance metadata. The standards were developed in partnership with the Data & Trust Alliance and 19 other enterprises to help describe data origin, lineage and suitability for purpose.

The initiative aims to develop the first universal cross-industry data transparency standards to foster a greater culture of trust when it comes to enterprise AI.
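
To make the idea concrete, the kind of information such provenance metadata captures might be sketched in code as follows. The field names and values below are purely illustrative assumptions, not taken from the published Data Provenance Standards.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProvenanceRecord:
    """Illustrative sketch of dataset provenance metadata (hypothetical fields)."""
    dataset_name: str
    source: str                      # origin: where the data came from
    collection_date: date            # when the data was gathered
    lineage: list = field(default_factory=list)   # upstream datasets / transformations
    licence: str = "unknown"         # terms governing use and re-use
    intended_use: str = "unknown"    # suitability for purpose, e.g. "model training"

# Example record for a hypothetical training dataset
record = ProvenanceRecord(
    dataset_name="customer-support-transcripts",
    source="internal CRM export",
    collection_date=date(2023, 6, 1),
    lineage=["raw-crm-dump", "pii-redaction-v2"],
    licence="internal-use-only",
    intended_use="fine-tuning",
)
print(record)
```

Standardising fields like these is what would allow datasets from different suppliers to be described, compared and cleared in a consistent way.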

Harnessing clean data to create a positive future for AI

Aligning data standards enables teams within an enterprise to access an expanding, diverse catalogue of high-quality data. This is instrumental in creating trustworthy AI that delivers tangible results for a business: a recent Cloudera report found that 90% of IT leaders believe unifying data lifecycles is critical for AI and analytics development.

“AI has a massive potential for good. It will help make us more productive as people and as a society,” highlights IBM Chief Privacy & Trust Officer, Christina Montgomery. “But AI can also cause real harm if it is not built or deployed responsibly.”

With its focus on AI and enterprise data, IBM recognises the need to manage, understand and protect the data that is used for its AI models. 


Its data governance programme already includes a data clearance process that enables the company to apply relevant controls, document lineage and define guidelines for use and re-use. For instance, IBM’s Granite foundation models are some of the most transparent in the world, thanks in part to their conformity with the data governance and risk criteria enforced through its data clearance review process. 
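
In outline, a clearance review of this kind can be thought of as a set of checks run against a dataset’s provenance metadata before it is approved for model training. The sketch below is a hypothetical illustration under that assumption and does not describe IBM’s actual tooling or criteria.

```python
# Hypothetical clearance check; field names and criteria are assumptions for illustration.
REQUIRED_FIELDS = {"source", "lineage", "licence", "intended_use"}
ALLOWED_LICENCES = {"internal-use-only", "cc-by-4.0"}

def clear_for_training(metadata: dict) -> bool:
    """Approve a dataset only if its provenance metadata is complete,
    its lineage is documented, and its licence permits re-use for training."""
    if not REQUIRED_FIELDS.issubset(metadata):
        return False
    if not metadata["lineage"]:               # lineage must be documented
        return False
    return metadata["licence"] in ALLOWED_LICENCES

# A dataset with documented lineage and a permitted licence passes the check.
print(clear_for_training({
    "source": "internal CRM export",
    "lineage": ["raw-crm-dump", "pii-redaction-v2"],
    "licence": "internal-use-only",
    "intended_use": "fine-tuning",
}))  # True
```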

Now, the tech giant has been testing the Data Provenance Standards by comparing them to its own data intake processes, evaluating how well they could be implemented in real-world scenarios, and providing feedback and recommendations for the standards moving forward. 

As a result of these tests, IBM found marked improvements in data clearance review time, suggesting that the Data Provenance Standards can improve overall data quality. With this in mind, IBM is continuing to place trust at the centre of its company ethos.

“As businesses begin adopting AI across a greater breadth of use cases, we need to find more efficient ways to ensure that the data used to train and test these models stay aligned with high standards for trust and transparency,” Christina Montgomery comments on LinkedIn. 


The dangers of enterprise AI bias

If AI is developed without a culture of data governance, the results could be flawed. At a time when AI is increasingly responsible for making decisions, training models on clean data is essential; it helps a company avoid faults that could lead to reputational damage or a serious loss of trust.

Likewise, biased AI systems can pose risks and even cause harm to people, sparking further debates over the ethics of such digital systems. In fact, IBM itself describes AI bias as occurring when systems reflect and perpetuate human biases within a society, including existing social inequalities.

A commitment to trust is essential to IBM’s work as it seeks to build more responsible AI systems. Its strategy rests on strong enterprise data standards and governance practices as it advocates for enterprise responsibility.

“For IBM, building trustworthy AI means having clear principles for trust and transparency, putting those principles into practice, and embedding ethics into every facet of the AI lifecycle,” Christina adds on the IBM blog in relation to the report. “We are ready to support clients in implementing their own data governance frameworks.”

