“Don’t believe the hype.” While it may take many by surprise, that’s the fresh call to action among analysts paying close attention to how companies are – or aren’t – factoring artificial intelligence (AI) and machine learning (ML) into their data management plans and playbooks.
After years of reading sensational stories about the limitless potential of intelligent machines, stakeholders – and C-suite executives in particular – appear to be confused about the best course of action to take. Commercial missteps and the outright failure of some products have resulted. Experts say it doesn’t have to be this way.
“AI and ML have become crucial and necessary for nearly all businesses in every sector,” says Elliott Young, CTO, Dell Technologies UK. “In the same way that businesses have had to transform digitally and become digital-first, companies are going to need AI and ML to remain competitive. Those on the path towards this are already reaping the benefits of being able to make decisions driven by predictive analytics.”
But most boardrooms and bosses don’t fully understand the potential use cases for AI and ML. “Stakeholders often don’t know what to ask for in order to get the right benefits out of the technology,” says Young. “This means they don’t really know what their business could be missing out on.
“Likewise, leaders can be guilty of thinking that AI and ML sit siloed within a technology function, and that departments specialising in IT and tech will bring working AI, ML, and data strategies to them fully formed. This is unrealistic.”
This sense of corporate confusion has also been identified by Gartner, which says unrealistic expectations have led to poor decision-making, disappointment among stakeholders, or outright failure of AI-related business products.
“Overhyped AI scares people and masks the real benefits AI can offer to humanity,” says Anthony J. Bradley, Gartner’s Group Vice President, Emerging Technologies and Trends Research. “This can lead to slower adoption, and even sociopolitical fear and government regulation that will stifle progress. There are real concerns over the appropriate and ethical use of AI/ML, but AI eliminating the need for humans is not one of them.”
Bradley also claims overhyped AI muddies the waters between human and machine intelligence, which can lead to misplaced efforts to automate humans out of the process. “It also stokes fears of machines replacing people rather than exploring the tremendous benefits of combining the two. In reality, human intelligence and machine intelligence are very complementary.”
Successful AI-data strategies must be built on firm foundations
The biggest challenge companies face is the depth and diversity of data spread across a wide range of platforms and systems, says Accenture’s Data Networks and Marketplace Lead, Prateek Peres-da-Silva. Alongside colleague Tyler Buffie, who works on Applied Intelligence within the company, Peres-da-Silva says a successful AI-data strategy is far harder to deliver than many expect: most companies still struggle to use the data they already hold, never mind the deluge coming down the line thanks to AI/ML technology.
So, what do companies need to do today in order to take control of their data, AI and ML? Peres-da-Silva and Buffie both say it is critical that stakeholders get the foundations right. This might include prioritising a cloud-based, AI-data platform and the corresponding checks to ensure the entire ecosystem is compliant for data management and governance.
Human employees shouldn’t be overlooked and will, in fact, play a crucial role in the development of a successful AI-data strategy, say Peres-da-Silva and Buffie. “Creating a positive user experience is critical for the adoption of your data and AI strategy. Each part of the organisation may be looking at the same data but through a different lens.”
Specialists and champions required to bridge the AI/ML gap
Dell’s Young suggests business units need to work in collaboration with specialists to identify requirements and create solutions, while a dedicated AI/ML champion within each business unit is a necessity, as they can help bridge the gap between those with business knowledge and the data scientists building AI models.
IT and security departments need to continue their ongoing work to manage data and mitigate vulnerabilities, but now is also the time for all departments and business units to recognise their own responsibilities for helping keep data safe. As departments take on data governance, dedicated roles are required to ensure data management and governance are given the attention they require, says Young.
“There may be a need for internal AI/ML data governance awareness courses, as well as development plans for employees, so that they come to understand their roles and responsibilities. Some organisations have developed these in-house, while others have partnered with industry-recognised organisations for wider recognition.

“Understanding data governance and how to use ML effectively is going to reach almost every department in every business.”
Hidden signals could create machine bias without human oversight
ML and AI models present their own, new ethical concerns that will have to be addressed. Some organisations remove so-called ‘protected characteristics’ – typically referring to gender, race, ethnicity or age – when importing data for processing by an ML algorithm, says Young.
“However, without that information, the machine can still develop unconscious bias in its predictions by learning from correlated patterns, which can lead to harmful outputs that end up disadvantaging the very groups those characteristics were meant to protect,” explains Young. Hidden signals exist in data, so the final outputs require careful investigation and checking by a human. Businesses should introduce regular checks on any AI/ML model they are running for data processing.
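Young’s point about hidden signals can be made concrete with a minimal sketch. The data, the “group” labels, and the postcode rule below are all synthetic assumptions invented for illustration: even when a protected attribute is dropped from a model’s inputs, a correlated proxy feature can reproduce the same disparity.

```python
import random

random.seed(0)

# Synthetic applicants: the protected attribute 'group' will be dropped
# before decisioning, but 'postcode' correlates strongly with it (a proxy).
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Group membership drives postcode 80% of the time.
    if random.random() < 0.8:
        postcode = "N1" if group == "A" else "S9"
    else:
        postcode = "S9" if group == "A" else "N1"
    applicants.append({"group": group, "postcode": postcode})

# A "blind" decision rule that never sees 'group' but favours postcode N1.
def approve(applicant):
    return applicant["postcode"] == "N1"

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

print(f"approval rate, group A: {approval_rate('A'):.2f}")
print(f"approval rate, group B: {approval_rate('B'):.2f}")
# A large gap persists even though 'group' was removed from the inputs —
# the kind of disparity a regular human audit is meant to surface.
```

Running this shows group A approved at roughly four times the rate of group B, despite the rule never reading the protected attribute, which is why removing sensitive columns alone is not a safeguard.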
One of the new roles for humans will be inspection and quality control over machines, particularly where bias is concerned, asserts Young. Humans will also be required to ensure ML models are deployed for the right reasons and not unintentionally being used in ways that could influence other aspects of the business in a negative way.
“Without these, it’s difficult to know whether a model is unintentionally creating bias or drifting,” he says. “It's important to understand that just removing sensitive data is not the answer, even if that might seem like the most obvious solution. Humans need to be on hand to intervene.”
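One simple form the regular checks Young describes can take is a distribution comparison between the scores a model produced at validation time and the scores it produces in production. The sketch below uses the Population Stability Index (PSI), a common drift statistic; the reference and live samples are synthetic stand-ins, and the 0.2 alert threshold is a widely used rule of thumb rather than anything prescribed in the article.

```python
import math
import random

random.seed(1)

# Hypothetical data: scores captured when the model was validated,
# and a "live" batch whose distribution has shifted upwards.
reference = [random.gauss(0.50, 0.10) for _ in range(5000)]
live = [random.gauss(0.62, 0.10) for _ in range(5000)]

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples."""
    lo, hi = min(expected + actual), max(expected + actual)
    width = (hi - lo) / bins
    total = 0.0
    for i in range(bins):
        a, b = lo + i * width, lo + (i + 1) * width
        # A small floor avoids log(0) for empty buckets.
        e = max(sum(a <= x < b for x in expected) / len(expected), 1e-4)
        o = max(sum(a <= x < b for x in actual) / len(actual), 1e-4)
        total += (o - e) * math.log(o / e)
    return total

score = psi(reference, live)
print(f"PSI = {score:.3f}")
if score > 0.2:  # common rule-of-thumb threshold for significant drift
    print("ALERT: score distribution has drifted — flag for human review")
```

A check like this doesn’t explain *why* the model drifted; it only raises the flag so that a human, as Young argues, can step in and investigate.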