Data Privacy: Protecting Valuable Data in the Age of Gen AI

With AI adoption on the rise, organisations today are increasingly concerned about how the technology could affect their valuable data

Today’s world is increasingly driven by data. McKinsey predicts that, by 2025, nearly all employees will naturally and regularly leverage data to support their work.

Every day, the world produces five exabytes of data. By 2025, this is set to rise to 463 exabytes per day, driven by the increased adoption of AI. But as businesses continue to embrace AI at an accelerating pace, organisations are increasingly concerned about how the technology could affect their valuable data.

In response to these concerns, more than one-quarter of organisations have banned the use of generative AI (Gen AI), highlighting the growing privacy concerns around the technology and the trust challenges organisations face over their use of AI.

To coincide with 2024’s Data Privacy Day, Technology Magazine hears from experts who highlight the importance of data privacy in the age of Gen AI.

Data governance policies critical in building confidence in AI

According to Trevor Schulze, Chief Information Officer at Alteryx, clear data governance policies with defined roles are crucial in building confidence as the enterprise use cases for Gen AI continue to scale.  

“Powerful enterprise use cases for Gen AI are still being discovered, but so too are limitations in terms of data privacy and regulation,” he says. “The Information Commissioner’s Office’s recent review of how data protection laws should apply to such Gen AI applications was a key reminder of this. The EU AI Act, the first official AI regulation, which requires AI systems deployed in the EU to be safe, transparent, traceable, non-discriminatory and environmentally friendly, is another good example of regulators moving at pace to react to AI concerns.”

As Schulze explains, data privacy is cited by 47% of data leaders as the reason why AI capabilities have not yet been deployed within their organisations.  

“Clear data governance policies will be critical in building overall confidence in AI moving forward,” he adds. “Creating or reinforcing Data Steward roles within enterprises to advocate for secure AI use – by establishing, carrying out and enforcing data usage rules and regulations – will display a commitment to data privacy and build company-wide confidence.”

Businesses must defend against host of new potential threats

As Samir Desai, Vice President at GTT, explains, Data Privacy Day is another reminder of just how important it is for businesses to protect their data. 

“The rapid adoption of cloud computing, IoT/IIoT, mobile devices and remote work has increased both the size and complexity of the networking landscape, and cybercriminals are taking advantage of this,” he says. “Alongside common threats – such as phishing – businesses today must defend against a whole new host of potential risks, such as Gen AI’s ability to super-charge phishing attempts by making it easier and faster for bad actors to craft convincing content.”

Desai adds that to ensure data security for cloud-based apps while still providing reliable connectivity for hybrid workplaces and remote workers, the modern enterprise needs to invest in the right solutions. “This may require further collaboration with managed security and service partners to identify and implement the right technologies to protect the ever-expanding perimeter.

“For example, a zero trust networking approach, which combines network security and software-defined connectivity into a single cloud-based service experience, could be transformative. Its ‘always-on’ security capabilities mean that data is protected, regardless of where resources or end-users reside across the enterprise environment.”

AI continues to be a game-changer in data privacy

As described by Keiron Holyome, VP UKI and Emerging Markets at BlackBerry Cybersecurity, AI continues to be a game-changer in data privacy and protection for businesses as well as individuals. 

“We have entered a phase where AI opens a powerful new armoury for those seeking to defend data. When trained to predict and protect, it is cybersecurity’s most commanding advantage. But it also equips those with malicious intent. Its large-scale data collection in generative business and consumer applications raises valid concerns for data and communication privacy and protection that users need to be alert to and mitigate.

“A big question at the moment is how legislation can be pervasive enough to offer peace of mind and protection against the growing Gen AI threats to data privacy, while not hindering those with responsibility for keeping data safe.”

Holyome points to BlackBerry research which found that 92% of IT professionals believe governments have a responsibility to regulate advanced technologies, such as ChatGPT, though he acknowledges that even the most watertight legislation can’t change reality.

“That is, as the maturity of AI technologies and hackers’ experience of putting them to work progress, it will get more and more difficult for organisations and institutions to raise their defences without using AI in their protective strategies.”

Need for proactive rather than reactive approach

Christine Bejerasco, CISO at WithSecure, calls for a proactive approach to data protection, recognising that threats have always followed technological trends.

“AI is unavoidable: even if we don’t directly use it, the products and services that we use will still be using it. When it comes to new technologies, we are too focused on their new uses, rather than their potential for misuse – and we risk the same happening with AI.

“Threats have always followed technological trends – we’ve seen it with operating systems, internet communication protocols, and the internet of things. Only once we experience damage do we take a step back and redesign. With the power of AI still being understood and continuing to develop, Data Privacy Day provides us with an opportunity to take a number of steps so that we are proactive when it comes to data protection, rather than reactive as in the past.

“Firstly, all data stored in your organisation will be used to train AI models (even the data you’ve forgotten), so it’s really important that once you don’t need the data, it’s deleted. Take a look at the data retention statements on the websites of the services you use and get your data deleted. Also, take a look at the security and privacy configurations of the services you subscribe to and tighten them; this should reduce the public exposure of your data and limit it within the service. If there is an option for your data to be used for various vague purposes, just opt out. By taking these very simple steps we can significantly reduce the exposure of our data.”
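
Bejerasco’s first step – deleting data once it is no longer needed – can be put into practice as a scheduled retention job. The minimal Python sketch below is one hypothetical illustration of the idea; the RETENTION_DAYS value, the record format and the purge_expired_records helper are assumptions made for this example, not guidance drawn from WithSecure.

from datetime import datetime, timedelta, timezone

# Hypothetical retention window: records older than this are removed.
RETENTION_DAYS = 365

def purge_expired_records(records, now=None):
    # Keep only records still inside the retention window.
    # Assumes each record is a dict with a timezone-aware
    # 'created_at' datetime; anything older than RETENTION_DAYS
    # relative to 'now' is dropped.
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["created_at"] >= cutoff]

if __name__ == "__main__":
    sample = [
        {"id": 1, "created_at": datetime(2023, 1, 10, tzinfo=timezone.utc)},
        {"id": 2, "created_at": datetime.now(timezone.utc)},
    ]
    kept = purge_expired_records(sample)
    print(f"Kept {len(kept)} of {len(sample)} records")

In practice the same cut-off logic would run against whatever store actually holds the data, ideally on a regular schedule, so that forgotten data does not linger long enough to end up exposed or used in training.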

******

Make sure you check out the latest edition of Technology Magazine and also sign up to our global conference series - Tech & AI LIVE 2024

******

