AI and data privacy: protecting information in a new era

By Szymon Idziniak, Machine Learning Engineer, STX Next
With their usage growing dramatically in recent years, AI models must incorporate privacy protection into their design as a matter of course

Artificial intelligence (AI) models that are built on consumer data must also be built with data privacy in mind. It is understandable that some users are wary of automated systems that collect and use their data, so to remain viable, AI models must incorporate privacy protection into their design as a matter of course.

AI, business and privacy implications

Usage of AI has grown in recent years and is now present in some form across most industries. Using simple machine learning models, it is possible to automate tedious and repetitive processes such as data validation on text, images and tabular data. In addition to standard regression and classification problems using simple models, deep learning is becoming more prominent.

The three main specialisations of deep learning are image processing (eg disease detection, or object segmentation for autonomous driving), natural language processing (sentiment analysis, summarisation, text translation) and recommendation systems (eg matching a reader or shopper to the most relevant content).

Medicine is one area where AI is increasingly being applied. It is an ideal use case because a great deal of data is available on patients' diseases and symptoms, which can be useful in situations where doctors are not able to fully spot and explain the links between the two.

While these applications all show a lot of promise, it is also crucial that privacy is handled with special care when using AI. Many of the most privacy-sensitive data analytics – such as search algorithms, recommendation engines and ad networks – are driven by machine learning and decisions made by algorithms. With the growth of artificial intelligence, the possibility of using personal data in ways that may infringe on privacy is increasing.

How AI can threaten privacy

Today companies are demanding, collecting and working with more data than ever before. AI, or more precisely machine learning, feeds on this data: the more of it we have, the better we are able to understand why it looks the way it does and how it interconnects.

When handling such vast quantities of data, businesses should always account for the risk of data leakage. As more and more data is used to train models, there is a risk that they will learn something that should remain private.
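One widely used mitigation for this kind of leakage (not discussed in the article itself) is differential privacy, in which calibrated noise is added to statistics released from or learned over sensitive data. The sketch below shows the classic Laplace mechanism for a single query; the dataset, bounds and epsilon value are illustrative assumptions, not figures from the article.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return the true value plus Laplace noise with scale sensitivity / epsilon.

    A smaller epsilon means stronger privacy but a noisier answer.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: release the average age of a small user table without exposing
# any single user's exact contribution. For an average of n records with
# ages assumed to lie in [0, 100], the query sensitivity is 100 / n.
ages = [34, 29, 41, 52, 38]
sensitivity = 100 / len(ages)
private_avg = laplace_mechanism(sum(ages) / len(ages), sensitivity, epsilon=1.0)
```

The same idea extends to model training itself (for example, differentially private gradient descent), which bounds how much any one record can influence what a model memorises.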

In 2017, Genpact conducted a survey of more than 5,000 people from various countries and found that 63% of respondents valued privacy more than a positive customer experience, and wanted businesses to avoid using AI in case it invaded their privacy. The major concerns here are how an AI system can access a consumer's personal information, what kind of information it can access, and how significant a privacy infringement this could be.

However, there are also positive sides to AI when it comes to data privacy. AI can be used to minimise the risk of privacy breaches by encrypting personal data, reducing human error and detecting potential cybersecurity incidents.
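In the same spirit, a simple safeguard that often sits alongside encryption (a sketch of one common practice, not a method the article prescribes) is pseudonymising direct identifiers before data ever reaches an analytics or ML pipeline. The key name below is hypothetical; in practice it would live in a secrets manager.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key; keep in a secrets manager

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (email, user ID) with a stable keyed hash.

    Using HMAC rather than a plain hash means an attacker without the key
    cannot brute-force common values such as email addresses.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchases": 12}
# Same input always maps to the same token, so joins and counts still work
safe_record = {**record, "email": pseudonymise(record["email"])}
```

Because the mapping is stable, analysts can still group and join records without ever seeing the underlying identifier.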

What to take into account when using AI

First of all, business technology leaders should consider whether they need AI at all, or whether their problems could be solved by more conventional methods. There is nothing worse than the "I want ML/AI solutions in my business, but I don't know what for yet" approach.

Introducing AI means designing the entire architecture that will build, train and deploy models, and working out how to collect and process large amounts of data. This requires assembling a strong team of data engineers, ML engineers and data scientists. Because so much data must be processed and so many tools mastered, it is not as simple as writing a web application in a standard framework.

Tech leaders should also be aware that AI comes with risks. They will need ever more computing resources to build increasingly sophisticated AI platforms, and they will need to stay constantly abreast of news from a field where everything changes rapidly: a much better solution or model for a particular problem may emerge within six months.

We should keep in mind that no AI solution is perfect; each has limits to its effectiveness, so none can be relied upon completely. To combat concept drift, models should be continuously tested and retrained so that they remain effective on current data.
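The retrain-on-drift loop described above can be sketched as a simple rolling accuracy monitor: track performance on recently labelled data and flag a retrain once it drops below a threshold. This is a minimal illustration under assumed window and threshold values; production systems typically add statistical drift tests on the input distribution as well.

```python
from collections import deque

class DriftMonitor:
    """Track rolling prediction accuracy and flag when retraining is due."""

    def __init__(self, window: int = 100, threshold: float = 0.8):
        self.results = deque(maxlen=window)  # True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.results.append(prediction == actual)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self) -> bool:
        # Only judge once the window is full, to avoid noisy early alarms
        return len(self.results) == self.results.maxlen and self.accuracy() < self.threshold

monitor = DriftMonitor(window=5, threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 0), (0, 1)]:
    monitor.record(pred, actual)
# Rolling accuracy is 2/5 = 0.4, below the 0.8 threshold, so a retrain is flagged
```

Wiring such a monitor into the serving path gives a concrete trigger for the "constantly tested and retrained" discipline, rather than retraining on an arbitrary calendar schedule.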

What the future holds for AI and privacy

Big data is now more abundant than ever, and the rise of AI processing capabilities has the potential to fundamentally change the way we tackle it and how we view information privacy. The potential applications of AI technologies in the fields of healthcare, justice and government are vast. However, AI poses social, technological and legal difficulties in the way data privacy is protected, as did many other technologies before it. It is in the hands of technology leaders to ensure that we shape the entire field of AI so that it respects the key principles of privacy in the future.
