Netskope: Sensitive enterprise data being shared to ChatGPT

Research by Netskope finds that, within the average large enterprise, sensitive data is being shared with generative AI apps every hour of the working day

Large enterprises are seeing increasing quantities of sensitive data being posted to ChatGPT each month, according to Netskope, a leader in Secure Access Service Edge (SASE).

The report shows that enterprise organisations are experiencing approximately 183 incidents of sensitive data being posted to the app per 10,000 users each month.

The findings are part of Cloud & Threat Report: AI Apps in the Enterprise, Netskope Threat Labs’ first comprehensive analysis of AI usage in the enterprise and the security risks at play. Based on data from millions of enterprise users globally, Netskope found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.

Growing AI chatbot app usage

Netskope found that organisations with 10,000 users or more use an average of five AI apps daily, with OpenAI’s ChatGPT seeing more than eight times as many daily active users as any other generative AI app. At the current growth rate, the number of users accessing AI apps is expected to double within the next seven months.
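
That doubling projection is consistent with simple compound-growth arithmetic. Below is a minimal sketch (not from the report) that assumes the 22.5% two-month growth rate quoted in the findings continues unchanged:

```python
import math

# Reported figure: generative AI app usage grew 22.5% over the past two months.
two_month_growth = 1.225

# Assumption (not a Netskope figure): the same compound rate continues.
monthly_rate = two_month_growth ** (1 / 2)  # ~1.107, i.e. roughly 10.7% per month

# Months needed for usage to double at that rate.
doubling_months = math.log(2) / math.log(monthly_rate)
print(f"Estimated doubling time: {doubling_months:.1f} months")  # ~6.8 months
```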

Over the past two months, the fastest-growing AI app was Google Bard, currently adding users at a rate of 7.1% per week, compared to 1.6% for ChatGPT. At current rates, Google Bard will not catch up with ChatGPT for more than a year, though the generative AI app space is expected to evolve significantly before then, with many more apps in development.
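
For illustration only, the same compound-growth arithmetic shows why the catch-up time depends heavily on the starting gap in daily active users, a figure the report does not state precisely; the ratios in this sketch are hypothetical:

```python
import math

# Weekly user-growth rates reported by Netskope.
bard_weekly = 1.071     # Google Bard: +7.1% per week
chatgpt_weekly = 1.016  # ChatGPT: +1.6% per week

def weeks_to_catch_up(starting_ratio: float) -> float:
    """Weeks until Bard matches ChatGPT, assuming ChatGPT currently has
    `starting_ratio` times as many daily active users and both rates hold."""
    return math.log(starting_ratio) / math.log(bard_weekly / chatgpt_weekly)

# The exact ratio is not published; these values are hypothetical.
for ratio in (8, 15, 20):
    print(f"Starting ratio {ratio}x -> {weeks_to_catch_up(ratio):.0f} weeks")
```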

Report finds users inputting sensitive data into ChatGPT

Netskope found that source code is posted to ChatGPT more than any other type of sensitive data, at a rate of 158 incidents per 10,000 users per month. Other sensitive data being shared in ChatGPT includes regulated data (including financial and healthcare data and personally identifiable information), intellectual property excluding source code and, most concerningly, passwords and keys, usually embedded in source code.

“It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing,” said Ray Canzanese, Threat Research Director, Netskope Threat Labs. “Therefore, it is imperative for organisations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

Blocking or granting access to ChatGPT

Netskope Threat Labs is currently tracking ChatGPT proxies and more than 1,000 malicious URLs and domains from opportunistic attackers seeking to capitalise on the AI hype, including multiple phishing campaigns, malware distribution campaigns, and spam and fraud websites.

Blocking access to AI-related content and AI applications is a short-term solution to mitigate risk, but comes at the expense of the potential benefits AI apps offer to supplement corporate innovation and employee productivity. Netskope’s data shows that in financial services and healthcare, both highly regulated industries, nearly one in five organisations has implemented a blanket ban on employee use of ChatGPT, while in the technology sector only one in 20 organisations has done likewise.

“As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity,” said James Robinson, Deputy Chief Information Security Officer at Netskope. “Organisations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

