
    Netskope Threat Labs: Source Code Most Common Sensitive Data Shared to ChatGPT

    Within the average large enterprise, sensitive data is being shared to generative AI apps every hour of the working day

    Netskope, a leader in Secure Access Service Edge (SASE), today unveiled new research showing that for every 10,000 enterprise users, an organization experiences approximately 183 incidents per month of sensitive data being posted to generative AI apps. Source code accounts for the largest share of the sensitive data being exposed.

    The findings are part of Cloud & Threat Report: AI Apps in the Enterprise, Netskope Threat Labs’ first comprehensive analysis of AI usage in the enterprise and the security risks at play. Based on data from millions of enterprise users globally, Netskope found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.

    Growing AI App Usage

    Netskope found that organizations with 10,000 users or more use an average of 5 AI apps daily, with ChatGPT seeing more than 8 times as many daily active users as any other generative AI app. At the current growth rate, the number of users accessing AI apps is expected to double within the next seven months.
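    The seven-month projection is consistent with simple compound growth: at 22.5% growth per two-month window, usage doubles in about 2 × ln(2) / ln(1.225) ≈ 6.8 months. A minimal sanity-check sketch (the growth figures come from the report; the steady-rate assumption is ours):

    ```python
    import math

    def doubling_time(growth_rate: float, period_months: float) -> float:
        """Months until usage doubles, assuming steady compound growth."""
        return period_months * math.log(2) / math.log(1 + growth_rate)

    # 22.5% growth over a two-month window, per the report
    months = doubling_time(0.225, 2)
    print(f"Usage doubles in about {months:.1f} months")  # ≈ 6.8 months
    ```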

    Over the past two months, the fastest growing AI app was Google Bard, currently adding users at a rate of 7.1% per week, compared to 1.6% for ChatGPT. At current rates, Google Bard is not on pace to catch up to ChatGPT for over a year, though the generative AI app space is expected to evolve significantly before then, with many more apps in development.


    Users Inputting Sensitive Data into ChatGPT

    Netskope found that source code is posted to ChatGPT more than any other type of sensitive data, at a rate of 158 incidents per 10,000 users per month. Other sensitive data being shared in ChatGPT includes regulated data (including financial and healthcare data and personally identifiable information), intellectual property other than source code, and, most concerningly, passwords and keys, usually embedded in source code.

    “It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing,” said Ray Canzanese, Threat Research Director, Netskope Threat Labs. “Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

    Blocking or Granting Access to ChatGPT

    Netskope Threat Labs is currently tracking ChatGPT proxies and more than 1,000 malicious URLs and domains from opportunistic attackers seeking to capitalize on the AI hype, including multiple phishing campaigns, malware distribution campaigns, and spam and fraud websites.

    Blocking access to AI-related content and AI applications is a short-term solution to mitigate risk, but it comes at the expense of the potential benefits AI apps offer to supplement corporate innovation and employee productivity. Netskope’s data shows that in financial services and healthcare – both highly regulated industries – nearly 1 in 5 organizations have implemented a blanket ban on employee use of ChatGPT, while in the technology sector, only 1 in 20 organizations have done likewise.

    “As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity,” said James Robinson, Deputy Chief Information Security Officer at Netskope. “Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

    To enable the safe adoption of AI apps, organizations must center their approach on identifying permissible apps and implementing controls that empower users to use those apps to their fullest potential while safeguarding the organization from risks. Such an approach should include domain filtering, URL filtering, and content inspection to protect against attacks. Other steps to safeguard data and securely use AI tools include:

    • Block access to apps that do not serve any legitimate business purpose or that pose a disproportionate risk to the organization.
    • Employ user coaching to remind users of company policy surrounding the use of AI apps.
    • Use modern data loss prevention (DLP) technologies to detect posts containing potentially sensitive information.
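    As an illustration of the DLP step, a pre-submission filter might scan outbound prompts for patterns resembling credentials or regulated identifiers before they reach a generative AI app. The patterns and function names below are illustrative assumptions for a minimal sketch, not Netskope's implementation:

    ```python
    import re

    # Illustrative patterns a DLP policy might flag (deliberately not exhaustive)
    SENSITIVE_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scan_prompt(text: str) -> list[str]:
        """Return the names of sensitive-data patterns found in an outbound prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    prompt = "Debug this for me: aws_key = 'AKIAABCDEFGHIJKLMNOP'"
    hits = scan_prompt(prompt)
    if hits:
        # A real deployment would block the post and show a coaching message
        print(f"Blocked: prompt matches {hits}")
    ```

    In practice this kind of check runs inline at the proxy or endpoint, paired with the user coaching described above rather than a silent block.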

    In conjunction with the report, Netskope announced new solution offerings from SkopeAI, the Netskope suite of artificial intelligence and machine learning (AI/ML) innovations. SkopeAI leverages the power of AI/ML to conquer the limitations of complex legacy tools and provide protection using AI-speed techniques not found in other SASE products.

    ELE Times Report
    https://www.eletimes.ai/
    ELE Times provides extensive global coverage of Electronics, Technology and the Market. In addition to providing in-depth articles, ELE Times attracts the industry’s largest, qualified and highly engaged audiences, who appreciate our timely, relevant content and popular formats. ELE Times helps you build experience, drive traffic, communicate your contributions to the right audience, generate leads and market your products favourably.
