Organisations saw data policy violations tied to generative AI more than double in 2025, revealing critical vulnerabilities and lagging protective measures despite rising adoption and risk.
The volume of data policy violations tied to generative AI (genAI) use surged through 2025, more than doubling year‑on‑year as organisations detected an average of 223 monthly attempts by employees to include sensitive material (regulated data, intellectual property, source code, passwords and keys) in genAI prompts or uploads. Industry data shows the top quartile of organisations saw far higher volumes, with incidents reaching into the thousands each month, a pattern Netskope Threat Labs attributes to rapid genAI adoption and the proliferation of available AI tools. [1][2][4][6]
Netskope’s analysis finds the proportion of workers using genAI tools monthly has climbed steeply and the number of prompts sent to genAI systems has expanded severalfold, with average monthly prompts rising from the low thousands to many tens of thousands and the number of distinct genAI tools tracked increasing to more than 1,600. Such scale multiplies opportunity for accidental or deliberate exposure of sensitive data. [1][2][6]
Compounding the risk is persistent shadow AI and widespread use of personal cloud apps at work. Netskope reports that nearly half of genAI users rely on unmanaged personal accounts while working, and almost one in three employees uploads data to personal cloud applications monthly, behaviours security teams frequently cannot see or control. The firm also notes that a majority of insider‑related incidents involve personal cloud usage. [1][4][6]
Adoption of protective measures, however, lags the threat. According to the report, only around half of organisations have deployed data loss prevention (DLP) controls capable of preventing sensitive data from leaking via genAI applications in real time, and roughly one in four lack real‑time controls to detect or block leaks to personal cloud services. The company claims many security teams remain “playing catch‑up” with AI‑driven changes to workflows. [1][2]
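Real‑time DLP controls of the kind the report describes typically inspect outbound prompts for sensitive patterns before they ever reach a genAI service. As a rough illustration only (not Netskope's implementation, and far simpler than a production DLP engine, which would combine classifiers, data fingerprinting and exact‑match dictionaries), a minimal pattern‑based prompt check might look like:

```python
import re

# Hypothetical, illustrative detectors only; real DLP products ship
# hundreds of detectors plus ML-based classification.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block (return False) any prompt matching a sensitive pattern."""
    return not scan_prompt(prompt)
```

For example, `allow_prompt("Summarise this document")` would pass, while `allow_prompt("password = hunter2")` would be blocked. In practice such a check would sit inline in a proxy or browser extension so the verdict is rendered before the prompt leaves the organisation.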
“Cloud and AI adoption are transforming organisations’ systems and employee behaviours at pace, bringing new risks and threats that have taken many security teams by surprise in their scope and complexity. It feels like many security teams are still playing catch‑up, and sometimes losing sight of some security basics. It is urgent that they upgrade their policies and guardrails, and expand the scope of existing tools like DLP, to foster a balance between innovation and security at all levels,” said Ray Canzanese, Director of Netskope Threat Labs, in the report. [1][2]
Sectoral analysis underscores the stakes: Netskope’s healthcare‑focused report found very high levels of genAI adoption within the sector, with the vast majority of organisations using genAI apps that leverage user data for training and most data policy violations in healthcare involving regulated data being uploaded to unapproved web or cloud locations. The findings point to acute exposure where regulated personal health information is concerned. [3]
Phishing and malware continue to provide adversaries with effective routes to exploit cloud‑centric trust. Netskope’s data shows employees still click phishing links at concerning rates, and attackers increasingly engineer campaigns to harvest cloud credentials via counterfeit login pages, malicious OAuth apps and brand impersonation; Microsoft is reported as the most spoofed brand in cloud‑targeting campaigns. Attackers also exploit familiar cloud services to distribute malware, with GitHub, Microsoft OneDrive and Google Drive frequently cited as delivery vectors. [1][5][6][7]
Faced with these converging trends, the report urges organisations to move beyond ad‑hoc defences. Netskope recommends expanding DLP and other real‑time data protection guardrails across cloud and AI environments and considering consolidated, unified security frameworks that reduce complexity while improving visibility and control. The company’s analysis suggests that without such broad approaches, security teams will continue to be outpaced by rapidly evolving genAI and cloud threats. [1]
📌 Reference Map:
- [1] (Electronics Media / Netskope summary) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7, Paragraph 8
- [2] (Netskope Cloud & Threat Report 2025) – Paragraph 1, Paragraph 2, Paragraph 4, Paragraph 5
- [3] (Netskope Threat Labs, Healthcare 2025) – Paragraph 6
- [4] (BetaNews) – Paragraph 1, Paragraph 3
- [5] (CyberMagazine summary) – Paragraph 7
- [6] (Netskope resources / Cloud & Threat Report page) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 7
- [7] (Netskope Threat Labs, Europe 2025) – Paragraph 7
Source: Fuse Wire Services


