
How to Address GenAI Data Leakage in Your Organisation

The use of generative AI tools in the workplace has surged as people learn how this technology can assist with everyday tasks.

While generative AI offers huge productivity benefits for businesses – including for cybersecurity teams – it also poses significant risks.

One of these risks is accidental data leakage, whereby employees inadvertently post confidential company and personal data into public generative AI apps, such as OpenAI’s ChatGPT or Google’s Gemini, during their interactions. 

Shadow AI

This information can be used to train the AI models and could consequently surface in responses to other users. The broader practice of employees using AI tools without IT approval or oversight is known as shadow AI.

The scale of the problem is becoming apparent. RiverSafe research published in April 2024 found that one in five UK companies have had corporate data exposed via employee use of generative AI.

A Netskope report published in July 2024 found that the sharing of proprietary source code with generative AI apps accounted for 46% of all data policy violations.

As workplace generative AI becomes a reality across all industries, organisations must establish mechanisms to ensure these tools are used securely, reducing the risk of damaging data leaks and potential regulatory action.



Why Banning GenAI is Not the Answer

In response to the high levels of sensitive data exposure in generative AI apps, many organisations have moved to ban their use by employees.

Cisco’s 2024 Data Privacy Benchmark Study found that 27% of organisations have banned, at least temporarily, the use of generative AI among their workforce over privacy and data security risks.

Smartphone manufacturer Samsung banned employees from using generative AI apps in May 2023 after some users leaked sensitive data via ChatGPT.

However, permanently banning generative AI apps is not a sustainable solution. Such action could put businesses at a competitive disadvantage compared to rivals who use these tools to improve efficiencies and reduce costs.

There is a critical need for strategies that enable businesses to enjoy the benefits of generative AI without the risks.

GenAI Governance Strategies

Due to the unique data security risks posed by generative AI, experts recognise the need to develop new security governance models specifically for these tools. In July 2024, Checkmarx found that just 29% of organisations have established any form of governance for the use of generative AI.

These governance strategies should encompass several actions to prevent accidental data leakage via generative AI.

Key Actions to Safeguard Data in GenAI Systems

  • Block access to apps that do not serve a legitimate business purpose or that pose a disproportionate risk
  • Regularly review AI app activity, trends, behaviours and data sensitivity to identify risks to the organisation
  • Establish and communicate policies that address the use of certain organisational data within public models and third-party applications
  • Use data loss prevention (DLP) policies to detect prompts containing potentially sensitive information, including source code, regulated data, passwords and keys, and intellectual property (see the sketch after this list)
  • Employ real-time user coaching to remind users of company policy on AI apps during their interactions
  • Understand how third parties will use data from prompts and whether they will claim ownership of that data
  • Implement controls to secure the application interface and monitor user activity, such as the content and context of prompt inputs and outputs
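
To illustrate the DLP point above, here is a minimal sketch of prompt inspection, assuming a Python-based check that runs before a prompt is forwarded to a public generative AI app. The patterns and function names are hypothetical and far simpler than a production DLP ruleset:

```python
import re

# Illustrative patterns only – a production DLP policy would combine
# exact-match dictionaries, document fingerprinting and ML classifiers.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# Example: block the prompt and coach the user instead of forwarding it
prompt = "Can you debug this? My key is AKIAABCDEFGHIJKLMNOP"
hits = scan_prompt(prompt)
if hits:
    print(f"Prompt blocked by DLP rules {hits} – see the company AI usage policy")
```

A real deployment would sit inline (for example, in a secure web gateway), combining blocking with the real-time user coaching described above.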

For some organisations, it may make sense to build their own private generative AI tools on publicly available technologies, trained specifically on internal data.

These tools are more secure than publicly available generative AI apps because they run within the organisation’s own environment.
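
As a rough sketch of what this can look like in practice, the snippet below assumes a self-hosted model served behind an OpenAI-compatible API gateway inside the corporate network, a common pattern for open-model serving stacks. The endpoint URL, credential and model name are all hypothetical:

```python
from openai import OpenAI

# Hypothetical internal gateway – many open-model serving stacks expose
# an OpenAI-compatible API, so the standard client can point at it.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # assumed internal endpoint
    api_key="internal-gateway-token",                # placeholder credential
)

# Prompts and responses stay inside the organisation's own environment,
# so sensitive context never reaches a public generative AI app.
response = client.chat.completions.create(
    model="internal-llm",  # placeholder name for the internally hosted model
    messages=[
        {"role": "system", "content": "Answer using approved internal knowledge only."},
        {"role": "user", "content": "Summarise the latest incident-response runbook."},
    ],
)
print(response.choices[0].message.content)
```

Keeping the interface compatible with the public APIs means existing tooling can be redirected to the private deployment with minimal changes.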




Conclusion

Generative AI offers significant opportunities for organisations to increase efficiencies and profitability.

To realise this huge potential, data security risks, such as the growing problem of accidental data leakage by employees, must be mitigated.

A range of approaches can be used to tackle this issue, and fitting them into a wider generative AI governance strategy must be a core focus for organisations as AI adoption accelerates.

