5 actionable steps to prevent GenAI data leaks without completely blocking AI usage


Oct 01, 2024 · The Hacker News · Generative AI / Data Protection

Since its inception, generative AI has revolutionized business productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning and customer engagement. However, this business agility comes with significant risks, particularly the potential for loss of sensitive data. As companies try to balance productivity gains with security concerns, many have been forced to choose between unrestricted use of GenAI or a complete ban.

A new e-guide from LayerX titled “5 Actionable Steps to Prevent Data Leakage Using Generative AI Tools” aims to help companies overcome the challenges of GenAI use in the workplace. The guide provides practical steps for security managers to protect sensitive corporate data while leveraging the productivity benefits of GenAI tools like ChatGPT. This approach is intended to enable companies to find the right balance between innovation and security.

Why worry about ChatGPT?

The e-guide addresses growing concerns that unrestricted GenAI use could lead to inadvertent data disclosure. This was evident, for example, in incidents such as the Samsung data leak. In this case, employees accidentally exposed proprietary code while using ChatGPT, resulting in a complete ban on GenAI tools within the company. Such incidents highlight the need for companies to develop robust policies and controls to mitigate the risks associated with GenAI.

Our understanding of the risk is not just anecdotal. According to a study by LayerX Security:

  • 15% of enterprise users have inserted data into GenAI tools.
  • 6% of enterprise users have inserted sensitive data, such as source code, PII, or sensitive organizational information, into GenAI tools.
  • Among the top 5% most intensive GenAI users, a full 50% work in research and development.
  • Source code is the primary type of sensitive data exposed, accounting for 31% of all exposed data.

Important steps for security managers

What can security managers do to enable the use of GenAI without exposing the organization to the risk of data exfiltration? Key highlights of the e-guide include the following steps:

  1. Mapping AI usage in the organization – Start by understanding what you need to protect. Determine who is using GenAI tools, how and for what purposes, and what types of data are being disclosed. This will be the basis of an effective risk management strategy.
  2. Restricting personal accounts – Next, take advantage of the protection that GenAI tools provide. GenAI corporate accounts offer built-in security measures that can significantly reduce the risk of sensitive data loss. These include restrictions on data use for training purposes, data retention restrictions, account sharing restrictions, anonymization, and more. Note that this requires enforcing the use of non-personal accounts when using GenAI (which requires a proprietary tool).
  3. Prompting users – The third step is to harness the power of your own employees. Simple reminder messages, displayed when GenAI tools are used, raise employees’ awareness of company policies and the potential consequences of their actions. This can effectively reduce risky behavior.
  4. Blocking entry of sensitive information – Now is the time to introduce advanced technology. Implement automated controls that block the entry of large amounts of sensitive data into GenAI tools. This is particularly effective at preventing employees from sharing source code, customer information, PII, financial data, and more.
  5. Limiting GenAI browser extensions – Finally, address the risk posed by browser extensions. Automatically manage and classify AI browser extensions by risk level to prevent unauthorized access to sensitive corporate data.
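To make step 4 concrete, here is a minimal sketch of what a pre-submission check for sensitive data might look like. This is purely illustrative: the pattern names and detection rules are hypothetical examples, not LayerX's actual detection logic, and a production DLP control would use far more robust classification than simple regular expressions.

```python
import re

# Illustrative patterns for a few common sensitive-data types.
# These are simplified examples; real detection logic is far more nuanced.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_submission(prompt: str) -> bool:
    """Block the prompt from reaching the GenAI tool if anything matches."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True
```

In practice such a check would run in the browser or at a gateway before the prompt ever leaves the organization, and blocked submissions could double as the user-awareness prompts described in step 3.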

To fully realize the productivity benefits of generative AI, companies must find a balance between productivity and security. Therefore, GenAI security cannot be a binary decision about whether to allow or block all AI activities. Rather, a more nuanced and fine-tuned approach will enable companies to reap the business benefits without putting the organization at risk. For security managers, this is the path to becoming an important business partner and enabler.

Download the guide to learn how you can easily implement these steps right away.

Did you find this article interesting? This article is a contribution from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we publish.