Embrace GenAI Without Exposing Your Data To Risk

Forbes Technology Council

Etay Maor is Senior Director, Security Strategy for Cato Networks, a developer of advanced cloud-native cybersecurity technologies.

2023 was undeniably the year of generative AI (GenAI), and ChatGPT dominated the headlines. OpenAI's chatbot reportedly drew in a staggering 100 million users weekly. Its widespread adoption and broad range of applications significantly transformed how people work. From code review assistance to streamlining customer service interactions, it's becoming a staple for organizations and their employees. Yet many wonder if it's safe.

Despite its promising outlook, security and privacy concerns run rampant alongside conspiracy theories—so much so that a group of AI experts and industry executives, including Elon Musk, called for a six-month pause in AI development last year. Some fears, such as OpenAI misusing user data for malicious purposes, may be unfounded. However, genuine risks do exist for enterprises and their employees using GenAI large language models.

Security And Privacy Risks Of GenAI

Last year, Samsung banned ChatGPT and other GenAI chatbots after discovering that employees had fed lines of confidential code to ChatGPT for bug fixes and code optimization. At that point, OpenAI did not offer any feature to disable chat history and training, which meant the proprietary code became part of its training data.

The risk of employees inadvertently revealing confidential data is just one concern. Each new GenAI tool or application added to the enterprise ecosystem expands the attack surface, making it more susceptible to third-party attacks. Weak authentication and insufficient privacy and control settings can result in unauthorized access and exposure of user data and conversations, including work-related chats. For instance, ChatGPT leaked conversations with details of a user’s proposals and presentations after an adversary hacked into several user accounts.

Similarly, vulnerabilities in the application or its integrations could expose sensitive information. This happened last year when a bug in an open-source library used by ChatGPT exposed some paid subscribers' personal information to other users. Such incidents can result in compliance failures and hefty penalties. The risk of third-party vulnerabilities is even greater with lesser-known and unauthorized GenAI tools, especially those outside IT's purview, otherwise known as shadow IT.

Enabling Secure Access To GenAI Platforms

Outright bans on GenAI not only hinder productivity and innovation but can backfire, as users resort to workarounds and alternative applications that can prove even riskier. I believe the most appropriate way forward is to enable secure access to these AI platforms. Here are some key security controls to consider.

1. Control usage through allowlists for apps and users.

Organizations need to control which applications are allowed and which users or user groups can access them. They can achieve this by configuring an allowlist and tenant controls via solutions like a next-generation firewall, identity and access management (IAM) or a cloud access security broker (CASB). This is a classic defense-in-depth approach tailored to AI/LLM usage risk.

Using any of these solutions, admins can specify which tools in the GenAI category are allowed, blocking all GenAI tools except those authorized for use. Then, for tenant control, they can configure which accounts can access each application.

For instance, organizations with ChatGPT Enterprise subscriptions can block users from accessing ChatGPT's free tier with personal accounts, allowing only corporate accounts that carry the enterprise tier's default security and data-control policies.
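As a rough illustration, the decision logic behind such a policy might look like the Python sketch below. This is a minimal sketch under stated assumptions: the domain list, group names and the tenant-restriction header are all hypothetical, since next-generation firewalls, CASBs and SASE platforms implement this as vendor configuration rather than custom code.

```python
# A minimal sketch of allowlist and tenant-control logic as it might run
# inside a forward proxy or policy engine. Every name below is illustrative;
# real products expose this as configuration, not code.

GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
SANCTIONED_APPS = {"chat.openai.com"}         # approved GenAI tools only
CLEARED_GROUPS = {"engineering", "support"}   # user groups allowed GenAI access

# Hypothetical header name: consult your provider's documentation for the
# actual tenant-restriction mechanism (often an injected HTTP header).
TENANT_HEADER = "X-Allowed-Tenant"
CORPORATE_TENANT = "acme-enterprise"

def apply_policy(host: str, user_group: str, headers: dict[str, str]) -> str:
    """Return 'allow' or 'block' for an outbound request, mutating headers."""
    if host not in GENAI_DOMAINS:
        return "allow"  # not a GenAI app; other policy categories apply
    if host not in SANCTIONED_APPS:
        return "block"  # unsanctioned GenAI tool (shadow IT)
    if user_group not in CLEARED_GROUPS:
        return "block"  # user not cleared for GenAI access
    # Pin the session to the corporate tenant so personal accounts can't log in.
    headers[TENANT_HEADER] = CORPORATE_TENANT
    return "allow"

headers: dict[str, str] = {}
print(apply_policy("claude.ai", "engineering", headers))        # block
print(apply_policy("chat.openai.com", "finance", headers))      # block
print(apply_policy("chat.openai.com", "engineering", headers))  # allow
```

The ordering matters: unsanctioned GenAI domains are blocked outright, which curbs shadow IT, and even sanctioned apps are pinned to the corporate tenant so personal accounts cannot slip through.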

2. Control data sent to GenAI apps.

Organizations can establish clear data governance policies to define rules and guidelines for handling data, including what types of data can be shared with GenAI apps and under what circumstances. Educating users about data security best practices and the importance of safeguarding sensitive information is also crucial and can help reduce the risk of unintentional data leakage to GenAI apps.

In addition, organizations can use data governance platforms and data loss prevention (DLP) tools to proactively control the type of data users can share with GenAI apps. Advanced AI- and ML-based DLP classifiers allow organizations to enforce policies that block specific text inputs and documents, such as financial or legal files containing sensitive information, from being shared in violation of the company's privacy policies.
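To make this concrete, here is a simplified Python sketch of inline prompt inspection. The regex patterns are illustrative stand-ins: production DLP engines rely on trained classifiers, exact-data matching and document parsing, not a handful of regular expressions.

```python
import re

# Simplified sketch of inline DLP inspection on a GenAI prompt.
# The pattern names and rules below are illustrative assumptions.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "secret_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def classify(prompt: str) -> list[str]:
    """Return the names of sensitive-data classes detected in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block the upload if any sensitive class is found; otherwise allow it."""
    findings = classify(prompt)
    if findings:
        print(f"Blocked prompt: matched DLP classes {findings}")
        return False
    return True

allow_prompt("Summarize our public Q3 roadmap")            # allowed
allow_prompt("Why does card 4111 1111 1111 1111 fail?")    # blocked
```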

3. Enforce app-specific controls.

Different GenAI apps have introduced their own data privacy and security features. For example, ChatGPT retains users' chat history and uses that data for model training by default. However, users can explicitly opt out of data retention and model training.

To ensure corporate users have adequate privacy and security settings, admins can utilize granular controls offered by AI and ML-based tools like next-gen firewalls, IAM and CASBs. These tools can analyze the security and privacy settings of an account in real time to allow or block connections accordingly.
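As a sketch of what such a real-time check might evaluate, consider the Python fragment below. It assumes the inspection layer can surface session attributes like account type and training opt-out status (via API integration or traffic inspection); the attribute names and the GenAISession shape are hypothetical.

```python
from dataclasses import dataclass

# Sketch of a per-app policy check a CASB-style tool might run inline.
# The session attributes are assumptions about what the inspection
# layer can observe, not a real product's data model.

@dataclass
class GenAISession:
    app: str
    account_type: str       # "enterprise" or "personal"
    training_opt_out: bool  # has model training been disabled?

def evaluate(session: GenAISession) -> str:
    """Allow the connection only if the account's settings meet policy."""
    if session.account_type != "enterprise":
        return "block: personal accounts lack corporate data controls"
    if not session.training_opt_out:
        return "block: chats could be used for model training"
    return "allow"

print(evaluate(GenAISession("chatgpt", "personal", False)))
print(evaluate(GenAISession("chatgpt", "enterprise", True)))
```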

Striking The Right Balance Between GenAI Use And Security

Currently, I believe much of the perceived risk associated with GenAI is excessive fear-mongering. However, legitimate concerns and risks still exist when using AI tools in corporate settings. To ensure enterprises don't lag on the productivity benefits of GenAI, it is important to deploy comprehensive security controls to keep sensitive and proprietary data secure and private.

Organizations can choose to deploy solutions like DLP, CASB and next-generation firewalls individually, or adopt a cloud-native architecture that converges these security controls into a single platform, such as secure access service edge (SASE). SASE brings these advanced AI-based security capabilities, and more, together under a single dashboard. It also unifies security with networking, giving more visibility into GenAI usage across applications and enabling accurate risk analysis.

AI undoubtedly adds a new capability to organizations; however, keep in mind that when introducing AI to your company, you are also introducing your company to the AI. This is a two-way street that needs to be monitored, especially since we are in the early days and many people do not realize the risks.

GenAI capabilities, and the threat landscape around them, often evolve faster than organizations can keep up with through manual methods. Leveraging a comprehensive suite of AI-powered security tools can help ensure security measures and controls remain relevant despite rapid advancements.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

