A Practical Guide to Safely Deploying Gen AI

Varonis and Jeff Pollard, Forrester Security and Risk Analyst, share insights into how to securely integrate generative AI into your organization.
Megan Garza
3 min read
Last updated June 7, 2024

As the popularity of AI — especially generative AI — continues to grow, organizations face new challenges in protecting their data and managing risk.

We recently chatted with Forrester Security and Risk Analyst Jeff Pollard about the concerns associated with safely deploying AI-powered tools at your organization. Watch the full video here or read on for all the details.

Understanding the use cases of AI

Gen AI's natural language interface is helpful for end users, but as Jeff pointed out, it can also expose secrets and private information. In the past, threat actors had to break into a system with code; with generative AI, an attacker can surface sensitive information simply by asking a question.

Security is essential to the business bottom line, and AI makes this more urgent than ever. This guide will show you how to deploy gen AI securely at your organization so you can boost efficiency while reducing risk.

To ensure the safe rollout of gen AI at your org, it’s crucial to understand how your team plans to use it. Are developers analyzing code? Is your marketing team searching for email tips?

Every use case has a different risk profile, and by understanding what those use cases are, you can identify any vulnerabilities and how malicious actors could exploit them.

Then, you can create guardrails around using gen AI at your org and set limits on who uses it and how.
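
To make that concrete, here is a minimal sketch of what codified guardrails could look like, written in Python. The team names, approved tools, and data classifications are hypothetical placeholders rather than a recommended scheme; the point is simply that limits on who uses gen AI, and with what data, can be expressed as an auditable policy rather than tribal knowledge.

```python
# Minimal sketch of per-team gen AI guardrails.
# Team names, tools, and classifications are hypothetical examples.

APPROVED_USE = {
    "engineering": {"tools": {"code-assistant"}, "max_data_class": "internal"},
    "marketing":   {"tools": {"writing-assistant"}, "max_data_class": "public"},
}

# Classifications ordered from least to most sensitive.
SENSITIVITY = ["public", "internal", "confidential", "restricted"]

def is_allowed(team: str, tool: str, data_class: str) -> bool:
    """Return True if this team may use this tool with data of this classification."""
    policy = APPROVED_USE.get(team)
    if policy is None or tool not in policy["tools"]:
        return False
    return SENSITIVITY.index(data_class) <= SENSITIVITY.index(policy["max_data_class"])

print(is_allowed("marketing", "writing-assistant", "confidential"))  # False
print(is_allowed("engineering", "code-assistant", "internal"))       # True
```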

AI’s impact on business objectives

AI's impact goes beyond just technological progress; it's connected to business goals like revenue growth and cost reduction. Security leaders must align security measures to these goals to effectively balance innovation with risk reduction.

The CISO's role is also changing. These leaders play a vital role in shaping how orgs assess risk both internally and externally as customers and suppliers demand better security from the companies they do business with. A poor security posture can impact revenue because customers will avoid doing business with organizations that put their data at risk.

Evangelizing AI

Educating security professionals and end-users is essential to ensure secure AI use.

Rather than banning these tools outright, teach your employees how to use AI properly and create secure gen AI policies. On top of policies and education, orgs need to focus on reducing their blast radius — all the data a rogue employee or attacker can access.

Gen AI tools rely on a user's existing permissions to determine which files, emails, and other content are used to generate responses. If those permissions are excessive, there's a real risk that gen AI copilots will surface sensitive data. Nearly 99% of permissions go unused, and more than half of those are high-risk, creating a massive blast radius and magnifying the damage a breach could cause.
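
To see why permissions matter so much here, consider a stripped-down sketch of the retrieval step a copilot-style tool performs before answering. The users, documents, and access lists below are hypothetical, but they illustrate the core issue: the permission, not the prompt, is what gates access.

```python
# Sketch of permission-based retrieval: a copilot can only ground its answers
# in documents the requesting user can already read. Contents and ACLs are
# hypothetical examples.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    readers: set[str]  # users or groups with read access

CORPUS = [
    Document("Q3 board deck", readers={"cfo", "ceo"}),
    Document("All-hands notes", readers={"everyone"}),
    Document("M&A target list", readers={"everyone"}),  # over-shared by mistake
]

def retrievable_for(user: str) -> list[Document]:
    """Documents a copilot could use when answering this user's prompt."""
    return [d for d in CORPUS if user in d.readers or "everyone" in d.readers]

# An intern's question can surface the over-shared M&A list, because the
# excessive permission, not the prompt, is what exposes it.
for doc in retrievable_for("intern"):
    print(doc.title)
```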

Security practitioners must work with and coach department leaders to adopt AI responsibly — and that means right-sizing permissions and securing data so that employees can only access the information they need to do their jobs.
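
A simple starting point for right-sizing is flagging grants that haven't been exercised within a review window. The sketch below assumes a hypothetical export of access records; in practice, this data would come from your audit logs or a data security platform.

```python
# Sketch of flagging stale permissions for review. The access records and
# review window are hypothetical examples.

from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)
TODAY = date(2024, 6, 1)

# (user, resource, date the permission was last actually used; None = never)
grants = [
    ("alice", "finance-share", date(2024, 5, 20)),
    ("bob",   "finance-share", None),
    ("carol", "hr-folder",     date(2023, 11, 2)),
]

stale = [
    (user, resource)
    for user, resource, last_used in grants
    if last_used is None or TODAY - last_used > REVIEW_WINDOW
]

for user, resource in stale:
    print(f"Review or revoke: {user} -> {resource}")
```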

Get started with our Copilot Readiness Assessment.

Third-party apps and AI

As third-party vendors roll out AI features within their applications and activate them by default, security leaders may not notice these changes and may struggle to identify which apps pose a security risk.

CISOs need to know the data third-party apps are accessing and how they use and collect it. They also need to identify whether and how these apps store, process, or transmit data.

Because these third-party updates are automatic, an out-of-band review can help identify gen AI features that may have been added after the app was implemented.
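
One lightweight way to run that out-of-band review is to scan an exported inventory of third-party apps for newly enabled AI features or unusually broad data scopes. The inventory format and field names below are assumptions for illustration only.

```python
# Sketch of an out-of-band review of third-party apps. The app inventory,
# field names, and scope names are hypothetical examples.

third_party_apps = [
    {"name": "CRM Suite", "ai_feature_enabled": True,  "scopes": ["contacts.read", "files.read.all"]},
    {"name": "HR Portal", "ai_feature_enabled": False, "scopes": ["employees.read"]},
    {"name": "Notes App", "ai_feature_enabled": True,  "scopes": ["notes.read"]},
]

# Scopes considered broad enough to warrant a closer look.
BROAD_SCOPES = {"files.read.all", "mail.read.all", "directory.read.all"}

def needs_review(app: dict) -> bool:
    """Flag apps that switched on AI features or hold broad data scopes."""
    return app["ai_feature_enabled"] or bool(BROAD_SCOPES & set(app["scopes"]))

for app in third_party_apps:
    if needs_review(app):
        print(f"Review: {app['name']} (scopes: {', '.join(app['scopes'])})")
```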

The ROI of AI

Embracing AI brings both opportunities and challenges, requiring a balance between increased efficiency and risk management. However, even when you factor in the training requirements, additional security controls, and computing resources, AI's potential return on investment justifies its use.

According to Forrester’s Total Economic Impact™ of Copilot for Microsoft 365, implementing the generative AI tool led to “increased revenues, lowered internal and external operating costs, and improved employee experience and company culture.” 

Generative AI can be a productivity game-changer for your organization, but security leaders who are not exploring safe ways to deploy AI will be caught flat-footed.

Security leaders who actively champion AI have a unique chance to drive innovation with a security mindset.

Companies must also improve their data security posture before, during, and after AI deployments. Otherwise, they're effectively asking a gen AI solution to safeguard data they're unprepared to handle or manage themselves.

The future of AI

AI technology is advancing faster than its security. As the adoption of gen AI continues to grow, it’s crucial that you have a holistic approach to data security and specific controls for the copilots themselves.

Varonis helps organizations safely deploy generative AI and protects sensitive data from leaks by giving you complete oversight and management of AI tools and tasks. See how our cloud-native solution can cover all your AI security needs with a 30-minute demo.

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

1. Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.

2. See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.

3. Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.

