Breaks in the Cloud: Protecting Against Top LLM Risks

Understand the risk of Large Language Models (LLMs) to your cloud security and why it's important to be cautious when adopting new technologies.
Megan Garza
3 min read
Last updated September 30, 2024

Modern technology relies heavily on the cloud, and as companies migrate to SaaS, they increasingly use advanced features and tools like generative AI.

Varonis Security Architect Brock Bauer and Director of Product Marketing Ellen Wilson recently discussed AI trends and explained how large language models (LLMs) help users manage vast amounts of information quickly.

However, with this enhanced productivity comes increased risk. As organizations transition to the cloud, they face a growing blast radius.

Continue reading for a detailed recap of the live session and learn why complete data visibility is crucial for safeguarding sensitive information.

Harnessing LLMs in the cloud

The cloud serves as the foundation for all cutting-edge technology, and cloud adoption has become essential.

The question for security teams has shifted from whether to move to SaaS to how and when to migrate.

“The cloud is the breeding ground for all new technologies these days,” Brock said.

As organizations transition to the cloud, they look to implement new features and toolsets, with AI generating the biggest buzz.

Everyone's making their way to the cloud to use new features and tool sets, and the hot one now is AI.

Brock Bauer, Varonis Security Architect


Many new and existing products now feature AI capabilities, revolutionizing how we interpret data. LLMs allow us to interact with data and receive information in natural language and are integrated into new copilots provided by Microsoft, Salesforce, and many other applications.

However, Brock said, the platforms, people, and tools that handle our data don't always prioritize the organization's interests. He warned attendees to be cautious when adopting new technologies like LLMs.

“We can increase productivity where AI can be asked to summarize emails and meetings, parse huge datasets or file systems, or even author email replies,” Brock said. “But we’ve got to understand the risk of putting these technologies in our users' hands.”

How AI adds to the challenges of data security

Attacks come from many different directions with all types of motivations, methods, and techniques. Regardless of the nature of the attack or breach, though, they all share a common goal: targeting your data.

Unfortunately, attackers have an advantage where they can try to breach your data a million times, and they only have to be right once, but we have to be successful all the time.

Ellen Wilson, Varonis Director of Product Marketing


Ellen pointed out that AI adoption has added another layer of complexity to the already intricate data security discipline.

“We see that as organizations move to the cloud, they have a growing blast radius," she said. "Imagine how these new AI copilots can add additional risks of overly exposing data."

Balancing productivity with data privacy

Gartner reports that 42% of IT and security executives are concerned about data privacy regarding AI. Organizations recognize the need to safeguard privacy while maintaining AI functionality but find it challenging to manage both.

“We want to leverage LLM technology,” Brock said. “We want to give the productivity capabilities to our users, but we also need to protect the privacy of the data they're accessing.”

Ensuring regulatory compliance can also present a significant challenge in the field of AI.

Earlier this year, the European Union introduced the Artificial Intelligence Act, the world's first comprehensive AI regulation, with violations resulting in fines of up to €35 million.

“I'm sure we'll see more regulations like this in coming years,” Brock said.

Ellen pointed out that for many companies just trying to keep operations running, implementing AI security measures might take a back seat.

When you're just trying to keep the lights on and not disrupt the business proactively, planning for your organization's AI data security challenges can sound impossible.

Ellen Wilson, Varonis Director of Product Marketing


Protect your org against AI risk

To protect sensitive data from AI-related risks, you must first have complete visibility into your data.

“You need to know who has access, how they got there, who or what is using it, and how sensitive it is,” Brock said.

Knowing which copilot-enabled users or AI accounts can access your data is crucial. This includes having visibility into AI workloads across major data platforms like AWS, Azure, and Snowflake.
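As a simplified illustration of the kind of access audit Brock describes, here is a minimal sketch. The data model and function names are hypothetical, not a Varonis or cloud-provider API; in practice, these records would come from your platforms' permission and audit APIs.

```python
# Hypothetical sketch (not a real product API): given flat permission
# records, list which copilot-enabled identities can reach sensitive data.
from dataclasses import dataclass


@dataclass
class Grant:
    identity: str         # user or AI service account
    resource: str
    copilot_enabled: bool  # identity can feed this data to an AI copilot
    sensitive: bool        # resource holds sensitive data


def copilot_exposure(grants):
    """Return (identity, resource) pairs where a copilot-enabled
    identity has access to a sensitive resource."""
    return [(g.identity, g.resource)
            for g in grants
            if g.copilot_enabled and g.sensitive]


grants = [
    Grant("alice", "hr/salaries.xlsx", copilot_enabled=True, sensitive=True),
    Grant("bob", "marketing/plan.docx", copilot_enabled=True, sensitive=False),
    Grant("svc-ai", "finance/forecast.xlsx", copilot_enabled=True, sensitive=True),
]
print(copilot_exposure(grants))
# flags alice and svc-ai: both can expose sensitive data through a copilot
```

The point of the exercise is that exposure is the intersection of access and sensitivity: a permission is only a copilot risk when it touches data worth protecting.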

There's a new generation of application vulnerabilities spawned by AI, which we've got to protect our data from.

Brock Bauer, Varonis Security Architect


It's also crucial to monitor AI usage and flag AI processes or users that violate policy or behave abnormally.
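One common way to detect abnormal behavior is to compare an identity's current activity against its own historical baseline. The sketch below is purely illustrative (the three-sigma threshold and the counts are assumptions, not a description of how Varonis works):

```python
# Hypothetical behavioral-alerting sketch: flag an identity whose
# AI-driven file accesses today far exceed its historical baseline.
from statistics import mean, stdev


def is_abnormal(history, today, sigmas=3.0):
    """history: past daily access counts; today: today's count.
    Flags counts more than `sigmas` standard deviations above the mean."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    return today > mu + sigmas * max(sd, 1.0)  # floor sd to avoid zero-variance noise


print(is_abnormal([10, 12, 9, 11, 10], 80))  # True: sudden spike in AI activity
print(is_abnormal([10, 12, 9, 11, 10], 13))  # False: within normal range
```

A real detection pipeline would layer policy rules (e.g., sensitive-data access by an AI account) on top of statistical baselines like this one.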

Lastly, automated remediation is essential, allowing you to revoke unsafe and excessive permissions without disrupting business operations.

“This enables you to control and reduce your AI blast radius automatically,” Ellen said.
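A common remediation heuristic is to revoke grants that are never exercised: access that exists on paper but is absent from the audit logs. The following is a minimal sketch of that idea with made-up data, not a description of any vendor's remediation engine:

```python
# Hypothetical least-privilege sketch: grants present in the permission
# model but never observed in access logs are candidates for revocation,
# shrinking the blast radius without touching access that is in use.
def stale_grants(granted, used):
    """granted: set of (identity, resource) pairs with access;
    used: pairs actually observed in access logs.
    Returns the unused grants, sorted for stable output."""
    return sorted(granted - used)


granted = {("alice", "hr/"), ("alice", "finance/"), ("svc-ai", "finance/")}
used = {("alice", "hr/")}
print(stale_grants(granted, used))
# alice's finance/ access and svc-ai's finance/ access were never used
```

Revoking only the unused grants is what makes this kind of remediation safe to automate: active workflows keep their access, while the dormant permissions that inflate the blast radius are removed.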

How Varonis can help

Varonis uses an automated, holistic approach to data security.

Our cloud-native platform assesses your blast radius, automatically addresses threats to ensure continuous least privilege, and alerts you to new threats for quicker action.

“We can detect exactly what Copilot is being used for,” Brock said. “And that's not just the prompts and responses we track, but also the files accessed and even drafted by AI.

“Varonis has had behavioral alerting around data for years, and now that we have LLM processing, we can identify fraudulent or malicious intent of a user, which gives us a leg up to catch AI-related attacks early and stop breaches before they happen.”

Watch Brock and Ellen's full discussion to learn more about LLM risks.

Take the first step toward protecting your sensitive data from AI risks with a free Data Risk Assessment. In less than 24 hours, you'll have a clear view of your risk posture and see where sensitive data is exposed.

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

1. Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.

2. See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.

3. Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.

Try Varonis free.

Get a detailed data risk report based on your company’s data.
Deploys in minutes.

Keep reading

Varonis tackles hundreds of use cases, making it the ultimate platform to stop data breaches and ensure compliance.

Speed Data: The Commoditization of Cybercrime With Matt Radolec
Matt Radolec at Varonis discusses the future of cybersecurity, the rise of ransomware-as-a-service (RaaS), and what security risks keep him up at night.
Women in Tech: The Anatomy of a Female Cybersecurity Leader
Learn more about the powerful women in tech as we look at a breakdown of today's female cybersecurity leaders! 
Speed Data: Behind the Scenes of Cyber Insurance Recovery With Scott Godes
Scott Godes, Insurance Recovery Litigator for Barnes & Thornburg LLP, chats about the importance of cyber insurance, and how data privacy has evolved.
Speed Data: Combating the Cybersecurity Skills Shortage With Bryan Chnowski
Bryan Chnowski, Deputy CISO for Nuvance Health, explains why one of the most significant cybersecurity risks on the horizon is the shortage of workers.