AI At Work: Three Steps To Prepare And Protect Your Business

Discover how your business can prepare for, and protect your sensitive data from, the risks that generative AI presents.
Yaki Faitelson
3 min read
Last updated March 24, 2024

In terms of hype, nothing is hotter than AI right now; blockchain has some weak links, the metaverse isn't singing in this part of the multiverse, and even big data seems small. As the CEO of a leading cybersecurity company, I get asked daily about AI and what it means for data security.

Like most new technologies, generative AI presents both opportunities and risks. AI is already boosting productivity by acting as a virtual assistant for employees. From a risk perspective, however, there are two dimensions to consider—self-inflicted risk and external risk.

Self-inflicted risk occurs when an organization's employees start using AI to suggest content, either through a query or in the context of what they're creating. Unless data is locked down, there's little to prevent AI from analyzing your data estate and revealing your secret road map, financial information or other precious data to all of the wrong people.

To help mitigate this risk, Microsoft recommends securing sensitive data before rolling out its AI assistant, Copilot. One step it suggests taking is "[making] sure your organization has the right information access controls and policies in place."

Unfortunately, getting the right access controls and policies in place proves far more challenging than most organizations realize. This will likely only become more difficult as AI further increases the volume of data we create and must protect.

Without the right controls in place, AI won't know who should see what. Organizations will be exposed, just like they are when they activate enterprise search platforms before locking things down—only much worse. If this happens, employees won't even need to search for content they want to steal or sneak a peek at; AI will gladly expose it for them.
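To make "locking things down" concrete at the simplest level, here is a minimal sketch of an access review for files on disk. This is illustrative only: the function name is my own, and real environments (SharePoint, Google Drive, and similar platforms Copilot draws from) require their permission APIs rather than POSIX mode bits.

```python
import os
import stat

def find_overexposed(root):
    """Walk a directory tree and flag files readable by group or others.

    A crude stand-in for an access review: any file readable beyond its
    owner is treated as "too accessible" and returned for follow-up.
    """
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip entries we cannot stat
            # S_IRGRP / S_IROTH are the group- and world-readable bits
            if mode & (stat.S_IRGRP | stat.S_IROTH):
                flagged.append(path)
    return flagged
```

Even a toy pass like this tends to surface far more broadly readable material than owners expect, which is exactly the exposure an AI assistant would happily index.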

How attackers are leveraging AI

External risk will continue to increase as attackers learn to use AI. Unfortunately, they've already started. WormGPT and FraudGPT use large language models (LLMs) to help attackers craft convincing phishing emails and translate them into other languages.

Attackers now also create fake data sets based on past breaches and other available data; they claim they've stolen data from companies to bolster their reputation as capable attackers or potentially dupe these companies into paying a ransom. Generative AI could increase the volume of such fabricated data and make it harder to tell the difference between a real and a fake breach.

Researchers have already used AI to craft malware as a proof of concept, and we should expect to see AI-generated malware in the wild. Unfortunately, the use of AI will continue to lower the barriers to entry for all kinds of cyber villainy.

These are just some of the risks AI presents—and at the pace this technology is advancing, there will be many more to come. Soon, generative AI may devise new cyber threats all on its own.

Cyber defenders will get an AI boost

Thankfully, AI also presents enormous opportunities for cybersecurity.

AI is excellent at recognizing patterns. By analyzing the right things, AI and machine learning can provide insights about vulnerabilities and unwanted behaviors. When coupled with automation, AI will be able to take care of routine tasks, giving humans more time for the tasks that require their precious attention.

When human intervention is required, AI will help cyber defenders be more efficient by providing insights and speeding up investigations. These uses for AI are imminent, and many more are on the horizon. For example, generative AI could create troves of synthetic data to serve as bait for attackers—making it harder for the bad guys to know whether they've stolen anything valuable while giving defenders and the technologies they rely on more opportunities to catch cyber crooks in their tracks.

Preparing organizations for AI

  1. Conduct a Data Risk Assessment to identify sensitive and overly accessible data before it's surfaced by "friendly AI" or "unfriendly" attacker-run AI. Your data makes AI valuable, and that's what you need to protect. Organizations don't know enough about where their important data is stored or who can—and does—use it.
  2. Lock your data down, especially your critical data. Once organizations can see their data risks during an assessment, they almost always find critical data that's far too accessible, in the wrong places, and used (or unused) in surprising ways. Your employees and partners should have only the information they need to do their jobs and nothing more.
  3. Watch your data. We don't know what new AI techniques attackers will use, but we do know what they'll be using them for—to steal your data. It's never been more important to monitor how humans and applications use data to look for unwanted activity. Credit card companies and banks have been monitoring financial transactions for years to detect financial crime, and everyone with valuable data should be monitoring their data transactions for data-related crimes.
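The monitoring in step 3 can be sketched very simply: compare each user's data-access volume against the population and flag outliers. This is a toy baseline of my own construction, not any vendor's detection logic; real monitoring would model per-user history, time of day, and data sensitivity, among other signals.

```python
from collections import Counter
from statistics import mean, stdev

def flag_unusual_access(events, threshold_sigma=3.0):
    """Flag users whose file-access count is far above the norm.

    `events` is a list of (user, file) tuples, e.g. parsed from audit
    logs. Users whose access count sits more than `threshold_sigma`
    standard deviations above the mean are returned for review.
    """
    counts = Counter(user for user, _file in events)
    values = list(counts.values())
    if len(values) < 2:
        return []  # not enough users to establish a baseline
    mu, sigma = mean(values), stdev(values)
    return [user for user, n in counts.items()
            if sigma > 0 and (n - mu) / sigma > threshold_sigma]
```

A user who suddenly touches hundreds of files while colleagues touch a handful stands out immediately under even this crude test; that is the "data transaction" monitoring the credit-card analogy points at.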

While some newer, trending technologies peak and slide into obsolescence, AI will almost certainly outlast the hype. If your data isn't locked down, AI (friendly or otherwise) could make a data breach more likely. As far as we know, not even AI can un-breach data, so protect your data first to ensure AI works for you rather than against you.

This article first appeared on Forbes.

