Rewards and Risks: What Generative AI Means for Security

As AI has grown in popularity, concerns are being raised about the risks involved with using the technology. Learn the rewards and risks of using generative AI.
Lexi Croisdale
8 min read
Last updated April 11, 2024

From countless news articles and posts on your social media feed to all-new tools built into your favorite software, artificial intelligence is everywhere.  

Although the technology is not new, generative AI created a recent buzz with the November 2022 release of ChatGPT, a chatbot built on a large language model (LLM) that generates responses to user prompts. 

Almost immediately after ChatGPT’s release, similar generative AI tools launched, such as Google’s Bard and Microsoft’s Copilot, and the use of AI to generate content, videos, photography, code, and more has spread like wildfire.  

As AI has grown in popularity, so have concerns about the risks of using the technology. Cybercriminals have already found ways to misuse AI tools, including platforms like WormGPT, an AI model trained on malware-creation data and used for ill intent, such as generating malicious code. 

In this blog, we’ll examine:  

  • The benefits generative AI can bring companies today 
  • What risks to be cautious of when using AI 
  • How you can confidently navigate the AI playing field  

We’ll highlight the biggest rewards of using generative AI and share what to watch out for. With the use of generative AI tools increasing and the average cost of a data breach totaling $4.35M in 2022, there is no better time to ensure your organization is protected.  

Generative AI Security: Preventing Microsoft Copilot Data Exposure
  • See real-world examples of how generative AI can access and create sensitive data.
  • Learn what data security risks generative AI can bring to companies.
  • Understand how to prepare your Microsoft 365 environment for a safe Copilot rollout.

Why generative AI security matters now  

Artificial intelligence dates back to the 1960s with the creation of the first AI chatbot, ELIZA, developed by Joseph Weizenbaum. So why is generative AI so popular now, more than 50 years later?  

The introduction of ChatGPT in late 2022 accelerated the development of generative AI and gave the world access to the powerful tool.  

“What ChatGPT has really done is commoditized AI and made it available to more and more people. Essentially, putting it on a search engine front end just means that more and more people can use it without understanding what the underlying technology is,” said Thomas Cock, a Security Architect on the Varonis Incident Response Team who presented a webinar on ChatGPT in early 2023. 

With many software corporations developing their own AI programs, security teams may be caught off guard when these tools are released and might not be aware of how to combat the risks they present.  

Microsoft Copilot, which is currently in an early-access phase, has the benefit of learning your organization’s data in addition to its LLM design. Some use cases include Copilot joining your Teams meetings and taking notes in real time, triaging emails in Outlook and drafting replies, and even analyzing raw data in Excel for you. 

Varonis’ Rob Sobers and Mike Thompson presented a deep dive on generative AI and how Copilot’s security model works, highlighting the good and the bad to help security teams understand the tool before it’s released.

Copilot is being called the most powerful productivity tool on the planet, and if you've ever used gen AI tools, you probably can see why it's being called that. Imagine having a little ChatGPT that's built into all of your Office apps like Word, PowerPoint, Excel, and Microsoft Teams.
Rob Sobers, Chief Marketing Officer at Varonis

Enhancing security with generative AI tools

In addition to Copilot’s abilities, there are several aspects of gen AI tools that security teams can benefit from, including enhancing cybersecurity operations, threat detection, and defense mechanisms.  

Other beneficial uses of generative AI include: 

  • Blue team defenders: Just as a threat actor may use AI tools for harm, businesses can use them for good. Thomas shared how ChatGPT has simplified ways users can check malicious code, detect specific vulnerabilities, and summarize outputs almost instantly.  
  • Malware analysis: Generative AI can assist in generating variants of known malware samples, aiding cybersecurity professionals in creating more comprehensive malware detection and analysis systems. 
  • Deception and honeypots: Generative AI can help create realistic decoy systems or honeypots that appear enticing to attackers. This allows security teams to monitor and analyze attack techniques, gather threat intelligence, and divert attackers away from real assets. 
  • Automated response generation: When an attack is detected, generative AI can assist in generating automated responses to mitigate the threat. This can include generating firewall rules, deploying countermeasures, and isolating compromised systems. It can help save time for analysts responding to the threats as well.  
  • Adaptive security measures: Generative AI can aid in developing security mechanisms that adapt to evolving threats. By continuously learning from new attack techniques, these systems can evolve and improve their defense strategies over time.  
  • Visualizing attacks: Generative AI can assist in visualizing complex attack patterns and behaviors, making it easier for security analysts to understand how attacks are executed and identify patterns that might not be immediately apparent. 
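To make the deception-and-honeypots idea above concrete, here is a minimal honeytoken sketch in Python. The AWS-style key format and the decoy file name are illustrative assumptions, not part of any particular product; the point is simply that a planted credential maps to nothing real, so any use of it signals an intruder.

```python
import secrets

# Minimal honeytoken sketch: plant decoy "credentials" that look real but
# map to nothing, then treat any use of them as a high-fidelity alert.
# The AKIA-style key format and decoy names are illustrative only.

PLANTED = {}  # decoy access key -> name of the bait file it was planted in

def make_honeytoken():
    """Generate a realistic-looking but fake AWS-style credential pair."""
    return {
        "aws_access_key_id": "AKIA" + secrets.token_hex(8).upper(),
        "aws_secret_access_key": secrets.token_urlsafe(30),
    }

def plant(bait_name):
    """Create a decoy credential and remember where it was planted."""
    token = make_honeytoken()
    PLANTED[token["aws_access_key_id"]] = bait_name
    return token

def check_credential_use(access_key_id):
    """Call this from auth logs: a hit on a planted key means a breach."""
    return PLANTED.get(access_key_id)

decoy = plant("finance-share/backup-creds.txt")
assert check_credential_use(decoy["aws_access_key_id"]) == "finance-share/backup-creds.txt"
```

Because a planted key is never used legitimately, any match against it in authentication logs can be alerted on with near-zero false positives.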

Risks and challenges of generative AI in cybersecurity 

There are two sides to every story. While generative AI offers many benefits, there are also challenges and risks associated with the technology. 

Generative AI introduces several security risks that need to be carefully considered when implementing and using the technology.  

According to research conducted by Forrester, security is a top hurdle for companies adopting AI, with 64% of respondents reporting they don’t know how to evaluate the security of generative AI tools. 

One of the top concerns with Microsoft Copilot is that its security model inherits a user’s permissions: the tool can access all of the files and information that the user can. The problem is that most users in an organization already have too much access to information they shouldn’t. 

“One thing that every organization has in common is this huge spike in organization-wide access,” Mike said. “This is really the biggest risk that we think goes unaddressed for most organizations and is what will most directly translate to risk with Copilot, because that's what it's leveraging: the permissions as defined through SharePoint and OneDrive. It's your responsibility to enforce the least privilege model internally, and you know we've given you the model to do that. But how many people are really doing that effectively?” 

Near the end of Rob and Mike’s Microsoft Copilot breakdown, 76% of attendees said that while they are concerned about the risks of using generative AI tools, they still want to explore them. Without proper training or proactive security measures in place, companies run the risk of their crucial information being shared with these tools and, potentially, the entire internet.  

As the adoption of AI tools grows, humans may get lazier and over-trust AI to do security checks they should be doing themselves. For example, an employee could ask Microsoft Copilot to generate a proposal from existing documents and meeting notes, saving hours of work. They might skim the result and think it’s fine, but without a thorough review, sensitive information from the original documentation could sneak its way in.  

Aside from the internal security concerns, threat actors will use AI to write malicious code, find vulnerabilities, and launch large-scale campaigns.  

Attackers will also use AI to generate fake data sets and use them to try to extort businesses (or at a minimum, waste their time). 

Attackers are going to get good at prompt engineering instead of learning PowerShell or Python. If they know they can compromise a user and that they'll have access to an AI tool, why not get better at prompt engineering?
Rob Sobers, Chief Marketing Officer at Varonis

Other security concerns and risks associated with generative AI include: 

  • Cyberattack campaigns on demand: Attackers can harness generative AI to automate the creation of malware, phishing campaigns, or other cyber threats, making it easier to scale and launch attacks. In Thomas’ presentation on ChatGPT, he shared an example of how ChatGPT can personalize an email to appeal to Elon Musk about investing in X, formerly known as Twitter. Including information about the target, in this case Elon Musk, in the prompt can help threat actors write messages that are more appealing and more likely to result in users taking action. AI tools could also be prompted with information such as age, gender, education, company information, and more.   
  • No foolproofing: AI tools also run the risk of being manipulated to produce incorrect or malicious outputs. Some AI tools have ethical guardrails in place to help combat improper use, but threat actors have found ways around them.  
  • Leaking sensitive information: Generative AI models learn from large datasets, which might contain sensitive data, depending on what information is shared. If not properly handled, there's a risk of inadvertently revealing confidential information through generated outputs. Many AI tools also retain what users enter, making your sensitive data accessible to anyone who gains access to your account on those tools.  
  • Intellectual property theft: Generative models often pull in a massive amount of publicly available information, including exposed proprietary data. There's a real risk that generative AI could infringe upon others’ intellectual property rights and be subject to lawsuits. For example, image-based AI tools have been reproducing Getty’s watermark on generated images because the models were trained on Getty’s wealth of public photos. Additionally, there is a risk that your own intellectual property could end up in AI tools if it isn’t secured.  
  • Identity risk and deepfakes: Generative AI can be used to create convincing fake images, videos, or audio clips, leading to identity theft, impersonation, and the creation of deepfake content that can spread misinformation. The tools can also make phishing campaigns seem more human and appeal to their target. An image of the Pope wearing a Balenciaga jacket went viral before it was revealed that the image was created using AI tools, proving that AI-generated imagery and deepfake videos are more believable than ever.  

ChatGPT, in particular, is designed to create believable, human-like interactions, making it a perfect tool for phishing campaigns. Threat actors have also used the LLM’s popularity to package malware into fake applications, a tactic that flourished during ChatGPT’s rise, before parent company OpenAI had released an official iOS application.  

“Even if you search the Chrome web store for ChatGPT and include the word ‘official,’ you still get over 1,000 results, and none of these are legitimate, first-party applications,” Thomas said. “Not all of them will be malicious, but you have to wonder why people are paying for you to use the API on their back end. What are they gaining from it? What information are they taking from you?”  

Get started with our world-famous data risk assessment.
Book your free assessment

How to safeguard your organization with generative AI security measures

If you wait until a data breach occurs to start implementing security measures around AI, you’ll be coming from behind. 

One of the first steps leaders can take to address concerns around employees using generative AI is to properly train them on what is acceptable to share and what isn’t.  

Some people may find it harmless to include customer data in ChatGPT prompts, for example, but this is exactly the type of action threat actors are hoping your employees take. All it takes is one employee accessing a fake ChatGPT site and entering sensitive information for your company to be at risk.  
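One lightweight, proactive control is to screen prompts before they ever leave the company. The sketch below is a rough illustration, not a complete DLP solution: real classification needs far more than a few regexes, and the patterns shown (email, SSN-style, card-style) are assumptions chosen for the example.

```python
import re

# Illustrative prompt-screening filter: redact obvious sensitive patterns
# before a prompt is sent to an external AI tool. A real deployment would
# use proper data classification, not just these three sample regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def redact(prompt):
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize my call with jane.doe@example.com, SSN 123-45-6789"))
# → Summarize my call with [EMAIL REDACTED], SSN [SSN REDACTED]
```

Even a coarse filter like this catches the most common accidental slips; stricter setups block the prompt entirely and notify the security team instead of silently redacting.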

As new generative AI tools are released, companies must educate their teams on how to properly use them and stay aware of the security concerns as they are discovered.  

Having a Data Security Platform (DSP) in place can also prevent employees from accessing sensitive data they shouldn’t have in the first place. DSPs can help security teams automatically discover, classify, and label sensitive data, enforce least privilege, and continuously remediate data exposure and misconfigurations. 

If you have good visibility of what people are doing, where the data sits, the sensitivity of that data, and where you have concentrations of sensitive data, it's much easier to reduce that blast radius and make sure only the right people have access.
Thomas Cock, Security Architect at Varonis
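In the spirit of Thomas’ point, a “blast radius” review can start very simply: cross-reference which files hold sensitive data with which are shared org-wide. The sketch below uses invented labels and sharing data to illustrate the idea; in practice, the permissions would come from SharePoint, OneDrive, and similar systems rather than a hardcoded list.

```python
from dataclasses import dataclass, field

# Toy "blast radius" check: list sensitive files that are effectively open
# to the whole organization. All file data below is invented for the example.
@dataclass
class File:
    path: str
    sensitive: bool
    shared_with: set = field(default_factory=set)  # "Everyone" = org-wide link

files = [
    File("finance/q3-forecast.xlsx", sensitive=True, shared_with={"Everyone"}),
    File("eng/design-doc.docx", sensitive=False, shared_with={"Everyone"}),
    File("hr/salaries.csv", sensitive=True, shared_with={"hr-team"}),
]

exposed = [f.path for f in files if f.sensitive and "Everyone" in f.shared_with]
print(exposed)  # → ['finance/q3-forecast.xlsx']
```

Because tools like Copilot answer with whatever the asking user can touch, remediating the exposed list first, by removing org-wide links from sensitive files, directly shrinks what a single compromised or careless account can surface.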

Varonis’ world-class global incident response team also investigates abnormal activity on your behalf. If an employee is accessing an abundance of information they shouldn’t be, you’re alerted instantly. Our automation capabilities help reduce the time to detection, allowing us to respond and investigate quickly. 

Start with a free Data Risk Assessment customized to your organization's needs, regulations, and configurations. Our assessments will give you concrete steps to prioritize and fix major security risks and compliance issues in your data.   

In closing 

There is no denying that AI has taken the world by storm, and the technology will continue to evolve in the years to come.  

Understanding the benefits and risks involved with AI, training staff to use different AI tools properly, and setting parameters around what is and isn’t acceptable to share is the starting point.  

"The big thing for me is the privacy and compliance impacts of AI. It's not going anywhere,” Thomas said. “It's something that we're just going to see more and more of, so it's making sure that you have the policies and procedures around the usage of AI and that you provide guidance for employees on the potential impacts using AI tools can bring.” 

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

  • Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.
  • See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.
  • Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.

