Creating Custom GPTs and Agents That Balance Security and Productivity

Custom GPTs and AI agents compound productivity, but with that comes added risk. Learn about ChatGPT's custom GPTs and how to build them with data security in mind.
4 min read
Last updated July 17, 2025

The marketplace of large language models (LLMs) continues to grow, and each offers its own functionality. Microsoft’s Copilot, for example, allows you to create custom agents to refine your workflow. Similarly, OpenAI’s ChatGPT enables users to create custom GPTs to boost their productivity both in and out of the workplace.

Despite their productivity benefits, these tools connect to your data, and that connection creates security risks: you may inadvertently expose sensitive information if proper safeguards aren’t in place.

In this article, we’ll go over what custom GPTs are and how you can safely implement them in your workflows.

What is a custom GPT?

Generative Pre-trained Transformers (GPTs) are powerful language models that can understand and generate human-like text. In a workplace context, custom GPTs are specialized agents that users can build on top of existing LLMs.

The advantage of using one is that you can add custom instructions and data to tailor your agent for better performance on specific tasks than the default model offers. You can further configure your custom GPT with uploaded files. Combined with its ability to remember information across sessions, a custom GPT can be a powerful tool that understands specific workflows, company jargon and user preferences.

What are the risks of custom GPTs?

Though tailoring your GPT offers a more robust experience, it also heightens the security risks. Custom GPTs suffer from the same pitfalls as base GPTs.

Without guardrails, your custom GPT risks include:

  • Data leakage: The potential to expose sensitive data such as employee PII, contracts and internal strategy docs
  • Model memory misuse: When persistent memory retains sensitive prompts or documents across users or sessions
  • Insider threats: Malicious employees, or well-meaning ones who unknowingly create GPTs that extract or misuse uploaded company data
  • Legal and compliance risks: Improperly secured GPTs may violate compliance laws depending on your industry (HIPAA, GDPR, PCI DSS, etc.)

How to build a secure GPT

Despite these risks, it’s possible to build and use custom GPTs in a way that balances security and productivity.

When using an AI agent, give it only the information it needs to function, and limit how long it retains sensitive data.
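As a minimal sketch of the retention idea above, the snippet below models an agent's memory as a timestamped store that purges anything older than a fixed window. The class and method names (`MemoryStore`, `purge_expired`) are illustrative and not part of any real GPT API; in practice you would apply this policy through your platform's memory controls.

```python
from datetime import datetime, timedelta

class MemoryStore:
    """Hypothetical agent memory with a hard retention window."""

    def __init__(self, retention_days: int):
        self.retention = timedelta(days=retention_days)
        self.entries = []  # list of (timestamp, text) pairs

    def remember(self, text: str, when: datetime):
        self.entries.append((when, text))

    def purge_expired(self, now: datetime) -> int:
        """Drop entries older than the retention window; return how many were removed."""
        before = len(self.entries)
        self.entries = [(t, x) for t, x in self.entries if now - t <= self.retention]
        return before - len(self.entries)
```

The key design choice is that expiry is enforced by the store itself on a schedule, rather than relying on users to remember to delete sensitive entries.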

Here’s what to keep in mind when building a custom GPT:

  • Solidify use case: Start with a clear objective for your custom GPT that doesn’t require sensitive data — then gradually expand its scope as needed
  • Understand instruction design: Avoid including confidential info in the system instructions and be explicit about what not to do. An example instruction could be, “Do not retain customer information.”
  • Memory settings: Disable memory unless it’s necessary; if enabled, audit it regularly
  • Data hygiene: When testing and prompting, use synthetic or anonymized data and avoid uploading raw spreadsheets, CRM records and unredacted reports
  • Permissions: Only allow GPTs to be shared within appropriate org units. For example, an HR GPT should stay siloed with the HR team and a marketing GPT should stay with marketing
  • Audits: Periodically review logs, inputs/outputs and model behavior to determine if data is being shared safely. Audits and continuous monitoring of AI tools can also help test for prompt injection and surface users pushing the AI’s boundaries
  • Education: Create internal guidelines for building and using GPTs in your organization. Require security reviews and approvals for GPTs connected to sensitive systems
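The data hygiene step above can be partly automated. Below is a minimal redaction sketch that masks email addresses, US SSN-shaped strings and phone numbers before text reaches a custom GPT. The patterns are deliberately simplified examples, not production-grade PII detection, and the `redact` helper is a hypothetical name.

```python
import re

# Simplified, illustrative PII patterns -- real deployments need
# far more robust detection (names, addresses, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a bracketed placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running prompts and uploads through a pass like this (or a proper DLP tool) before they leave your environment keeps raw identifiers out of the model's context and memory.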

Below is a best practices checklist that you and your organization can reference when implementing an AI agent:

Custom GPT best practices checklist

  • Is prompt memory off or closely managed?
  • Are instructions sanitized?
  • Is access limited to necessary users?
  • Is the data being used reviewed for sensitivity?
  • Have outputs been tested for hallucination, bias, or policy violations?
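The checklist above can also serve as a lightweight pre-deployment gate. The sketch below encodes it as a review function that reports which items are missing or unanswered; the field names are illustrative placeholders you would adapt to your own approval process.

```python
# Illustrative field names mirroring the checklist above.
CHECKLIST = [
    "memory_managed",          # prompt memory off or closely managed
    "instructions_sanitized",  # no confidential info in system instructions
    "access_limited",          # only necessary users can reach the GPT
    "data_reviewed",           # inputs reviewed for sensitivity
    "outputs_tested",          # tested for hallucination, bias, policy violations
]

def review(answers: dict) -> list:
    """Return checklist items that are missing or answered False."""
    return [item for item in CHECKLIST if not answers.get(item, False)]
```

A GPT would only ship when `review` returns an empty list; anything else goes back to the builder with the failed items named.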

How Varonis secures AI tools for enterprises

Organizations need deep, continuous visibility into where sensitive data lives, who can access it, and how it’s being secured to take full advantage of AI tools without risking their data. However, this isn’t achievable manually; your organization needs a tool that safeguards your data automatically, and Varonis is here to help.

Organizations gain complete visibility into how sensitive data is being used and accessed, and Varonis can automatically remediate overexposed data, ensuring GPTs and their users can only access what they’re authorized to see.

Varonis for ChatGPT Enterprise

Varonis for ChatGPT Enterprise not only reduces the risk of shadow AI in your organization, but also gives your organization the ability to:

Audit GPT interactions

Varonis monitors all ChatGPT Enterprise prompts and responses to ensure sensitive data hasn’t been uploaded. The platform also offers a live and comprehensive feed of AI events that is easily searchable and provides a complete history of files that were uploaded into conversations.

Monitor for abnormal behavior

Varonis’ built-in threat detection capabilities automatically alert security teams to potential threats without any configuration required. So when...

  • Sensitive files are interacted with using ChatGPT on OneDrive
  • Large amounts of files are uploaded to ChatGPT
  • New owners or admins are assigned to ChatGPT

Varonis lets teams know so they can remediate these situations quickly.

Reduce the blast radius

With more and more attackers logging in to enterprise accounts instead of breaking in through older means (phishing, malware, software vulnerabilities, etc.), the importance of securing your accounts has grown significantly.

If a large percentage of employees hold unnecessary access to sensitive data across your estate, the risk of a data breach multiplies compared to a scenario where access is closely monitored and each account can reach only the data it needs to perform its duties.

Varonis constantly detects and classifies sensitive data shared with ChatGPT in prompts and subsequent outputs, maintaining industry-leading accuracy and scale. Varonis helps organizations understand which AI-enabled users and accounts can access sensitive data, and automatically revokes stale or excessive permissions without interrupting business.

Varonis for Microsoft Copilot

Building off its existing Microsoft 365 security suite, Varonis for Microsoft Copilot helps teams securely deploy Microsoft’s AI-powered productivity tool.

Varonis can continuously enforce security policies to limit Copilot’s access to sensitive information, give teams a real-time view of usage data, send notifications for abnormal or suspicious interactions, and more.

To gain actionable recommendations for a secure and successful Copilot deployment, access our free Microsoft 365 Copilot Security Scan.

It’s not all doom and gloom

Custom AI agents can be transformative, but they must be implemented securely from the start.

You should treat custom agents with as much oversight as other internal tools. Collaborating with IT and security teams while empowering and educating employees on using them responsibly ensures the security of sensitive data.

To increase your knowledge about AI security, consider taking Varonis’ free AI Security Fundamentals course. You’ll gain an understanding of different AI solutions, learn the risks associated with each type of AI and learn effective practices to keep your data safe.

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

1. Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.

2. See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.

3. Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.

Try Varonis free.

Get a detailed data risk report based on your company’s data.
Deploys in minutes.

Keep reading

Varonis tackles hundreds of use cases, making it the ultimate platform to stop data breaches and ensure compliance.

From Snowden to Signalgate: What We Still Haven’t Fixed in Cybersecurity
Explore major data breaches, their common thread, and practical solutions for data-centric security.
Varonis Incident Response: Preventing PII Exposure in Box 
Learn how the Varonis Incident Response team prevented PII from being compromised, and what this means for data in cloud collaboration platforms like Box.
How to Prepare for Major Shift in ChatGPT Enterprise Data Access
ChatGPT Enterprise is changing in the way it retrieves data and surfaces information to users in prompt responses. Learn about the new connectors and risks.