I've been watching the security industry react to Anthropic's Project Glasswing announcement, and the responses fall into two camps. One says the sky is falling: AI can now autonomously find and exploit vulnerabilities, and defenders can't keep up. The other says to calm down: context still favors the defender, and the threat is overblown. Expect the same argument to replay with OpenAI's latest model release, and with every frontier release after it.
Both camps are arguing about the door. Let's talk about what's behind it.
What Claude Mythos means
Anthropic has built a model that can autonomously discover zero-day vulnerabilities in major operating systems and browsers. Vulnerabilities that survived decades of human review and millions of automated tests. That's a real capability jump, and it's only a matter of time before other models can do the same.
Critics are right that AI attackers start context-poor. They're probing from the outside. They don't know your architecture. They can't read your data or your proprietary source code.
But attackers don't stay context-poor. The switch from "outside the perimeter" to full situational awareness can flip in an instant.
Beyond the CVE explosion
The security industry's response to Glasswing has focused on CVEs. Patch faster. Reduce attack surface. Build AI into your AppSec program. All solid advice.
What's missing is what happens after a vulnerability is exploited. When a Mythos-class model finds a zero-day in the Linux kernel and chains it to privilege escalation, the exploit isn't the target; it's the foothold. The blast radius — what data an attacker can access, exfiltrate, or poison from that position — is what determines the damage.
The average attacker already dwells inside an environment for weeks before detection, and most identities can access far more data than they need. When AI compresses the time from exploit to breach from days to hours, both of those problems become critical. You can't patch your way out of them.
There are two ways to make a breach survivable. One is to prevent attackers from getting in — the door lock. The other is to make sure that getting in doesn't mean getting everything. In an AI-accelerated threat environment, the second capability isn't optional. It's the one that determines whether a breach becomes a headline.
AI changes the speed, not the fundamentals
Here's what we've learned from building Varonis: the fundamentals of data security don't change when the threat landscape shifts. What changes is the cost of getting them wrong.
Data oversharing has always been dangerous. Excessive permissions have always expanded the blast radius. Unmonitored access has always been how attackers move laterally undetected. AI doesn't invent these problems — it removes the friction that used to slow attackers down while exploiting them.
Today, Mythos focuses on identifying vulnerabilities in code. But the same pattern-recognition capability, applied to identity graphs, permission models, and sensitive data classifications, will eventually surface the toxic combinations that turn a minor foothold into a catastrophic breach. Organizations that haven't addressed their data exposure won't need an attacker to find it for them; the model will do it faster than any human red team ever could.
This is why we've invested so heavily in AI security. Unless you’re starving AI of the data it needs to be useful, the non-deterministic systems inside your organization are creating new attack paths to data you may not even know exists. Every AI agent you deploy has permissions. Every model you connect to training data or a RAG pipeline has a blast radius.
What to do right now
First, know what data is exposed. In most organizations, the answer is shocking: sensitive data accessible to everyone in the company, cloud storage with no expiration on access grants, AI service accounts with admin rights to production databases. Map it now, before an attacker does it for you.
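As a concrete starting point, here's a minimal sketch of that mapping step. The filename patterns and the world-readable permission check are assumptions standing in for a real data classification and exposure engine, not any specific product's behavior:

```python
import os
import re
import stat

# Hypothetical patterns standing in for a real sensitive-data classifier.
SENSITIVE = re.compile(r"(ssn|salary|api[_-]?key|password)", re.IGNORECASE)

def find_exposed(root):
    """Flag files that are world-readable AND look sensitive by name."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip files we can't stat
            world_readable = bool(mode & stat.S_IROTH)
            if world_readable and SENSITIVE.search(name):
                hits.append(path)
    return hits
```

A real scan would classify file contents, not names, and would resolve cloud and SaaS ACLs rather than POSIX bits, but the shape of the question is the same: which sensitive objects are readable by identities that shouldn't have them?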
Second, reduce the blast radius before the breach, not after. If an attacker authenticated as a random employee, what could they reach? The gap between what that identity can reach and what it actually needs is your risk. Continuous least-privilege enforcement is the holy grail.
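One way to answer the "what could they reach" question is to model identities, groups, and resources as a directed graph and walk it. The graph below is entirely hypothetical; in practice it would be fed from directory services, group memberships, and share ACLs:

```python
from collections import deque

def blast_radius(edges, identity):
    """Everything reachable from one identity via group membership
    and access grants, modeled as a directed graph (BFS traversal)."""
    reachable = set()
    queue = deque([identity])
    while queue:
        node = queue.popleft()
        for target in edges.get(node, ()):
            if target not in reachable:
                reachable.add(target)
                queue.append(target)
    return reachable

# Hypothetical example: user -> groups -> shares -> files.
edges = {
    "alice": ["eng-group", "all-staff"],
    "eng-group": ["source-repo"],
    "all-staff": ["hr-share"],       # toxic combination: HR data open to everyone
    "hr-share": ["salaries.xlsx"],
}
```

Running `blast_radius(edges, "alice")` surfaces that an ordinary engineer can reach `salaries.xlsx` through the all-staff group, which is exactly the kind of path least-privilege enforcement should cut.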
Third, instrument for speed. As AI compresses the time from foothold to exfiltration, your detection window must compress too: behavioral baselines, anomaly detection, and automated response operating at AI speed.
Code is where the Glasswing story begins. Data is where the story ends. And your ending is determined long before the CVE is published — by the decisions you make today about access, exposure, and visibility.
Your AI systems are a target, too
One thing the Glasswing conversation hasn't surfaced enough: the AI systems inside your organization are themselves a new attack surface that Mythos-class models will learn to exploit.
Your agents are making decisions about data access. Your RAG pipelines are retrieving documents. Your coding assistants are reading source code. Each one has a permission model designed for speed, not security. Prompt injection, data exfiltration through model outputs, agent impersonation: these aren't theoretical. They're the frontier a Mythos-class attacker will probe once the infrastructure vulnerabilities are patched.
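One concrete mitigation for the RAG case is to enforce the end user's permissions at retrieval time, so the pipeline's service account never widens access. A minimal sketch, with a hypothetical ACL structure:

```python
def authorized_retrieve(candidates, user, acl):
    """Drop any retrieved chunk the *user* (not the service account)
    is not entitled to read, before it ever reaches the model."""
    allowed = acl.get(user, set())
    return [c for c in candidates if c["doc_id"] in allowed]

# Hypothetical ACL and retrieval results.
acl = {"alice": {"eng-wiki", "public-faq"}}
candidates = [
    {"doc_id": "eng-wiki", "text": "deploy runbook"},
    {"doc_id": "hr-share", "text": "salary bands"},  # must be filtered out
]
```

Filtering before generation matters because anything that reaches the model's context can leak through its output, no matter how the prompt is worded.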
The AI attack surface isn't just about software vulnerabilities. It's the data those systems can reach, and the paths an attacker can walk through them. That's the map. Make sure you've seen it before they have.
The door is only going to get harder to defend. Make sure you know what's behind it.
What should I do now?
Below are three ways you can continue your journey to reduce data risk at your company:
Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.
See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.
Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.