All posts by David Gibson

DHS Emergency Directive 19-01: How to Detect DNS Attacks

On January 22, 2019, the United States Department of Homeland Security (DHS) released a warning about a DNS infrastructure hijacking campaign targeting US government agencies.

Let’s dig into the specifics of the DHS warning and look at how you can better protect and monitor your DNS services.

What is a DNS Infrastructure Hijacking Attack?

Emergency Directive 19-01 calls this attack DNS infrastructure hijacking. DHS says that the attackers stole user credentials powerful enough to alter DNS records, then used those credentials to manipulate the DNS records of key servers.

The result? Once attackers had control of DNS, they redirected user traffic so they could intercept and record some or all of it, acting as a “man in the middle.” By intercepting traffic, attackers can capture sensitive information, like username and password pairs.

For example, when victims send an email (to their secure email server for handling), they will connect to the attacker’s server first. From the victim’s perspective, it looks like everything is fine because they send and receive emails as usual. Things are not fine, however, because the attackers can eavesdrop, even when the connection appears encrypted (more on that below).

It’s important to note that manipulating DNS records to hijack connections isn’t new, and it doesn’t even take a compromised admin account to do it. This is because DNS is a very old protocol, with many vulnerabilities (for more on DNS and how it works, check out this primer). Not only can attackers exploit DNS to hijack connections – they can use it for reconnaissance, as a command and control channel, and to covertly exfiltrate data.

This attack is more worrisome than some other DNS hijacking attacks for two reasons:

  1. The attackers appear to have compromised authoritative records (instead of just cached records) on DNS servers “upstream” of the victim’s local DNS servers. This means that even if a victim’s local DNS servers aren’t compromised, users can still be vulnerable. For example, if an attacker changed the authoritative record for a major site to route to their own server, we’d all be going for an interesting ride, even though nothing changed on our computers or local DNS servers.
  2. The attackers used their power over authoritative DNS records to set up fake certificates that appear valid to end users. This means that the victim’s computer negotiates an encrypted connection with the attacker’s servers, and the attacker’s servers decrypt and re-encrypt the traffic, relaying, or “proxying,” it to the destination server. If the victims are using webmail, this setup can even give victims the comforting lock symbol in their browser.

What Do I Need To Do Right Now?

Per Emergency Directive 19-01, government agencies have to:

  • Audit all public DNS records
  • Change all passwords that can access DNS records
  • Implement multi-factor authentication (MFA) for all accounts that have rights to change DNS records
  • Monitor Certificate Transparency logs for new certificates your agency did not request
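The last item on that list can be partially automated. As a minimal sketch, suppose your Certificate Transparency monitoring feed returns one entry per newly logged certificate; the field names and the issuer allowlist below are assumptions you would adapt to your own feed:

```python
# Illustrative sketch: flag Certificate Transparency (CT) log entries whose
# issuer isn't on an approved list. The entry fields and the allowlist are
# assumptions; adapt them to whatever your CT monitoring feed returns.

APPROVED_ISSUERS = {"DigiCert Inc", "Let's Encrypt"}  # hypothetical allowlist

def suspicious_certs(ct_entries):
    """Return CT entries for certificates your organization did not request."""
    return [e for e in ct_entries if e["issuer"] not in APPROVED_ISSUERS]

entries = [
    {"domain": "mail.example.gov", "issuer": "DigiCert Inc"},
    {"domain": "mail.example.gov", "issuer": "Unknown CA Ltd"},  # unexpected
]
print(suspicious_certs(entries))  # the "Unknown CA Ltd" entry needs review
```

Any certificate issued for your domains that you didn’t request is a strong signal that someone else controls your DNS.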

The scope of this remediation depends on several factors:

  1. How many public facing DNS records need auditing?
  2. How many accounts can change DNS records?
  3. How many of those accounts have MFA already?

Verifying that every DNS record is correct will be a significant effort for organizations with many records. It’s best to take steps to prevent further breaches and make sure things stay fixed.

How to reduce the risk of DNS tampering on your DNS servers

  • Restrict administrative access to DNS servers (on Windows for example, monitor changes and review members of the dnsadmin group, and other administrative groups)
  • Use static DNS records or DHCP reservations for key records, and monitor changes to those records
  • Monitor DNS servers for signs of cache poisoning, reconnaissance, command and control, or data exfiltration
  • If you’re running a Windows environment, identify, remediate, and monitor other Active Directory vulnerabilities
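Monitoring key records for tampering can be as simple as diffing live answers against a known-good snapshot. A minimal sketch (the names and addresses are made up for illustration):

```python
# Minimal sketch of tamper monitoring for static DNS records: keep a
# known-good snapshot and diff live answers against it. The names and
# addresses below are made up for illustration.

def detect_drift(known_good, observed):
    """Return records whose current answers differ from the snapshot."""
    return {name: {"expected": known_good[name], "observed": answers}
            for name, answers in observed.items()
            if name in known_good and set(answers) != set(known_good[name])}

snapshot = {"mail.example.com": ["203.0.113.10"]}
live     = {"mail.example.com": ["198.51.100.99"]}  # answer changed -> alert
print(detect_drift(snapshot, live))
```

In practice the “observed” side would come from periodic queries against your authoritative servers, and any drift would feed an alert.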

How to protect yourself from compromised DNS records on servers that aren’t yours:

  • Enforce multi-factor authentication for external services wherever possible
  • Monitor DNS queries and other perimeter devices (e.g. web proxies) for attempted connections to sites with poor reputation scores
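The second bullet amounts to checking resolved names against reputation data. A hedged sketch, where the blocklist and log format are hypothetical and the reputation data would really come from a threat-intelligence feed:

```python
# Sketch: scan DNS query logs for lookups of domains with poor reputation
# scores. The blocklist and log format are hypothetical; in practice the
# reputation data would come from a threat-intelligence feed.

BAD_REPUTATION = {"evil-updates.example", "c2.badsite.test"}

def flag_queries(query_log):
    """Return log entries that looked up a known-bad domain."""
    return [q for q in query_log if q["qname"].rstrip(".") in BAD_REPUTATION]

log = [{"client": "10.0.0.5", "qname": "www.example.com."},
       {"client": "10.0.0.9", "qname": "c2.badsite.test."}]
print(flag_queries(log))  # the 10.0.0.9 lookup deserves a closer look
```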

How does Varonis help?

Varonis starts by monitoring the right ingredients: First, data – who can touch it, who does touch it, and what’s important – on-premises and in the cloud. Second, Active Directory – who’s using which devices with which accounts, and how. Third, DNS and edge devices like web proxies and VPN concentrators – who is getting in, who and what is going out, and whether any of the destinations are untrustworthy.

Next, Varonis combines the ingredients and builds context: what kinds of accounts there are and who they belong to, who uses which devices and which data, and when and from where they are active. Varonis automatically builds a baseline, or “peace-time profile,” over hours, days, and weeks for every user and device, so when one behaves strangely it gets noticed.

Varonis customers quickly identify and analyze things like:

Want to learn more? Get a 1:1 demo to see how Varonis can detect DNS attacks – and prevent data breaches.

Our 2018 Cybersecurity Predictions

Looking back, 2017 had all the twists and turns of a good disaster movie. Hackers steal and leak the NSA’s powerful exploit kit that’s then unleashed on the world through a Dr. Evilish ransomware-worm hybrid.  Later, a top U.S. credit reporting agency discloses a breach involving the social security numbers of 143 million Americans. Meanwhile, a $1.8 billion legal battle is being waged between two tech giants over stolen software for self-driving cars. In the trial, a letter comes to light that claims the defendant was “responsible for acts of corporate espionage, the theft of trade secrets, the bribery of foreign officials and various means of unlawful surveillance.”

Sounds like Lex Luthor had a busy year. While you can make a good case that data security predictions should be made by Hollywood scriptwriters, we decided to put on our wizard’s cap yet again to come up with the following predictions for 2018.

Blended Attacks Will Force More Critical Systems Offline

As bad as WannaCry was – and because many who were hit may have kept quiet, we may never truly know the full extent of the damage – the haphazard handling of ransom payments suggests that these attacks were meant to test the power and reach of the NSA’s exploits when “blended” with other attack vectors, like phishing and ransomware. In 2018, we should expect more blended, crippling attacks in more countries, and they may well be longer and more severe. As we saw in 2017, expect them to throw a wrench into the daily lives of millions — affecting anything from transportation to shopping to using an ATM.

The IoT Will Bring More Bad News

Brands have been quick to jump on the IoT bandwagon, but they will have their hands full. In 2017, we saw KRACK and BlueBorne exploit Wi-Fi and Bluetooth, opening fresh holes in our already battered perimeters. Hackers will continue to leverage unprotected devices to spy on their users and break into home and corporate networks. Multiple botnets exploiting vulnerable IoT devices will be new minions in DDoS attacks, and threaten to take down news and government websites. Millions of consumers will remain unaware that their IoT devices and home networks are being exploited until they finally get to the bottom of why Stranger Things is so slow to download, and unplug their internet-connected toothbrush. Manufacturers will start to address these security faults or risk losing to the companies that bake in security from the start. GDPR may save the day in the long run–forcing businesses to reconsider personal data collection via IoT, but we won’t see this effect until at least 2019.

Fear the Wiper

A recent survey revealed that 45% of organizations think they will be breached in the next year. In 2018, more organizations will be hit by ransomware, or worse. While ransomware is a scary thought for the C-Suite to consider, the unlucky organizations — those that haven’t prepared and without adequate backups in place — will be hit by wipers that will destroy information and systems with no hope for retrieval. Other unlucky organizations will realize they’ve been hit with APTs (Advanced Persistent Threats) that have been siphoning out valuable information for months or longer, like intellectual property, public filings, M&A plans, and other trade secrets. The unluckiest probably won’t realize they’ve been hit in 2018 at all, as attackers access their information as if it were their own. In 2018, a widespread wiper attack, likely driven by political motivations, will hit at least one government agency and many other organizations. Companies will rethink how they’re protecting their critical information as they continue to realize how porous their perimeters have become.

You’ve Got Mail: Buckle up for a Wild Political Season

All 435 seats in the House and a third of the seats in the Senate will be up for grabs in November 2018. With so much at stake, expect a series of revealing leaks affecting candidates in key congressional districts. At least one candidate will drop out of the race based on the contents of old emails. Multiple incumbents will also be forced out of office.

The Rise of Cryptocurrencies

We’ve seen bubbles before: from Dutch tulips in the 1600s to dot-com high fliers at the turn of the 21st century, unbridled enthusiasm drives up prices to unrealistic and unsustainable levels. Bitcoin is enjoying such a bubble. Could this be the year for a correction? China is cracking down, and other nations appear to be seeking to regulate Bitcoin and its exchanges. If cryptocurrency continues to be associated with monetizing cybercrime and other illegal activity, it will become stigmatized, and its use for legitimate purposes may decline.

“A Treasure Trove for Hackers”: The U.S. Gets GDPR Envy

“Consumers don’t have a choice over what information Equifax… or Transunion or Experian have collected, stored and sold,” said Illinois Congresswoman Jan Schakowsky, during the House Energy and Commerce Subcommittee Hearing on the Equifax data breach, one of the biggest consumer breaches in history. “What if I want to opt out of Equifax?” Ms. Schakowsky asked. “I want to be in control of my information. I never opted in, I never said it was OK to have all my information, and now I want out. I want to lock out Equifax. Can I do that?”

In May 2018, a sweeping set of data-focused privacy rules for EU citizens will go into effect — they will get a choice. As GDPR takes effect, we’ll see GDPR envy in the U.S. and consumers will demand the same kinds of privacy rights that EU residents receive under GDPR. With the deadline looming, organizations are going to go through an adjustment period — especially ones that collect and leverage user data in innovative, and sometimes controversial, ways, like credit bureaus.

4 Step Guide to Managing Network Share Permissions

Setting up network file sharing is one of those core IT practices that every Windows admin knows about and has implemented as part of their daily work. The basic mechanics of this have not dramatically changed since Windows Server 2003 and are relatively straightforward. However, after configuring the resource shares and the individual NTFS permissions for each folder, admins sometimes lose sight of the big picture as they handle daily permission requests on an ad-hoc basis.

Over time, as permissions are added to folders, the result is that permissions are set too broadly—to the delight of hackers and internal data thieves. The key reason is that admins and IT are generally not equipped to keep track of the current roles of workers, organizational changes that shift group authorizations, and job terminations—three of the most common occurrences that impact user access to file content.

It’s not for lack of focus or commitment on the part of IT, but simply that it’s hard to visualize and understand the mappings between users and their file permissions. This is often the result of complex permission hierarchies that make it difficult for IT staff to work this out quickly on their own without help from software automation.

Admins, of course, can review file activity records to see who is actually accessing records, and then decide whether the user should have access. As a rule, most companies don’t set up file auditing—it’s a resource hog—and even if this is done for a short period, the log results can overwhelm the ability of admins to parse the trails and come up with the appropriate follow-up actions. However, there is a way out of this permission trap. In this post, we’ll explore a four-step strategy that will make it far easier for IT admins to manage file sharing and folder permissions.

1. Toward A Binary Model For Permissions And Sharing

Rather than working on an ad-hoc basis, it’s important for admins to have a foundational policy—the simpler the better. Experts recommend thinking about folder permissions as having three states:

  • Directly applied permissions — every access control entry is directly applied to the asset’s access control list (ACL)
  • Inherited permissions — permissions are inherited from the parent directory
  • Hybrid — both directly applied and inherited permissions

When looking at your current implementation, work out which of the above states the folders you’re interested in taming are currently in. Don’t be surprised to find many of the folders in a hybrid state—it’s not at all unusual. However, your goal should be to eliminate the hybrids and move toward a two-state, or binary, model: the folders should either inherit all of their permissions, or none. The next step is to standardize your existing group permissions.
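The three states are straightforward to detect programmatically once you can read a folder’s access control entries. A minimal sketch, where the boolean "inherited" flag per entry is an assumption (on Windows it corresponds to the INHERITED_ACE flag):

```python
# A minimal way to classify a folder into the three states above from its
# access control entries. The boolean "inherited" flag per entry is an
# assumption; on Windows it corresponds to the INHERITED_ACE flag.

def classify_acl(aces):
    inherited = any(ace["inherited"] for ace in aces)
    direct = any(not ace["inherited"] for ace in aces)
    if inherited and direct:
        return "hybrid"  # the state you want to eliminate
    return "inherited" if inherited else "direct"

acl = [{"trustee": "Sales-RW", "inherited": True},
       {"trustee": "Bob", "inherited": False}]  # one directly applied entry
print(classify_acl(acl))  # -> hybrid
```

Running a check like this across your shares gives you the inventory of hybrid folders to clean up first.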

It’s worth pointing out that you should only have group permissions. They are far easier to manage than having individual permissions. Is it acceptable to have a group of only one? The answer is yes since it is likely that the group will eventually grow and you’ll have established a policy that will continue forward.

Here again a simple binary group policy is better: place users into either a read group or a read-write group. Of course, there should also be a separate administrative group, but 99% of users will fall into one of those two groups. One of the reasons it’s hard to work out the actual permissions on a specific folder is that you most likely nested groups inside other groups. Our advice is to try to avoid nesting. It’s better to assign a domain local or universal group to the ACL and add users to this group. In some cases, nested groups may be best (following Microsoft’s recommended AGLP strategy), especially when there’s a group already created that contains the right users, and will be maintained by a group owner.

Over the years, there’s been some confusion about how to handle the combination of NTFS permissions and Windows sharing permissions. Experts agree it’s best to standardize share permissions and use the NTFS permissions to granularly manage access. For example, you’ll want to set sharing permissions so that they are accessible to all authenticated users, and then use the NTFS permissions to determine on a more granular basis who has access (whether over the network or directly on the server). As with groups, it’s best to avoid ‘nested shares’ – ultimately it just introduces unnecessary complexity.

The final element is to set up traverse permissions correctly for the shares. For example, if you’re trying to give someone access to a folder that’s several levels below a share, they’ll need traverse permissions all the way down the tree. Rather than trying to do that manually, it’s better to use an automated solution that keeps track of these and sets them correctly.
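To make the traverse requirement concrete, here is a small sketch that enumerates the intermediate folders between a share root and a deep target; the paths are illustrative:

```python
# Sketch: given a share root and a target folder several levels below it,
# list every intermediate folder that needs traverse permission. The paths
# are illustrative.

from pathlib import PureWindowsPath

def traverse_chain(share_root, target):
    root, tgt = PureWindowsPath(share_root), PureWindowsPath(target)
    chain, current = [], root
    for part in tgt.relative_to(root).parts[:-1]:  # folders above the target
        current = current / part
        chain.append(str(current))
    return chain

print(traverse_chain(r"D:\Shares\Dept", r"D:\Shares\Dept\Finance\2019\Q1"))
# each listed folder needs traverse rights for a user to reach Q1
```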

With the permissions now squared away, can we simplify the actual structure of the shared areas? The answer that IT experts give is also to take a simple binary approach. They suggest using large departmental or divisional shares, and then using specific project shares to allow employees from different departments to work together on an as-needed basis.

2. Data Owners Are The True Access Guardians

Part of the reason that data permissions are set too broadly is that IT can often only guess at whether a user is truly authorized to access content. So admins will err on the side of inclusiveness. A better approach is for IT to work more closely with the data owners—the users, generally managers, from the business side who know the context about the data, and are best positioned in the organization to say who should have access.

IT should initiate an initial entitlement review process with the data owners. This would involve the owners reviewing who currently has access to a folder— typically by reviewing current group structures and possibly audit logs—and then deciding whether to remove users from a group. For IT, this is often a complex process—especially tracing users to groups—so automated solutions will make this easier.

It’s important to keep in mind that entitlement reviews are not a one-time fix; instead, they need to be performed continually to keep pace with changing user roles. As an example, it’s common for some users to be given temporary access to project folders—perhaps they were hired as a short-term consultant, or they’re an employee assigned to a group on an as-needed basis. When the project is finished, access should be revoked.

Unfortunately, managers often forget to contact IT or assume that IT will remove access for them. These kinds of changes fall through the cracks and lead to permissions that don’t reflect the current organizational structure, and ultimately are broader than necessary. But with regular entitlement reviews—perhaps on a quarterly basis—these lapses can be addressed by the owners.

3. Always Be Monitoring

There’s still more work for IT to do after setting up the folder access policies and engaging in periodic entitlement reviews. They also should be continuously monitoring shared folders. Why? Making a resource available on the network is a great way to boost collaboration between employees, but this also comes with security obligations.

With data breaches now a common occurrence, IT staff should be analyzing network file activity for signs that outside hackers or malware have taken over the credentials of internal users, or that internal users may be up to no good. In other words, IT should be reviewing file access activity with an eye toward unusual patterns—for example, spikes in activity, permission changes to existing folders, and sensitive content that’s experiencing above-average viewings. Here again the use of automation, especially real-time alerting mechanisms, is a far better way to implement monitoring than manually reviewing logs.
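The "spike in activity" check can be illustrated with a toy baseline comparison; the thresholds and counts below are made up, and real tooling builds per-user baselines automatically:

```python
# Toy example of the "spike in activity" check: flag a day whose file-access
# count sits far above the user's baseline. The thresholds and counts are
# made up; real tooling builds per-user baselines automatically.

from statistics import mean, stdev

def is_spike(history, today, z=3.0):
    """True when today's count exceeds the baseline by more than z sigmas."""
    mu, sigma = mean(history), stdev(history)
    return today > mu + z * max(sigma, 1.0)  # floor sigma to damp quiet weeks

baseline = [40, 55, 48, 52, 45, 50, 47]  # daily file accesses, past week
print(is_spike(baseline, 49))    # ordinary day -> False
print(is_spike(baseline, 5000))  # mass access, possible ransomware -> True
```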

On a more operational level, IT should also analyze share activity as a way to tighten up permissions—for example, to find users and groups that have folder access permissions that are never used—or to spot whether sensitive data is accessible and/or being viewed by non-authorized employees. The results of this analysis can then be brought up during entitlement reviews to help tighten up access.

4. Don’t Forget Retention

While it’s natural for IT to be busy thinking about setting up network file shares and managing existing shares, life-cycle issues can sometimes be pushed into the background. Remember: all data has a life span, and the older the content gets, the less relevant it becomes. So IT should have data retention policies in place as well. This is not just a matter of saving on disk space by removing and archiving stale data; it also has data security implications.

There’s an approach to data security known as privacy by design, which has had a strong influence on data compliance—both industry standards as well as legal regulations. One of the ideas in privacy by design is that companies should minimize the data they collect and then set retention limits for files and folders. The security advantage of putting a shelf life on data is that there would be less for thieves to steal. This is a basic defensive strategy, but an effective one.

To help put some bite into the retention limits, IT pros suggest you charge users on a per byte basis for storage. If department heads or group managers then don’t want to pay for their slice of shared storage from their budgets, IT can remove it or copy the data to secondary storage.

To start you thinking about a retention policy, we list below a few factors that should be taken into account:

  • Determine the age at which each type of data that has not been accessed would be considered stale – 1 year? 2 years? 5 years?
  • Implement a solution that can identify where stale data is located based on actual usage (not just file timestamps)
  • Automate the classification of data based on content, activity, accessibility, data sensitivity and data owner involvement
  • Automatically archive or delete data that meets your retention guidelines
  • Automatically migrate data that is stale but contains sensitive information to a secure folder or archive with access limited to only those people who need to have access (e.g. the General Counsel)
  • Make sure your solution can provide evidence (e.g. reports) of your defensible data retention and disposal policy
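The first two bullets above can be sketched as a simple cutoff check over last-access records; the paths and timestamps here are hypothetical:

```python
# Sketch: find shared folders whose data hasn't been touched within the
# retention window, based on recorded last-access times rather than file
# timestamps alone. The access records are hypothetical.

from datetime import datetime, timedelta

def stale_items(last_access, now, max_age_days=365):
    """Return paths whose most recent access predates the cutoff."""
    cutoff = now - timedelta(days=max_age_days)
    return sorted(path for path, ts in last_access.items() if ts < cutoff)

now = datetime(2016, 6, 1)
records = {r"\\fs1\sales\2013-bids": datetime(2014, 2, 1),   # stale
           r"\\fs1\sales\current":   datetime(2016, 5, 20)}  # active
print(stale_items(records, now))  # only the 2013-bids folder is a candidate
```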


Network file sharing is an essential service in any organization and the starting point for implementing collaborative solutions. However, shared content also comes with its own administrative and security overhead. Overall, IT should have in place policies for file sharing that encompass the ideas in this paper. We’ve discussed a basic model for folder permissions and groups, but your organization may evolve its own strategies—mileage may vary. But even with the simplest policies, managing folder access rights for more than a few users is complex enough to require automation in order to ensure the policies are effectively enforced.

New Survey Places Varonis among Readers’ Top Choices for Data Loss Prevention

Varonis was recently named a “Readers’ Top Five” pick for Data Loss Prevention (DLP) solutions in a newly released survey of nearly 5,000 TechTarget readers. The survey respondents are IT and security professionals who are attempting to protect their organizations from the onslaught of data breaches, meet compliance and audit requirements, and protect intellectual property. A TechTarget site conducted the reader survey in October. When asked “Which DLP vendors are you considering for your data protection project?” four responses were by far the most common: three very large, broadly focused companies (Symantec, McAfee/Intel Security Group and Microsoft) along with Varonis, the focused pioneer of unstructured data management and protection.

In the survey of 4,635 readers, the most important drivers for future data protection projects were:

  • “Meeting compliance/audit requirements” (69%)
  • “Attempting to avoid future data breach” (53%)
  • “Protection of intellectual property” (46%).

We’re pleased to see awareness of the market’s need for the kind of solutions we have been developing and perfecting over the past decade continue to increase. In a recent market guide, Gartner highlighted the advantages of using User and Entity Behavior Analytics (UEBA) to detect malicious insider behavior that often goes unnoticed by other technologies. Varonis solutions are unique in that we combine DLP capabilities with permissions context and what we believe are the most advanced user behavior analytics available to help organizations protect their file systems and unstructured data from insider threats.

You can read the complete survey results here.  Click here to learn more about Varonis UBA offerings.

Six IT Predictions for 2016

1. The U.S. Presidential campaign will be affected by a cyber attack.

Hillary Clinton’s private email server has already brought cybersecurity into the U.S. Presidential race. In 2016, a cyberattack will strike the campaign, causing a major data breach that will expose donors’ personal identities, credit card numbers and previously private political preferences. Imagine being a donor with an assumption of anonymity. Or a candidate whose “ground game” depends on big data analytics about voter demographics and factors affecting turnout – data that turns from an asset to a liability if it isn’t protected. The breach will affect the campaign not only as a setback for the unfortunate candidate or party affected, but by bringing the issue of cybersecurity prominently into the campaign as a major issue that is closely related to geopolitical threats such as the spread of terrorism. Campaign data is a gold mine for hackers (donor lists, strategies, demographics, sentiment, opposition research), and an event like this will serve as another wake-up call to the U.S. government that cybersecurity needs to be a continual, central focus and investment at the highest levels. The candidate who demonstrates knowledge and command of cybersecurity threats and government readiness will win the election.

2. The frequency of public data breaches will increase substantially.

The Identity Theft Resource Center (ITRC) reports a total of 641 data breaches recorded publicly in 2015 through November 3. Most organizations know this number represents the tip of the iceberg. The frequency of known data breaches will increase in 2016, due not only to increasing privacy and breach disclosure laws but also to the increasing failure of traditional perimeter-focused security investments to protect valuable data. Employees’ use of mobile devices and companies’ migration of IT workloads to the cloud will also contribute to a sharp rise in breaches. Over time, this should help to shift priorities toward investing in more proactive data-centric protection, but it’s likely things will become worse before they get better.

3. Ransomware damage will double.

With CryptoWall expected to become the first ransomware to extract $500 million from its victims, this lucrative approach will become cybercriminals’ hottest growth market. As a result . . .

4. End-user education and monitoring will become the focal point of data security efforts.

Insiders are the new malware. Executives and IT professionals are becoming as afraid of their own employees – as innocent vessels for outside attackers with dangerous levels of access to sensitive data – as they are of outside attackers. Companies will turn to end-user education in 2016 as they realize that, no matter how intensely they invest in security, they hit a dead end if their users don’t follow the rules of the road. Users need to be involved in the security processes, observe classification and disposition policies (that need to be defined) and know to stop clicking on phishing emails. Employees are crucial to the security process, and have more power in controlling it than they realize. You can’t patch users, but you can educate them. You can also monitor and analyze how they use data to spot unwanted attacks.

5. At least five more C-level executives will be fired because of a data breach.

In recent years we have seen the careers of several top executives suffer in the wake of cyberattacks. Target CEO Gregg Steinhafel and CIO Beth Jacob, U.S. Office of Personnel Management Director Katherine Archuleta, Sony Pictures’ Amy Pascal and others were either fired or forced to resign after massive data leaks cost their organizations money, customers and credibility. This will accelerate in 2016. Blame for data breaches is shifting from IT to the C-suite. Data impacts every facet of an organization, and it is now understood that management that does not invest in and focus heavily on securing data and its use is putting the entire company and its stakeholders at risk.

6. Increasing false positives in data security bring to light the need for limited, accurate information.

Organizations will get much more serious about how much data they collect and their deletion efforts. When Target suffered its massive breach during the 2013 holiday season, the alerting capabilities of its IT team had generated months of warnings. Still, no one caught it. This remains a common problem today. Why? The plethora of security tools installed in most companies overwhelms IT security. Their teams are strapped, and the false positives generated by exponentially growing volumes of information cause these teams to miss crucial vulnerabilities. In 2016, smart IT teams will focus on signal-to-noise-ratio improvements in the analysis and alerting solutions they deploy.

Varonis Named “Representative Vendor” in Gartner’s New Market Guide for User and Entity Behavior Analytics

Today we’re pleased to share that we’ve been named a “Representative Vendor” in Gartner’s brand new Market Guide for User and Entity Behavior Analytics (UEBA) that highlights the advantages of using UEBA to detect malicious or abusive behavior that often goes unnoticed by existing monitoring systems such as SIEM and DLP.

Among its recommendations, Gartner says that “CIOs, chief information security officers (CISOs) and security managers should:

• Use UEBA to detect insider threats and external hackers, choose vendors with solutions that align with your use cases, for example, security monitoring or data exfiltration.
• Operationalize UEBA by sending alerts to security orchestration, ticketing and workflow systems.
• Favor UEBA vendors who profile multiple entities including users and their peer groups, and devices, and who use machine learning to detect anomalies. These features enable more accurate detection of malicious or abusive users.”

Authored by Gartner analyst Avivah Litan, the guide predicts: “Over the next three years, leading UEBA platforms will become preferred systems for security operations and investigations at some of the organizations they serve. It will be – and in some cases already is – much easier to discover some security events and analyze individual offenders in UEBA than it is in many legacy security monitoring systems.”

In assessing the market growth of UEBA technologies, the Gartner report states: “The UEBA market grew faster and matured more quickly than Gartner anticipated a year ago. Gartner expects UEBA market revenue will climb to almost $200 million by the end of 2017, up from less than $50 million today.”

Varonis has always used user behavior analytics through our recommendations and alerts, but this is just a subset of what we do. Our Metadata Framework is the basis for a wide range of use cases including UEBA, sensitive data classification and remediation, identity and access rights management, enterprise search, and storage reduction. Thousands of organizations around the world rely on Varonis to curtail over-exposure of their most valuable and sensitive data and to help prevent the inevitable network breaches from causing harm.

We are pleased to be included by Gartner as serving this important category, and are especially encouraged that our solutions are so closely aligned with Gartner’s recommendations. We look forward to continuing to add capabilities and use cases to our portfolio and helping many more organizations detect potential threats before they cause serious damage.

Source: Gartner, Market Guide for User and Entity Behavior Analytics, September 22, 2015

How to Detect and Clean CryptoLocker Infections

CryptoLocker is by now a well-known piece of malware that can be especially damaging for any data-driven organization. Once the code has been executed, it encrypts files on desktops and network shares and “holds them for ransom”, prompting any user who tries to open a file to pay a fee to decrypt it. For this reason, CryptoLocker and its variants have come to be known as “ransomware.”

Malware like CryptoLocker can enter a protected network through many vectors, including email, file sharing sites, and downloads. New variants have successfully eluded anti-virus and firewall technologies, and it’s reasonable to expect that more will continue to emerge that are able to bypass preventative measures. In addition to limiting the scope of what an infected host can corrupt through buttressing access controls, detective and corrective controls are recommended as a next line of defense.

FYI, this article is CryptoLocker-specific. If you’re interested in reading about ransomware in general, we’ve written A Complete Guide To Ransomware that is very in-depth.

CryptoLocker Behavior

On execution, CryptoLocker begins to scan the mapped network drives that the host is connected to for folders and documents (see affected file types below), and renames and encrypts those that it has permission to modify, as determined by the credentials of the user who executes the code. CryptoLocker uses an RSA 2048-bit key to encrypt the files, and renames them by appending an extension such as .encrypted, .cryptolocker, or .[7 random characters], depending on the variant. Finally, the malware creates a file in each affected directory linking to a web page with decryption instructions that require the user to make a payment (e.g., via Bitcoin). Instruction file names are typically DECRYPT_INSTRUCTION.txt or DECRYPT_INSTRUCTIONS.html.
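
These ransom-note file names and renamed extensions make useful indicators of compromise. As a rough illustration (the names and extensions below are just the examples mentioned above — extend them for the variant you’re facing), a simple scan for infected directories might look like:

```python
import os

# Indicators drawn from the behavior described above; extend per variant.
RANSOM_NOTE_NAMES = {"DECRYPT_INSTRUCTION.txt", "DECRYPT_INSTRUCTIONS.html"}
SUSPICIOUS_EXTENSIONS = {".encrypted", ".cryptolocker"}

def scan_for_indicators(root):
    """Walk a directory tree and collect paths that match known
    CryptoLocker indicators (ransom notes or renamed files)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if name in RANSOM_NOTE_NAMES or ext in SUSPICIOUS_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Running a sweep like this against file shares won’t prevent an infection, but it can tell you quickly which directories a variant has already touched.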

As new variants are uncovered, information will be added to the Varonis Connect discussion on Ransomware. For example, a variant known as “CTB-Locker” creates a single file in the directory where it first begins to encrypt files, named !Decrypt-All-Files-[RANDOM 7 chars].TXT or !Decrypt-All-Files-[RANDOM 7 chars].BMP.

Mitigation Tips

Prevent What’s Preventable

The more files a user account has access to, the more damage malware can inflict. Restricting access is therefore a prudent course of action, as it will limit the scope of what can be encrypted. In addition to offering a line of defense for malware, it will mitigate potential exposure to other attacks from both internal and external actors.

While getting to a least-privilege model is not a quick fix, it’s possible to reduce exposure quickly by removing unnecessary global access groups from access control lists. Groups like “Everyone,” “Authenticated Users,” and “Domain Users,” when used on data containers (like folders and SharePoint sites), can expose entire hierarchies to all users in a company. In addition to being easy targets for theft or misuse, these exposed data sets are very likely to be damaged in a malware attack. On file servers, such folders are known as “open shares” when both the file system and sharing permissions are accessible via a global access group.

Although it’s easiest to use technologies designed to find and eliminate global access groups, it is possible to spot open shares by creating a user with no group memberships and using that account’s credentials to “scan” the file sharing environment. For example, even basic net commands from a Windows cmd shell can be used to enumerate and test shares for accessibility:

  • net view (enumerates nearby hosts)
  • net view \\host (enumerates shares)
  • net use X: \\host\share (maps a drive to the share)
  • dir /s (enumerates all the files readable by the user under the share)

These commands can easily be combined in a batch script to identify widely accessible folders and files. Remediating these without automation, unfortunately, can be a time-consuming and risky endeavor, as it’s easy to disrupt normal business activity if you’re not careful. If you uncover a large number of accessible folders, consider an automated solution. Automated solutions can also help you go further than eliminating global access, making it possible to achieve a true least-privilege model and eliminate manual, ineffective access-control management at the same time.
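
A small script can also post-process the output of those net commands. The parser below is only a sketch — `net view` output formatting varies by Windows version, so the column layout assumed here (share name followed by a “Disk” type column) may need adjusting for your environment:

```python
import subprocess

def parse_net_view_shares(output):
    """Extract disk share names from `net view` output for a host.
    Assumes the common layout where each share line starts with the
    share name followed by a 'Disk' type column."""
    shares = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "Disk":
            shares.append(parts[0])
    return shares

def enumerate_shares(host):
    """Run `net view` against a host (Windows only) and return its
    disk shares, for testing with an unprivileged account."""
    out = subprocess.run(["net", "view", rf"\\{host}"],
                         capture_output=True, text=True).stdout
    return parse_net_view_shares(out)
```

From there, each share returned can be mapped and enumerated with `net use` and `dir /s`, as in the commands above.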

Detect What You Can Detect

If file access activity is being monitored on affected file servers, these behaviors generate very large numbers of open, modify, and create events at a very rapid pace, and are fairly easy to spot with automation, providing a valuable detective control. For example, if a single user account modifies 100 files within a minute, it’s a good bet something automated is going on. Configure your monitoring solution to trigger an alert when this behavior is observed. Instructions for configuring an automated alert with Varonis are available here (login required).
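
The “100 modifications in a minute” heuristic boils down to a simple sliding window. This sketch assumes you can feed it (username, timestamp) pairs from whatever audit source you have; the threshold and window are the illustrative values from above:

```python
from collections import defaultdict, deque

class BurstDetector:
    """Flag a user who generates more than `threshold` modify events
    within `window` seconds -- a strong hint of automated encryption."""
    def __init__(self, threshold=100, window=60):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(deque)  # user -> recent event timestamps

    def record(self, user, timestamp):
        """Record one modify event; return True if the user has
        exceeded the threshold within the window (i.e., alert)."""
        q = self.events[user]
        q.append(timestamp)
        # Drop events that have fallen out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold
```

A real monitoring product does this for you; the point is that the detection logic itself is cheap once the audit trail exists.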

If you don’t have an automated solution to monitor file access activity, you may be forced to enable native auditing. Native auditing, unfortunately, taxes monitored systems and the output is difficult to decipher. Instead of attempting to enable and collect native audit logs on each system, prioritize particularly sensitive areas and consider setting up a file share honeypot.

A file share honeypot is an accessible file share that contains files that look normal or valuable, but in reality are fake. As no legitimate user activity should be associated with a honeypot file share, any activity observed should be scrutinized carefully. If you’re stuck with manual methods, you’ll need to enable native auditing to record access activity, and create a script to alert you when events are written to the security event log (e.g. using dumpel.exe).
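
If you have no auditing at all, even a crude polling script against the honeypot share can act as a tripwire. This sketch just snapshots modification times and reports anything that changed between polls (how often you poll and what you do on a hit are up to you):

```python
import os

def snapshot_mtimes(root):
    """Record the modification time of every file under root."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            snap[path] = os.path.getmtime(path)
    return snap

def detect_changes(before, after):
    """Return files that were modified, added, or removed between two
    snapshots -- any hit on a honeypot share deserves scrutiny."""
    changed = [p for p in after if p in before and after[p] != before[p]]
    added = [p for p in after if p not in before]
    removed = [p for p in before if p not in after]
    return changed + added + removed
```

Since no legitimate activity should touch the honeypot, even this blunt approach produces a high-signal alert.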

If you’re PowerShell inclined, we’ve written a bit on how to combat CryptoLocker with PowerShell.

Correct What You Detect Faster with Automation

If your detective control mechanism can trigger an automated response, such as disabling the user account, the attack is effectively stopped before inflicting further damage. For example, a response to a user that generates more than 100 modify events within a minute might include:

  • Notifying IT and security administrators (include the affected username and machine)
  • Checking the machine’s registry for known keys/values that CryptoLocker creates:
    • (Get-Item HKCU:\Software\CryptoLocker\Files).GetValueNames()
  • If the value exists, disable the user account automatically.
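
The response steps above can be wired together with whatever APIs your environment offers. In this sketch the notification, registry check, and account-disable actions are injected as callables (all hypothetical placeholders for your alerting, endpoint, and directory tooling), so the decision logic itself stays simple and testable:

```python
def respond_to_burst(user, machine, notify, registry_has_indicator, disable_account):
    """Carry out the automated response described above: always notify
    IT/security, and disable the account only if the machine shows a
    known CryptoLocker registry indicator."""
    notify(f"Possible ransomware: {user} on {machine}")
    if registry_has_indicator(machine):
        disable_account(user)
        return True  # account was disabled
    return False
```

The faster this loop closes, the fewer files get encrypted — which is the whole argument for automating the response rather than waiting on a human.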

Recover with Confidence

If recorded access activity is preserved and adequately searchable, it becomes invaluable in recovery efforts, as it provides a complete record of all affected files, user accounts, and (potentially) hosts.  Varonis customers can use the output from report 1a (as described here) to restore files from a backup or shadow copy.

Depending on the variant of CryptoLocker, encryption may be reversible with a real-time disassembler.

Need Help?

Contact us if you have questions, or if you’d like to set up a free consultation.

Want more helpful tips like this including in-depth articles and scripts that we don’t post publicly? Visit the Security Corner in our Varonis Connect community.

Affected file types:

*.zip ; *.rar ; *.7z ; *.tar ; *.gzip ; *.jpg ; *.jpeg ; *.tif ; *.psd ; *.cdr ; *.dwg ; *.max ; *.bmp ; *.gif ; *.png ; *.doc ; *.docx ; *.xls ; *.xlsx ; *.ppt ; *.pptx ; *.txt ; *.pdf ; *.djvu ; *.htm ; *.html ; *.mdb ; *.cer ; *.p12 ; *.pfx ; *.kwm ; *.pwm ; *.1cd ; *.md ; *.mdf ; *.dbf ; *.odt ; *.vob ; *.iso ; *.ifo ; *.csv ; *.torrent ; *.mov ; *.m2v ; *.3gp ; *.mpeg ; *.mpg ; *.flv ; *.avi ; *.mp4 ; *.wmv ; *.divx ; *.mkv ; *.mp3 ; *.wav ; *.flac ; *.ape ; *.wma ; *.ac3 ; *.epub ; *.eps ; *.ai ; *.pps ; *.pptm ; *.accdb ; *.pst ; *.dwg ; *.dxf ; *.dxg ; *.wpd ; *.dcr ; *.kdc ; *.p7b ; *.p7c ; *.raw ; *.cdr ; *.qbb ; *.indd ; *.qbw

PCI DSS Explained: Our New White Paper Decodes the Complexity

PCI DSS Explained: Our New White Paper Decodes the Complexity

The Payment Card Industry Data Security Standard (PCI DSS) is not just another list of requirements for protecting data. In 2013, the number of credit and debit card transactions worldwide reached over 100 billion—that’s lots of swipes and 16-digit numbers entered! With its almost 300 controls, PCI DSS provides the rules of the road for protecting and securing credit card data for every bank, retailer, or ecommerce site.

But does the average IT security person who’s charged with implementing its security safeguards really understand this complex standard?

Likely not! And that’s why we came up with PCI DSS for IT Pros and Other Humans. Our white paper simplifies the 12 core controls and condenses them into three higher-level steps.

Why simplify? Our approach is based on the PCI Council’s own best practices advice, which puts monitoring, assessment, and mitigation at the center of a real-world data security program.

To find out why strictly following the DSS controls is just not enough, you’ll want to read our paper.


Tips From the Pros: Sharing 250 Million Folders With 100,000 Users

Tips From the Pros: Sharing 250 Million Folders With 100,000 Users

Q: How many users and how much data are you managing?

We have in excess of 100,000 actual people, 1.5 million accounts in AD, and 250,000,000 data folders.

Q: Can you describe your overall strategy for designing shared folders?

For us, everything is based on Active Directory group membership. Every “share,” or “root folder,” has AD groups for read-only access, read-write access, and full control. When a root folder is created, groups are created and assigned and the NTFS permissions never change. All subfolders are set to inherit the same permissions. All root folders are created at the same level in the folder hierarchy. If a subfolder needs different permissions, it has to be moved and made into its own root folder. By the way, even though we call these folders “shares,” in reality they are not the actual SMB/CIFS shared folder but rather folders directly under an actual SMB/CIFS share. Under the actual share we’ll have up to 100 root folders, and these are what users will map to.

Q: How many root folders do you have?

We have about 10,000 roots—I’m not sure how many we add per week.

Q: What happens when someone needs a new share?

We have an internal request system where anyone can request a new share. When we get the request, a new root folder is created, along with 3 new AD groups based on the project name. The person who requests the share will be set as the “owner” in the groups’ “managedby” AD fields, and a DFS link is created for the folder.
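
A naming convention like the one this admin describes can be generated mechanically when a share request comes in. The exact “FS-&lt;project&gt;-&lt;suffix&gt;” scheme below is made up for illustration — substitute your own standard:

```python
def groups_for_share(project):
    """Generate the three AD group names (read-only, read-write,
    full control) for a new root folder, per the convention above.
    The "FS-<project>-<suffix>" pattern is illustrative only."""
    base = project.strip().replace(" ", "_")
    return {
        "read_only": f"FS-{base}-RO",
        "read_write": f"FS-{base}-RW",
        "full_control": f"FS-{base}-FC",
    }
```

Deriving group names deterministically from the project name is what makes the rest of the workflow (group creation, ownership assignment, DFS linking) scriptable.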

Q: Who usually makes a share request?

It could be anybody, but any amount of data will be charged back to the business. Someone in the business unit that has financial authority needs to approve the storage cost.

Q: How do you decide when a folder is no longer needed?

Pre-Varonis, someone needed to tell us. Now our records management team is working to tie in Varonis stale data reports with our retention policies.

Q: How are the share permissions set?

Authenticated users have modify access on all shares. The NTFS permissions on the SMB shared folder are set to Everyone RWL—this set-up means we don’t need to worry about handling traverse permissions, because everyone can see the root folders.

Q: What about group nesting?

We don’t nest groups for end user access; we only nest groups for IT system admin access, and we nest them based on geographical scope. For example, a group will grant admin access to NY servers, nested in that group will be North America admins, and nested in that will be world admins.

Q: How do you deal with mapped drives?

We use DFS to abstract paths. Everyone has a drive letter that is mapped to their local DFS name space. This is set by login script. DFS paths are grouped by region, and we have replication between DFS name spaces. People see all the roots underneath their region, but NTFS permissions restrict what they can access.

Q: Who deals with the login scripts?

These are maintained by the desktop team.

Q: Do you ever apply users to ACLs?


Q: How often do people review permissions?

Group memberships are reviewed whenever a user leaves us or changes departments. Important folders are also recertified annually. We use DatAdvantage to make sure permissions are set according to standard and watch permissions for changes. We plan to use DatAdvantage recommendations to help data owners identify stale group memberships during their reviews.

Tips From the Pros: Best Practices for Managing Large Amounts of Shared Data

In our “Tips from the Pros” series, we’ll be presenting interviews we’ve conducted with working IT professionals. These are the admins and managers responsible for security, access, and control of human-generated data—the fast-growing digital element in organizations today. In this inaugural post, we spoke recently with one of our customers about managing large file shares and permissions.

Q: How many users do you have?

A: About 40,000

What is your domain like?

We have 3 forests with 5 domains in each, with some trusts between domains across forests.

How much CIFS shared data do you have?

We have about 1.5 Petabytes of shared data on CIFS.

That’s a lot of data. How many folders have unique permissions in your environment?

We now have 5500 managed folders and 2000 data owners, who do about 1600 access approvals each month and about 1000 revocations each month. Every folder with unique permissions that contains business data is managed by a data owner using DataPrivilege.

Where and how do you apply share permissions (as opposed to NTFS permissions)?

Share permissions are all set so that administrators have full access and authenticated users have modify access. The real control is done with NTFS permissions.

Do you have any nested shares (not counting the administrative shares)?

Yes. Sometimes there can be nested business shares, and sometimes this can cause confusion, as there are multiple logical paths for the same physical path.

Why do you have them?

Sometimes end users want a shorter logical path. Also, by being more direct, you’re hiding non-relevant information from end users who don’t need it.

What’s the process like for creating a new share?

Shares are created upon end user request, with an approval process.

Do you have owners for shares?

No, we’re only tracking ownership on the folders themselves.

How do you handle inter-departmental collaboration?

We create folders & shares when needed. If there’s a project, they’ll have a dedicated site or folders. When users are just sharing a few files, they will sometimes use email. When it becomes a project, they will make a request for a SharePoint site or shared folder.

So let’s talk about NTFS permissions. Where do you block inheritance?

We block inheritance on every folder with unique permissions. Every managed base folder or managed subfolder has inheritance blocked.

Why is that?

Since we’re delegating access control to the business via managed groups, it would be too difficult for them to differentiate between the groups that are inherited and those that are directly applied. Any folder that has unique permissions should be protected in our environment—folders never have a mix of directly applied and inherited permissions.

How did you get to that point?

We programmatically identified every unique folder and all the groups and permissions that were applied to them. Then we turned off inheritance and directly applied any groups that were on their ACLs.

Then, we added new groups (DataPrivilege groups) with the same masks as the original groups and added all the users from the corresponding old groups. Later we removed the old groups from the ACL. This left us with only DP groups on each managed folder.

Who decides when a subfolder needs protection/unique permissions?

Ideally, the folder owner decides.

Which permissions masks do you use?

In general, we use two masks for non-admin groups: read+execute and read+write.

Do you use AGLP/UGLY?

For shared folders, by default we use ALP, but on request, if the owner approves and acknowledges the potential risks, we will also use AGLP.

Do you use domain local/global/universal groups?

We use either global or universal for the G in AGLP.

How do you deal with the traverse permissions?

We let DataPrivilege deal with it for us. Traverse permissions are set automatically all the way up to the administrative share.

Do you run into Kerberos token size issues? If so, what do you do about them?

We have. We did increase the token size capacity. We also removed users who were in both read and modify groups for the same folder. Now when it happens, we work with users to remove unnecessary memberships. Interestingly, the vast majority of the offenders are in technology.

For example, some service desk groups were permissioned to many folders—these people had token size issues.

How do you deal with mapped drives?

This is one of the biggest end user challenges we have – the bane of our existence. The amount of work it takes to figure out what someone’s T: drive maps to is ridiculous.

They’re managed by login script. The service desk has a way of figuring out what login script someone gets and then figures out what their mappings are from that script.

Do you use DFS?

Yes, for both replication and for logical name spaces.

Do you ever apply users to ACLs?


What would be your top tips for someone designing a file sharing infrastructure?

1. You need to have owners and a life-cycle management process for everything. If the owners can manage the right things by themselves, the infrastructure will evolve in the right way.

2. When everything is visible and managed by the owners, it is much more rational. End users are more informed about and aligned with the hierarchy.

3. End user communication strategy is critical. One of the biggest lessons we learned was that when you’re rolling out self-service, it’s better to present it as an option rather than enforcing its use. If you advertise it as a faster way to get access, people will adopt it more quickly and be happier.

4. If you use AGLP, only use it when the global groups are already there for a business purpose—don’t create them just to follow the AGLP model. If you get into the mentality of needing to follow AGLP everywhere, you wind up with global groups acting as resource groups, and with a domain local group for every domain.

Fixing the Open Shares Problem

I recently spoke with an IT administrator who had started a manual open share cleanup project—finding and locking down folders and SharePoint sites open to global access groups like Everyone, Domain Users, and Authenticated Users. After removing the Everyone group from several folders, they began to receive help desk calls from people who had been actively accessing data through those global access groups prior to their removal, and were now unable to perform their daily activities because they had lost access. This went on for two weeks or so—each time someone called, they had to apologize for the disruption and quickly add that user to a group on the folder’s ACL.

According to the administrator, the manual process took about 6 hours per folder. With the number of folders they had found, this would mean about 3 months of work for 4 people—quite a time-consuming effort. How were they going about fixing these manually? Here is a rough outline of the steps they used:

  1. Identify folders open to the global access groups, like everyone, authenticated users, domain users, and users
  2. Turn on object access success auditing for those folders and collect as much audit data as the server could stand
  3. Analyze the audit activity to try to create a list of users that access these folders
  4. Determine the users that have no way to access those folders other than the global access group you’re trying to remove
  5. Add users from step 4 to a group that’s on the folder’s ACL, or create a new group and add the users (assuming those users are supposed to have access)
  6. Remove the global access group
  7. Wait by the phone
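
Steps 3 and 4 — figuring out which active users would lose access once the global group is removed — come down to set arithmetic once the audit data exists. A sketch, assuming you can produce the two input sets from your audit logs and ACL review:

```python
def users_to_grant(audited_users, directly_permissioned_users):
    """Step 4: users observed accessing the folder in the audit data
    (step 3) who have no access path other than the global group being
    removed. These are the accounts to add to a group on the folder's
    ACL (step 5) before pulling the global group (step 6)."""
    return set(audited_users) - set(directly_permissioned_users)
```

The hard part, as the administrator found, isn’t the arithmetic — it’s collecting complete audit data and untangling nested group memberships so the second set is actually accurate.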

Despite their painstaking process, the voluminous audit logs and the complexity of their permissions made it impossible to remove global access groups without disrupting their users’ workflow. That’s a lot of effort to go through to end up with unhappy users. This is one example, but IT often finds itself in this dilemma when trying to fix open shares: leave the data exposed and run the risk of data theft, loss, or misuse, or lock the folders down and risk productivity should a user or users be cut off from data they need.

In a future post we’ll talk about how to clean up open shares using the simulation capabilities available with a metadata framework.