
A Few Thoughts on Data Security Standards

Did you know that the 462-page NIST 800-53 data security standard has 206 controls with over 400 sub-controls?¹ By the way, you can gaze upon the convenient XML-formatted version here. PCI DSS is no slouch either, with hundreds of sub-controls in its requirements document. And then there's the sprawling ISO 27001 data standard.

Let's not forget about security frameworks, such as COBIT and NIST CSF, which are kind of meta-standards that map into other security controls. For organizations in health or finance that are subject to US federal data security rules, HIPAA and GLBA's data regulations need to be considered as well. And if you're involved in the EU market, there's GDPR; in Canada, it's PIPEDA; in the Philippines, it's the Data Privacy Act; etc., etc.

There’s enough technical and legal complexity out there to keep teams of IT security pros, privacy attorneys, auditors, and diplomats busy till the end of time.

As a security blogger, I’ve also puzzled and pondered over the aforementioned standards and regulations. I’m not the first to notice the obvious: data security standards fall into patterns that make them all very similar.

Security Control Connections

If you've mastered and implemented one, then very likely you're compliant with others as well. In fact, that's one good reason for having frameworks. For example, with, say, NIST CSF, you can leverage your investment in ISO 27001 or ISA 62443 through their cross-mapped control matrix (below).

Got ISO 27001? Then you’re compliant with NIST CSF!

I think we can all agree that most organizations will find it impossible to implement all the controls in a typical data standard with the same degree of attention — when was the last time you checked the physical access audit logs for your data transmission assets (NIST 800-53, PE-3b)?

So to make it easier for companies and the humans who work there, some of the standards groups have issued further guidelines that break the huge list of controls into more achievable goals.

The PCI group has a prioritized approach to dealing with their DSS — six practical milestones that are broken into smaller subsets of relevant controls. They also have a best practices guide that groups — and this is important — security controls into three broader functional areas: assessment, remediation, and monitoring.

In fact, we wrote a fascinating white paper explaining these best practices, and how you should be feeding back the results of monitoring into the next round of assessments. In short: you’re always in a security process.

NIST CSF, which itself is a pared down version of NIST 800-53, also has a similar breakdown of its controls into broader categories, including identification, protection, and detection. If you look more closely at the CSF identification controls, which mostly involve inventorying your IT data assets and systems, you’ll see that the main goal in this area is to evaluate or assess the security risks of the assets that you’ve collected.

File-Oriented Risk Assessments

In my mind, the trio of assess, protect, and monitor is a good way to organize and view just about any data security standard.

In dealing with these data standards, organizations can also take a practical short-cut through these controls based on what we know about the kinds of threats appearing in our world — and not the one that data standards authors were facing when they wrote the controls!

We're now in a new era of stealthy attackers who enter systems undetected, often through phishing emails, leveraging previously stolen credentials, or exploiting zero-day vulnerabilities. Once inside, they can fly under the monitoring radar with malware-free techniques, find monetizable data, and then remove or exfiltrate it.

Of course it’s important to assess, protect and monitor network infrastructure, but these new attack techniques suggest that the focus should be inside the company.

And we're back to a favorite IOS blog theme. You should really be making it much harder for hackers to find the valuable data — credit card numbers, account numbers, corporate IP — in your file systems, and you should detect and stop the attackers as soon as possible.

Therefore, when looking at how to apply typical data security controls, think file systems!

For, say, NIST 800-53, that means scanning file systems, looking for sensitive data, examining the ACLs or permissions, and then assessing the risks (CM-8, RA-2, RA-3). For remediation or protection, this would involve reorganizing Active Directory groups and resetting ACLs to be more exclusive (AC-6). For detection, you'll want to watch for unusual file system accesses that likely indicate hackers borrowing employee credentials (SI-4).
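
To make the assess step concrete, here's a minimal PowerShell sketch. The share path and the crude card-number regex are hypothetical stand-ins for a real classification engine, which does this at scale:

# Sketch: find text files with card-number-like strings, then list who can access them
$share   = '\\fileserver\corp'                 # hypothetical share
$pattern = '\b(?:\d[ -]?){15,16}\b'            # crude card-like pattern, illustration only
Get-ChildItem $share -Recurse -Include *.txt,*.csv -File |
    Select-String -Pattern $pattern -List |
    ForEach-Object {
        [pscustomobject]@{
            File   = $_.Path
            Access = (Get-Acl $_.Path).Access.IdentityReference -join '; '
        }
    }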

I think the most important point is not to view these data standards as just an enormous list of disconnected controls, but instead to consider them in the context of assess-protect-monitor, and then apply them to your file systems.

I’ll have more to say on a data or file-focused view of data security controls in the coming weeks.

¹ How did I know that NIST 800-53 has over 400 sub-controls? I took the XML file and ran these two amazing lines of PowerShell:

# Load the XML version of the NIST 800-53 control catalog
[xml]$catalog = Get-Content 800-53-controls.xml
# Pull each control's sub-control (statement) numbers and count them
$catalog.controls.control | ForEach-Object { $_.statement.statement.number } | Measure-Object -Line

 

Data Security Compliance and DatAdvantage, Part III:  Protect and Monitor

At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization's file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 and PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users and to remove extraneous groups and users who perhaps needed occasional access at some point but not as part of their job role.

The key point is to first determine the folder's owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT's part: talking to the users surfaced by the DatAdvantage stats and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There's another important point to make before we move on. IT has long been responsible for provisioning access without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring back and removing unnecessary folder groups, they'll want to put in place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an ongoing program — i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users, whose job role now requires them to access a managed folder, request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas of least-privileged access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here's a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1,7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A.11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let's address the last point, which goes under the more familiar name of data retention. One low-hanging fruit for reducing security risk is to delete or archive stale sensitive data embedded in files.

This makes incredible sense, of course. Stale data can be, for example, consumer PII collected in short-term marketing campaigns that now resides in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used after a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You need to add the hit count filter and set the number of sensitive data matches to an appropriate number.

In my test environment, I discovered that the C:\Share\pvcs folder hasn't been touched in over a year and contains some sensitive data.

The next step is to pay a visit to the Data Transport Engine (DTE), available in DatAdvantage from the Tools menu. It allows you to create a rule that will search for files to archive and, if necessary, delete.

In my case, the rule's search criteria mirror the same filters used in generating the report. The rule does the real heavy lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis, so you never have to worry about stale sensitive data in your file system.
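
Outside of DTE, the core idea of a retention rule is simple enough to sketch in a few lines of PowerShell. The paths and one-year threshold below are hypothetical; DTE layers the sensitivity filters, scheduling, and safety checks on top of this basic logic:

# Sketch of a retention rule: archive files untouched for over a year
$cutoff  = (Get-Date).AddYears(-1)
$source  = 'C:\Share\pvcs'            # the stale folder found in the report
$archive = '\\archive\stale'          # hypothetical archive location
Get-ChildItem $source -Recurse -File |
    Where-Object { $_.LastAccessTime -lt $cutoff } |
    Move-Item -Destination $archive -WhatIf   # drop -WhatIf to actually move files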

Data retention requirements can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activities.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses by borrowing existing users' credentials, and that fully undetectable (FUD) malware lets them avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate. If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.

We can't cover all of DatAlert's capabilities in this post — probably a good topic for a separate series! — but since it has deep insight into all file system information and events, plus histories of user behaviors, it's in a powerful position to determine what's out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.

All the alerts that are triggered can be tracked from the DatAlert Dashboard. IT staff can either intervene and respond manually, or set up scripts to run automatically — for example, to disable compromised accounts.

If a specific data security law or regulation requires a breach notification to be sent to an authority, DatAlert can provide some of the information that's typically required — the files that were accessed, the types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules

Data Security Compliance and DatAdvantage, Part II:  More on Risk Assessment

I can’t really overstate the importance of risk assessments in data security standards. It’s really at the core of everything you subsequently do in a security program. In this post we’ll finish discussing how DatAdvantage helps support many of the risk assessment controls that are in just about every security law, regulation, or industry security standard.

Last time, we saw that risk assessments were part of NIST’s Identify category. In short: you’re identifying the risks and vulnerabilities in your IT system. Of course, at Varonis we’re specifically focused on sensitive plain-text data scattered around an organization’s file system.

Identify Sensitive Files in Your File System

As we all know from the major breaches of the last few years, poorly protected folders are where the action is for hackers: they've been focusing their efforts there.

The DatAdvantage 4a report I mentioned in the last post is used for finding sensitive data in folders with global permissions. Varonis uses various built-in filters, or rules, to decide what's considered sensitive.

I counted about 40 such rules, covering credit card numbers, social security numbers, and the various personal identifiers that are required to be protected by HIPAA and other laws. And with our new GDPR Patterns, there are now filters — over 250! — covering phone numbers, licenses, and national IDs for EU countries.

Identify Risky and Unnecessary Users Accessing Folders

We now have a folder that is a potential source of data security risk. What else do we want to identify?

Users that have accessed this folder is a good starting point.

There are a few ways to do this with DatAdvantage, but let’s just work with the raw access audit log of every file event on a server, which is available in the 2a report. By adding a directory path filter, I was able to narrow down the results to the folder I was interested in.

So now we at least know who's really using this specific folder (and its sub-folders). Oftentimes this is a far smaller pool of users than has been enabled through the group permissions on the folder. In any case, this should be the basis of a risk assessment discussion aimed at crafting more tightly focused groups for this folder and setting an owner who can then manage the content.

In the Review Area of DatAdvantage, there's more graphical support for finding users accessing folders, the percentage of the Active Directory group who are actually using the folder, as well as recommendations for groups that should be accessing the folder. We'll explore this section of DatAdvantage further below.

For now, let’s just stick to the DatAdvantage reports since there’s so much risk assessment power bundled into them.

Another, similar discussion can be based on using the 12l report to analyze folders that contain sensitive data but have global access — i.e., that include the Everyone group.

There are two ways to think about this very obvious risk. You can remove the Everyone access on the folder, though this can and likely will cause headaches for users. DatAdvantage conveniently has a sandbox feature that allows you to test the change.

On the other hand, there may be good reasons the folder has global access, and perhaps there are other controls in place that would (in theory) help reduce the risk of unauthorized access. This is a risk discussion you’d need to have.

Another way to handle this is to see who’s copying files into the folder — maybe it’s just a small group of users — and then establish policies and educate these users about dealing with sensitive data.

You could then go back to the 1A report, and set up filters to search for only file creation events in these folders, and collect the user names (below).

Who’s copying files into my folder?

After emailing this group of users with follow-up advice and information on copying, say, spreadsheets with credit card numbers, you can run the 12l report the next month to see whether any new sensitive data has made its way into the folder.

The larger point is that the DatAdvantage reports help identify the risks and the relevant users involved so that you can come up with appropriate security policies — for example, least-privileged access, or perhaps looser controls but with better monitoring or stricter policies on granting access in the first place. As we’ll see later on in this series, Varonis DatAlert and DataPrivilege can help enforce these policies.

In the previous post, I listed the relevant controls that DA addresses for the core identification part of risk assessment. Here’s a list of risk assessment and policy making controls in various laws and standards where DatAdvantage can help:

  • NIST 800-53: RA-2, RA-3, RA-6
  • NIST 800-171: 3.11.1
  • HIPAA:  164.308(a)(1)(i), 164.308(a)(1)(ii)
  • Gramm-Leach-Bliley: 314.4(b),(c)
  • PCI DSS 3.x: 12.1,12.2
  • ISO 27001: A.12.6.1, A.18.2.3
  • CIS Critical Security Controls: 4.1, 4.2
  • New York State DFS Cybersecurity Regulations: 500.03, 500.06

Thou Shalt Protect Data

A full risk assessment program would also include identifying external threats — new malware, new hacking techniques. With this new real-world threat intelligence, you and your IT colleagues should go back, re-adjust the risk levels you initially assigned, and re-strategize.

It’s an endless game of cyber cat-and-mouse, and a topic for another post.

Let's move to the next broad functional category, Protect. One of the critical controls in this area is limiting access to only authorized users. This is easier said than done, but we've already laid the groundwork above.

The guiding principles are typically least-privileged access and role-based access controls. In short: give appropriate users just the access they need to do their jobs or carry out their roles.

Since we’re now at a point where we are about to take a real action, we’ll need to shift from the DatAdvantage Reports section to the Review area of DatAdvantage.

The Review Area tells me who’s been accessing the legal\Corporate folder, which turns out to be a far smaller set than has been given permission through their group access rights.

To implement least-privilege access, you’ll want to create a new AD group for just those who really, truly need access to the legal\Corporate folder. And then, of course, remove the existing groups that have been given access to the folder.
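
If you were doing this remediation by hand rather than through the Review Area, it might look something like the following sketch. It assumes the Microsoft ActiveDirectory PowerShell module, and the group, user, and path names are all hypothetical:

# Sketch: create a tightly scoped group and swap it in on the folder's ACL
Import-Module ActiveDirectory
New-ADGroup -Name 'Legal-Corporate' -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity 'Legal-Corporate' -Members 'alice','bob'   # the true users
$folder = 'E:\legal\Corporate'
$acl    = Get-Acl $folder
$rule   = New-Object System.Security.AccessControl.FileSystemAccessRule('CORP\Legal-Corporate','Modify','ContainerInherit,ObjectInherit','None','Allow')
$acl.AddAccessRule($rule)
# Drop the overly broad legacy group's explicit entries from the ACL
$acl.Access | Where-Object { $_.IdentityReference -like '*AllStaff*' } |
    ForEach-Object { [void]$acl.RemoveAccessRule($_) }
Set-Acl $folder $acl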

In the Review Area, you can select and move the small set of users who really need folder access into their own group.

Yeah, this assumes you've done some additional legwork during the risk assessment phase — spoken to the users who accessed the legal\Corporate folder, identified the true data owners, and understood what they're using this folder for.

DatAdvantage can provide a lot of support in narrowing down who to talk to. So by the time you’re ready to use the Review Area to make the actual changes, you already should have a good handle on what you’re doing.

One other key control, which we'll discuss in more detail next time, is managing file permissions for the folders.

Essentially, that's where you find and assign data owners, and then ensure that there's a process going forward that allows the owner to decide who gets access. We'll show how Varonis has a key role to play here through both DatAdvantage and DataPrivilege.

I'll leave you with this list of least-privilege and permission management controls that Varonis supports:

  • NIST 800-53: AC-2, AC-3, AC-6
  • NIST 800-171: 3.1.4, 3.1.5
  • PCI DSS 3.x: 7.1
  • HIPAA: 164.312 a(1)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Data Security Compliance and DatAdvantage, Part I:  Essential Reports for Risk Assessment

Over the last few years, I’ve written about many different data security standards, data laws, and regulations. So I feel comfortable in saying there are some similarities in the EU’s General Data Protection Regulation, the US’s HIPAA rules, PCI DSS, NIST’s 800 family of controls and others as well.

I'm really standing on the shoulders of giants, in particular the friendly security standards folks over at the National Institute of Standards and Technology (NIST), in understanding this inter-connectedness. They're the go-to people for our government's own data security standards: both for internal agencies (NIST 800-53) and for outside contractors (NIST 800-171). And through its voluntary Critical Infrastructure Security Framework, NIST is also influencing data security ideas in the private sector.

One of their big ideas is to divide security controls, which every standard and regulation has in one form or another, into five functional areas: Identify, Protect, Detect, Respond, and Recover. In short: give me a data standard, and you can map its controls into one of these categories.

The NIST big picture view of security controls.

The idea of commonality led me to start this series of posts about how our own products, principally Varonis DatAdvantage, can help meet many of the key controls and legal requirements even though they're not targeted at any specific data standard or law. The out-of-the-box reporting feature in DatAdvantage is a great place to start to see how all this works.

In this first blog post, we'll focus on the DA reporting functions that roughly cover the Identify category. This is a fairly large area in itself, taking in asset identification, governance, and risk assessment.

Assets: Users, Files, and More

For DatAdvantage, users, groups, and folders are the raw building blocks used in all its reporting. However, if you want to view pure file system asset information, you can go to the following three key reports in DatAdvantage.

The 3a report gives IT staff a listing of Active Directory group membership. For starters, you could run the report on the all-encompassing Domain Users group to get a global user list (below). You can also populate the report with any AD property associated with a user (email, manager, department, location, etc.).

For folders, report 4f provides access paths, size, number of subfolders, and the share path.

Beyond a vanilla list of folders, IT security staff usually wants to dig a little deeper into the file structure in order to identify sensitive or critical data. What is critical will vary by organization, but generally they’re looking for personally identifiable information (PII), such as social security numbers, email addresses, and account numbers, as well as intellectual property (proprietary code, important legal documents, sales lists).

With DatAdvantage’s 4g report, Varonis lets security staff zoom into folders containing sensitive PII data, which is often scattered across huge corporate file systems. Behind the scenes, the Varonis classification engine has scanned files using PII filters for different laws and regulations, and rated the files based on the number of hits — for example, number of US social security numbers or Canadian driver’s license numbers.

The 4g report lists these sensitive files from highest to lowest hit count. By the way, this is the report our customers often run first and find very eye-opening — especially if they were under the impression that there's no way millions of credit card numbers could be found in plaintext.

Assessing the Risks

We’ve just seen how to view nuts-and-bolts asset information, but the larger point is to use the file asset inventory to help security pros discover where an organization’s particular risks are located.

In other words, it’s the beginning of a formal risk assessment.

Of course, the other major part of assessment is to look (continuously) at the threat environment and then be on the hunt for specific vulnerabilities and exploits. We’ll get to that in a future post.

Now let’s use DatAdvantage for risk assessments, starting with users.

Stale user accounts are an overlooked scenario with lots of potential risk. Essentially, user accounts are often not disabled or removed when an employee leaves the company or a contractor's temporary assignment ends.

For the proverbial disgruntled employee, it's not unusual for this former insider to still have access to his account. Or for hackers to gain access to a no-longer-used third-party contractor's account and then leverage that to hop into their real target.

In DatAdvantage's 3a report, we can produce a list of stale user accounts based on the last logon time maintained by Active Directory.
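
Report 3a does this from within DatAdvantage; for comparison, here's roughly the same question asked directly of Active Directory in PowerShell. The ActiveDirectory module and the 90-day staleness threshold are assumptions for the sketch:

# Rough stale-account query: enabled users with no logon in 90 days
# (LastLogonDate is the replicated lastLogonTimestamp, accurate only to about two weeks)
$cutoff = (Get-Date).AddDays(-90)
Get-ADUser -Filter { Enabled -eq $true } -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt $cutoff } |
    Select-Object Name, SamAccountName, LastLogonDate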

The sensitive data report that we saw earlier is the basis for another risk assessment report. We just have to filter on folders that have “everyone” permissions.
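
If you want to spot-check the same condition yourself, one quick way is to walk a folder tree and flag any ACL that contains the Everyone group. A minimal sketch, with a hypothetical root path:

# Sketch: flag folders whose ACLs include the Everyone group
Get-ChildItem 'E:\shares' -Recurse -Directory |
    Where-Object { (Get-Acl $_.FullName).Access.IdentityReference.Value -contains 'Everyone' } |
    Select-Object FullName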

Security pros know from the current threat environment that phishing or SQL injection attacks allow an outsider to get the credentials of an insider. With no special permissions, a hacker would then have automatic access to folders with global permissions.

Therefore, there's a significant risk in having sensitive data in these open folders (assuming there are no other compensating controls).

DatAdvantage’s 4a report nicely shows where these files are.

Let’s take a breath.

In the next post, we'll continue our journey through DatAdvantage by finishing up with the risk assessment area and then focusing on the Protect and Detect categories.

For those compliance-oriented IT pros and other legal-istas, here's a short list of regulations and standards (based on our customers' requests) that the above reports help support:

  • NIST 800-53: IA-2,CM-8
  • NIST 800-171: 3.5.1
  • HIPAA:  45 CFR 164.308(a)(1)(ii)(A)
  • GLBA: FTC Safeguards Rule (16 CFR 314.4)
  • PCI DSS 3.x: 12.2
  • ISO 27001: A.7.1.1
  • New York State DFS Cybersecurity Regulations: 500.02
  • EU GDPR: Records of Processing (Article 30), Security of Processing (Article 32) and Impact Assessments (Article 35)

The Essential Guide to Identifying Your Organization’s Most Sensitive Content

What do hackers want? If you answered money — always a safe bet — then you'd be right. According to the Verizon Data Breach Investigations Report (DBIR), financial gain is still the motivation for over 75% of the incidents it investigated.

A better answer to the above question is that hackers want data — either monetizable or sensitive content — that is scattered across large corporate file systems. These are the unencrypted user-generated files (internal documents, presentations, spreadsheets) that are part of the work environment. Or, if not directly created by users, these files can be exports from structured databases containing customer accounts, financial data, sales projections, and more.

Our demand for this data has grown enormously and so have our data storage assets. Almost 90% of the world’s data was created over the last 2 years alone, and by 2020 data will increase by 4,300% — that works out to lots of spreadsheets!

Challenges of Data Security

Unfortunately, the basic tools that IT admins use to manage corporate content — often those bundled with the operating system — are not up to the task of finding and securing the data.

While you'll need outside vendors to help protect your data assets, that doesn't necessarily mean there's agreement on the best way to do this. Sure, you can try to lock the virtual doors with firewalls and intrusion detection systems — simply preventing anyone from getting in.

Or you can take a more realistic approach and assume the hackers will get in.

Security From the Inside Out

What we’ve learned over the last few years after a string of successful attacks against well-defended companies is that it’s impossible to build an intrusion-proof wall around your data.

Why?

Hackers are being invited in by employees — they enter through the front door. The weapon of choice is phish mail: the attacker, pretending to represent a well-known brand (FedEx, UPS, Microsoft), sends an employee an email containing a file attachment that appears to be an invoice or other business document.

When employees click, they are in fact launching the malware payload, which is embedded in the attachment.

Verizon's DBIR has been tracking real-world hacker attack techniques for years. Social engineering, which includes phishing as well as other pretexting methods, has been exploding (see graph).

Social attacks are on the rise, according to the Verizon DBIR.

What Happens Once They’re in?

The DBIR team also points out that hackers can get in quickly (in just a few days or less). More often than not, IT security departments then take months to discover the attack and learn what was taken.

The secret to the hackers' stealthiness once they gain a foothold is fully undetectable (FUD) malware. They can fly under the radar of virus scanners while collecting employee keystrokes, probing file content, and then removing or exfiltrating data.

Or if they don’t use a phishing attack to enter, they can find vulnerabilities in public facing web properties and exploit them using SQL injection and other techniques.

And finally, hackers have been quite good at guessing passwords due to poor password creation practices — they can simply log in as the employee.

Bottom line: the attackers enter and leave without triggering any alarms.

A strategy we recommend for real-world defense against this new breed of hackers is to focus on the sensitive data first and then work out your defenses and mitigations from that point — an approach known as “inside out security”.

Three Steps to Inside Out Security

Step One: Taking Inventory of Your IT Infrastructure and Data

Before you can create an inside out security mindset, a good first step is simply to take an inventory of your IT infrastructure.

It's a requirement found in many data standards and best practices, such as ISO 27001, the Center for Internet Security Critical Security Controls, the NIST Critical Infrastructure Cybersecurity Framework (CSF), or PCI DSS.

Many standards have controls that typically go under the name of asset management or asset categorization. The goal is to force you to first know what’s in your system: you can’t protect what you don’t know about!

Along with the usual hardware (routers, servers, laptops, file server, etc.), asset categorization must also account for the digital elements —important software, apps, OSes, and, of course, data or information assets.

For example, the US National Institute of Standards and Technology (NIST) has its Critical Infrastructure Cybersecurity Framework, a voluntary guideline for protecting the IT of power plants, transportation, and other essential services. As with all frameworks, the CSF provides an overall structure into which various specific standards are mapped.

The first part of this framework has an "Identify" component, which includes asset inventory subcategories (see ID.AM-1 through ID.AM-6) and content categorization (see ID.RA-1 and ID.RA-2).

NIST CSF controls for asset discovery.

Or, if you look at PCI DSS 3.x, there are controls to identify storage hardware and, more specifically, sensitive cardholder data — see Requirement 2.4.

PCI DSS: Inventory your assets.

Step Two: Reducing Risk With Access Rights

Along with finding sensitive content, security standards and best practices have additional controls, usually under a risk assessment category, that ask you to look at the access rights of this data. The goal is to learn whether this sensitive data can be accessed by unauthorized users, and then to make adjustments to ensure that this doesn’t happen.

Again referring to the CSF, there's a "Protect" function that has sub-categories for access controls — see PR.AC-1 to PR.AC-4.

Specifically, there is a control for implementing least-privilege access (PR.AC-4), which is a way to limit authorized access by giving minimum access rights based on job functions. It's sometimes referred to as role-based access control, or RBAC.

You can find similar access controls in other standards as well.

The more important point is that you should (or it's highly recommended that you do) implement a continual process of looking at the data, determining risks, and making adjustments to access controls and other security measures. This is referred to as continual risk assessment.

Step 3: Data Minimization

In addition to identifying and limiting access, standards and best practices have additional controls to remove or archive PII and other sensitive data that’s no longer needed.

You can find this retention limit in PCI DSS requirement 3 — specifically 3.1, "Keep cardholder data storage to a minimum." The new EU General Data Protection Regulation (GDPR), a law that covers consumer data in the EU zone, also calls for putting a time limit on storing consumer data — it's mentioned in the key protection by design and default section (Article 25).

Ideas for minimizing both data collection and retention as a way to reduce risk are also part of another best practice known as Privacy by Design, which is an important IT security guideline.

The Hard Part: Data Categorization at Scale

The first step, finding the relevant sensitive data and categorizing it, is easier said than done.

Traditionally, categorization of unstructured data has involved a brute force scanning of the relevant parts of the file system, matching against known patterns using regular expressions and other criteria, and then logging those files that match the patterns.

This process, of course, then has to be repeated — new files are being created all the time and old ones updated.

But a brute force approach would start completely from scratch for each scan — beginning at the first file in its list and continuing until the last file on the server is reached.

In other words, the scan doesn't leverage any information from the last time it crawled through the file system. So if your file system has 10 million files that have remained unchanged, and one new file has been added since the last scan, then — you guessed it! — the next scan has to examine 10 million and one files.

A slightly better approach is to check the modification times of the files and then only search the contents of those files that have been updated since the last scan. It's the same strategy that an incremental backup system would use — that is, check the modification times and other metadata associated with each file.
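
In PowerShell terms, the timestamp approach looks something like this sketch (the share path, scan time, and pattern are all illustrative). Note that it still has to enumerate every file just to read its metadata:

# Timestamp-based incremental scan: crawl everything, but only open changed files
$lastScan = (Get-Date).AddDays(-1)       # when the previous scan ran (illustrative)
Get-ChildItem '\\fileserver\corp' -Recurse -File |
    Where-Object { $_.LastWriteTime -gt $lastScan } |
    Select-String -Pattern '\b\d{3}-\d{2}-\d{4}\b' -List   # SSN-like pattern, illustration only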

Even so, this is a resource-intensive process — CPU and disk accesses — and with large corporate file systems in the tens and hundreds of terabytes, it may not be very practical. The system still has to look at every file's time-stamp metadata.

A better idea is to use true incremental scanning, where you don't check each file's modification date to see if it has changed.

Instead, this optimized technique works from a known list of changed file objects provided by the underlying OS. In other words, if you can track every file change event — information that the OS kernel has access to — then you can generate a list of just the file objects that need to be scanned.
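
Here's a toy version of that event-driven idea, using the file-change notifications that .NET exposes through FileSystemWatcher. This illustrates the concept only (it's not how the Varonis Metadata Framework is implemented), and the path and pattern are hypothetical:

# Toy sketch: let the OS report changed files, then scan only those
$watcher = New-Object System.IO.FileSystemWatcher '\\fileserver\corp'
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents   = $true
Register-ObjectEvent $watcher Changed -SourceIdentifier FileChanged | Out-Null
# ...later: drain the queued change events and scan just those files
$toScan = Get-Event |
    Where-Object { $_.SourceIdentifier -eq 'FileChanged' } |
    ForEach-Object { $_.SourceEventArgs.FullPath } | Sort-Object -Unique
if ($toScan) { Select-String -Path $toScan -Pattern '\b\d{3}-\d{2}-\d{4}\b' -List }
# (a real implementation must also handle dropped events, deletes, and renames)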

This is a far better approach than a standard (but slow) crawl of the entire file system.

To accomplish this, you’ll need access to the core metadata contained in the file system — minimally, file modification time stamps but ideally other fields related to user access and group access ids.

Solutions and Conclusions

Is there a way to do true incremental scanning as part of a data-centric risk assessment program?

Welcome to the Varonis IDU (Intelligent Data Use) Classification Framework!

Here’s how it works.

The Classification Framework is built on a core Varonis technology, the Varonis Metadata Framework. It has access to internal OS metadata and can track all file and directory events — creation, update, copy, move, and delete. This is not just another app running on top of the operating system; instead, the Metadata Framework integrates at a low level with the OS while not adding any appreciable overhead.

With this metadata, the Classification Framework can perform speedy incremental scanning. The Framework scans only a small subset of file objects — just the ones that have been changed or newly created — allowing it to jump directly to these files rather than having to scan the complete system.

The Varonis IDU Classification Framework is then able to quickly classify the file content using either our own Varonis classification engine or classification metadata from a third-party source, such as RSA.

The IDU Classification Framework works with our DatAdvantage product, which uses file metadata to determine who is the true owner of the sensitive content.

Ultimately, the Varonis solution enables data owners — the managers really in charge of the content and most knowledgeable about proper access rights — to set appropriate file permissions that reduce or eliminate data exposure risk.

 

 

A Guide to PCI DSS 3.2 Compliance: A Dos and Don’ts Checklist

Before you begin, download the PCI Compliance Checklist PDF and follow along!

Overview

PCI DSS compliance is a requirement for any business that stores, processes, or transmits cardholder data. It can be tough to dig through the hundreds of pages and sub-requirements provided by the Security Standards Council (SSC). Add some version updates, and we have quite a behemoth to tackle.

As a famous galactic guide once said, “Don’t Panic!”

This guide and corresponding checklist will help you down the path to PCI DSS 3.2 compliance. Learn what changes have come with the 3.2 update, how to approach PCI’s 12 compliance requirements, and the Dos and Don’ts to keep in mind during the process.

PCI DSS 3.2 Evolving Requirements – High Level Review

PCI DSS 3.2 brings a multitude of changes and clarifications. Let's discuss them from a bird's-eye view.

New Compliance Deadlines – Get Your Calendars Out

November 1, 2016

PCI DSS 3.1 will be retired as the standard on November 1st. All assessments from this date forward will be based on 3.2.

February 1, 2018

Numerous evolving requirements have been outlined in 3.2 (read further below). The additional requirements will be considered best practice until February 1, 2018 when they become compliance requirements.

July 1, 2018

As of now, SSL and early TLS (versions 1.0 and earlier) are no longer considered secure protocols. Implementing updated protocols is considered best practice until July 1, 2018, when it becomes a requirement.

*Note: PCI DSS 3.1 had previously set a deadline of June 30, 2016, but 3.2 has extended it.

DO:

  • Migrate to an updated version of the TLS security protocol as soon as possible.

DON’T:

  • Wait until the July 1, 2018 deadline to make the adjustment. Though keeping SSL/early TLS for 2016-2017 does not indicate non-compliance, PCI Security Standards Council has determined these protocols are no longer secure. Why risk a breach?

Multi-factor Authentication for Everybody (8.3)

PCI DSS 3.2 has changed the two-factor authentication requirement to multi-factor, clarifying that you’re not limited to only two. Additionally, this requirement no longer applies to just employees working remotely, but anyone with non-console admin access to the cardholder data environment (CDE), regardless of location.

You must pick a minimum of 2 of these authentication methods to be compliant:

  • Something you know, such as a password or passphrase
  • Something you have, such as a token device or smart card
  • Something you are, such as a fingerprint or retinal scan

Primary Account Number (PAN) Masking and Visibility (3.3)

PCI DSS 3.2 clarifies that primary account numbers must be masked when displayed, meaning they can show at most the first 6 and last 4 digits. A list must also be created documenting the roles and reasons for any employees who must see more digits than the masked PAN allows.
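
The masking logic itself is simple. Here's an illustrative helper (a sketch, not anything from the standard) that keeps the maximum allowed digits; mask more when the business case permits:

# Illustrative helper: display at most the first 6 and last 4 digits of a PAN
function Hide-Pan([string]$pan) {
    $first = 6; $last = 4    # lower these to mask more digits when possible
    $pan.Substring(0, $first) + ('*' * ($pan.Length - $first - $last)) + $pan.Substring($pan.Length - $last)
}
Hide-Pan '4111111111111111'   # -> 411111******1111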

DO:

  • Check to ensure stricter requirements don’t supersede the PAN masking limit above (e.g., card brands that limit PAN masking for point-of-sale receipts).

DON’T:

  • Display the maximum number of first and last digits (6 & 4) if it’s not a business necessity.
    • Solution: Determine the minimum PAN digits your organization needs to function and mask the rest.

Stricter Reporting for Service Providers

PCI DSS 3.2 introduced a number of new requirements for service providers and has also included the Designated Entities Supplemental Validation (DESV) in its appendix. Here’s the catch – payment brands will designate when a service provider is required to fulfill the additional DESV validation.

First things first – what should all service providers do?

DO:

  • Interview personnel to confirm that the cryptographic architecture is documented and that security controls detect critical system failures and alert the data owners. (3.5.1, 10.8)
  • Create a change management process that documents changes to the system as well as their impact to PCI DSS requirements. (6.4.6)
  • Repair security control failures in a timely manner, documenting the failure duration, the cause, risk assessment results, and the new controls preventing future failures. (10.8.1)
  • (If you use segmentation) Perform penetration testing on segmentation controls after every significant change, or at least every 6 months. (11.3.4.1)
  • Perform quarterly reviews (at a minimum) to confirm personnel are following security policies and procedures. (12.11)

*Note: PCI DSS 3.2 has outlined that personnel must be reviewed on daily log reviews, firewall rule-set reviews, application of config standards to new systems, security alert responses, and conformity to change management processes.

  • Document quarterly review results including signatures from the personnel responsible for the PCI DSS compliance program. (12.11.1)
  • Ensure your executive management establishes responsibility for the protection of cardholder data and PCI DSS 3.2 compliance, including a written acknowledgement of responsibility for all service providers. (12.4.1, 12.8.2)

If an Acquirer or Payment Brand determines a service provider needs additional DESV validation, what does that mean?

The DESV provides greater assurance that service providers are maintaining PCI DSS controls regularly and more effectively. The 5 validation steps outlined in the DESV sound awfully similar to the new service provider requirements: implement a PCI DSS program, control access to the CDE, etc. It’s best to think of the DESV as the “helicopter parent” to PCI DSS’s “free-range parent.” DESV walks you through every step of building a PCI DSS compliance program and will constantly check to ensure the initiatives are maintained consistently, even down to reporting on meeting minutes.

You can access the full DESV on the PCI SSC site.

PCI’s 12 Requirement Program Made Simple

PCI DSS outlines its requirements in 12 broad steps. Sounds easy, right? Flash forward 139 pages of size-10 font to when you're asking yourself, "What did I just read?"

Let’s take a step back and see if we can group these requirements into something more digestible. How about 4 defense tactics broken down by checklist? We recommend you download our free checklist PDF to track your progress.

Defend your cardholder data

The first step is an obvious one. Customer credit card data is what this is all about, so let’s make sure that data itself is secure before working on the perimeter.

Applicable DSS requirements:

DSS Requirement 3 – Protect stored cardholder data

DO:

  • Implement documented data retention and disposal policies to minimize cardholder data you collect and how long it is retained. (3.1)
  • Interview your employees to confirm policies are being maintained and quarterly processes are in place to remove cardholder data outside of your retention policy. (3.1.b)
  • Make sure stored data and data in transit are unreadable. (3.4, 4.1)
  • Encrypt card data and protect the encryption keys. (3.5)
  • Mask your PAN data when it must be viewed (see above), using the fewest digits possible (under the first-6, last-4 display maximum). (3.3)
  • Create a cardholder flow diagram for all data flows through your organization. (1.1.3)
  • Use a data discovery tool to find misplaced sensitive data in your environment.

DON’T:

  • Store sensitive authentication data after authorization. (3.2)
    • Exception: Your organization is an issuer and has business justification.
  • Store masked PAN data.
    • Solution: Encrypt it instead.

DSS Requirement 4 – Encrypt transmission of cardholder data across open, public networks

DO:

  • Identify where you send cardholder data, ensure your policies are not violated along the journey, and confirm that only trusted keys or certificates are used. (4.1)
  • Select a sample of inbound and outbound transmissions and verify cryptography is maintained during transit. (4.1.c)

DON’T:

  • Send PANs by end-user messaging tech like email, SMS, or IM. (4.2)
  • Use new technologies that utilize SSL/early TLS. (version 1.0 or earlier)
  • Migrate cardholder data to systems using SSL/early TLS. (version 1.0 or earlier)

Defend against the external threat

So now your data itself has been defended. It is encrypted, masked when it must be displayed, and travels only through mapped cardholder data flows. Unfortunately, this is not enough to truly protect your cardholder data. Beyond the perimeter of your environment, the world is a scary place, full of malicious hackers and profiteers.

PCI DSS wants to ensure that you build your walls high and strong (I promise we’ll get away from these castle defense metaphors) by strengthening your firewalls and anti-virus measures, changing default security parameters, and maintaining secure systems with a strong change control process.

Applicable DSS Requirements:

DSS Requirement 1 – Install and maintain a firewall configuration to protect cardholder data

DO:

  • Install a firewall at each internet connection (every device) and between any demilitarized zone (DMZ) and internal network zone. (1.1.4, 1.4)
  • Configure your firewalls with a description of groups responsible for network components and business justifications for all services/protocols/ports in the configuration. (1.1.5, 1.1.6)
  • Review firewall and router configuration at least every 6 months and confirm all other, non-config traffic (inbound or outbound) is denied. (1.1.7, 1.2.1)
  • Configure routers to block connections between untrusted parts of the network and cardholder data. (1.2, 1.3)
  • Assign responsibility for someone to check firewall logs daily.

DON’T:

  • Store cardholder data in the DMZ or any untrusted network.
    • Solution: Create a secure internal network zone. (1.3.6)

DSS Requirement 2 – Do not use vendor-supplied defaults for system passwords and other security parameters

DO:

  • Identify a sys admin to be responsible for system components. (2.2.4)
  • Maintain an inventory list of all system components in scope for PCI DSS. (2.4)
  • Document policies to change vendor-supplied default passwords, default wireless settings and remove default accounts before installing a system on your network. (2.1, 2.1.1, 2.5)
  • Document system component config standards that address security weaknesses, limit service/protocol access based on need, and follow hardening standards. (2.2, 2.2.2)

DON’T:

  • Implement multiple functions on a single server, as this can create permission conflicts. (2.2.1)
  • Assume your vendors are maintaining anti-virus scanning.
    • Requirement: It’s your responsibility to confirm vendors are up-to-date and scanning regularly.

DSS Requirement 5 – Protect all systems against malware and regularly update anti-virus software or programs

DO:

  • Regularly update anti-virus software on your commonly affected systems and evaluate whether additional systems are at risk/need anti-virus. (5.1, 5.1.1, 5.1.2)
  • Automate anti-virus scans and maintain anti-virus audit logs for your systems. (5.2)
  • Ensure only admins can alter or disable anti-virus. (5.3)
  • Document procedures for protecting against malware. (5.4)

DON’T:

  • Wait to identify Malware by observing the damage it causes.
    • Solution: Install software that can observe behavioral anomalies and alert the necessary personnel.

DSS Requirement 6 – Develop and maintain secure systems and applications

DO:

  • Establish a process to keep up-to-date with the latest security vulnerabilities and identify the risk level. (6.1)
  • Install all vendor-supplied security patches. (6.2.a)
  • Document the impact, authorizer, functionality testing, and back-out procedures of all change control procedures. (6.4.5)
  • Use strict development processes and secure coding guidelines (outlined in DSS) when developing software in-house. (6.3)

DON’T:

  • Wait longer than 1 month to install vendor-supplied security patches for risk levels identified as critical. (6.2.b)
  • Test in-house software in your production environment, use production data during testing, or leave test accounts/IDs after software release. (6.3.1, 6.4.1, 6.4.3)

Defend against the internal threat

The cardholder data has been secured. The external barrier has been reinforced. Why can’t we pack up and call it a day? Because there is still a risk that the enemy is one of us…

Sometimes the internal breach can be an infiltrator who joined the organization for the sake of exploiting data, or a turncloak who has maliciously switched teams (you probably shouldn't have cracked that joke about Bob's cargo shorts on casual Friday). Just as often, though, it's the accidental insider who has access to more than they should and unknowingly creates a security vulnerability.

PCI DSS has a number of requirements in place to combat the insider risk by restricting both virtual and physical access to cardholder data within the organization.

Applicable DSS Requirements:

DSS Requirement 7 – Restrict access to cardholder data by business need to know

DO:

  • Create a list of roles with access to the CDE that includes the definition of each role, their privilege level, and what permissions are required for each role to function. (7.1, 7.3)
  • Create a least-privilege policy for all employees and a default “deny-all” setting on all access control settings. (7.1.2, 7.2.3)
  • Require documented approval by authorizers for any privilege assignments or privilege changes in the CDE. (7.1.4)

DON’T:

  • Give excessive permissions to a role for that “rainy day” when they might require it.
    • Solution: Use a least privilege model where permissions are granted only by business need.
    • Solution: Grant access only for the period of time that it’s needed.

DSS Requirement 8 – Identify and authenticate access to system components

DO:

  • Define and document procedures for user identification and authentication on all system components. (8.1, 8.4)
  • Assign unique IDs to all users, test those privilege controls, and revoke access on inactive/terminated users. (8.1.1, 8.1.2, 8.1.3, 8.1.4)
  • Monitor all accounts used by vendors and other third parties, then disable them when not in use. (8.1.5)
  • Lock out user IDs for 30 minutes after six failed access attempts (a quick Active Directory sketch follows this list). (8.1.6, 8.1.7)
  • Follow best practice guidelines outlined in DSS for password setting – including strong password composition, encrypting credentials, verifying ID before reset, and mandatory resets every 90 days. (8.2.1, 8.2.2, 8.2.3, 8.2.4)
  • Incorporate multi-factor authentication for all non-console admin access and remote access to CDE. (8.3)
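
Several of these authentication requirements map directly onto Active Directory's default domain password policy. A minimal sketch, assuming the ActiveDirectory module and a hypothetical domain name:

# Sketch: map the 8.1.6/8.1.7 lockout and 8.2.4 reset requirements onto AD policy
Set-ADDefaultDomainPasswordPolicy -Identity corp.example.com `
    -LockoutThreshold 6 -LockoutDuration 00:30:00 -LockoutObservationWindow 00:30:00 `
    -MaxPasswordAge 90.00:00:00 -ComplexityEnabled $true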

DON’T:

  • Use the same password for multiple accounts or devices – once one is compromised, they all will be.
  • Use shared user IDs or authentication methods in the CDE. (8.5)

DSS Requirement 9 – Restrict physical access to cardholder data

DO: (if applicable)

  • Document process for physical access to CDE systems and a list of all devices, limiting access to roles that require it and monitoring all with authorization tokens and surveillance. (9.1.1a, 9.1.1b, 9.2, 9.3, 9.9.1)
  • Create visitor authorization and access controls that ensure visitors are identified, documented, and monitored in areas that access the CDE. (9.4)
  • Establish firm controls for physical media moved within the facility, use tracked couriers when moved outside, and ensure destroyed media cannot be reconstructed. (9.5, 9.6, 9.8)
  • Train employees with processes to identify outside vendors requesting physical access and identify/report suspicious behavior. (9.9.2, 9.9.3)

Defend against complacency

We’ve secured our cardholder data itself. The perimeter has been reinforced against external threats. Measures have been put in place to negate the threat of internal breaches. What dangers could be left? Complacency.

Just getting to a state of PCI compliance for your assessment is not enough. PCI DSS 3.2 acknowledges this and has requirements that reinforce monitoring access to all network resources, testing policies regularly, and developing programs that keep personnel involved year-round.

Applicable DSS Requirements:

DSS Requirement 10 – Track and monitor all access to network resources and cardholder data

DO:

  • Implement audit trails for all systems, alerts on suspicious activity, and a response plan for those anomalies. (10.1, 10.2, 10.6.2.b)
  • Track all admin actions, login attempts, account changes, and pauses in the audit trail. (10.2.3, 10.2.4, 10.2.5, 10.2.6)
  • Ensure each audit log captures user ID, event type, date and time, event success or failure, where the event originated from, and what resources are affected. (10.3)
  • Keep all audit logs for at least one year with the last three months available for analysis. (10.7)
  • Prevent audit trail tampering and use software to alert on log changes. (10.5)
  • Create a process that reviews CHD system logs daily, and one that reviews all other system components based on your risk assessment results. (10.6.1, 10.6.2)

DON’T:

  • Give audit log access to anyone without a role justification. (10.5.1)
  • Leave the daily audit trail review to manual methods – this can be a massive time void
  • Store audit logs for external-facing technologies on those machines – they can be compromised. (10.5.4)

DSS Requirement 11 – Regularly test security systems and processes

DO:

  • Document each authorized wireless access point with a business justification. (11.1.1)
  • Implement quarterly processes to detect and respond to both authorized and unauthorized wireless access points. (11.1, 11.1.2)
  • Run vulnerability scans internally (with qualified personnel) and externally (with an Approved Scanning Vendor) every quarter and after any significant network change, correcting and re-scanning until all identified vulnerabilities are resolved. (11.2)
  • Run penetration tests internally and externally (with qualified personnel or a third party) at least annually, correcting and retesting any exploitable risk found. (11.3)
  • Deploy a change-detection tool that alerts personnel to any unauthorized modification of critical system files and runs file comparisons at least weekly – see the sketch below. (11.5)
  • Document a process for responding to change-detection alerts. (11.5.1, 11.6)

DON’T:

  • Stick to the bare minimum for testing.
    • Solution: run reviews more frequently than required – you’ll spot and respond to weaknesses sooner, before they are exploited.
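
Commercial change-detection products do this at scale, but the core idea behind 11.5 – hash critical files and compare them against a known-good baseline – fits in a short sketch. The watch list and baseline file below are hypothetical placeholders.

```python
import hashlib
import json
from pathlib import Path

CRITICAL_FILES = ["/etc/passwd", "/etc/ssh/sshd_config"]  # hypothetical watch list
BASELINE = Path("baseline_hashes.json")                   # hypothetical known-good baseline

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def create_baseline() -> None:
    """Record the current hashes as the known-good state."""
    BASELINE.write_text(json.dumps({p: hash_file(p) for p in CRITICAL_FILES}))

def check_for_changes() -> list:
    """Return the files whose hashes no longer match the baseline (run at least weekly)."""
    baseline = json.loads(BASELINE.read_text())
    return [p for p in CRITICAL_FILES if baseline.get(p) != hash_file(p)]
```

In practice, the alerting, the tamper-protection of the baseline itself, and the documented response process (11.5.1) are what separate a tool from a toy.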

DSS Requirement 12 – Maintain a policy that addresses info security for all personnel

DO:

  • Publish an annually reviewed security policy that documents all critical CDE devices and services, defines appropriate access (by role, usage, and location), and implements a risk assessment process. (12.1, 12.2, 12.3, 12.4)
  • Annually complete security awareness training with all personnel who access the CDE. (12.6)
  • Assign responsibility for documenting procedures, analyzing security alerts, administering accounts, and monitoring access to all data. (12.5)
  • Maintain documentation on service providers that includes the services they deliver, due diligence in selection (including a risk assessment), written acknowledgement that each provider accepts responsibility for the CHD it handles, and a process to monitor the providers’ PCI DSS compliance. (12.8.1, 12.8.2, 12.8.3, 12.8.4, 12.8.5)
  • Create an annually tested incident response plan that outlines tasks for each role, specific actions for different threats and alerts, coverage of critical systems and data backups, legal requirements for reporting, and notification of the card brands. (12.10)

Looking Ahead

PCI DSS 3.2, ultimately, urges you to make a shift in company culture. PCI compliance shouldn’t be something that is discussed only with an impending assessment, but on a regular basis. Find your sensitive data, restrict and monitor access to it, alert on suspicious behavior, and document everything. The checklist above will not only help you move toward these goals, but will also prepare management to deal with new threats and requirements. Click here for a printable version of the PCI Compliance checklist.

 


 

 

Sources:

http://blog.pcisecuritystandards.org/preparing-for-pci-dss-32

https://www.varonis.com/learn/pci-dss-3-1-it-requirements/

https://www.pcisecuritystandards.org/document_library?category=pcidss&document=pci_dss

http://blog.pcisecuritystandards.org/preparing-for-pci-dss-3-2-summary-of-changes

http://info.securitymetrics.com/pci-guide

 

EU GDPR Spotlight: Protection by Design and Default


Privacy by Design (PbD) is a well-intentioned set of principles – see our cheat sheet – to get the C-suite to take consumer data privacy and security more seriously. Overall, PbD is a good idea and you should try to abide by it. But with the General Data Protection Regulation (GDPR), it’s more than that: it’s the law if you do business in the EU zone!

PbD has sensible guidelines and practices concerning consumer access to their data, and making privacy policies open and transparent.  These are not controversial ideas, except if you are, ahem, a large Internet company that collects lots of consumer data.

And PbD also dispenses good general advice on data security that can be summarized in one word: minimize.

Minimize collection of consumer data, minimize who you share the data with, and minimize how long you keep it. Less is more: less data for the hacker to take means a more secure environment.

By Design and By Default

While you’re keeping consumer data, according to PbD, you also should have “end-to-end” security in place. Privacy is supposed to be baked into every system that handles the data.

These all seem like reasonable things to do. Various security best practices and standards — for example, PCI DSS and the CIS Critical Security Controls — have been offering similar PbD-like security recommendations.

However, the EU has been way ahead of the US in making PbD principles part of its data regulations. In fact, the existing Data Protection Directive, the current law, contains PbD principles in various places – particularly data minimization and giving consumers the right to access and correct their data.

The new GDPR, which will go into effect in 2018, retains the existing rules on data and then goes a step further. PbD is explicitly spelled out in article 25, “Data protection by design and by default”.  Here are two relevant passages:

… implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing…

The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage …

Got that? Limiting and minimizing are now the law of the land, with respect to data. (I’ll talk about pseudonymization in another post. It’s a cool idea that lets you protect data and consumer privacy without having to resort to encryption.)
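
As a quick preview, here's a minimal sketch of one common pseudonymization approach – replacing a direct identifier with a keyed hash, so the mapping is only reproducible by whoever holds a separately stored secret. The key handling shown is purely illustrative.

```python
import hashlib
import hmac

# The secret key must live apart from the pseudonymized data set --
# that separation is what makes this pseudonymization, not plain hashing.
SECRET_KEY = b"store-me-somewhere-else"  # illustrative only

def pseudonymize(email: str) -> str:
    """Replace an identifier with a stable keyed token (HMAC-SHA-256)."""
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

# The same input always yields the same token, so records can still be
# joined for analysis without exposing the underlying identity.
print(pseudonymize("alice@example.com"))
```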

Impact on Your Marketing Campaign

The new GDPR has direct, practical implications. Just as an example, consider the impact it will have on web-based marketing.

Businesses are always trying to get information about their customers and looking to bring in new leads using the full digital arsenal — web, email, mobile. And when given half a chance, marketers always want more data — age, income, zip code, last book read, favorite ice cream, favorite food, etc. — even for the simplest consumer interaction.

What the EU GDPR says is that marketers should limit the data they collect to the purpose for which it is being collected — do I really need zip codes or favorite books? — and should not retain the data once it’s no longer relevant.

So the data you collected from that web campaign over five years ago — maybe 5,000 email addresses along with favorite pet names — that now lives in a spreadsheet no one ever looks at? You should find it and delete it.

If a hacker gets hold of it and uses it for phishing purposes, you’ve created a security risk for your customers.

Plus, if the local EU authority can trace the breach back to your company, you can face heavy fines.
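
If you want to automate that kind of cleanup, a retention sweep can be a few lines of code. In this sketch the five-year cutoff, the `leads` table, and the `collected_at` column are hypothetical stand-ins for your own retention policy and schema.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 5 * 365  # hypothetical policy: purge campaign data after five years

def purge_stale_leads(db_path: str) -> int:
    """Delete marketing records collected before the retention cutoff."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    with sqlite3.connect(db_path) as conn:  # commits on success
        # assumes collected_at is stored as an ISO-8601 text column
        cur = conn.execute("DELETE FROM leads WHERE collected_at < ?", (cutoff,))
        return cur.rowcount  # number of stale records removed
```

The point isn't the code – it's that "the period of their storage" is now something you're expected to enforce, not just write into a policy.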

Need more EU General Data Protection Regulation knowledge? Our white paper gives you a complete rundown!

There’s Something About Frameworks: A Look at HITRUST’s CSF


Repeat after me: frameworks are not standards. They are instead often used as a guide to navigate through the underlying standards.

There are lots of frameworks cropping up in the cybersecurity world. If you’re completely new to the idea of, let’s say, protecting critical infrastructure and not even sure how to begin working out the right controls, you can take a trip to NIST’s own Cybersecurity Framework for critical infrastructure.

Is there anything similar in the world of healthcare to navigate its complex security and privacy regulations?

The folks at the Health Information Trust Alliance, or HITRUST, worked with healthcare and IT experts to come up with their own Common Security Framework (CSF).

Nitty Gritty of Common Security Framework

A healthcare security framework has to take into account the entire scope of healthcare security, including not just the actual health data, but other data as well, for example, financial and transactional information.

So it’s not surprising that HITRUST’s sprawling CSF — over 400 pages of guidance goodness covering 13 different areas — has controls that map into HIPAA’s safeguards for protected health information, PCI DSS for credit card data, and COBIT controls related to financial information — to name just a few!

The overall idea is you dive into CSF to refer to an area in healthcare you’re interested in safeguarding, say access control, and then find the actual compliance and regulatory mappings. CSF provides several levels of these mappings — that would be Level 1, Level 2, and Level 3 — so that you have increasing granularity in your implementation.

For example, in the case of CSF’s information access control policy (Control 1.1a), CSF directs you to HIPAA 164.308(a)(4). Remember that HIPAA requirement? It’s where HIPAA tells you to implement a policy so that authorized users access only the minimum information they need to do their jobs.

Keep in mind that HIPAA is technology neutral and not overly prescriptive. So if you want a more specific requirement for getting this done, the Level 2 mapping then directs you to ISO 27002 A.9.1.1. To jog your memory, this is where the ISO folks get into the weeds on prescribing specific controls for apps and information.

Varonis Can Help

Yes, we can! CSF is a giant meta-standard and a good resource for those planning comprehensive solutions for every aspect of healthcare security, down to the level of electrical equipment safety — see CSF Control 0.8.d and its NIST Cybersecurity Framework mapping!

Varonis already provides support for many of the key compliance standards — especially the aforementioned HIPAA and PCI DSS — which form the basis of many of the Level 1 and Level 2 mappings.

If you’re looking for an overall map — yes, another map! — that shows some of the key areas where Varonis can help in CSF, please review the table below.

 

CSF Control Category / Mappings

01: Access Control
(.02) Authorized Access to Information Systems
(.06) Application and Information Access Control
  • HIPAA 164.308(a)
  • PCI DSS 8.1, 8.2

02: Human Resources Security
(.04i) Termination of Employment / Removal of Access Rights
  • HIPAA 164.308(a)
  • PCI DSS 8.1.3

03: Risk Management
(.01b) Performing Risk Assessments
(.01c) Risk Mitigation
  • HIPAA 164.308(a)
  • PCI DSS 1.2

06: Compliance
(c) Protection of Organizational Records (retention)
(d) Data Protection and Privacy of Covered Information (retention)
  • PCI DSS 3.1

07: Asset Management
(.02d) Classification Guidelines
  • HIPAA 164.308(a)

09: Communication and Operating Management
(.10aa) Monitoring/Audit Logging
  • HIPAA 164.308, 164.312
  • PCI DSS 10.1

10: Information Systems Acquisition, Development, and Maintenance
(.04) Security of System Files
  • PCI DSS 2.2

11: Information Security Incident Management
(.01a) Reporting Information Security Events
  • HIPAA 164.308(a)
  • HIPAA 164.404
  • PCI DSS 12

How Varonis Helps with PCI DSS 3.1


The Payment Card Industry Data Security Standard (PCI-DSS) 3.1 is a set of regulations that govern how organizations manage credit card and other cardholder data. Many security professionals advocate that DSS is not only about passing an annual compliance audit, but also having programs in place for continual assessments, remediation, and monitoring.

To learn how Varonis solutions can help organizations meet PCI compliance and how we provide security to protect your organization inside and out, read our “How Varonis Helps with PCI DSS 3.1” compliance brief.


SSL and TLS 1.0 No Longer Acceptable for PCI Compliance


In April of 2015, the PCI Council released version 3.1 of their Data Security Standard (DSS). While most of the changes in this minor release are clarifications, there is at least one significant update involving secure communication protocols: the Council has decided that SSL and TLS 1.0 can no longer be used after June 30, 2016.

The fine print about these two protocols can be found under DSS Requirement 2: “Do not use vendor-supplied defaults for system passwords and other security parameters”.

I guess the ancient Netscape-developed SSL (Secure Sockets Layer) and early TLS (Transport Layer Security) are considered “other security parameters.” (We’ve got an article dedicated to the difference between SSL and TLS, if you’re curious.)

RIP SSL

In any case, the Council is responding to the well-known POODLE exploit against SSL, as well as NIST’s recent conclusions about the protocol: as of April 2014, NIST proclaimed that SSL is not approved for use in protecting Federal information.

Unfortunately, you’ll need a brief history lesson to understand the role of TLS.

Developed in the 1990s by the IETF, TLS version 1.0 was based heavily on SSL and designed to solve compatibility issues with a single, non-proprietary security protocol. A series of cryptographic improvements then followed in TLS 1.1 and the current TLS 1.2.

One key point is that TLS implementations support a downgrade negotiation process whereby the client and server can agree on the weaker SSL protocol even if they opened the exchange at the latest and greatest TLS 1.2.

Because of this downgrade mechanism, it was possible in theory to leverage the SSL-targeted POODLE attack to indirectly take a bite out of TLS by forcing servers to use the obsolete SSL.

Then in December 2014, security researchers discovered that a POODLE-type attack could be launched directly at TLS without negotiating a downgrade.

Overall, the subject gets complicated very quickly, and depending on whom you read, security pros either blame browser companies for choosing compatibility over security in their continuing support of SSL, or blame everyone for implementing the TLS standard incorrectly.

There’s a good discussion of some of these issues in this Stack Exchange Q&A.

What Can Be Done?

The PCI Council says you must completely remove support for SSL 3.0 and TLS 1.0. In short: servers and clients should disable SSL and then, preferably, transition everything to TLS 1.2.

However, TLS 1.1 can be acceptable if configured properly. The Council points to a NIST publication that tells you how to do this configuration.
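
Most TLS stacks let you pin that floor explicitly. As one hedged example, here's how a Python service might refuse SSL 3.0 and TLS 1.0/1.1 and require TLS 1.2 or later; your web server or load balancer will have an equivalent knob, and the certificate paths below are hypothetical.

```python
import ssl

# Build a server-side context that only speaks TLS 1.2 or later, in line
# with the Council's guidance to drop SSL and early TLS entirely.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuses SSLv3, TLS 1.0, and TLS 1.1
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical paths
```

The same `minimum_version` idea applies on the client side with `ssl.PROTOCOL_TLS_CLIENT`.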

PCI DSS Explained: Our New White Paper Decodes the Complexity


The Payment Card Industry Data Security Standard (PCI DSS) is not just another list of requirements for protecting data. In 2013, the number of credit and debit card transactions worldwide reached over 100 billion—that’s lots of swipes and 16-digit numbers entered! With its almost 300 controls, PCI DSS provides the rules of the road for protecting and securing credit card data for every bank, retailer, or ecommerce site.

But does the average IT security person who’s charged with implementing its security safeguards really understand this complex standard?

Likely not! And that’s why we came up with PCI DSS for IT Pros and Other Humans. Our white paper simplifies the 12 core controls and condenses them into three higher-level steps.

Why simplify? Our approach is based on the PCI Council’s own best practices advice, which puts monitoring, assessment, and mitigation at the center of a real-world data security program.

To find out why strictly following the DSS controls is just not enough, you’ll want to read our paper.