All posts by Kieran Laffan

EternalRocks leaves backdoor trojan for remote access to infected machines

What we know so far

The WannaCry ransomware worm outbreak from last Friday week used just one of the leaked NSA exploit tools, ETERNALBLUE, which exploits vulnerabilities in the SMBv1 file sharing protocol.

On Wednesday, security researcher Miroslav Stampar, a member of the Croatian Government CERT and creator of the well-known sqlmap SQL injection pentesting tool, detected a new self-replicating worm which also spreads via several SMB vulnerabilities. This worm, dubbed EternalRocks, uses seven leaked NSA hacking tools to infect computers via SMB ports exposed online.

Unlike WannaCry, EternalRocks has no kill switch to stop the code from executing. It uses some files with the same names as WannaCry to try to fool security researchers into thinking it is WannaCry.

EternalRocks analysis

  • Spreads via SMB ports exposed online
  • SMB reconnaissance tools SMBTOUCH and/or ARCHITOUCH are used to scan for open SMB ports on the public internet
  • If open ports are found, then one of four SMB exploit tools, each targeting different vulnerabilities in the Microsoft SMB file sharing protocol, is used to get inside the network:
    • ETERNALBLUE (SMBv1 exploit tool)
    • ETERNALCHAMPION (SMBv2 exploit tool)
    • ETERNALROMANCE (SMBv1 exploit tool)
    • ETERNALSYNERGY (SMBv3 exploit tool)
  • Once inside, EternalRocks downloads the Tor browser, which supposedly* allows you to browse the web anonymously and access sites hosted on the Dark Web which cannot be accessed from normal web browsers like Chrome, IE, or Firefox.
  • Downloads .NET components which will be used later
  • Tor connects to a C&C server on the Dark Web
  • After a delay, currently set to 24 hours, the C&C server responds and an archive containing the 7 SMB exploits is downloaded. This delay is likely intended to evade sandbox analysis
  • Next the worm scans the internet for open SMB ports – to spread the infection to other organizations.

*It is questionable just how anonymous Tor really is considering that almost everyone involved in developing Tor was funded by the US Government.
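The reconnaissance step in the list above boils down to probing the SMB ports (TCP 445, and 139 for NetBIOS) for a response. A minimal sketch of such a check in Python – illustrative only, not the actual SMBTOUCH/ARCHITOUCH logic:

```python
import socket

def smb_port_open(host, port=445, timeout=2.0):
    """Return True if the host accepts TCP connections on an SMB port.

    This is the kind of probe SMBTOUCH-style reconnaissance performs;
    blocking these ports at the perimeter makes it fail.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this probe succeeds from the public internet against one of your hosts, that host is a candidate target for the exploit tools listed above.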

The Good News

EternalRocks does not appear to have been weaponized (yet). No malicious payload – like ransomware – is unleashed after infecting a computer.

The Bad News

Even if SMB patches are retroactively applied, machines infected by the EternalRocks worm are left remotely accessible via the DOUBLEPULSAR backdoor trojan. The DOUBLEPULSAR installation left behind by EternalRocks is wide open: whether by design or not, other hackers could use it to access machines infected by EternalRocks.

What you should do

  • Block external access to SMB ports (TCP 139 and 445) on the public internet
  • Patch all SMB vulnerabilities
  • Block access to the C&C server (ubgdgno5eswkhmpy.onion) and, while you are at it, block access to Tor
  • Monitor for any newly added scheduled tasks
  • A DOUBLEPULSAR detection script is available on GitHub
  • Make sure DatAlert Analytics is up to date and monitoring your organization for insider threats
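Monitoring for newly added scheduled tasks can be as simple as diffing periodic snapshots of the task list (e.g. the output of `schtasks /query` on Windows). A sketch of the comparison logic, with task names represented as plain strings:

```python
def new_tasks(baseline, current):
    """Return scheduled-task names present in the current snapshot but
    absent from the baseline. EternalRocks persists via scheduled tasks,
    so any new entry deserves scrutiny."""
    return sorted(set(current) - set(baseline))
```

Take a baseline snapshot on a known-clean machine, then alert whenever the diff is non-empty.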

For detailed information on EternalRocks, check out the repository set up by Stampar a few days ago on GitHub.

Why did last Friday’s ransomware infection spread globally so fast?

Quick ransomware background

Ransomware is a type of malware that encrypts your data and demands that you pay a ransom to restore access to your files. Cyber criminals usually request that the ransom be paid in Bitcoin, the most popular cryptocurrency (built on a distributed ledger), which can be used to buy and sell goods. By nature, Bitcoin transactions (e.g. ransom payments) are very difficult to trace.

Historically, most ransomware infections have used the attack vector – how they get in – of social engineering (like clickbait on a social media platform – think cute kitty pics on Facebook or Twitter) or email phishing campaigns containing attachments or links to a website. The end result is that a malicious payload gets a foothold on a machine inside a corporate network. Unfortunately, all of those next-generation perimeter defenses that organizations spend good money on are not that difficult to bypass in order to get inside.

Once inside, most ransomware will scan the internal network to see which servers host file shares, attempt to connect to each share, encrypt its contents, and then demand a ransom be paid to regain access to the now encrypted files. End users can usually access way more data than they should be able to: either through wide-open permissions or by accumulating permissions over the course of their employment at their company. Think for a minute just how often you’ve stumbled across a folder or files which you know you shouldn’t be able to access. Access controls are out of control. In this case, IT is typically blind because of the sheer complexity of file system permissions.

Good to know, but what was different last week?

Without going too much into the technical details, I can tell you that the code behind the biggest ransomware outbreak in history isn’t actually all that special. It’s a type of cryptoworm: a self-propagating form of malware. That means that once it gets a foothold, it can spread autonomously without the need for someone to remote control it.

Normally, ransomware targets unstructured data hosted on file shares – this ransomware, however, did not discriminate.

In April, several hacking tools created by the NSA were leaked online. These hacking tools exploit vulnerabilities in hardware and software so that they can hack into or move laterally around a computer network.

WannaCry ransomware (also known as WCry / WanaCry / WannaCrypt0r / WannaCrypt / Wana Decrypt0r) – the type responsible for last Friday’s attack – went a few steps further: once it got onto even a single machine within a corporate network, it did the following:

  • Looped through any open RDP (Remote Desktop) sessions, to encrypt data on the remote machine
  • Sought out any vulnerable* Windows machines – endpoints (laptops/desktops/tablets) and servers – using Microsoft vulnerabilities
  • Used the traditional approach of going after file shares directly from the endpoint

*The particular vulnerability that made the difference last week was in the Microsoft SMBv1 file sharing protocol, which was used to hop from machine to machine encrypting data – like a spider web effect. Most internal servers are separated on internal networks so that end users can’t access them. The cryptoworm would need to hit just one internal server (e.g. a file server) and from there it would target whatever vulnerable servers that file server can access. This allowed it to quickly traverse entire networks, effectively crippling many of them. Like many cryptoworms, it’s self-propagating and so replicates itself and searches out to other vulnerable hosts/computer networks worldwide.

The truth is that the worldwide infection could have been much worse if not for the quick thinking of a security researcher. @MalwareTechBlog spotted that the malware code was connecting out to a nonsensical domain, which was not registered. This callout was hard-coded in case the creator wanted to stop it, and likely also to help avoid IDS/IPS sandboxing techniques. If the request comes back showing that the domain is live, the “kill switch” kicks in to stop the malicious part of the code from executing – effectively stopping the malware in its tracks. @MalwareTechBlog, acting on a hunch, registered the domain name and immediately began logging thousands of connections every second. The result was that he stopped what could have been a much wider spread infection.
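The kill-switch logic described above can be sketched as a simple DNS check: if the hard-coded domain resolves, the malware exits before doing any damage. The domain below is a hypothetical placeholder, not the real hard-coded one:

```python
import socket

# Hypothetical placeholder for the nonsensical hard-coded domain.
KILL_SWITCH_DOMAIN = "some-nonsensical-unregistered-domain.invalid"

def kill_switch_tripped(domain=KILL_SWITCH_DOMAIN):
    """Return True if the domain resolves (i.e. someone registered it).
    WannaCry halted its malicious payload when this check succeeded."""
    try:
        socket.gethostbyname(domain)
        return True
    except OSError:
        return False
```

Registering the domain flipped this check to True on every newly infected machine, which is why a single registration neutered the outbreak worldwide.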

The bad news is that new versions of the code are already in development.

Lessons Learned

Microsoft released a patch (software code update to fix vulnerabilities) for this particular SMBv1 vulnerability back in March. The sad truth of the matter is that proper vulnerability patch management processes would mean that most organizations would not have been so badly affected.

That’s not to say that vulnerability patch management processes are enough coverage for ransomware. Nor are backups, since some ransomware will hide in your backups so that after you restore files they will simply attack again.

There is no one stop shop for stopping ransomware infections or any cyber security threat for that matter. Security is all about risk reduction – and requires a layered approach with controls in place at each layer while leveraging solutions to automate processes wherever possible. If any organization says that they’re 100% safe from cyber-attacks, then they’re either delusional or telling you porky pies!

Why UBA Will Catch the Zero-Day Ransomware Attacks (That Endpoint Protection Can’t)

Ransomware attacks have become a major security threat. It feels like each week a new variant is announced – Ransom32, 7ev3n. This malware may even be involved in the next big breach. New variants such as Chimera threaten not just to ransom your data, but also to leak it online if you don’t pay up.

These cyber extortionists are not exactly the most scrupulous people, and so who’s to say they won’t sell your data online even if you pay the ransom? They don’t have to offer a Terms of Service agreement!

Let’s face it: they have a really good business model.

What’s the Signature?

Some have turned to endpoint security solutions in the hope that it will detect and stop crypto-malware. However, the industry is catching on to the fact that, as one observer put it, “signature-based antivirus software that most organizations still rely on to defend them can’t cope with modern attacks.”

A recent CIO article described the drawback best:

 “… while a signature-based approach reduces the performance hit to the systems on which it runs, it also means somebody has to be the sacrificial sheep. Somebody has to get infected by a piece of malware so that it can be identified, analyzed and other folks protected against it. And in the meantime the malefactors can create new malware that signature-based defenses can’t defend against.”

Bottom line: endpoint security solutions can’t block unknown ransomware variants by, for example, blacklisting connections to a current (but quickly outdated) list of C&C servers. They’re also bound to a single device/user/process, and offer no defense against malware that employs anti-heuristic or anti-debugging techniques.

Ransomware Prevention that Works

If endpoint security tools won’t help prevent ransomware, what will?

Northeastern University’s latest ransomware research paper, Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks, analyzed 1,359 ransomware samples and found that a “close examination on the file system activities of multiple ransomware samples suggests that by… protecting Master File Table (MFT) in the NTFS file system, it is possible to detect and prevent a significant number of zero-day ransomware attacks.”

Is there a technology that will protect your file systems based on this idea?

Answer: User Behavior Analytics (UBA). It’s an essential ransomware prevention measure.

UBA compares what users on a system are normally doing – their activities and file access patterns – against the non-normal activities of an attacker who’s stolen internal credentials. First, the UBA engine monitors normal user behavior by logging each individual user’s actions – file access, logins, and network activities. Then, over time, UBA derives a profile that describes what it means to be that user.
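As a toy illustration of the baseline idea (not Varonis’s actual threat models), a per-user rate baseline can flag the sudden burst of file-modify events that mass encryption produces. The three-standard-deviation threshold is an assumption for the sketch:

```python
from statistics import mean, stdev

def is_anomalous(baseline_rates, current_rate, n_sigmas=3.0):
    """Flag a file-activity rate (events/minute) that deviates from the
    user's historical baseline by more than n_sigmas standard deviations."""
    mu = mean(baseline_rates)
    sigma = stdev(baseline_rates)
    if sigma == 0:
        return current_rate > mu
    return (current_rate - mu) / sigma > n_sigmas
```

A user who normally touches a handful of files per minute and suddenly modifies thousands stands out immediately, regardless of which ransomware variant is responsible – no signature required.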

Identifying Ransomware with Varonis Automated UBA Threat Models

Without any configuration, Varonis UBA threat models spot the signs of ransomware activity — when files are being encrypted — and therefore can stop these attacks without having to rely on a static list of signatures.

Once detected, a combination of automated steps can be triggered to prevent the infection from spreading: for example, disabling the infected user, the infected computer, network drives on the infected machine, or the NIC.

Interested in seeing UBA in action? Let’s talk.

SQL Server Best Practices, Part II: Virtualized Environments

This article is part of the series "SQL Server Best Practices".

It is 2016 and some people still think SQL Server cannot be run on a virtual machine. SQL Server can run successfully in a VM, but SQL is resource-intensive by nature, so if you are going to virtualize SQL then you simply must adhere to best practices. Not following best practices can be the difference between poor and exceptional virtual SQL Server performance. Please see my previous blog post on general SQL Server best practices, as these apply in a virtualized environment too.

Power Management

The physical VM host should be set to high performance in the BIOS to ensure that it is firing on all cylinders, which in turn allows the hypervisor to allocate the abstracted resources as it sees fit.

Power management should always be set to high performance within Windows VMs. Balanced is a setting for laptops which need to conserve power. VMs can have serious performance issues if not configured correctly. In some environments VM power management settings can be controlled by the hypervisor, but when resource-intensive apps such as SQL Server are in play, make sure that Windows power management is set to high performance.

Always Use SLAT Compatible Server Hardware

Although it might not be the case with older hardware, most modern servers have x64 processors which support SLAT (Second Level Address Translation).

VMware and Hyper-V hosts should run 64-bit x64 processors (AMD or Intel). It is absolutely vital that the host processor supports SLAT. SLAT goes by several aliases.

  • Intel calls it Extended Page Tables
  • AMD calls it Nested Page Tables or Rapid Virtualization Indexing

SLAT enables the CPU to maintain the mapping between the virtual memory used by the VMs and the physical memory on the hypervisor host. If the CPU cannot perform this memory mapping then it would fall to the hypervisor to do so. Performance and scalability are both improved by having the CPU perform the memory mapping.

Microsoft studies have shown that SLAT:

  • Considerably reduces the host processing overhead to about 2 percent
  • Reduces the host memory requirements by about 1MB per running VM

Don’t think too much about it – just make sure that the underlying VM host’s hardware supports SLAT.

Do Not Overcommit the VM Host CPU

I cannot stress this point enough. If you overcommit the VM host and have resource-intensive applications such as SQL Server running on its VMs, then you will encounter performance issues sooner or later. It is not a problem if you have a bunch of low-resource web/app servers sharing resources, as the hypervisor can easily keep up with which VM needs which resources, but when you bring resource-intensive apps into the mix it is a recipe for disaster.

If your virtualized SQL Server workload is highly intensive, then make sure you are running the latest version of Hyper-V or vSphere as each iteration comes with new maximums for scalability.

Best practice for the initial sizing of a VM, especially one that will host a resource-intensive application such as SQL Server, is to make sure that the total number of virtual CPUs assigned to the VM does not exceed the number of physical CPU sockets (as opposed to the logical cores) available on the host machine.

CPU Ready

This is not something you want to encounter, as it is indicative of an overprovisioned VM and/or host. CPU ready is the amount of time a VM that is ready to run must wait for CPU clock cycles on the physical host because other VMs are already using the resources.

Calculating ready time can be a pain because it depends on the polling interval for the metric presented on the VM host e.g. 20 seconds (20,000 milliseconds):

(CPU ready value / 20,000 ms) x 100% = Percentage performance impact per 20 second interval.

If you extrapolate over time you can quickly see how this would cause performance degradation, especially if running high performance applications such as SQL server.

Ready values <5% per vCPU are generally OK. Ready values >5% per vCPU are a warning and you are likely already experiencing performance degradation.
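The calculation above is easy to script. A small helper, assuming the default 20-second (20,000 ms) polling interval:

```python
def cpu_ready_percent(ready_ms, interval_ms=20_000):
    """Convert a raw CPU ready value (milliseconds accumulated during one
    polling interval) into a percentage of that interval."""
    return ready_ms / interval_ms * 100
```

A raw ready value of 1,000 ms over a 20-second sample works out to 5% – right at the warning threshold per vCPU.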

Even without any misconfiguration, it is not at all difficult to find CPU ready values of >=10% when a few large VMs with several vCPUs run on a small number of physical cores, or with a similar disproportion of vCPU to pCPU.

The VM itself can also be overprovisioned: e.g. a VM with 8 vCPUs must wait for 8 physical CPUs on the underlying VM host to be free before getting any clock cycles. This is where right-sizing comes into play. If the VM truly needs a large number of vCPUs then by all means add them. If you are sizing for a new application, then only add vCPUs as you monitor performance. Windows Task Manager is not a great indicator of performance in a virtualized environment, so monitor from the VM host side. If all vCPUs are maxed then the VM likely needs more vCPUs. If not, then leave well enough alone. I’ve seen situations where removing vCPUs from a VM actually improved the performance of the applications with databases hosted on that virtual SQL Server.

If the VM host is overprovisioned then there are several VMs running on that host which are all competing for resources. If this is the case you should migrate some VMs to other hosts to alleviate the resource contention issues.

The equivalent to CPU ready on Hyper-V is the Perfmon counter Hyper-V Hypervisor Virtual Processor\CPU Wait Time Per Dispatch which is available since Windows Server 2012.


Hyper-Threading

Hyper-threading is an Intel technology that exposes two hardware contexts (threads) from a single physical core. These threads are referred to as logical CPUs. It is a common misconception that hyper-threading doubles the number of CPUs or cores. This is simply not the case. Hyper-threading improves overall host throughput by 10-30% by keeping the processor pipeline busier, allowing the hypervisor more opportunities to schedule CPU clock cycles. You should definitely take advantage of hyper-threading by enabling it in the BIOS of the VM host machine.

Cores per Socket

NUMA (Non-Uniform Memory Access) allocates each CPU its own local memory. The CPU and memory combined are known as a NUMA node. The advantage of NUMA is that it enables a processor to access its own local memory faster than it would non-local memory. Both Windows and SQL are fully NUMA-aware and make scheduling decisions for threads based on the NUMA topology.

vNUMA presents the physical VM host’s NUMA architecture directly to the VM guest OS. The vNUMA topology of a VM can span across multiple physical NUMA nodes. After a vNUMA-enabled VM is powered on, the architecture presented to the OS cannot be altered. This is actually a positive thing because altering the vNUMA architecture can cause instabilities in the OS. This restriction can however cause problems if an attempt is made to migrate the VM to another VM host which has a different NUMA architecture.

vNUMA is enabled by default for VMs which have more than 8 vCPUs (regardless of the combination of sockets and cores which makes up the number of vCPUs in play).

Best practice:

The number of virtual sockets should equal the number of vCPUs you want (single core per socket).

This is the default setting when creating a VM. This config is known as wide and flat, and vNUMA will present the optimal vNUMA topology to the guest operating system, based on the underlying physical VM host server’s NUMA topology. If a VM’s config is not wide and flat then vNUMA will not be able to automatically pick the best NUMA configuration, and will instead simply match whatever config you entered, which can lead to a NUMA topology mismatch which detrimentally affects performance.

Licensing constraints are the most common reason why admins choose to go against these best practices. If you must do so, then make sure that you at least mirror the physical VM host’s NUMA topology.

CPU Hot-Add

This setting can be a bit of a catch 22 – there are pros and cons to enabling and disabling.


CPU hot plug allows VM admins to add CPUs on the fly to VMs without needing to shut down the VM first. CPU hot plug allows for dynamic resource management and the ability to add CPUs when vNUMA is not required (usually smaller VMs).


When CPU hot-add is enabled on a VM, it automatically disables vNUMA. SQL Server VMs which are wider than the NUMA architecture of the physical server they reside on cannot see the underlying NUMA architecture, which results in performance degradation.

Whether or not to enable CPU hot-add comes down to a question of how wide your VM will be. My recommendation is to disable CPU hot-add for larger VMs which require vNUMA. Prevention is always better than cure, so take the time to right-size the SQL Server VM’s CPU rather than relying on CPU hot-add as a fallback.

CPU Affinity

I do not recommend using CPU affinity on production machines because it limits the hypervisor’s ability to efficiently schedule vCPUs on the physical server.

Do Not Overcommit the VM Host Memory

Again, I cannot stress this one enough. When initially sizing a SQL VM, make sure that the host is not and will not be overcommitted when the SQL VM is powered on. Don’t forget that the VM host machine has its own memory overhead to run the hypervisor operating system too!

Memory Reservation

SQL server is a memory hog and so whatever memory you throw its way will be used and not released. It might make sense to therefore set the memory reservation for the SQL VM to equal the provisioned memory minus 4-6GB for Windows to function. This will significantly reduce the likelihood of ballooning and swapping, and will guarantee that the virtual machine gets the memory it needs for optimum performance. Memory reservations can however prevent the migration of VMs between hosts if the target host does not have unreserved memory equal to or greater than the size of the memory reservation.

To calculate the amount of memory to provision for a virtual machine:

  • VM Memory = SQL Max Server Memory + ThreadStack + OS Mem + VM Overhead
  • ThreadStack = SQL Max Worker Threads * ThreadStackSize
  • ThreadStackSize = 1MB on x86, 2MB on x64, and 4MB on IA64
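Plugging the formula above into a quick calculation (the 512 MB VM overhead figure is an assumption – check your hypervisor's documentation for the real per-VM overhead):

```python
def vm_memory_mb(sql_max_server_memory_mb, max_worker_threads,
                 thread_stack_size_mb=2,   # x64, per the table above
                 os_memory_mb=4096,        # low end of the 4-6GB OS reservation
                 vm_overhead_mb=512):      # assumed hypervisor overhead
    """VM Memory = SQL Max Server Memory + ThreadStack + OS Mem + VM Overhead."""
    thread_stack_mb = max_worker_threads * thread_stack_size_mb
    return sql_max_server_memory_mb + thread_stack_mb + os_memory_mb + vm_overhead_mb
```

For example, a 16 GB max server memory with 512 worker threads on x64 works out to 16384 + 1024 + 4096 + 512 = 22016 MB, i.e. provision roughly 21.5 GB for the VM.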

Dedicated vs Dynamic memory

Yes, this goes against the fundamentals of virtualization, and so you may lose this fight with your virtualization admin, but it is worth arguing for. Dedicated resources mean that you can take the fight for resources on the VM host out of the equation.

Memory is one of the biggest factors when it comes to SQL Server performance. SQL uses memory for its internal buffer (recently used data) and procedure caches (recently executed T-SQL commands). These buffers mean that SQL Server can get the data and commands it requires from the caches instead of having to go to disk and incur the associated I/O overhead. SQL Server can automatically manage and grow its buffer and procedure caches based on the requirements of the workload and the memory that’s available. If there is no available memory then performance will be impacted.

If your Virtualization admin has ruled dedicated memory out of the question then ask about Hyper-V Dynamic Memory or VMware memory overcommit configurations.

VMware treats memory as a sharable resource, and each megabyte of memory is considered an individual share. Memory overcommit is an automated dynamic process which takes shares from VMs which are not using them and allocates those shares to other VMs as required.

Memory is reclaimed from VMs with fewer proportional shares and given to VMs with more proportional shares, so make sure the SQL VM has a high enough share weighting.

Hyper-V Dynamic Memory also dynamically distributes unused memory. In both technologies, VMs retain 25% of unused memory as a cushion in case they suddenly require more.

It is worth noting that Datacenter or Enterprise editions of SQL 2008 or later are required to support hot-add RAM. Microsoft server operating systems have been hot-add compatible since Windows Server 2003 R2 SP2.

Do Not Store Files on the Same Disk

OS files, SQL data files, SQL log files, SQL backups, etc… will all end up on the same VHD if you build a VM with the default settings and install SQL with the default settings.

SQL Server binaries, data files, and log files should be placed on separate VMDKs/VHDs.

Use RAID 10 for user data, log files, and TempDB files for best performance and availability.

Check out my previous post on SQL server best practices in relation to tempdb sizing.


Virtualized SQL Server can provide the performance and scalability to support production database applications provided best practices are followed.

This is a multi-part series on SQL Server best practices. Read part I here.


SQL Server Best Practices, Part I: Configuration

This article is part of the series "SQL Server Best Practices".

Am I the only one who finds the Microsoft SQL server best practice guides to be a little painful to trawl through? Somehow, I doubt it. After being frustrated reading numerous technical guides, best practice guides, TechNet articles, and blog posts written by SQL experts, I thought it would be helpful to compile a simple post around SQL server best practices.

The goal of this post is not to delve into SQL server settings in great depth but instead to walk through some of the things you should look at when architecting or troubleshooting SQL server performance issues.

Shared Instance vs. Dedicated Instance

If an app has a large number of schemas / stored procedures then this could potentially impact other apps which share the same SQL instance. Instance resources could potentially become divided / locked, which would in turn cause performance issues for any other apps with databases hosted on the shared SQL instance.

Troubleshooting performance issues can be a pain as you must figure out which instance is the root cause, which might not be so easy.

This question is usually weighed against the costs of operating system and SQL licenses. If app performance is paramount then a dedicated instance is highly recommended.

Microsoft licenses SQL server at the server level per core and not per instance. For this reason admins are tempted to install as many SQL server instances as the server can handle, to save on SQL licensing costs, which can lead to major performance issues down the road.

Choose dedicated SQL instances whenever possible.

Separate SQL Files Into Different Disks

SQL Server accesses data and log files with very different I/O patterns. Data file access is mostly random, whilst transaction log file access is sequential. Spinning disk storage requires re-positioning of the disk head for random read and write access, so sequential access is more efficient than random access. Separating files that have different access patterns helps to minimize disk head movements, and thus optimizes storage performance.

Use RAID 10 for user binaries, data, log files, and TempDB for best performance and availability.

TempDB Sizing

Proactively inflate TempDB files to their full size to avoid disk fragmentation.

Page contention can occur on GAM, SGAM, or PFS pages when SQL has to write to special system pages to allocate new objects. Latches protect (lock) these pages in memory. On a busy SQL server it can take a long time to get a latch on a system page in tempdb. This results in slower query run times and is known as Latch Contention.

A good rule of thumb for creating tempdb data files:

  • For <= 8 cores
    • Tempdb data files = # of cores
  • For > 8 cores
    • 8 Tempdb data files
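The rule of thumb above reduces to a one-liner (a sketch; beyond 8 files, add more only if latch contention is still observed):

```python
def tempdb_data_files(logical_cores):
    """Rule of thumb: one tempdb data file per core, capped at 8."""
    return min(logical_cores, 8)
```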

Beginning with SQL server 2016, the number of CPU cores visible to the operating system is automatically detected during installation, and based on that number, SQL calculates and configures the number of Tempdb files required for optimum performance. Automatically configuring tempdb files according to the number of available CPU cores is a big step forward and so kudos to Microsoft for introducing this great new feature 🙂

One other thing worth looking at in relation to tempdb is Trace Flag 1118 (Full Extents Only).

Microsoft KB2154845 advises that Trace Flag 1118 can help to reduce allocation contention in tempdb. Trace Flag 1118 tells SQL Server to avoid “mixed extents” and use “full extents”.

With Trace Flag 1118 enabled each newly allocated object in every database on the instance gets its own private 64KB of data. The impact is greatest in tempdb where most objects are created.

Memory Configuration

  • min server memory
  • max server memory
  • max worker threads
  • index create memory
  • min memory per query

Min Server Memory

The min server memory option sets the minimum amount of memory that the SQL instance has at its disposal. Since SQL is a memory hog which chews up whatever RAM you throw at it, you are unlikely ever to encounter this limit unless the underlying operating system were to request too much memory back from SQL Server. Virtualization technologies bring this setting into play.

Max Server Memory

The max server memory option sets the maximum amount of memory that the SQL instance can utilize. It is generally used if there are multiple apps running at the same time as SQL and you want to guarantee that these apps have sufficient memory to function properly.

Some apps will only use whatever memory is available when they start and do not request more even if needed. That is where the max server memory setting comes into play.

On a SQL cluster / farm for example, several SQL instances could be competing for resources. Setting a memory limit for each SQL instance so that the different SQL instances are not duking it out over RAM will guarantee best performance.

Don’t forget to leave at least 4-6GB of RAM for the operating system to avoid performance issues.
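A back-of-the-envelope helper for the OS reservation above (the 6 GB default reserve is the cautious end of the 4-6 GB range):

```python
def recommended_max_server_memory_mb(total_ram_mb, os_reserve_mb=6144):
    """Give SQL Server everything except the RAM reserved for the OS
    (and any other apps sharing the box)."""
    return max(total_ram_mb - os_reserve_mb, 0)
```

On a 32 GB server this suggests a max server memory setting of 26624 MB (26 GB).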

Max Worker Threads

The max worker threads option helps optimize performance when large numbers of clients are connected to SQL Server. Normally, a separate operating system thread is created for each query request. If hundreds of simultaneous connections are made to SQL, then one thread per query request would consume large amounts of system resources. The max worker threads option helps improve performance by enabling SQL to create a pool of worker threads to service a larger number of query requests.

The default value is 0, which allows SQL to automatically configure the number of worker threads at startup. This works for most systems. Max worker threads is an advanced option and so should not be altered without the go ahead from an experienced DBA.

When should I configure SQL to use more worker threads? If the average work queue length for each scheduler is above 1, then you might benefit from adding more threads to the system – but only if the load is not CPU-bound or experiencing any other heavy waits. If either of those things is happening, adding more threads would not help, as they would just end up waiting too.
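For reference, the value SQL Server computes when max worker threads is left at 0 follows a documented formula. A sketch for 64-bit systems (consult the SQL Server documentation for your exact version, as the formula has varied):

```python
def default_max_worker_threads(logical_cpus):
    """Default worker-thread pool size on 64-bit SQL Server:
    512 threads for up to 4 CPUs, plus 16 per additional logical CPU."""
    if logical_cpus <= 4:
        return 512
    return 512 + (logical_cpus - 4) * 16
```

So an 8-core 64-bit instance starts with a pool of 576 worker threads before you ever need to touch the setting.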

Index Create Memory

The index create memory option is another advanced option that usually should not be touched. It controls the max amount of RAM initially allocated for creating indexes. The default value for this option is 0 which means that it is managed by SQL Server automatically. However, if you run into difficulties creating indexes, consider increasing the value of this option.

Min Memory per Query

When a query is run, SQL tries to allocate the optimum amount of memory for it to run efficiently. The min memory per query setting guarantees a minimum amount of memory (1024 KB by default) for each query to run. Best practice is to leave this setting at its default and allow SQL to dynamically manage the amount of memory allocated to queries. If, however, SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted by increasing this setting. So long as there is memory available on the server which is not being used by SQL, any other apps, or the operating system, boosting this setting can help overall SQL Server performance. But if there is no free memory available, increasing this setting would likely hurt overall performance rather than help it.

CPU Configuration


Hyper-Threading is Intel’s proprietary Simultaneous Multithreading (SMT) implementation, which improves parallelization of computations (multi-tasking) performed on x86 microprocessors. On hardware with hyper-threading enabled, the logical hyper-threaded CPUs appear as physical CPUs to the operating system. SQL then sees the CPUs which the operating system presents and so can make use of the hyper-threaded processors.

The caveat here is that each SQL Server edition has its own limits on the compute power it can utilize.

NUMA (Non-Uniform Memory Access)

NUMA is a memory-access optimization method that helps increase processor speed without increasing the load on the processor bus. If NUMA is configured on the server where SQL will be installed, you need not worry: SQL is NUMA-aware and performs well on NUMA hardware without any special configuration.
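To confirm what SQL Server sees, you can query the sys.dm_os_nodes DMV; each row (excluding the internal DAC node) represents a NUMA node:

```sql
-- One row per NUMA node visible to SQL Server.
SELECT node_id,
       node_state_desc,
       online_scheduler_count
FROM sys.dm_os_nodes
WHERE node_state_desc <> 'ONLINE DAC';
```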

Processor Affinity

You are unlikely ever to need to alter the processor affinity defaults unless you encounter performance problems, but it is still worthwhile understanding what they are and how they work.

SQL supports processor affinity by means of two options:

  • CPU affinity mask
  • Affinity I/O mask

SQL uses all CPUs available from the operating system, creating schedulers on all of them to make the best use of the resources for any given workload. When multitasking, the operating system or other apps on the SQL server can switch process threads from one processor to another. SQL is a resource-intensive app, so performance can be impacted when this occurs. To minimize this, we can direct all of the SQL load to a pre-selected group of processors. This is achieved using the CPU affinity mask.

The affinity I/O mask option binds SQL disk I/O to a subset of CPUs. In SQL online transactional processing (OLTP) environments, this extension can enhance the performance of SQL threads issuing I/O operations.

Note: hardware affinity for individual disks or disk controllers is not supported.
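If you ever do need to pin SQL Server to specific CPUs, ALTER SERVER CONFIGURATION (available from SQL Server 2008 R2) is the modern way to set process affinity; the CPU range below is purely illustrative:

```sql
-- Illustrative: bind SQL Server schedulers to CPUs 0 through 3.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;

-- Revert to the default, automatic affinity.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;
```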

Max Degree of Parallelism (MAXDOP)

By default, SQL uses all available CPUs during query execution. While this is great for large queries, it can cause performance problems and limit concurrency. The appropriate MAXDOP value depends on the machine SQL runs on – a symmetric multiprocessing (SMP) computer, a non-uniform memory access (NUMA) computer, or one with hyperthreading-enabled processors.

Use the following guidelines from Microsoft when you configure the MAXDOP value (SQL2005+):

  • Server with a single NUMA node, 8 or fewer logical processors: keep MAXDOP at or below the number of logical processors
  • Server with a single NUMA node, more than 8 logical processors: keep MAXDOP at 8
  • Server with multiple NUMA nodes, 8 or fewer logical processors per NUMA node: keep MAXDOP at or below the number of logical processors per NUMA node
  • Server with multiple NUMA nodes, more than 8 logical processors per NUMA node: keep MAXDOP at 8

Check out Microsoft's best practice guide for more details.
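Applying the guideline is a one-liner; the value 8 below assumes a server with more than 8 logical processors per NUMA node:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 = use all available CPUs; 8 follows the guideline above for large boxes.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```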

Cost Threshold for Parallelism

The default is set to 5. The cost threshold figure is used by the optimizer when deciding whether to use a multi-threaded (parallel) plan. 5 is a really low setting which is only appropriate for purely OLTP applications.

Note: DatAdvantage is an OLAP application, not an OLTP application.

For non-OLTP systems, I recommend starting with this setting at 50 or so and tuning it up or down as appropriate. Make sure you measure the critical queries in your application and adjust if required.
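As a starting point for a non-OLTP system, the suggested value of 50 can be set like this:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```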

A Few Other Settings Worth a Mention

Instant File Initialization

Although technically a Windows permission, granting the “Perform volume maintenance tasks” right to the SQL Server service account gives SQL a boost when it comes time to grow out data files.

By default, Windows writes a bunch of zeros whenever a user asks for space: if I create a 1 MB file, Windows writes 1 MB of zeros to disk to properly initialize the file. Giving SQL this permission means that, when requesting space for data files, SQL tells Windows to mark the space as used and immediately hand it back, which results in faster data file growth.

Backup Compression

Starting with SQL Server 2008 R2, a check box enables backup compression. Backups are smaller and complete faster, and restores take less time too. This setting is a no-brainer really!
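The same thing can be done in T-SQL, either instance-wide or per backup (the database name and path below are hypothetical):

```sql
-- Instance-wide default: compress all backups unless told otherwise.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or per backup (hypothetical database name and path):
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION;
```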

Remote Dedicated Administrator Connection (DAC)

This setting only really comes into play to help make troubleshooting easier when SQL has gone haywire.

When you connect through the DAC, SQL Server provides a dedicated connection, CPU scheduler, and memory. Troubleshooting a SQL instance pegged at 100% CPU utilization is much easier when you have dedicated resources at your disposal! By default, the DAC only accepts connections made from the server itself (at the console or over RDP); enabling the remote admin connections option lets you use the DAC from a remote machine. Again, this setting is a bit of a no-brainer. Set it and forget it!
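Enabling remote DAC is a single option, and connecting to it from sqlcmd just needs the -A switch (server name hypothetical):

```sql
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;

-- Then, from a client machine:
--   sqlcmd -S MyServer -A
-- (-A requests the dedicated administrator connection)
```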


SQL Server can provide the performance and scalability to support production database applications provided best practices are followed.

I hope you’ve found this post useful.

In my next post I will go through some best practices around SQL server in a virtualized environment.

This is a multi-part series on SQL Server best practices. Read part II here.




A Brief History of Ransomware


Ransomware’s Early Days

The first documented example of ransomware was the 1989 AIDS Trojan, also known as PC Cyborg [1]. Harvard-trained evolutionary biologist Joseph L. Popp sent 20,000 infected diskettes labeled “AIDS Information – Introductory Diskettes” to attendees of the World Health Organization’s international AIDS conference.

After 90 reboots, the Trojan hid directories and encrypted the names of the files on the victim’s computer. To regain access, the user would have to send $189 to PC Cyborg Corp. at a post office box in Panama. Dr. Popp was eventually caught but never tried for his scheme, as he was declared unfit to stand trial; his attorney said he had begun wearing a cardboard box on his head to protect himself from radiation [2].

Fast Forward to the Internet Age

With the Internet making it easier to carry out Popp’s ransom idea, cyber criminals began to realize that they could monetize ransomware on a far wider scale.

In 2006, criminal organizations began using more effective asymmetric RSA encryption.

  • The Archiveus Trojan [3] encrypted everything in the My Documents directory and required victims to purchase items from an online pharmacy to receive the 30-digit password.
  • GPcode [4], an encryption Trojan that initially spread via an email attachment purporting to be a job application, used a 660-bit RSA public key. Two years later, a variant (GPcode.AK) used a 1024-bit RSA key.

The New Wave

Starting in 2011, ransomware moved into the big time. About 60,000 new ransomware samples were detected in Q3 2011, a figure that more than doubled by Q3 2012, to over 200,000. What’s most astounding is that from Q3 2014 to Q1 2015, ransomware more than quadrupled.


source: McAfee Labs Threats Report

With no signs of slowing down, there are now many, many ransomware variants. Here’s a brief rundown of the ones you should know:

CryptoLocker – first versions appear to have been posted in September 2013 [6]

  • Usually enters the company by email.
  • If a user clicks on the executable, it immediately starts scanning network drives, renames all the files and folders, and encrypts them.

Locker – the first copycat software emerged in December 2013 [7]

  • $150 to get the key, with money being sent to a Perfect Money or QIWI Visa Virtual Card number.

CryptoLocker 2.0 – a new and improved version of CryptoLocker was found in December 2013 [8]

  • CryptoLocker 2.0 was written using C# while the original was in C++.
  • Tor and Bitcoin used for anonymity and 2048-bit encryption.
  • At the time, the latest variant was not detected by anti-virus or firewall software.

CryptorBit – a new ransomware discovered in December 2013 [9]

  • CryptorBit corrupts the first 1024 bytes of any data file it finds.
  • Can bypass Group Policy settings put in place to defend against this type of ransomware infection.
  • Social engineering used to get end users to install the ransomware using such devices as a fake flash update or a rogue antivirus product.
  • Tor and Bitcoin again used for a ransom payment.
  • Also installs crypto-coin mining software that uses the victim’s computer to mine digital currency.

CTB-Locker (Curve-Tor-Bitcoin Locker) – discovered midsummer 2014 [10]

  • First infections were mainly in Russia. The developers were thought to be from an eastern European country.

SynoLocker – appeared in August 2014 [11]

  • This one attacked Synology NAS devices. SynoLocker encrypted files one by one.
  • Payment was in Bitcoins and again Tor was used for anonymity.

CryptoWall – rebranded from CryptoDefense in April 2014 [12]

  • Exploited a Java vulnerability.
  • Malicious advertisements on domains belonging to Disney, Facebook, The Guardian newspaper and many others led people to sites that were CryptoWall infected and encrypted their drives.
  • According to an August 27 report from Dell SecureWorks Counter Threat Unit (CTU): “CTU researchers consider CryptoWall to be the largest and most destructive ransomware threat on the Internet as of this publication, and they expect this threat to continue growing.”
  • More than 600,000 systems were infected between mid-March and August 24, with 5.25 billion files being encrypted. 1,683 victims (0.27%) paid a total of $1,101,900 in ransom. Nearly two-thirds paid $500, but the amounts ranged from $200 to $10,000. [13]

Cryptoblocker – a new ransomware variant emerged in July 2014 [14]

  • Only encrypts files smaller than 100 MB and skips anything in the Windows or Program Files folders. [15]
  • It uses AES rather than RSA encryption.

OphionLocker – surprise! Another ransomware released during the holidays, December 2014 [16]

  • ECC (elliptic curve cryptography) public-key encryption.
  • 3 days to pay the ransom or the private key will be deleted.

Pclock – greeted the New Year in January 2015 by mimicking CryptoLocker [17]

  • Files in a user’s profile are encrypted.
  • Volume shadow copies are deleted and disabled.
  • 72-hour countdown timer to pay 1 bitcoin in ransom.

CryptoWall 2.0 – ransomware goes on steroids in January 2015 [18]

  • Delivered via email attachments, malicious pdf files and various exploit kits.
  • Encrypts the user’s data, until a ransom is paid for the decryption key.
  • Uses TOR to obfuscate the C&C (Command & Control) channel.
  • Incorporates anti-vm and anti-emulation checks to hamper identification via sandboxes.
  • Has the ability to run 64-bit code directly from its 32-bit dropper. It can switch the processor execution context from 32 bit to 64 bit.

TeslaCrypt – a new CryptoWall variant surfaced in February 2015 [19]

  • Targets popular video game files such as Call of Duty, MineCraft, World of Warcraft, and Steam.

VaultCrypt – pretended to be customer support in February 2015 [20]

  • First circulated in Russia.
  • Uses Windows batch files and open source GnuPG privacy software for file encryption.

CryptoWall 3.0 – a new version appeared in March 2015 [21]

  • I2P network communication.
  • Uses exploit kits to gain privilege escalation on the system.
  • Disables many security features on a target system.

CryptoWall 4.0 – six months later, in September 2015, a new variant is on the loose [22]

  • The most important change from CryptoWall 3.0 to 4.0 is that it re-encrypts filenames of the encrypted files, making it more difficult to decipher which files need to be recovered.

LowLevel04 – this file-encrypting ransomware greeted us in October 2015 [23]

  • Also known as the Onion Trojan-Ransom
  • Spreads via brute force attacks on machines with Remote Desktop or Terminal Services
  • Encrypts files using AES encryption but the encryption key itself is RSA encrypted

And finally, a game changer known as Chimera – November 2015 [24]

  • The hackers will publish the encrypted files on the Internet if the victim doesn’t pay!

Is public disclosure the next phase of ransomware? I’ll discuss Chimera and its implications in my next post.

If you want more details on how to combat this very real problem, we wrote an excellent how-to on detecting and cleaning CryptoLocker infections, based on working with our customers to identify files that had been encrypted by CryptoLocker with DatAdvantage. We also created step-by-step instructions to set up DatAlert to detect CryptoLocker.

Feel free to contact us if you have questions, or if you’d like to set up a free consultation.