SQL Server Best Practices, Part I: Configuration

Kieran Laffan
7 min read
Last updated June 9, 2023

This article is part of the series “SQL Server Best Practices.”

Am I the only one who finds the Microsoft SQL Server best practice guides a little painful to trawl through? Somehow, I doubt it. After being frustrated reading numerous technical guides, best practice guides, TechNet articles, and blog posts written by SQL experts, I thought it would be helpful to compile a simple post on SQL Server best practices.

The goal of this post is not to delve into SQL Server settings in great depth, but instead to walk through some of the things you should look at when architecting for, or troubleshooting, SQL Server performance issues.


Shared Instance vs. Dedicated Instance

If an app has a large number of schemas or stored procedures, it can potentially impact other apps that share the same SQL instance. Instance resources can become divided or locked, which in turn causes performance issues for any other app with databases hosted on the shared SQL instance.

Troubleshooting performance issues on a shared instance can be a pain, as you must first figure out which database is the root cause, which might not be so easy.

This question is usually weighed against the cost of operating system and SQL Server licenses. If app performance is paramount, then a dedicated instance is highly recommended.

Microsoft licenses SQL Server at the server level per core, not per instance. For this reason, admins are tempted to install as many SQL Server instances as the server can handle to save on licensing costs, which can lead to major performance issues down the road.

Choose dedicated SQL instances whenever possible.

Separate SQL Files Into Different Disks

SQL Server accesses data and log files with very different I/O patterns. Data file access is mostly random, whilst transaction log file access is sequential. Spinning disk storage requires repositioning of the disk head for random reads and writes, so sequential access is more efficient than random access. Separating files that have different access patterns helps to minimize disk head movements, and thus optimizes storage performance.

Use RAID 10 for user binaries, data, log files, and TempDB for best performance and availability.
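
For illustration, here is a minimal sketch of separating data and log files at database-creation time; the database name, logical file names, drive letters, and sizes are hypothetical placeholders for your own layout:

    -- Data file on one volume (D:), transaction log on another (L:)
    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB.mdf', SIZE = 10GB)
    LOG ON
        (NAME = SalesDB_log, FILENAME = 'L:\SQLLogs\SalesDB.ldf', SIZE = 2GB);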

TempDB Sizing

Proactively inflate TempDB files to their full size to avoid disk fragmentation.

Page contention can occur on GAM, SGAM, or PFS pages when SQL has to write to these special system pages to allocate new objects. Latches protect (lock) these pages in memory. On a busy SQL server, it can take a long time to acquire a latch on a system page in tempdb. This results in slower query run times and is known as latch contention.

A good rule of thumb for creating tempdb data files (a sketch follows the list):

  • For 8 or fewer cores: number of tempdb data files = number of cores
  • For more than 8 cores: 8 tempdb data files
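
On a server with four cores, that might look like the following. Here, tempdev is the default logical name of the first tempdb data file, while the extra file names, the T: drive path, and the sizes are placeholders:

    -- Pre-grow the existing primary tempdb data file
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2GB, FILEGROWTH = 512MB);
    -- Add three more equally sized files, one per remaining core
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 2GB, FILEGROWTH = 512MB);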

Beginning with SQL Server 2016, the number of CPU cores visible to the operating system is automatically detected during installation, and based on that number, setup calculates and configures the number of tempdb files required for optimum performance. Automatically configuring tempdb files according to the number of available CPU cores is a big step forward, so kudos to Microsoft for introducing this great feature 🙂

One other thing worth looking at in relation to tempdb is Trace Flag 1118 (Full Extents Only).

Microsoft KB2154845 advises that Trace Flag 1118 can help reduce allocation contention in tempdb. Trace Flag 1118 tells SQL Server to avoid “mixed extents” and use “full extents”: https://technet.microsoft.com/en-us/library/ms190969(v=sql.105).aspx

With Trace Flag 1118 enabled, each newly allocated object in every database on the instance gets its own private 64KB extent. The impact is greatest in tempdb, where most objects are created.
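
Enabling the flag is straightforward; as a sketch (the DBCC change lasts only until the next restart, so persist it as a startup parameter):

    -- Enable Trace Flag 1118 globally on the running instance
    DBCC TRACEON (1118, -1);
    -- To persist across restarts, add -T1118 to the startup
    -- parameters in SQL Server Configuration Manager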

Memory Configuration

The memory-related settings worth reviewing are:

  • min server memory
  • max server memory
  • max worker threads
  • index create memory
  • min memory per query

Min Server Memory

The min server memory option sets the minimum amount of memory that the SQL instance keeps at its disposal. Since SQL is a memory hog that chews up whatever RAM you throw at it, you are unlikely to ever encounter this floor unless the underlying operating system pressures SQL Server to release memory. Virtualization technologies, where a hypervisor can reclaim memory from the guest, bring this setting into play.

Max Server Memory

The max server memory option sets the maximum amount of memory that the SQL instance can utilize. It is generally used when multiple apps are running at the same time as SQL and you want to guarantee that those apps have sufficient memory to function properly.

Some apps will only take whatever memory is available when they start and do not request more later, even when they need it. That is where the max server memory setting comes into play: capping SQL leaves memory free for those apps.

On a SQL cluster or farm, for example, several SQL instances could be competing for resources. Setting a memory limit for each instance, so that they are not duking it out over RAM, helps guarantee the best performance.

Don’t forget to leave at least 4-6GB of RAM for the operating system to avoid performance issues.
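
As a sketch, on a hypothetical server with 64GB of RAM you might cap SQL at 58GB and keep roughly 6GB back for the operating system; the values below are illustrative only:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Floor of 4GB and ceiling of 58GB (values are in MB)
    EXEC sp_configure 'min server memory (MB)', 4096;
    EXEC sp_configure 'max server memory (MB)', 59392;
    RECONFIGURE;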

Max Worker Threads

The max worker threads option helps optimize performance when large numbers of clients are connected to SQL Server. Normally, a separate operating system thread is created for each query request. If hundreds of simultaneous connections are made to SQL, one thread per query request would consume large amounts of system resources. The max worker threads option improves performance by enabling SQL to maintain a pool of worker threads that services a larger number of query requests.

The default value is 0, which allows SQL to automatically configure the number of worker threads at startup; this works for most systems. Max worker threads is an advanced option and should not be altered without the go-ahead from an experienced DBA.

When should you configure SQL to use more worker threads? If the average work queue length for each scheduler is above 1, you might benefit from adding threads to the system, but only if the load is not CPU-bound and not experiencing other heavy waits. If either of those things is happening, adding more threads would not help, because they would just end up waiting too.
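
You can eyeball the work queue per scheduler with the sys.dm_os_schedulers DMV; a sketch:

    -- work_queue_count shows tasks waiting for a worker thread;
    -- sustained values above 1 per scheduler suggest thread starvation
    SELECT scheduler_id,
           current_tasks_count,
           runnable_tasks_count,
           work_queue_count
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';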

Index Create Memory

The index create memory option is another advanced option that usually should not be touched. It controls the maximum amount of RAM initially allocated for creating indexes. The default value of 0 means the allocation is managed by SQL Server automatically. However, if you run into difficulties creating indexes, consider increasing this value.

Min Memory per Query

When a query runs, SQL tries to allocate the optimum amount of memory for it to run efficiently. The min memory per query setting guarantees each query a minimum memory grant; the default is 1024KB. Best practice is to leave this setting at the default and let SQL manage query memory dynamically. If, however, the server has more RAM than SQL needs to run efficiently, the performance of some queries can be boosted by increasing this setting. As long as there is memory available on the server that is not being used by SQL, any other apps, or the operating system, raising this setting can help overall SQL Server performance. But if there is no free memory available, increasing this setting would likely hurt overall performance rather than help it.
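
If you do decide to raise it, the change is a quick sp_configure call; the 2048KB value below is purely illustrative:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Raise the minimum per-query grant from the default 1024KB to 2MB
    EXEC sp_configure 'min memory per query (KB)', 2048;
    RECONFIGURE;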

CPU Configuration

Hyper-Threading

Hyper-Threading is Intel’s proprietary simultaneous multithreading (SMT) implementation, which improves parallelization of computations (multi-tasking) performed on x86 microprocessors. Hardware that uses hyper-threading allows the logical hyper-threaded CPUs to appear as physical CPUs to the operating system. SQL then sees the CPUs that the operating system presents and can make use of the hyper-threaded processors.

The caveat here is that each SQL Server version has its own limits on the compute capacity it can utilize; see the compute capacity limits by edition: https://msdn.microsoft.com/en-us/library/ms143760.aspx

NUMA (Non-Uniform Memory Access)

NUMA is a memory-access optimization method that helps increase processor speed without increasing the load on the processor bus. If NUMA is configured on the server where SQL will be installed, you need not worry: SQL is NUMA-aware and performs well on NUMA hardware without any special configuration.

Processor Affinity

You are unlikely ever to need to alter the processor affinity defaults unless you encounter performance problems, but it is still worthwhile understanding what they are and how they work.

SQL supports processor affinity by means of two options:

  • CPU affinity mask
  • Affinity I/O mask

SQL uses all CPUs available from the operating system and creates a scheduler on each CPU to make the best use of resources for any given workload. When multitasking, the operating system or other apps on the SQL server can switch process threads from one processor to another. SQL is a resource-intensive app, so performance can suffer when this occurs. To minimize it, we can direct all SQL load to a pre-selected group of processors. This is achieved using the CPU affinity mask.

The affinity I/O mask option binds SQL disk I/O to a subset of CPUs. In online transactional processing (OLTP) environments, this extension can enhance the performance of SQL threads issuing I/O operations.

Note: hardware affinity for individual disks or disk controllers is not supported.
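
On SQL Server 2008 R2 and later, affinity is most easily set with ALTER SERVER CONFIGURATION; a sketch that binds the instance to its first four CPUs (the range is illustrative):

    -- Bind the instance to CPUs 0 through 3
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;
    -- Revert to the default of using all CPUs
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;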

Max Degree of Parallelism (MAXDOP)

By default, SQL uses all available CPUs during query execution. While this is great for large queries, it can cause performance problems and limit concurrency. The appropriate MAXDOP value depends on the machine SQL runs on: a symmetric multiprocessing (SMP) computer, a non-uniform memory access (NUMA) computer, or a server with hyper-threading-enabled processors.

Use the following guidelines from Microsoft when you configure the MAXDOP value (SQL Server 2005 and later):

  • Single NUMA node, fewer than 8 logical processors: keep MAXDOP at or below the number of logical processors
  • Single NUMA node, more than 8 logical processors: keep MAXDOP at 8
  • Multiple NUMA nodes, fewer than 8 logical processors per NUMA node: keep MAXDOP at or below the number of logical processors per NUMA node
  • Multiple NUMA nodes, more than 8 logical processors per NUMA node: keep MAXDOP at 8

Check out this MS best practice guide for more details: https://support.microsoft.com/en-us/help/2806535/recommendations-and-guidelines-for-the-max-degree-of-parallelism-confi
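
Applying the guidance is a quick change; a sketch for a single-NUMA-node server with 16 logical processors, which the guidelines above cap at a MAXDOP of 8:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;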

Cost Threshold for Parallelism

The default is 5. The optimizer compares a query’s estimated cost against this threshold when deciding whether to consider a parallel plan, and 5 is a really low setting that is only appropriate for purely OLTP applications.

Note: DatAdvantage is an OLAP application, not an OLTP application.

For non-OLTP systems, I recommend starting with this setting at 50 or so and tuning up or down as appropriate. Make sure you measure the critical queries in your application and adjust if required.
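
A sketch of that starting point:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Raise the bar for parallel plans from the default of 5 to 50
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;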

A Few Other Settings Worth a Mention

Instant File Initialization

Although technically a Windows permission, granting the “Perform volume maintenance tasks” right to the SQL service account gives SQL a boost when it comes time to grow data files.

By default, Windows writes a bunch of zeros whenever a user asks for space. If I create a 1MB file, Windows will write 1MB of zeros to disk to properly initialize the file. Giving SQL this permission means that, when it requests space for data files, SQL tells Windows to mark the space as used and hand it back immediately, which results in faster data file growth.
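
The right itself is granted to the SQL service account through Local Security Policy (secpol.msc). On newer builds (SQL Server 2016 SP1 and later, if I recall correctly) you can also verify from T-SQL whether instant file initialization is in effect; a sketch:

    -- A 'Y' in instant_file_initialization_enabled means the engine
    -- service account holds the volume maintenance right
    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services;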

Backup Compression

Starting with SQL Server 2008 R2, a simple check box enables backup compression. Backups are smaller, take less time, and restores take less time too. This setting is a no-brainer really!
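
The same setting can be flipped with T-SQL, either instance-wide or per backup; a sketch with a placeholder database name and path:

    -- Make compression the default for all backups on the instance
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;
    -- Or request it for a single backup
    BACKUP DATABASE SalesDB TO DISK = 'B:\Backups\SalesDB.bak' WITH COMPRESSION;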

Remote Dedicated Administrator Connection (DAC)

This setting only really comes into play to help make troubleshooting easier when SQL has gone haywire.

When you connect through the DAC, SQL Server provides a dedicated connection, CPU scheduler, and memory. Remotely troubleshooting a SQL instance pegged at 100% CPU utilization is much easier when you have dedicated resources at your disposal! By default, the DAC only accepts connections made locally on the server, at the console or over RDP; enable the remote DAC option and you can use it from another machine too. Again, this setting is a bit of a no-brainer. Set it and forget it!
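
Enabling remote DAC takes one sp_configure call; a sketch, with a placeholder server name in the connection example:

    -- Allow DAC connections from remote machines
    EXEC sp_configure 'remote admin connections', 1;
    RECONFIGURE;
    -- Then connect with sqlcmd using the ADMIN: prefix, for example:
    -- sqlcmd -S ADMIN:YourServerName -E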

Conclusion

SQL Server can deliver the performance and scalability to support production database applications, provided best practices are followed.

I hope you’ve found this post useful.

In my next post, I will go through some best practices for running SQL Server in a virtualized environment.

This is a multi-part series on SQL Server best practices. Read part II here.

Sources:

https://msdn.microsoft.com/en-us/library/mt590198(v=sql.1).aspx

https://social.technet.microsoft.com/Forums/sqlserver/en-us/home?category=sqlserver

https://technet.microsoft.com/en-us/library/mt590198(v=sql.1).aspx

https://blogs.technet.microsoft.com/dataplatforminsider/

https://blogs.msdn.microsoft.com/sqlserverstorageengine/

https://support.microsoft.com/en-us/help/2806535/recommendations-and-guidelines-for-the-max-degree-of-parallelism-confi

http://blogs.msdn.com/sqlperf/default.aspx

https://blogs.msdn.microsoft.com/sqltips/

https://www.brentozar.com/

 
