Improving your SQL Server’s performance – Part 2

In Part 1 of this series we talked about establishing a performance baseline for our SQL Server. In Part 2 we will look at three changes we can apply to the default SQL Server setup to increase performance. The first two changes speed up restoring SQL backups and allow SQL Server to lock pages in memory.

· Enable Instant File Initialization.

Grant the “Perform volume maintenance tasks” right to the SQL Server service account on your database server using the Group Policy Editor, which you can start by typing GPEDIT.MSC in a Run window. Navigate to Windows Settings → Security Settings → Local Policies → User Rights Assignment and find the “Perform volume maintenance tasks” right in the right-hand pane of the editor window. Double-click the item and add the logon for the SQL Server service account to grant that right to the SQL Server service.

The reason to do this is to allow SQL Server to skip the default step of “zeroing out” the space within a SQL Server data file after it is created, grown, or restored. This can save a lot of time, anywhere from seconds to many minutes, depending on the initial size of the file or the growth increment and your I/O subsystem performance, which makes it much easier to proactively manage the size of your data files. It can also dramatically reduce the time it takes to restore a database from a full database backup.

There is a security risk associated with granting this right to the SQL Server service account: a SQL Server administrator who is not also a Windows local administrator could potentially open and read the contents of recently deleted files that they would not otherwise have access to.
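Once the right is granted and the service restarted, you can confirm that instant file initialization is active. Newer versions of SQL Server (2016 SP1 and later) expose this directly through the sys.dm_server_services DMV; on older versions you would check the SQL Server error log instead. A quick check, assuming SQL Server 2016 SP1 or later and the VIEW SERVER STATE permission:

```sql
-- instant_file_initialization_enabled shows 'Y' once the
-- "Perform volume maintenance tasks" right has taken effect.
SELECT servicename,
       instant_file_initialization_enabled
FROM   sys.dm_server_services
WHERE  servicename LIKE 'SQL Server (%';
```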

Add -T845, -E and -T1117 to your SQL Server startup options in order to enable the Lock Pages in Memory feature. Trace flag 845 allows Lock Pages in Memory on Standard Edition (the SQL Server service account also needs the “Lock pages in memory” user right, granted in the same User Rights Assignment policy as above); -E increases the number of extents allocated to each file in a filegroup, and trace flag 1117 makes all files in a filegroup grow together.

Also run the following script in Management Studio to make sure that trace flag 1806, which disables instant file initialization, is not enabled.

DBCC TRACESTATUS(1806)   -- check whether the flag is on
DBCC TRACEOFF(1806, -1)  -- turn it off globally if it is

You must restart SQL Server for these changes to take effect.
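After the restart, you can confirm that Lock Pages in Memory is actually in use by checking SQL Server’s locked page allocations (sys.dm_os_process_memory is available in SQL Server 2008 and later):

```sql
-- A non-zero locked_page_allocations_kb means SQL Server
-- is allocating memory with locked pages.
SELECT locked_page_allocations_kb
FROM   sys.dm_os_process_memory;
```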

· Move TempDB to its own drive, preferably on RAID 1, and create multiple data files

By default, the TempDB files are put on the same drive as the SQL Server binaries. Even if the user chooses a custom install, TempDB still goes on the same drive as the other data files. Instead, the TempDB data files should be on their own dedicated RAID 1 drive.

You can fix this by first moving TempDB to its own drive.  In this example, I put the data file on the T drive and the log file on the L drive.  (Be aware that the directory paths must already exist.)

use master
go
-- The files move to the new location the next time SQL Server restarts.
alter database tempdb modify file (name='tempdev', filename='T:\MSSQL\DATA\tempDB.MDF')
go
alter database tempdb modify file (name='templog', filename='L:\MSSQL\LOGS\templog.LDF')
go

Additionally, you should create multiple TempDB data files. Common practice is a 1:2 or 1:1 ratio between the number of data files and the number of CPU cores on your server.

USE [master]
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev2', FILENAME = N'T:\MSSQL\DATA\tempdev2.ndf', SIZE = 10GB, FILEGROWTH = 0)
GO
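To pick the right file count for the ratio above, you can read the server’s logical core count directly from SQL Server:

```sql
-- cpu_count is the number of logical CPUs visible to SQL Server.
SELECT cpu_count
FROM   sys.dm_os_sys_info;
```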

The data file creation should only take a couple of seconds – if it takes more than ten seconds, then instant file initialization isn’t configured correctly. We covered this in the Instant File Initialization section above, so go back and revisit that before you create the next TempDB file. Fix the security to allow for instant file initialization now – it has a huge performance impact on database growth.

Assuming that file creation only took a couple of seconds, go ahead and create the rest of the TempDB data files.

Notice that I don’t have filegrowth enabled.  You want to proactively create the TempDB files at their full sizes to avoid drive fragmentation.  If you have a dual-CPU quad-core server (8 cores total) and an 80 GB array for TempDB data, you would create eight 10 GB files for TempDB.  That way, each file is contiguous, all laid out in one big chunk.  If you create them as smaller files and let them autogrow, the disk will become fragmented because the files will be growing at random times.  Plus, you could end up with differently sized TempDB files if one of them happened to grow faster than the rest.  That’s why I pre-grow all of the TempDB files ahead of time and get them at exactly the right size.
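Once all the files are created, you can verify the TempDB layout (names, locations, sizes, and growth settings) with a quick query against sys.master_files:

```sql
-- size is reported in 8 KB pages; growth = 0 means autogrow is disabled.
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb,
       growth
FROM   sys.master_files
WHERE  database_id = DB_ID('tempdb');
```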

In Part 3 of “Improving your SQL Server performance” I will go over a benchmark test that I completed on SQL Server running Dynamics GP 2010 with the default settings, and the results after applying the settings from this post.