May the Force of PowerShell Module dbatools be with you

Dbatools is a great PowerShell module that offers 106 functions to help ease the administrative burden faced by SQL Server DBAs and developers. As an IT Consultant who works with Microsoft Dynamics products that install within SQL Server, I would suggest that this is a great module for us to know and use too.

For those who may not know what PowerShell is, it was developed by Microsoft for task automation and configuration management. You can learn more about PowerShell at the Microsoft Virtual Academy.

How to Install

There are four ways to install the dbatools module. The PowerShell command that I used was:

Invoke-Expression (Invoke-WebRequest -UseBasicParsing https://dbatools.io/in)

The complete instructions on how to install the module can be found on the dbatools download page: https://dbatools.io/download/
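If you are running PowerShell 5 or later, one of those options is the PowerShell Gallery; here is a minimal sketch of that route:

# Install dbatools from the PowerShell Gallery (requires PowerShell 5+
# or the PowerShellGet module); -Scope CurrentUser avoids needing admin rights
Install-Module -Name dbatools -Scope CurrentUser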

Getting Started

There are several scenarios in which to use dbatools. The developers' goal with every command is to support:

  • SQL Server 2000 – 2016
  • Express – Datacenter Edition
  • Clustered and stand-alone instances
  • Windows and SQL authentication
  • Default and named instances
  • Multiple instances on one server

You can learn more about dbatools and the Functions associated with the PowerShell module here: https://dbatools.io/getting-started/

Functions to Know for Dynamics

As IT Consultants in the Microsoft Dynamics space, we are sometimes tasked with setting up SQL Server and/or migrating the Dynamics databases to the client's new SQL Server. The key functions I have used in the past to do this fall into these categories (see the sketch after this list):

  • Databases
  • Security
  • Server Objects
  • SSIS
  • Management
  • SQL Server Agent
  • Best Practice Commands
  • General Administration Commands
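As a rough sketch of what a migration might look like using the 0.8.x-era command names (newer releases renamed these with a Dba prefix, e.g. Copy-DbaDatabase, and some parameter names have changed, so check Get-Help against your installed version; the server names, database names, and share below are made up for illustration):

# Copy the Dynamics system and company databases to the new server,
# then bring the SQL logins across with their passwords and permissions
Copy-SqlDatabase -Source OLDSQL01 -Destination NEWSQL01 -Databases DYNAMICS, TWO -BackupRestore -NetworkShare \\fileserver\sqlmigration
Copy-SqlLogin -Source OLDSQL01 -Destination NEWSQL01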

A complete list of the 106 Functions contained in the current version, 0.8.693, can be viewed at https://dbatools.io/functions/; just follow the link for each Function on that page to get the details on its use.

Conclusion

Until next time, I hope this helps you with your SQL Server Administration and database migrations.

Are SQL Server Memory Settings affecting your server's performance?

By default, SQL Server's max memory is 2,147,483,647 MB, which is effectively unlimited. With this default setting, SQL Server will commit as much memory as possible until there is none left for the operating system to run efficiently.

If the operating system has no memory available, it will start using the page file instead of RAM. Using the page file in place of memory will result in poor system performance.

SQL Server really does benefit from as much memory as possible. However, that doesn't mean we should leave SQL Server's max memory set to unlimited. If we do, Windows itself may be starved of the memory it needs.

To Fix the Problem

Set the Max Memory that SQL Server can use in your environment.  To change your SQL Server Memory:

1. Log into SSMS

2. Right-click the instance name, navigate to the bottom of the menu, and click "Properties"


3. Change the Max Memory setting to leave enough room for the operating system to perform efficiently without using the page file in place of RAM. For recommendations on SQL Server max memory settings, check out Glenn Berry's Suggested Max Memory Settings.


4. Click "OK" and restart the SQL Server services for the change to take effect.
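If you prefer a script to the GUI, the same change can be made with sp_configure; the 12288 below is only an example value, so size it for your own server:

-- Show advanced options so 'max server memory' becomes visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: cap SQL Server at 12 GB (12288 MB); pick a value that
-- leaves enough RAM for the operating system on your server
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;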

Until next time, happy SQL Server performance tuning!

Improving your SQL Server’s performance – Part 3

 

In Part 1 of this blog series we talked about how a good starting point for performance tuning on your SQL Server is establishing a baseline of your server's current performance.

In Part 2 we talked about enabling Instant File Initialization and adding additional TempDB files to your TempDB database.

As a conclusion to those posts, we will look at the effects of those changes by running a simple GP2010 Sales Order Processing batch posting. To run the test, I created a virtual server with SQL Server 2012 using the default settings and GP2010 SP2. I made one change to the defaults, limiting SQL Server to a maximum of 12 GB of RAM, since I was running GP2010 and SQL Server on the same image. Below are the specs of the server:

[Image: test server specifications]

For testing, I created one Sales Order Processing batch with 10,524 invoices containing 31,323 lines of transactions.

In test 1, no optional SQL settings were changed:

  • Instant File Initialization disabled

  • One TempDB data file

The posting of the batch to the General Ledger took 7 hours to complete.

For test 2, I restored the company database to its original state from before test 1 and then made the following optional SQL settings changes:

  • Instant File Initialization enabled

  • TempDB changed to one data file per CPU core (4 files)

The posting of the batch to the General Ledger took 6 hours to complete.

This was a very simple test on a virtual server with only one hard drive. Imagine the system performance gains when you apply all of the ideal SQL performance tuning recommendations to your server!

Improving your SQL Server’s performance – Part 2

In Part 1 of this series we talked about establishing a performance baseline for our SQL Server. In Part 2 we will look at changes we can apply to the default SQL Server setup to increase performance: enabling Instant File Initialization (which speeds up file growth and backup restores), locking pages in memory, and reconfiguring TempDB.

  • Enable Instant File Initialization.

Grant the "Perform volume maintenance tasks" right on your database server to the SQL Server service account using the Group Policy Editor, which you can start by typing GPEDIT.MSC in a Run window. Navigate to Windows Settings → Security Settings → Local Policies → User Rights Assignment, find the "Perform volume maintenance tasks" right in the right-hand pane of the editor window, double-click the item, and add the logon for the SQL Server service account to grant that right to the SQL Server service.


The reason to do this is to allow SQL Server to skip the default step of "zeroing out" the space within a SQL Server data file after it is created, grown, or restored. This can save a lot of time, anywhere from seconds to many minutes, depending on the initial size of the file or the file growth and your I/O subsystem performance, so it makes it much easier to proactively manage the size of your data files. It can also have a dramatic effect on the time it takes to restore a database from a full database backup.

There is a security risk associated with granting this right to the SQL Server service account. There is a possibility that a SQL Server Administrator who was not also a Windows Local Administrator could open and read the contents of recently deleted files that they would not otherwise have access to.
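On newer builds (SQL Server 2016 SP1 and later, plus some later service packs for older versions) you can verify from T-SQL that the right has taken effect; note this column does not exist on older builds, so treat its availability as an assumption to check first:

-- 'Y' in instant_file_initialization_enabled means the engine can
-- skip zeroing out new data file space
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;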

Add –T845, –E, and –T1117 to your SQL Server startup options; –T845 is what enables the Lock Pages in Memory feature on Standard Edition, while –T1117 makes all files in a filegroup grow together (useful for TempDB) and –E increases the number of contiguous extents allocated to each data file.


Also run the following script in Management Studio to make sure that trace flag 1806, which disables Instant File Initialization, is turned off globally:

DBCC TRACEOFF(1806, -1)
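If you just want to check whether the flag is enabled, before or after turning it off, you can query its status:

-- Returns whether trace flag 1806 is enabled globally
DBCC TRACESTATUS(1806, -1)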

You must restart SQL Server for the changes to take effect.

  • Move TempDB to its own drive, preferably on RAID 1, and create multiple data files.

By default, the TempDB files are put on the same drive as the SQL Server binaries. Even if the user chooses a custom install, TempDB still goes on the same drive as the other data files. Instead, the TempDB data files should be on their own dedicated RAID 1 drive.

You can fix this by first moving TempDB to its own drive.  In this example, I put the data file on the T drive and the log file on the L drive.  (Be aware that the directory paths must already exist.)

use master
go
alter database tempdb modify file (name='tempdev', filename='T:\MSSQL\DATA\tempDB.MDF', size = 1mb)
go
alter database tempdb modify file (name='templog', filename='L:\MSSQL\LOGS\templog.LDF', size = 1mb)
go

Additionally, you should create multiple TempDB data files. Common practice is a 1:2 or 1:1 ratio between your data files and the number of CPU cores on your server.

USE [master]
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev2', FILENAME = N'T:\MSSQL\DATA\tempdev2.ndf' , SIZE = 10GB , FILEGROWTH = 0)
GO

The data file creation should only take a couple of seconds – if it takes more than ten seconds, then instant file initialization isn't configured correctly. We covered this in the Instant File Initialization section above, so go back and revisit that before you create the next TempDB file. Fix the security to allow for instant file initialization now – it has a huge performance impact on database growth.

Assuming that the file creation only took a couple of seconds, go ahead and create the rest of the TempDB data files, as sketched below.
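For instance, to round out a four-file layout like the one used later in this series (the file names, paths, and sizes here are illustrative, so match them to your own drive layout):

USE [master]
GO
-- Add the remaining data files at their full, fixed size
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev3', FILENAME = N'T:\MSSQL\DATA\tempdev3.ndf' , SIZE = 10GB , FILEGROWTH = 0)
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev4', FILENAME = N'T:\MSSQL\DATA\tempdev4.ndf' , SIZE = 10GB , FILEGROWTH = 0)
GO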

Notice that I don't have filegrowth enabled.  You want to proactively create the TempDB files at their full sizes to avoid drive fragmentation.  If you have a dual-CPU quad-core server (8 cores total) and an 80 GB array for TempDB data, you would create eight 10 GB files for TempDB.  That way, each file is contiguous, all laid out in one big chunk.  If you create them as smaller files and let them autogrow, then the disk will be fragmented all over the place because the files will be growing at random times.  Plus, you could end up with differently sized TempDB files if one of them happened to grow faster than the rest.  That's why I pre-grow all of the TempDB files ahead of time and get them at exactly the right size.

In Part 3 of "Improving your SQL Server's performance," I will go over a benchmark test that I completed on SQL Server running Dynamics GP 2010 with the default settings, and the results after applying the settings from this post.

Improving your SQL Server’s performance

A good starting point for performance tuning on your SQL Server is establishing a baseline of your server's current performance. To help with that, Microsoft provides an SSMS dashboard to monitor your server's performance. The dashboard provides valuable information on the following (a do-it-yourself query sketch follows the list):

  • Server Waits
  • Expensive Queries by
    • CPU
    • Logical Reads
    • Logical Writes
  • Missing Indexes
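If you want a taste of the kind of baseline data the dashboard surfaces, server waits can also be pulled straight from the DMVs. This is a simplified sketch; the dashboard's own queries are considerably more elaborate:

-- Top 10 wait types by total wait time since the last restart
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_seconds,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;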

You can install the dashboard by downloading the software package located here: http://www.microsoft.com/en-us/download/details.aspx?id=29063

Once the software is installed, each SQL Server instance you plan to monitor must contain the procedures and functions used by the queries in the dashboard reports. Using SQL Server Management Studio (SSMS), open the setup.sql script from your installation directory (default of C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Performance Dashboard) and run the script. Close the query window once it completes.

To open the dashboard, in the Object Explorer pane in SSMS, right-click the SQL Server instance node, then choose Reports → Custom Reports. Browse to the installation directory (default of C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Performance Dashboard) and open the performance_dashboard_main.rdl file. Explore the health of your server by clicking on the various charts and hyperlinks in the report. Here is what the dashboard looks like at the main screen:

[Image: Performance Dashboard main screen]

For a tutorial on the dashboard, its features, and more detail on the installation steps, follow this link: http://sqlserverfinebuild.codeplex.com/wikipage?title=Install%20SQL%20Performance%20Dashboard

In my next post, we'll explore changing some default installation settings on SQL Server to increase your server's performance.