By default, SQL Server’s max server memory setting is 2147483647 MB — effectively unlimited. With this default, SQL Server will commit as much memory as it can until there isn’t enough left for the operating system to run efficiently.
If the operating system has no memory available, it will start using the page file instead of RAM. Using the page file in place of memory will result in poor system performance.
SQL Server does benefit from as much memory as you can give it. However, that doesn’t mean we should leave SQL Server’s max memory set to unlimited. If we starve the OS for memory, Windows itself may not get the memory it needs.
To Fix the Problem
Set the maximum memory that SQL Server can use in your environment. To change SQL Server’s max memory:
1. Log into SSMS
2. Right-click the instance name, click “Properties” at the bottom of the menu, then select the “Memory” page
3. Change the “Maximum server memory” setting to leave enough room for the operating system to run efficiently, so it doesn’t fall back on the page file in place of RAM. For recommendations on SQL Server max memory settings, check out Glenn Berry’s Suggested Max Memory Settings.
4. Click “OK”. Max server memory is a dynamic setting, so the change takes effect immediately — no restart of SQL Server services is required.
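If you prefer a script to the SSMS dialog, the same setting can be changed with sp_configure. This is just a sketch — the 12288 MB (12 GB) value below is an example, not a recommendation; size it for your own server.

```sql
-- Enable advanced options so max server memory is visible to sp_configure.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- max server memory is specified in MB; 12288 MB = 12 GB (example value only).
EXEC sys.sp_configure N'max server memory (MB)', 12288;
RECONFIGURE;
```

Because this is a dynamic setting, RECONFIGURE applies it immediately to the running instance.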
Until next time, happy SQL Server performance tuning!
In Part 1 of this blog series we talked about creating a good starting point for performance tuning your SQL Server by establishing a baseline of the server’s current performance.
In Part 2 we talked about enabling Instant File Initialization and adding additional TempDB files to your TempDB database.
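The TempDB change from Part 2 can be scripted as well. This is a sketch only — the file paths, sizes, and growth values below are assumptions; match them to your own drive layout, and keep all TempDB data files the same size so they are used evenly.

```sql
-- Add TempDB data files (one per CPU core, typically up to 4-8 total).
-- Paths and sizes are examples; adjust for your environment.
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2',
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev3',
          FILENAME = N'T:\TempDB\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
-- Repeat for additional files until you have one per core.
```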
As a conclusion to those posts, we will look at the effects of those changes by running a simple GP2010 Sales Order Processing batch posting. For the test I created a virtual server running SQL Server 2012 with the default settings and GP2010 SP2. I made one change to the defaults, limiting SQL Server to a maximum of 12GB of RAM, since I was running GP2010 and SQL Server on the same image.
For testing I created one Sales Order Processing batch with 10,524 invoices containing 31,323 transaction lines.
In test 1, no optional SQL settings were changed:
· Instant File Initialization is disabled
· One TempDB data file
The posting of the batch to the General Ledger took 7 hours to complete.
For test 2, I restored the company database to its original state from before test 1 and then made the following optional SQL setting changes:
· Instant File Initialization is enabled
· TempDB changed to have one data file per CPU core (4 files)
The posting of the batch to the General Ledger took 6 hours to complete.
This was a very simple test on a virtual server with only one hard drive. Imagine the performance gains when you apply all of the recommended SQL Server performance tuning settings to your server!