
Archive for the ‘SQL Tuning’ Category

On a recent consulting job I was asked to explain the difference between throughput, which is measured in MB/s, and IOPS, since I had recommended a storage array that would provide far more IOPS and throughput than the array currently in use.

For this client I had presented the expected IOPS per type of disk (7,200 RPM, 10K, and 15K) and how one would calculate the total IOPS per expansion unit, but I failed to explain the available throughput beyond an increase in uplink speed to 6Gb/s.

Throughput is a measurement of the average number of megabytes transferred within a period of time for a specific IO size.  Back in the day this was measured with a single computer making a single large request to a disk, but today, with large storage arrays providing storage to many clients, we need to measure based on a lot of small reads and writes rather than a single computer making one large request.

To measure throughput one can use the following formula.

MB/s = IOPS * KB per IO / 1024

In this formula, the KB per IO is your IO (block) size.  For an array with ten 10K SAS drives, which provide approximately 145 IOPS per disk, we will use 1,450 IOPS; in the real world this will differ based on the RAID configuration.  Every RAID level other than RAID 0 carries a write penalty (reads are not penalized): RAID 1 and RAID 10 have an I/O penalty of 2, RAID 5 a penalty of 4, and RAID 6, which is popular among SANs, a penalty of 6.

MB/s = 1450 * 64 / 1024 = 90.625 MB/s (64 KB IOs)
MB/s = 1450 * 128 / 1024 = 181.25 MB/s (128 KB IOs)
MB/s = 1450 * 256 / 1024 = 362.5 MB/s (256 KB IOs)
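
If you want to play with the numbers yourself, here is a minimal T-SQL sketch of the same formula; the 1,450 IOPS figure and the 64/128/256 KB IO sizes are simply the example values from above.

DECLARE @IOPS decimal(10,2) = 1450;  -- ten 10K SAS drives at roughly 145 IOPS each

SELECT  v.BlockSizeKB,
        CAST(@IOPS * v.BlockSizeKB / 1024 AS decimal(10,2)) AS ThroughputMBs
FROM    (VALUES (64), (128), (256)) AS v(BlockSizeKB);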

Hard drive manufacturers will advertise 100 MB/s of throughput for a hard drive, so why wouldn't you see 1,000 MB/s of throughput from ten drives in a RAID 0 configuration?  Because in the real world you are not running the edge cases they benchmark; your IO workload will be more than a single sequential sector read.  Disk manufacturers don't share their engineering specifications or the criteria they used to benchmark their drives, but one thing you can bank on is that they do whatever it takes to get the best results possible for marketing.  Your results will vary drastically.

Read Full Post »

I was reading an article over the weekend where the author pointed out that the standard naming convention for SQL Server database files is .MDF for the data file, .LDF for the log file, and .NDF for additional data files.  This was good stuff so far; the author then stated that these can be changed if you want to mess with your fellow DBAs.  I have known for years that the file extensions are merely suggestions (best practice) and that you can change them if you so choose.  Some companies may have policies defined that data files must have an extension of .001, logs .002, and so on.
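
As a quick illustration, SQL Server will happily accept whatever extension you give it; the database name and file paths below are made up for the example.

CREATE DATABASE ExtensionDemo
ON PRIMARY ( NAME = ExtensionDemo_Data, FILENAME = N'C:\SQLData\ExtensionDemo.001' )
LOG ON     ( NAME = ExtensionDemo_Log,  FILENAME = N'C:\SQLData\ExtensionDemo.002' );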

What I wish the author had pointed out is that if you deviate from the standard file extensions for your data and log files, please make sure your security department adds those new extensions to the exclusion list for your virus protection.  If not, you will have successfully launched a denial of service (DoS) attack against yourself.


Read Full Post »

Without Instant File Initialization turned on, each time a data file on SQL Server 2005 and above is created or has to grow in size, the file or the portion of the file being added must be zeroed out.  Depending on the size of the file, this can be a very time-consuming task.

By turning on Instant File Initialization, you are telling the operating system to trust SQL Server to properly handle this allocation (watch me demonstrate this on my YouTube channel).  Since you are looking to enable Instant File Initialization, you should also consider setting a proper auto-growth setting for your database: something in MB rather than a percentage.  Auto growth should be a fail-safe; try to implement some type of policy or procedure to monitor the amount of free space in your databases and grow them after hours.
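
As a concrete example of fixed-MB growth, here is a sketch; the database and logical file names are placeholders, not real objects from this post.

ALTER DATABASE [YourDatabase]
MODIFY FILE ( NAME = N'YourDatabase_Data', FILEGROWTH = 256MB );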

While there isn't an option within SQL Server to turn this on, the process is very simple: you simply have to add a SQL Server group to a security policy.  To find which group to add, run lusrmgr.msc on the workstation or server in question.  Click on Groups, marked with a red arrow at the top left of the image below, then look for the group whose description mentions the specified instance of SQL Server; this is highlighted in the image below.  In my experience the group has nearly always been "SQLServerMSSQLUser", plus the machine name, then the instance name.  Take note of this group (write it down) as you will need to add it to a local security policy.  You can now close lusrmgr.

Next click Start, then Run, and type SECPOL.MSC.  The dialog box in the image below will open.  Expand "Local Policies", click on "User Rights Assignment", then scroll down the list of policies until you see "Perform volume maintenance tasks".  Double-click on "Perform volume maintenance tasks".

Click on “Add User or Group”

Click on “Object Types”

Uncheck all boxes except for “Groups” and click “OK”

Next make sure that "Locations" is set to your local workstation or server name, not the Active Directory domain.  Then click "Advanced" and choose "Find Now"; a list of groups should appear.  Scroll down the list and find the group you determined above (from lusrmgr.msc).  In my example it is "SQLServerMSSQLUser$WIN-LJDS9D9PTAA$MSSQLSERVER".  Once you have selected your group, click "OK".

You should now see your group listed.

Click "OK" and then "OK" one last time.  That is it; you can now close the Local Security Policy window.  When your database grows or when you add additional files, you should no longer have the long waits while the system zeroes out the additional space.
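
If you want to confirm the setting from T-SQL rather than by timing a file growth, newer builds (SQL Server 2016 SP1 and later, plus later service packs of some older versions) expose it through a DMV column; on older builds the column simply will not be there.

SELECT  servicename,
        instant_file_initialization_enabled
FROM    sys.dm_server_services;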

Additional things to note:
1) You could just grant this right to the SQL Server service account directly; however, if that service account is ever changed you will have to go back and add the new account to the security policy.  Whichever way works best for you; the important thing is to have it on.
2) By not zeroing out the file, there could still be data on disk that could be read.  This is a very slight security risk and there are only a few scenarios where it could pose a problem; the average Joe could not access the data.
3) Instant File Initialization only works for your data files; it does not have any impact on your log files.

Want to see a live demo of this?  Check out my YouTube channel.

Read Full Post »


In recent months I have been asked to ensure we are auditing both successful and failed logins, not just failed logins.  It is simple enough to open SSMS, connect to the instance, right-click on the server, choose Properties, click on Security, and then check the proper radio button.  When putting together our most recent updates to our server installation guide, I decided to spend a few minutes researching how to make this change in fewer steps.

In my case I need to audit both failed and successful logins, so I execute the following code within SSMS.

EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'AuditLevel',
     REG_DWORD,
     3;
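
To double-check the change, you can read the value back with xp_instance_regread, the (undocumented) read counterpart of the procedure above.  The AuditLevel values are 0 = none, 1 = successful logins only, 2 = failed logins only, and 3 = both, and the new setting does not take effect until the SQL Server service is restarted.

EXEC xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'AuditLevel';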

Read Full Post »

Recently I learned that you can, and probably should, increase the number of SQL Server error logs that your instance of SQL Server maintains.  This is really simple to change: you can make the change via the GUI with a few simple steps or accomplish it with TSQL.

To make this change with SSMS, open SSMS, connect to your instance, expand Management, right-click on SQL Server Logs, and choose Configure.  You can then check the box "Limit the number of error log files before they are recycled" and increase the number from the default of 6 to any number up to 99.

Using TSQL you can execute the following statement to increase the retention to 99 files; simply change 99 to however many files you would like to retain.

USE [master];
GO

EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     99;
GO
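
As with the audit level change above, you can read the value back with xp_instance_regread to confirm it took.

EXEC xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs';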

Read Full Post »

RedGate SQL Index Manager BETA

I recently attended the first micro training event with Red Gate, where Grant Fritchey demonstrated a BETA tool called SQL Index Manager.  This tool is aimed at the accidental DBA, or at those like me who are just lazy.  🙂

It is basically a nice GUI that returns your index fragmentation levels and recommends whether you should rebuild or reorganize each index.  An extra nice feature is the ability to fix them right then and there, or the tool will generate the TSQL code for you to execute later, either manually in SSMS or thrown into a job.
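
The tool is presumably reading the same fragmentation DMV you can query yourself; here is a rough sketch of that kind of check, using the commonly cited 5%/30% reorganize/rebuild thresholds as an illustration rather than a rule.

SELECT  OBJECT_NAME(ips.object_id)  AS TableName,
        i.name                      AS IndexName,
        ips.avg_fragmentation_in_percent,
        ips.page_count,
        CASE WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
             WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
             ELSE 'OK' END           AS Suggestion
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
WHERE   ips.index_id > 0             -- skip heaps
  AND   ips.page_count > 1000        -- ignore very small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;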

A much better long-term solution, though, is to use Ola Hallengren's maintenance scripts or the one I like to use written by Michelle Ufford.  Even though I have weekly jobs to interrogate my indexes, I have been playing with SQL Index Manager and taking a look at my databases each day during the week to see how much fragmentation is occurring.  What I have noticed is a few volatile tables that could use a midweek or nightly reorg.  Fortunately most of these are tables used in various back-end operations and do not impact my end users.

I give this tool two thumbs up and recommend everyone download it now while it is free and give it a try.  Keep in mind that this is a BETA release.  The product came out of Red Gate's Down Tools Week, so it was built quickly, but I haven't found too many bugs other than the random crash here and there.  It is still more stable than most Windows OSs I have worked on.

Check out RedGate SQL Index Manager

Read Full Post »
