Archive for the ‘Query Tidbits’ Category

A friend of mine emailed me today with a very common problem in a query he had written.  He received the error, “Cannot resolve collation conflict for equal to operator”.

Early in my career I ran into the same issue, so I quickly searched my code snippets and found where I had used COLLATE DATABASE_DEFAULT after each of the comparison columns.  For example:

SELECT TABLE_A.COLUMN_NAME
FROM TABLE_A
INNER JOIN TABLE_B
    ON TABLE_A.COLUMN_NAME COLLATE DATABASE_DEFAULT = TABLE_B.COLUMN_NAME COLLATE DATABASE_DEFAULT

If you experience the error “Cannot resolve collation conflict for equal to operator” I hope this solution helps you out.
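If you are not sure which side of the comparison carries the odd collation, you can look it up in the catalog views before reaching for COLLATE.  A quick sketch (TABLE_A and TABLE_B are placeholders for your actual tables):

```sql
-- Compare the collations of the character columns on both sides of the join
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS column_name,
       collation_name
FROM sys.columns
WHERE object_id IN (OBJECT_ID('TABLE_A'), OBJECT_ID('TABLE_B'))
  AND collation_name IS NOT NULL;  -- non-character columns have NULL collation
```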

Read Full Post »

Have you ever tried to restore a backup over an existing database and received an error stating the tail of the log for the database has not been backed up? Many times, if you are just restoring a database into a development environment, you don’t care, so you restore WITH REPLACE and move on.  However, if you want to ensure that your restore contains the latest transactions from the production database, simply take a transaction log backup first.  For example:

BACKUP LOG [DB_NAME] TO DISK = 'DRIVE:\PATH\DB_NAME_LOG.BAK'

Now let’s suppose you are working with a production system and something dreadful happens to the database.  Let’s assume that you have a transactional database and take TLOG backups every 10 minutes. At 2:00 in the afternoon a regular TLOG backup is taken, and at 2:07 there is a hardware problem and your MDF file becomes corrupt. You now have to restore from backup. What happens to those 7 minutes of transactions?

Without backing up the tail end of the log, this data is gone. Since only the MDF file was corrupt or lost, we can still take a log backup even without the MDF. To do so, use:

BACKUP LOG [DB_NAME] TO DISK = 'DRIVE:\PATH\DB_NAME_LOG.BAK' WITH NO_TRUNCATE

The key here is the NO_TRUNCATE option.  Using NO_TRUNCATE allows you to back up the transaction log without the database being online.  If your database is online, you can just take a regular transaction log backup.
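Once the tail-log backup is taken, the restore sequence might look like the following sketch (names and paths are placeholders, and I am assuming a single full backup plus the tail-log backup; add any intermediate log backups in order between the two):

```sql
-- Restore the last full backup, leaving the database in the restoring state
RESTORE DATABASE [DB_NAME]
    FROM DISK = 'DRIVE:\PATH\DB_NAME_FULL.BAK'
    WITH NORECOVERY;

-- Apply the tail-log backup and bring the database online
RESTORE LOG [DB_NAME]
    FROM DISK = 'DRIVE:\PATH\DB_NAME_LOG.BAK'
    WITH RECOVERY;
```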

If you want to see this in action, check out my video.  In this video I go rogue, delete my MDF file, and demonstrate how to back up and restore the tail end of the log.

How to backup and restore the tail end of a transaction log

Read Full Post »

Try researching recovering the master database online and you will see countless references to having SQL at the same version and build level as the backup of the master database.  Since most folks probably don’t have a routine in place to record the version number each time SQL Server is patched, we need a way to determine the version and build level of our instance.
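If you do want such a routine, the build of a live instance can be pulled with SERVERPROPERTY and logged somewhere safe on a schedule; for example:

```sql
-- Record the current version and patch level of the instance
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 10.0.4000.0
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. SP2
       SERVERPROPERTY('Edition')        AS Edition;
```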

If you take regular backups of the master database and have to restore to the same instance, you should not have to worry about the version.  However, if you have to recover the system databases from one instance to another server, then the build version becomes very important.

Imagine the scenario: you get a call from the NOC at 3:00 AM stating an important production server has crashed. You crawl out of bed, remote in, and realize that all your SAN LUNs are missing.  You report back to the NOC, they call the SAN administrator, and you learn that the LUNs are unrecoverable.  As luck would have it, you have a spare server you can migrate to, but you don’t script out all your user objects, so you have to recover the system databases as well.  What version are you on?  10.0.4023 or 10.0.4062?  You don’t know.  All you know is SQL 2008 SP2.

As luck would have it, the boys and girls at Microsoft thought ahead.  This data is stored in the backup header, and we have the ability to retrieve it.

RESTORE HEADERONLY FROM DISK = 'DRIVE:\PATH\DB_NAME.BAK'

This will return the SoftwareVersionMajor, SoftwareVersionMinor, and SoftwareVersionBuild columns, which together give you the build — for example, 10.0.4064.  Pretty neat, huh?

Once you have SQL on the new instance installed and patched to the same level, you can start SQL in single user mode, connect to SQL using SQLCMD, and then restore the master database.  There are tons of blogs with step by steps on how to recover the master database.
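As a sketch of that final step (the path is a placeholder; follow one of those step-by-step walkthroughs before doing this for real):

```sql
-- Run from SQLCMD after starting the instance in single-user mode
-- (e.g. NET START MSSQLSERVER /m from an elevated command prompt).
-- The instance shuts itself down when the restore of master completes;
-- restart it normally afterward.
RESTORE DATABASE [master]
    FROM DISK = 'DRIVE:\PATH\MASTER.BAK'
    WITH REPLACE;
```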

I hope you never have to use this in production, but you should practice it enough beforehand to have plenty of experience when it counts.

Read Full Post »

I decided to start monitoring the amount of free space in my database files, both to make sure all my databases are set to auto grow by the right amount and so that I can schedule file expansions rather than let the files auto grow.  I still plan to allow the files to auto grow if needed, but with proper monitoring that will become the exception and not the norm.

I am certain there are numerous tools out there that will handle this for me, but today I don’t have those tools.  What I do have is SSMS, PowerShell, and a bunch of servers.  I put together a simple query that uses sp_MSforeachdb to gather the information I need from the entire instance.  For now I am just gathering the instance name, database name, file id, auto grow size in MB, file size in MB, space used in MB, free space in MB, percent free, file name, file path, and the current time.

Running this query against each instance was easy enough, but I have over 100 instances.  I don’t want to schedule a job on each server to run this, and I haven’t implemented Central Management Server yet, so I turned to PowerShell to make this happen.  I recalled a conversation I had recently with a good friend, Jon Boulineau (blog), where he was collecting database backup information from a list of servers, so I pinged him for advice.  While having an IM chat one night with Jon, I told him what I was looking to do, and within a few minutes Jon sent me a Posh script to model mine after.

Jon is one of the best developers that I know and the script he sent me was very well laid out and already had just about everything I needed minus my query.

I am really looking forward to putting this to use and building upon it.  I know there are several packs of Posh scripts that are available for SQL Server management and I will be looking at them shortly after the Summit.

```powershell
# Requires the SQL Server PowerShell snap-in/module for Invoke-Sqlcmd
$listOfServers = "server1", "server2", "server3"

# Runs in every database on the instance via sp_MSforeachdb;
# doubled quotes are escapes inside the single-quoted T-SQL string
$query = "sp_MSforeachdb 'USE [?];
SELECT @@SERVERNAME AS ServerName
    ,DB_NAME() AS DB_NAME
    ,[fileid]
    ,GROWTH = CASE WHEN [growth] = ''10'' THEN CONVERT(nvarchar(10), ''10%'')
                   ELSE CONVERT(nvarchar(10), [growth]/128) END
    ,[File_Size_MB]    = CONVERT(decimal(12,2), ROUND([size]/128.000, 2))
    ,[Space_Use_MB]    = CONVERT(decimal(12,2), ROUND(FILEPROPERTY([name], ''SpaceUsed'')/128.000, 2))
    ,[Free_Space_MB]   = CONVERT(decimal(12,2), ROUND(([size] - FILEPROPERTY([name], ''SpaceUsed''))/128.000, 2))
    ,[Percent_Free_MB] = CAST((([size] - FILEPROPERTY([name], ''SpaceUsed''))/128.00000)/([size]/128.000) AS numeric(10,2)) * 100
    ,RTRIM(LTRIM([name]))     AS FileName
    ,RTRIM(LTRIM([filename])) AS FilePath
    ,CONVERT(datetime, GETDATE(), 112) AS DateInserted
FROM dbo.sysfiles'"

$destinationServerName   = "reporting_server"
$destinationDatabaseName = "db_name"

foreach ($server in $listOfServers)
{
    # Collect the file stats from the source instance
    $results = Invoke-Sqlcmd -ServerInstance $server -Query $query

    # Bulk copy the results into the central reporting table
    $connectionString = "Data Source=$destinationServerName;Integrated Security=true;Initial Catalog=$destinationDatabaseName;"
    $bulkCopy = New-Object ("Data.SqlClient.SqlBulkCopy") $connectionString
    $bulkCopy.DestinationTableName = "TableName"
    $bulkCopy.WriteToServer($results)
}
```
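The script bulk copies into a table called “TableName” on the reporting server, so that table has to exist and line up with the columns the query returns.  A hypothetical definition (rename it to whatever you put in $bulkCopy.DestinationTableName):

```sql
-- Hypothetical destination table; columns match what the sp_MSforeachdb query returns
CREATE TABLE dbo.TableName
(
     ServerName      nvarchar(128)
    ,DB_NAME         nvarchar(128)
    ,fileid          smallint
    ,GROWTH          nvarchar(10)
    ,File_Size_MB    decimal(12,2)
    ,Space_Use_MB    decimal(12,2)
    ,Free_Space_MB   decimal(12,2)
    ,Percent_Free_MB numeric(13,2)
    ,FileName        nvarchar(128)
    ,FilePath        nvarchar(260)
    ,DateInserted    datetime
);
```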

Read Full Post »

This post is going to be very simple, but it is something I come across every so often and find myself having to look up the syntax most of the time.  I guess it is one of those things that if you don’t use it often enough, you don’t commit it to memory.

So here is my story: I routinely come across database servers where, when you go to look at the properties of the database, you get that dreadful message, “Property Owner is not available for Database ‘[db_name]’.  This property may not exist for this object, or may not be retrievable due to insufficient access rights.”  It is a very simple fix if you can remember the stored procedure to execute.  All you have to do is assign the proper user as DBO.

The bigger question is how this issue occurred in the first place.  Those of you who are novice DBAs may or may not be aware that when you restore a database, you become the owner of the database unless you change it.

So how do you correct this?  Simple:  EXEC sp_changedbowner 'SA'.  You can substitute another account in place of SA if that is your company’s practice.

EDIT >> A friend by the name of Nicholas Cain pointed out that ALTER AUTHORIZATION can accomplish the same task.   I believe this is new to SQL 2005, and since I still support SQL 2000 😦 sp_changedbowner is still needed.  However, going forward I do think I will start using the following syntax, at least on all 2005 and above systems.

ALTER AUTHORIZATION ON DATABASE::[DB_NAME] TO SA

Read Full Post »

I see questions on forums all the time from DBAs or system admins who find they are out of drive space due to a very large transaction log.  Posts vary a bit on what the admin has tried, but a common theme is that they have tried shrinking the file, it didn’t shrink, and now comes a plea for help.

Several immediate issues stand out with these posts.  First, regular transaction log backups are not occurring.  Second, if regular backups are not happening, is the admin adequately prepared to restore this system if they had to?  What is their recovery plan and the service level agreement for this system?

A quick fix for the users with the swollen transaction log that is crippling the server by filling up all the available drive space is pretty simple: issue a command that truncates the transaction log, and then shrink the file.  A permanent solution is to then start taking regular transaction log backups if you need to be able to recover to a point in time other than the last full backup.  If you don’t need that type of service level agreement, then the SIMPLE or BULK_LOGGED recovery model may be your answer.

To truncate the log you have a couple of options.  With all versions of SQL Server you can simply back up the log.  The issue with that in most cases is that you are in a pinch and need to resolve this immediately.  You don’t have time to back up a huge log, and even if you did have the time, you probably have no need for the huge backup file.  You just need to quickly truncate the log.  What do you do?

With SQL 2000 and 2005 you can back up the log with TRUNCATE_ONLY.  The proper syntax is something like “BACKUP LOG [DB_NAME] WITH TRUNCATE_ONLY”.  With SQL 2008 and above, TRUNCATE_ONLY no longer works, as it was removed.  A nice workaround is to back up the log to the NUL device.  To do that, simply use “BACKUP LOG [DB_NAME] TO DISK = 'NUL'”.  It almost seems like cheating, doesn’t it?
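Putting the two steps together, a sketch of the quick fix on SQL 2008 and above (the logical file name DB_NAME_log and the target size are placeholders; find your log file’s logical name with sp_helpfile):

```sql
-- Truncate the log by backing it up to the NUL device
BACKUP LOG [DB_NAME] TO DISK = 'NUL';

-- Shrink the log file to a sensible target size, here 1024 MB
DBCC SHRINKFILE (DB_NAME_log, 1024);
```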

Another option is to change the recovery model from FULL to SIMPLE, shrink the log, and then set the recovery model back to FULL.  This option works just as well as TRUNCATE_ONLY or backing up to the NUL device.
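That option might be sketched like this (again, the database name, logical file name, and size are placeholders):

```sql
ALTER DATABASE [DB_NAME] SET RECOVERY SIMPLE;  -- drop to SIMPLE so the log truncates
DBCC SHRINKFILE (DB_NAME_log, 1024);           -- shrink to the target size in MB
ALTER DATABASE [DB_NAME] SET RECOVERY FULL;    -- back to FULL; take a full backup next
```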

Regardless of which option you choose to truncate the log and reclaim the wasted space, you should only shrink to the target size that you need.  If you shrink the log file too small, it will just have to grow again, causing a negative impact on your system.  If you are not certain what size you need, just take a SWAG at it and monitor.  As for how often to take regular transaction log backups, well, that depends on your SLA.  I have some systems that back up every 10 minutes, others once an hour.  IT DEPENDS on your environment and needs.

You should also take a full backup afterward.  If something happens between now and your next full backup, you will only be able to recover to your last known good backup.

Read Full Post »

Recently I answered a question on AskSSC that I thought I would write a quick blog about.  Someone had asked if there was an easy way to run SELECT * FROM SYS.PROCEDURES on every database.  I immediately thought of the undocumented stored procedure sp_msforeachdb.  I posted a response and gave a bit of advice to modify the query to make it an insert statement and to only return the columns from SYS.PROCEDURES that the OP needed.  Another answer was to use PowerShell, which is also an excellent way of achieving the results.  To expand my answer using sp_msforeachdb, I have the following process to create a table and insert the results there.  Keep in mind that I have a local database called “Tim” on my instance as a dumping ground for testing items.  Change “Tim” to whatever database name you prefer.

USE Tim
GO

CREATE TABLE [dbo].[PROCEDURES]
(
[DB_NAME] [nvarchar](128) NULL
,[name] [sysname] NOT NULL
,[OBJECT_ID] [int] NOT NULL
,[SCHEMA_ID] [int] NOT NULL
,[TYPE] [char](2) NOT NULL
,[type_desc] [nvarchar](60) NULL
,[create_date] [datetime] NOT NULL
,[modify_date] [datetime] NOT NULL
)
ON  [PRIMARY]

EXEC sp_msforeachdb 'USE [?]; INSERT INTO TIM.DBO.PROCEDURES
(DB_NAME, NAME, OBJECT_ID, SCHEMA_ID, TYPE, TYPE_DESC, CREATE_DATE, MODIFY_DATE)
SELECT DB_NAME() AS DB_NAME, NAME, OBJECT_ID, SCHEMA_ID, TYPE, TYPE_DESC, CREATE_DATE, MODIFY_DATE
FROM SYS.PROCEDURES'

SELECT  DB_NAME
,       NAME
,       OBJECT_ID
,       SCHEMA_ID
,       TYPE
,       TYPE_DESC
,       CREATE_DATE
,       MODIFY_DATE
FROM    TIM.DBO.PROCEDURES

Read Full Post »
