
Archive for the ‘Backups/Recovery’ Category

Have you ever tried to restore a backup over an existing database and received an error stating that the tail of the log for the database has not been backed up? Many times, if you are just restoring a database into a development environment, you don't care, so you just restore WITH REPLACE and move on.  However, if you want to ensure that your restore contains the latest transactions from the production database, simply take a transaction log backup first.  For example:

BACKUP LOG [DB_NAME] TO DISK = 'DRIVE:\PATH\DB_NAME_LOG.BAK'
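And if you really are just blowing away a development copy and don't care about the tail of the log, the WITH REPLACE route mentioned above looks something like this (a minimal sketch; the backup file name is just a placeholder):

RESTORE DATABASE [DB_NAME] FROM DISK = 'DRIVE:\PATH\DB_NAME.BAK' WITH REPLACE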

Now let's suppose you are working with a production system and something dreadful happens to the database.  Let's assume you have a transactional database and take TLOG backups every 10 minutes.  Now suppose that at 2:00 in the afternoon a regular TLOG backup is taken, and at 2:07 there is a hardware problem and your MDF file becomes corrupt.  You now have to restore from backup.  What happens to those 7 minutes of transactions?

Without recovering the tail end of the log, this data is gone.  Since it was only the MDF file that was corrupted or lost, we can still take a log backup even without the MDF.  To do so, use:

BACKUP LOG [DB_NAME] TO DISK = 'DRIVE:\PATH\DB_NAME_LOG.BAK' WITH NO_TRUNCATE

The key here is the NO_TRUNCATE option.  Using NO_TRUNCATE allows you to back up the transaction log even when the database is not online.  If your database is online, you can just take a regular transaction log backup.
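Once you have that tail-log backup in hand, the restore sequence looks roughly like the following.  This is a minimal sketch, not a full recovery plan; the file names are placeholders, and your actual chain may include a differential and additional log backups:

-- Restore the last full backup, leaving the database able to accept more log backups
RESTORE DATABASE [DB_NAME] FROM DISK = 'DRIVE:\PATH\DB_NAME_FULL.BAK' WITH NORECOVERY
-- Apply any intermediate log backups WITH NORECOVERY, then finish with the tail-log backup
RESTORE LOG [DB_NAME] FROM DISK = 'DRIVE:\PATH\DB_NAME_LOG.BAK' WITH RECOVERY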

If you want to see this in action, check out my video.  In this video I go rogue, delete my MDF file, and demonstrate how to back up and restore the tail end of the log.

How to backup and restore the tail end of a transaction log


Try researching how to recover the master database online and you will see countless references to needing SQL Server at the same version and build level as the backup of the master database.  Since most folks probably don't have a routine in place to record the version number each time SQL Server is patched, we need a way to determine the version and build level of our instance.
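If the instance is still up, one quick way to record that information ahead of time is SERVERPROPERTY.  This is just a sketch of the idea; it only helps if you actually run it and file the output away before disaster strikes:

SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,   -- e.g. 10.0.4064.0
       SERVERPROPERTY('ProductLevel')   AS ServicePack,   -- e.g. SP2
       SERVERPROPERTY('Edition')        AS Edition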

If you take regular backups of the master database and have to restore to the same instance, you should not have to worry about the version.  However, if you have to recover the system databases from one instance to another server, then the build version becomes very important.

Imagine the scenario: you get a call from the NOC at 3:00 AM stating that an important production server has crashed.  You crawl out of bed, remote in, and realize that all your SAN LUNs are missing.  You report back to the NOC, they call the SAN administrator, and you learn that the LUNs are unrecoverable.  As luck would have it, you have a spare server you can migrate over to, but you don't script out all your user objects, so you have to recover the system databases as well.  What version are you on?  10.0.4023 or 10.0.4062?  You don't know.  All you know is SQL 2008 SP2.

Fortunately, the boys and girls at Microsoft thought ahead.  This data is stored in the backup header, and we have the ability to retrieve it.

RESTORE HEADERONLY FROM DISK = 'DRIVE:\PATH\DB_NAME.BAK'

This will return SoftwareVersionMajor, SoftwareVersionMinor, and SoftwareVersionBuild, for example 10.0.4064.  Pretty neat, huh?

Once you have SQL Server installed on the new server and patched to the same level, you can start SQL in single-user mode, connect using SQLCMD, and then restore the master database.  There are tons of blogs with step-by-step instructions on how to recover the master database.
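The core of that process looks roughly like this.  It is only a sketch; the service name, paths, and backup file name are placeholders, and for a real recovery you should follow a full step-by-step guide:

-- 1. Start the instance in single-user mode, limiting connections to SQLCMD
--    (from an elevated command prompt):  NET START MSSQLSERVER /m"SQLCMD"
-- 2. Connect with SQLCMD using Windows authentication:  SQLCMD -S SERVERNAME -E
-- 3. Restore master; the instance shuts itself down when the restore completes:
RESTORE DATABASE [master] FROM DISK = 'DRIVE:\PATH\MASTER.BAK' WITH REPLACE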

I hope you never have to use this in production, but you should get plenty of practice doing it before you need it.


Checking Backups

Over the past two years I have acquired quite a few database servers that I am responsible for.  We have many standards among the database group I am a part of: standard drive configurations (separate drives for data files, log files, tempdb, and backups), lockdown scripts, maintenance jobs, and so on.  Over time I have come to realize that previous DBAs could have changed, and in some cases did change, some of these standards for any number of reasons.  For example, if a particular database on a server needed a different backup strategy, they might exclude it from the standard script that backs up all user databases, master, and msdb.  The same goes for the scripts that rebuild indexes.  If our standard monitors only check the standard backup job or the standard index rebuild job, how am I to know whether every database backed up properly without visiting each server to validate it?  And what if something happened and the DBA on call needed to temporarily disable my full backup job and forgot to re-enable it?  If our monitor only checks for job failures, I will never be notified that the job is not running at all.

This led me to create a very simple process that queries msdb for the most recent backup date/time for each database and joins to the master database to get the real database name.  I insert this result set into a centralized database and can now run a report each day to see when a database has not been backed up within the past 24 hours.  Actually, I have this automated so that I get a daily email.  I know there are tons of third-party tools out there to monitor servers, but when your company has zero dollars to spend right now on centralized management, or you don't have the time to research those tools, spending a few hours on a Saturday afternoon to develop this process is a good trade-off.

While researching this I also noticed that we did not have a process in place to remove historical backup information from msdb.  On a very large SQL 2000 consolidation server that has been running for many years with over 100 databases doing log shipping, msdb can grow pretty large.  I had to create a few indexes in msdb so that the stored procedure sp_delete_backuphistory would run efficiently: media_set_id on backupset, and restore_history_id on restorefile and restorefilegroup.  Across a dozen servers I was able to reduce the size of msdb and start monitoring 215 production databases to ensure they are properly backed up each day.  I am sure over time I will find a few things to tweak with my process, but for now it is working and helping me make sure my databases are backed up.
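For anyone who wants to roll their own version, the heart of the check is a query along these lines.  This is a minimal sketch rather than my exact production code; the 24-hour window and the restriction to full backups (type = 'D') are assumptions you would adjust to your own backup strategy, and on SQL 2000 you would join to master.dbo.sysdatabases instead of sys.databases:

-- Most recent full backup per database, joined to master so databases with no
-- backup history at all still show up (last_full_backup will be NULL)
SELECT  d.name AS database_name,
        MAX(b.backup_finish_date) AS last_full_backup
FROM    master.sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
        ON b.database_name = d.name
       AND b.type = 'D'                      -- D = full database backup
WHERE   d.name <> 'tempdb'
GROUP BY d.name
HAVING  MAX(b.backup_finish_date) IS NULL
     OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())
ORDER BY d.name

And for trimming msdb history, the indexes mentioned above plus the cleanup call look something like this (index names are made up; pick a cutoff date that suits your retention requirements):

CREATE NONCLUSTERED INDEX IX_backupset_media_set_id ON msdb.dbo.backupset (media_set_id)
CREATE NONCLUSTERED INDEX IX_restorefile_history_id ON msdb.dbo.restorefile (restore_history_id)
CREATE NONCLUSTERED INDEX IX_restorefilegroup_history_id ON msdb.dbo.restorefilegroup (restore_history_id)

DECLARE @cutoff datetime
SET @cutoff = DATEADD(DAY, -90, GETDATE())   -- keep 90 days of history (hypothetical)
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff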
