Category Archives: Code

Introducing Open Query Store

Many of us may have heard about the amazing new feature in SQL Server 2016 called Query Store. However, there are a lot of SQL Servers out there that are not running 2016 and never will be upgraded to it.

What is Open Query Store?

Open Query Store is our attempt at recreating the capabilities of the Query Store, but for SQL Server 2005 – 2014. As the name suggests, this is an open source project under the MIT license, meaning you can use it for fun or profit (or both).

After a few meetings (read: SQL Server events and beers), Enrico van de Laar ( b | t ) and I came up with the idea of creating a Query Store for the rest of us!

This quickly became what is now known as Open Query Store.

The first release (Updated release 2017-07-03) was published at the end of June 2017 and provides a background collection of query execution statistics in a database of your choice. The v1.0 release supports SQL Server from 2008 to 2014 and all editions (Express, Standard, Enterprise). There is also a PowerShell installer for those that are so inclined, which will grab the code from GitHub and install OQS into a specified database.

There is also a custom report which can be installed in SSMS (2016 and 2017), which will display information that OQS has collected.


What is the Future of Open Query Store?

The future of OQS is to get as close to feature parity with “real” Query Store as possible. We’re not sure yet exactly how close that will be, but we’ll do our best!

We have an overview of the current features on GitHub and will be adding features as time goes by.

If you have a specific feature that you want to add, then please provide feedback/suggestions via the Issues tab in GitHub.

Thanks for reading and we hope OQS can help you with query tuning in the future.

Presenting: Presentation Mode!

As a presenter at events I am constantly trying to improve the experience of showing information in slides and transitioning back and forth to demos.

ZoomIt: An OK solution for a bad problem?

The most jarring aspect of this is making sure that demo code is visible to the audience. The fantastic ZoomIt allows a presenter to (surprise, surprise) zoom into portions of the screen and highlight/annotate code or information to the audience:


First of all, the act of zooming can be disorienting to the audience. There is a flurry of zoom and scrolling activity to get to where you want to on the screen. After this, the actual presentation of the zoomed content usually works nicely enough. However, the zoom out must occur before moving back into the PowerPoint slide deck to continue with the next portion of the presentation.

This has been the only way to give a consistent and clear overview to an audience, particularly when SSMS was being used for demos. The issue revolves around the fact that although the T-SQL code editor window can resize fonts, the remainder of the SSMS interface is set in a single font type and size.

Many of you may have noticed that Microsoft made a key change in their deployment strategy with regards to SSMS when SQL Server 2016 was released. SSMS was decoupled from the core engine components and now follows a much shorter release cycle. Since SQL Server 2016 was released to market in June 2016, there have been at least 6 versions of SSMS released. This is fantastic: we no longer have to wait for the next release of SQL Server (whether a full version or a service pack) for SSMS to get bug-fixes or feature additions.

This is now extremely important when we look at the issue around font sizes and types. Microsoft has paid attention and with their current Release Candidate (RC) for SSMS 17 they included a very important release note entry…..

Presentation Mode!

If we read the release notes, we see that there are three new tasks available via Quick Launch inside SSMS.

  • PresentEdit – This allows the fonts for presentation mode to be set
  • PresentOn – This turns on presentation mode
  • RestoreDefaultFonts – Reverts SSMS back to the default fonts after activating presentation mode

All three tasks are pretty easy to understand, although the final task highlights that a task to specifically turn off the presentation mode is currently missing (this is an RC after all).

The “Quick Launch” field can be found in the top right corner of SSMS 17.0 RC3 and begins searching as soon as we start to type in it:


Choosing “PresentEdit” opens an XML file in a new tab in SSMS, showing us the options that we can change to make SSMS look different when presentation mode is activated.


We are presented with the option to choose font family and font size for both the text editor and, more importantly, for the environment in general (menus, object explorer etc.). This is where we can play around and find the fonts that work best in our presentations.

Using the values in my screenshot and launching PresentOn made a huge difference in font readability inside SSMS. The image below shows SSMS on the left in “standard” mode and in presentation mode on the right.


The difference is quite clear: all environment fonts are much larger and easier to read in presentation mode. This is great for demoing SSMS during a presentation!

However, the biggest improvement is when we are querying data. In previous versions of SSMS the grid results were tiny when projected onto a wall. The only way to see the results was either to return them as text (which has the downside of running off the right-hand side of the screen for larger result sets) or to use ZoomIt, with the motion sickness that entails.

Now, with presentation mode on, the results grid is included in the font resizing:


Praise be to the spaghetti monster! No more motion sickness required, and attendees can concentrate their contempt on all the bullet points in the slide deck instead.

So if you are a presenter, or want to have more control over the fonts in SSMS, your wait is almost over…… or is over now if you are brave enough to install the RC of SSMS 17 🙂

Happy font-changing

Database objects can DIE in SQL 2016 without using the KILL command

When developing database code for SQL Server, one issue that gets in the way of smooth deployment is the simple check for the existence of an object before creation.

The issue being, whenever an object should be created, we need to be sure that there isn’t an object already in the database that has the same name. If there is, we need to do something beforehand otherwise the CREATE command is going to fail like this:

Msg 2714, Level 16, State 6, Line 16
There is already an object named {objectname} in the database.

DIE Database Object, DIE!

Up until SQL Server 2016 the only way of making sure an object wasn’t already present was by adding a snippet of code directly before the CREATE statement. This could be achieved in multiple ways, but essentially all solutions just checked for an object with the name provided.

Something along the lines of this:
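For example, one of the several possible variants (the object name dbo.MyTable is a placeholder):

```sql
-- Pre-2016 pattern: check for the object before dropping and re-creating it
IF EXISTS (SELECT 1
           FROM sys.objects
           WHERE object_id = OBJECT_ID(N'dbo.MyTable')
             AND type = N'U') -- U = user table
    DROP TABLE dbo.MyTable;
GO
```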


While that is a very short piece of code, it is still code that needs to be typed, checked and tested. Of special note is the fact that the object name needs to be correctly typed twice. Let us also not forget that, because there are multiple ways of achieving this existence check, a new developer may not immediately understand what the code is doing.

You may notice in the paragraph above the code example, I wrote “Up until SQL Server 2016….”. That is because with the release of SQL Server 2016, Microsoft has made our lives a little bit easier in this respect. It is now possible to do the existence check and the drop in one command rather than two.

The long-winded and more error prone example above simply becomes:
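Assuming the same placeholder table name, the whole check-and-drop collapses to one statement:

```sql
DROP TABLE IF EXISTS dbo.MyTable;
GO
```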


Wow! Drop Table If Exists, or DIE for short. Super-short (only one chance of mistyping the object name) and super easy for anyone to understand, right?

But wait! There’s more!

This existence check is not limited to tables. It covers many more objects, as outlined on the SQL Server Database Engine Blog.

But wait! There’s even more!

If you wanted to be really efficient and only want to write the (not really) verbose DROP TABLE IF EXISTS once for all the tables that you want to drop, you can!

This code will work flawlessly:

DROP TABLE IF EXISTS MyTable, MyOtherTable, YetAnotherTable

The elegance of DROP TABLE IF EXISTS is that if any (or all) of the three tables above exist, they will be deleted. If none of them exist, then nothing will happen.

And here is the catch

Unfortunately, the pluralised usage of DROP TABLE IF EXISTS doesn’t seem to work for all object types. I tried to do the same thing with database users:
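Something along these lines (the user names are placeholders):

```sql
-- Attempting the plural form with database users fails,
-- unlike DROP TABLE IF EXISTS with multiple tables
DROP USER IF EXISTS TestUser1, TestUser2;
```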


This would end in a syntax error.

So there we have it. Objects can now DIE inside SQL Server, without the KILL command 🙂


At the time of writing I have not checked all of the supported objects of DIE, but will update this post once I have found the time to do so.

Who blew up my TempDB?

Who hasn’t experienced a server all of a sudden running out of TempDB space and seeing the following error message?

Could not allocate a new page for database 'tempdb' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

The difficulty in investigating a TempDB issue like this is that the solution generally chosen to fix a full TempDB is to restart the SQL Server instance. Seeing as the instance has quite often locked up completely and the customer wants their server working again ASAP, a restart is almost inevitable.

A restart is an option to combat the symptom, but not the cause. It also (in)conveniently removes the evidence of the transaction that caused TempDB to fill up in the first place.

To see how to deal with TempDB (or other databases) filling up/filling up the disk it is on please take a look at this blogpost.

Returning to the question at hand: Who filled my TempDB?

One solution I looked into, and have implemented for this exact reason, uses Extended Events.

Extended Events is an event processing framework built inside SQL Server since SQL 2008. It allows you to run very lightweight background monitoring of system events, and in this case can capture information about which command/transaction caused a file growth event to occur.

For a full introduction to Extended Events, please take a look at the 31 Days of Extended Events series from Jonathan Kehayias ( b | t ).

Catch the culprit

To be able to catch the offending process/user, we will create an Extended Events session on the server and the following code does just that:

CREATE EVENT SESSION [Database_Growth_Watchdog] ON SERVER
ADD EVENT sqlserver.database_file_size_change (
 ACTION ( sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_name, sqlserver.session_nt_username, sqlserver.sql_text )
 WHERE ( [database_id] = ( 2 ) ) ) -- We filter on database_id=2 to get TempDB growth only
ADD TARGET package0.event_file ( SET filename = 'D:\Temp\Database_Growth_Watchdog.xel',
 max_file_size = ( 10 ), max_rollover_files = ( 5 ) )
WITH ( MAX_DISPATCH_LATENCY = 1 SECONDS, STARTUP_STATE = ON );
GO
ALTER EVENT SESSION [Database_Growth_Watchdog] ON SERVER STATE = START;

Some things to note about the code:

  • We are monitoring for data file and log file growth. The event sqlserver.database_file_size_change fires for any file size change. We do this because we want to be informed of any and all file growth just to make sure we don’t miss anything.
  • If you have multiple data files for TempDB (like you may for a multi-core environment) you will see one event fire for each file that is growing. E.g. You have 4 data files and the database grows, you will see 4 entries in the extended events output.
  • The session is set to flush events to the output file in 1 second intervals (MAX_DISPATCH_LATENCY). This is done to ensure we lose as few entries to the output file as possible. If TempDB fills up, the entire instance can often stop working completely. We want to catch as much information as possible before this happens, so we flush to the output file in very short intervals.
  • We start the session at instance startup (STARTUP_STATE). This ensures the event is active immediately on server startup. As this is monitoring file growth events, it should remain very lightweight, so we don’t have to worry about swamping our system with Extended Events processing.
  • We limit the output file to 10MB and allow it to roll over into a total of 5 files. This means we need at most 50MB for the files in the output directory and won’t spam the folder with too much data.
  • When the event fires, we collect information about the query/command that caused the file growth to occur. This includes NT username, Hostname, origin database of the query, command text and application name.

The information collected by this session can be vital in pinpointing the cause for the TempDB filling up. However, there is the possibility of false positives in this setup. TempDB may have been almost completely filled by another previous transaction and the transaction causing the growth event is an innocent bystander. This is an unavoidable situation, but needs to be kept in mind when analysing the data. If you don’t catch the exact transaction this way, you are still on the right track.

Analyse the data

Once the data has been collected, we need to load and parse the output files to make sense of what has happened. The following code will parse the xml output that is in the (up to) 5 files.

DECLARE @TraceFileLocation NVARCHAR(255)= N'D:\Temp\Database_Growth_Watchdog*.xel';
WITH FileSizeChangedEvent
 AS (
 SELECT object_name Event,
 CONVERT(XML, event_data) Data
 FROM sys.fn_xe_file_target_read_file(@TraceFileLocation, NULL, NULL, NULL)
 )
 SELECT Data.value('(/event/@timestamp)[1]', 'DATETIME') EventTime,
 Data.value('(/event/data/value)[7]', 'BIGINT') GrowthInKB,
 Data.value('(/event/action/value)[2]', 'VARCHAR(MAX)') ClientUsername,
 Data.value('(/event/action/value)[4]', 'VARCHAR(MAX)') ClientHostname,
 Data.value('(/event/action/value)[5]', 'VARCHAR(MAX)') ClientAppName,
 Data.value('(/event/action/value)[3]', 'VARCHAR(MAX)') ClientAppDBName,
 Data.value('(/event/action/value)[1]', 'VARCHAR(MAX)') SQLCommandText,
 Data.value('(/event/data/value)[1]', 'BIGINT') SystemDuration,
 Data.value('(/event/data/value)[2]', 'BIGINT') SystemDatabaseId,
 Data.value('(/event/data/value)[8]', 'VARCHAR(MAX)') SystemDatabaseFileName,
 Data.value('(/event/data/text)[1]', 'VARCHAR(MAX)') SystemDatabaseFileType,
 Data.value('(/event/data/value)[5]', 'VARCHAR(MAX)') SystemIsAutomaticGrowth
 FROM FileSizeChangedEvent;

Please take note of the variable @TraceFileLocation. The example uses a wildcard to allow loading/parsing of multiple files, this is particularly useful if you really do rollover into multiple files.

The results from the query provide a range of information regarding who the offending process was, what command it had submitted last, and a set of information about the client.

So, now we have a starting point to discover who/what is causing TempDB to grow and can discuss this with application owners.

Other ideas

It is extremely simple to extend the session to monitor all databases and run a separate monitoring solution to inform you of such growth events. In fact, I saw Gianluca Sartori ( b | t ) talk about streaming Extended Event data and processing it in near real time at his SQLSat Oslo session. I am certain that this would be a fantastic way of doing some sort of real-time monitoring of growth events.

Some homework for you: What could you use this Extended Events session for? How could you modify/improve on it to help with your system monitoring and administration? Let us know in the comments.

The case of the non-starting SQL Agent Job

I ran into an interesting situation recently that involved a SQL Agent job that would not/could not run. Looking at the job settings everything seemed fine, yet the SQL Agent deemed the job “un-runnable”.

As we can see here “My Test Job” is enabled, and scheduled, but has no “Next Run” and is deemed not “Runnable”.

Job Activity Monitor

Taking a closer look at the job properties, we can further see that the job is enabled:

Job Properties - General_3

We also see that there really is a schedule for the job:

Job Properties - Schedules

And on further investigation I saw something that I have never really looked at before. The “Targets” properties:

There are no entries here, but more importantly, the radio button “Target local server” is not selected. This turned out to be the cause of the job not running!

If I try and exit the properties with “OK” (which you should never do unless you have intentionally changed a setting somewhere), then we are presented with a clear error message informing us about the missing Target:

Job Save Error

“You must specify the Target Servers on which this multi server job will execute.”

The “Targets” section of a SQL Agent Job is something that I have never delved into in any detail, but is a long standing feature to allow for administration of large server environments. You can read more into Multiserver Environment Administration on MSDN.

The reason for the missing setting was an incomplete scripting process for the affected job. If you script out a SQL Agent Job in SQL Server Management Studio, you will see that one of the last steps in the script is a call to the system stored procedure “msdb.dbo.sp_add_jobserver”:

EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'

This stored procedure associates a job to a jobserver, which in my experience has always been “(local)”. Obviously, this will be different if you are using master servers and target servers in your environment.

As soon as I had set the target server to local for the job, everything was back to normal and the job ran again according to its schedule.
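If you prefer to repair such a job in T-SQL rather than through the properties dialog, the same system procedure can be called directly by job name (the job name below is the one from my example):

```sql
-- Associate the existing job with the local server as its target
EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'My Test Job',
    @server_name = N'(local)';
```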

SQL Server Simple Recovery Model Requires Log Backups

“SQL Server Simple Recovery Model never requires a log backup!!!” I hear you say. Well sit back, relax and let me show you that it does…… sometimes…… although it shouldn’t!

Issue Description

I ran into a confusing situation yesterday which has a published solution, but I had never heard about it. It is one of those things that should stick in your mind considering how much havoc this may cause.

I am talking about what happens if you set up your model database so that future new databases are created the way you want. If you stray from the SQL Server defaults, then this bug might catch you out. The issue I am talking about is the recovery model of the model database. By default SQL Server has the model database set to FULL recovery and with (in my opinion) vastly outdated initial size and auto-growth settings.

My department’s policy is to set the model database to SIMPLE and to set initial size and auto-growth to something more sensible than 1MB and 10% (for argument’s sake 128MB initial size and 128MB growth).

With SQL Server 2012 (RTM and SP1), setting model to SIMPLE looks fine on the outside, but under the covers that is not what happens. A new database will be created using model, the new DB will be set to SIMPLE recovery, and you won’t immediately notice any issues. However, as soon as you start running DML you can see that something isn’t quite right by interrogating sys.databases and looking at the log_reuse_wait_desc column for the database.

You can tag along for the ride by running the following code on a TEST!!! 2012 RTM or SP1 instance


USE master;
GO

IF EXISTS (SELECT * FROM sys.databases AS D WHERE name = 'MyTestDB')
 DROP DATABASE MyTestDB;
GO

-- Create a test database using defaults from model
CREATE DATABASE MyTestDB;
GO

USE MyTestDB;
GO

-- Verify that the database is in SIMPLE recovery model and log reuse is 'NOTHING'
SELECT name,
 recovery_model_desc,
 log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyTestDB';

-- Create a table in the test db and add some data (creating log file entries)
CREATE TABLE dbo.TestTable
 (Col1 int NOT NULL IDENTITY (1,1),
 Col2 char(256) NOT NULL DEFAULT 'x'); -- default so INSERT ... VALUES (default) works

INSERT INTO dbo.TestTable
VALUES (default);
GO 2000

-- Now look at the log reuse information again - all is fine......
SELECT name,
 recovery_model_desc,
 log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyTestDB';

-- Perform a backup of the database, because that is what we DBAs do regularly (here to nul so we don't fill up disk space)
BACKUP DATABASE MyTestDB TO DISK = 'nul';

-- Add yet more test data
INSERT INTO dbo.TestTable
VALUES (default);
GO 2000

-- Now look at the log reuse information again - oh look..... apparently we need a log backup!!!
SELECT name,
 recovery_model_desc,
 log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyTestDB';

As you can see from running that code block, the log reuse description is now ‘LOG_BACKUP’, which should not be possible for a SIMPLE recovery database! In fact, you can run a log backup if you like too; it will run successfully but won’t clear the log out for re-use!! So if you don’t do something about this, you will either fill your log file (if it isn’t set to auto-grow) or fill the disk drive where your log file resides. How fast this happens depends solely on the activity in that particular database. Your monitoring processes for drive space will warn you, but if the growth is fast enough, even these early warning systems may not be enough.

Solutions to the problem

As I mentioned at the beginning of this post, there is a solution to the problem (or rather, there are multiple solutions).

  1. Don’t set model to SIMPLE in the first place; instead, switch each newly created database to SIMPLE afterwards if it doesn’t require FULL recovery.
  2. Continue with model set to SIMPLE, but incorporate a process that “fixes” your databases after creation.
  3. Install at least CU4 for SQL Server 2012 SP1 or CU7 for SQL Server 2012 RTM both of which incorporate the bug-fix for this problem (along with a whole host of other fixes).
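To decide whether option 3 is already taken care of, the build an instance is running can be checked quickly (compare the result against the build numbers listed in the relevant CU knowledge base articles):

```sql
-- Returns e.g. a 11.0.xxxx build for SQL Server 2012,
-- plus the RTM/SPn product level
SELECT SERVERPROPERTY('ProductVersion') AS Build,
       SERVERPROPERTY('ProductLevel')  AS ProductLevel;
```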

The steps required to follow option 2 would be:

  • Set the affected database to FULL recovery model
  • Perform a FULL backup of the database
  • Set the database back to SIMPLE recovery model

Here is the code to achieve that and also prove the title of this blog post:

-- Now look at the log reuse information again - oh look..... apparently we need a log backup!!!
SELECT name,
 recovery_model_desc,
 log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyTestDB';

-- Perform the "required" log backup (again to nul so we don't fill up disk space)
BACKUP LOG MyTestDB TO DISK = 'nul';

-- The log has been cleared...... So we have FULL recovery in disguise
SELECT name,
 recovery_model_desc,
 log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyTestDB';

-- How about we fix this permanently. Change to FULL, run a backup, change back to SIMPLE
ALTER DATABASE MyTestDB SET RECOVERY FULL;
BACKUP DATABASE MyTestDB TO DISK = 'nul';
ALTER DATABASE MyTestDB SET RECOVERY SIMPLE;

-- Now fill our table with more data and check the log reuse column again
INSERT INTO dbo.TestTable
VALUES (default);
GO 2000

SELECT name,
 recovery_model_desc,
 log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyTestDB';

As for option 3: I realise that using RTM is something people just shouldn’t be doing, but the fact that this bug is in SP1 is a nasty one. Also, not everyone will install Cumulative Updates (CUs) because they are not as rigorously tested as Service Packs (SPs). I know of companies that have policies strictly forbidding installing CUs on production machines for this reason alone. This makes it all the more serious that it is a bug that managed to get through to SP1. Obviously, as soon as SP2 for SQL 2012 comes out (but when will that be?) this bug-fix will be available, but until that time, you have to live with CUs or other workarounds.

I have to say again, I am surprised I hadn’t heard about this issue before – it flew under my radar or was just not picked up by the “normal news channels”. Either way, I thought I’d throw this out there to help the google juices for anyone who has similar issues.

And so, that is how the SIMPLE recovery model sometimes does need a log backup……… sort of.

Have fun 🙂

Windows Failover Cluster Patching using PowerShell

PowerShell is the new standard scripting and automation language for Microsoft products. In this post I will be describing how I used two new PowerShell commands to help semi-automate the installation of Windows Updates on a Windows Server 2012 Failover Cluster.

I run a number of clusters on both Windows Server 2008 and Windows Server 2008R2 and I must say that although Windows Clustering was a new thing to me about 5 years ago, my overall experience has been pretty painless during that time. From what I hear/read Server 2008 was the first outing of a much updated Windows Failover Clustering (including PowerShell integration), so I managed to body-swerve the more painful versions before it.

Be that as it may, Windows Server 2008 was always lacking in terms of scripting/automation and didn’t get much better with Windows Server 2008R2. The previous incarnation of the automation tooling was “cluster.exe” which was a command line tool that allowed for quite detailed control of a cluster. This was a tool that has “grown” over time to cover many areas of Windows Clustering, but was not designed to be as flexible or programmable as PowerShell. The PowerShell command-set for Clustering covered just about everything that “cluster.exe” covered and then some more – a good overview can be found at the Windows File Server Team Blog.

As Windows has evolved, the PowerShell offerings for each feature within Windows have also evolved. I recall hearing that with Windows Server 2012 you can control the entire O/S from within PowerShell and that the GUI basically constructs and runs these PowerShell commands in the background (my brain may be playing tricks on me though).

With this in mind, I am preparing a server migration/replacement which will move a cluster from Windows Server 2008 to Windows Server 2012.  I took another look at the PowerShell commands available to me on Server 2012 and was rather pleased to see two improvements in the Failover Clustering command-set. The major command addition that I discovered, and had to write about here, was the “Suspend-ClusterNode” command and its brother “Resume-ClusterNode”.

Suspend-ClusterNode allows you to temporarily “pause” a cluster node. This basically removes any currently running resources/services from the specified node and the cluster will not assign any workload to that node until it has been re-instated as an active cluster participant. Resume-ClusterNode brings a previously suspended node back online within the cluster.

You may ask “Why would you want to do this?” or be thinking “This was possible with Windows Server 2008R2”; well dear reader, let’s take a look at those two points.

“Why would you want to do this?”

The short answer: Server maintenance with minimal downtime.

The slightly longer answer: Imagine you have a 3 node cluster. It is patch day and you want to cleanly patch the nodes, one after the other, with a minimum of downtime. You can afford for one node of the three to be offline for maintenance at any one time. This means that you can suspend node 3 while nodes 1 and 2 remain online. The cluster then knows that node 3 is down, but not due to a failure (so it will not start panicking). You can then patch the node, reboot as necessary (who am I kidding, this is a Windows server, you’ll have to reboot it!) and then the node is ready to re-join the cluster as an active/available member. You are then able to repeat this process on the remaining nodes to install updates across the whole cluster.

“This was possible with Windows Server 2008R2”

The short answer: Yes it was, but required some manual intervention.

The long answer: Windows Server 2008R2 offered the ability to suspend a cluster node, but without the ability to control how the cluster dealt with any resources on the node being suspended. This is where the real magic in the new PowerShell command “Suspend-ClusterNode” comes into play.

Let’s take a quick look at the syntax of both so we can compare:


Old Syntax

Suspend-ClusterNode [[-Name] <string>] [[-Cluster] <string>]

New Syntax

Suspend-ClusterNode [[-Name] <string>] [[-Cluster] <string>] [[-TargetNode] <string>] [-Drain] [-ForceDrain] [-Wait] [-Confirm] [-WhatIf]

As we can see, the new syntax offers quite a few extra parameters over the old; the main ones to note are [TargetNode] and [Drain]. [TargetNode] allows us to specify where any resources should be moved to and [Drain] initiates the move of the resources/services. This allows for much finer control over resources within a cluster during maintenance operations. With the new command it is really easy to perform the 3 node cluster maintenance outlined earlier: we suspend one node after the other, moving any resources they hold to another node of our choosing, and resume each node after its maintenance has completed. If we now take a look at Resume-ClusterNode, we will see another improvement that further eases node maintenance work:


Let’s compare old vs. new:

Old Syntax

Resume-ClusterNode [[-Name] <string>] [-Cluster <string>]

New Syntax

Resume-ClusterNode [[-Name] <string>] [-Cluster <string>] [[-Failback] {Immediate | NoFailback | Policy}]

Again, we can see that there is more to decide upon with the new syntax. When you resume a suspended node, you can decide what happens to the resources that were previously running on that node.

The parameter [Failback] has three options:

  • “Immediate” is pretty obvious and will immediately move back the resources that were running on the node before suspension.
  • “NoFailback” will resume the node but leave all resources where they currently are – this is a good idea if you have already failed over to an updated node and don’t want another service outage in this maintenance period.
  • Finally, “Policy” would postpone any failback until a pre-defined failover timeframe is reached.

Once we see how Suspend-ClusterNode and Resume-ClusterNode have been extended, we can understand how the extensions open up better scripting control of clusters and their resources. I have prepared a script that can be run in our maintenance window that will suspend a node and push all resources to another node. The suspended node can then receive all Windows Updates and be rebooted if necessary, and finally a script can be run to bring the node back online. The same set of scripts is then run against the second node, reversing the flow back to the node we have just patched. Using Suspend-ClusterNode and Resume-ClusterNode only reduces the overall PowerShell code by a few lines, but the complexity is reduced drastically. This makes code maintenance easier and the code is just easier on the eye.
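The core of that maintenance flow can be sketched as follows (the cluster and node names are illustrative, and the actual patching step is left as a placeholder):

```powershell
# Drain Node3, moving its resources to Node1, and wait for the move to finish
Suspend-ClusterNode -Name "Node3" -Cluster "MyCluster" -TargetNode "Node1" -Drain -Wait

# ... install Windows Updates on Node3 and reboot it here ...

# Bring Node3 back into the cluster without immediately pulling resources back
Resume-ClusterNode -Name "Node3" -Cluster "MyCluster" -Failback NoFailback
```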

After seeing the improvement to my local code, I am certain that this extension to the PowerShell command-set was internally driven at Microsoft. Imagine how many clusters they have running that needed this extension to improve their automation. As far as I can see, this is an obvious knock-on effect of their drive to “The Cloud” and a very positive one at that!

As always, think before running any scripts and please don’t blindly setup scripts to suspend and resume nodes in a cluster. This is an aid to your daily work and not a robot you should build to run any/all updates onto a cluster 🙂

Have fun!

Reducing data footprints on test systems

Picture the scene:

You are a DBA. You have to make everyone happy, including DEVs (yes, including them too). Part of that is to ensure that your DEVs have a fresh copy of your production data so that they can test their programming with current data and structures. The issue with that is your production database is multiple hundreds of GB in size and your test system doesn’t have the capacity to store a full copy of your database. Your IT Budget will also not stretch to purchasing more storage! What can you do?

I had a similar situation recently, took a look at the possibilities, and here is what I came up with:

1. Try out tools like SQL Virtual Restore to allow you to “mount” a backup from a remote server, avoiding the space issue altogether. This is, of course, a solution that requires a license (no budget maybe?)

2. Use the great features offered to you by SQL Server Developer Edition: data compression to the rescue!

So of course, I chose door number 2.

As you may know, Developer Edition offers the same features as Enterprise Edition with the caveat that the instance cannot be used for production purposes in any way. This means that the seriously powerful and useful data compression becomes available for your test servers! This counts even if you only use Standard Edition in production – joy of joys! The good thing being that Developer Edition only costs around $50 (or less if you have an MSDN subscription) – even more joys!

If you have never seen/used compression take a quick look over on MSDN to see what it all is (Data Compression). Basically, you can compress data in indexes and tables at the storage level avoiding any query rewrites and still profiting from the storage savings. This can be a major advantage in terms of raw storage needs, but can also benefit you in high read environments with low RAM sizes. The data remains compressed when held in the buffer pool and is only de-compressed when being accessed. This means that you can keep more data in the buffer pool and reduce hard disk I/O (obviously not for free, compressing and de-compressing costs CPU cycles).  This may be acceptable on a test system with extremely constrained storage space.
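As a minimal sketch of what this looks like in practice (the table and index names here are made up for illustration), you can estimate the savings first and then compress with a single rebuild:

```sql
-- Estimate how well dbo.Orders would compress with PAGE compression
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Orders',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Compress the table itself (the clustered index / heap)
ALTER TABLE dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Compress a nonclustered index
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

No query has to change; the compression happens purely at the storage level.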

The usage in my example scenario is now an even better proposition, not only can I reduce my storage footprint, I can also potentially increase test system performance on the I/O side of the equation (who doesn’t have I/O issues, especially on a test box!).

The next hurdle is, of course, identifying which indexes and tables in the database you want to squeeze down. This is possible via SSMS’ Object Explorer, but only if you want to spend an eternity doing so! The best way is to take a look at the metadata tables/views/functions to interrogate the system objects. These are really interesting in their own right (at least to me), but after writing my Index Script Creator, I realised the potential for using the information in these system objects to automate tasks like this one.

Similarly to the Index Script Creator, my Index Compressor Script (uploaded as a doc, but it is really a .sql file – stupid WordPress restrictions!) investigates all tables and indexes (down to the partition level) and then generates an index rebuild command to compress any currently non-compressed partitions of indexes and tables. The resulting commands can then be run and space can be reclaimed – just be aware that rebuilding indexes does cause log activity.
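The core idea behind the script can be sketched with a query against the system catalog views – this simplified version (not the full Index Compressor Script) generates a rebuild command for every uncompressed index partition:

```sql
-- Generate REBUILD commands for all currently uncompressed index partitions
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
     + ' REBUILD PARTITION = ' + CAST(p.partition_number AS varchar(10))
     + ' WITH (DATA_COMPRESSION = PAGE);'
FROM sys.partitions AS p
JOIN sys.indexes    AS i ON i.object_id = p.object_id
                        AND i.index_id  = p.index_id
JOIN sys.objects    AS o ON o.object_id = p.object_id
JOIN sys.schemas    AS s ON s.schema_id = o.schema_id
WHERE p.data_compression = 0      -- 0 = NONE
  AND o.is_ms_shipped = 0         -- skip system objects
  AND i.type IN (1, 2);           -- clustered and nonclustered indexes
```

Run the query, review the generated commands, and execute them in a maintenance window – remembering the log activity mentioned above.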

After compressing your tables and indexes you should have a much smaller amount of data (depending on how well your data compresses) and be able to reclaim some disk space.

I realise that this method will mean that you have to have a large enough disk to have the database in its uncompressed state to begin with, but you can end up with a cost effective solution to a budget-constrained test environment.

Happy zipping!

When COUNT() isn’t the only way to count

I have come across a situation a number of times in the past that seems to be one of those things that is blindingly obvious once you have seen the solution, but impossible to see before the penny drops.

Imagine the following scenario:

You want to find the total number of orders that have the Order Status ‘A’ and the number of orders with an Order Status of ‘B’. This sounds like a simple enough request, and one I’m sure you have heard before.

Let’s start off with some test data.

--Test Structure
USE master
GO
IF DB_ID('Sandbox') IS NULL
    CREATE DATABASE Sandbox
GO
USE Sandbox
GO
IF OBJECT_ID('dbo.CountExample') IS NOT NULL
    DROP TABLE dbo.CountExample
IF OBJECT_ID('dbo.Nums') IS NOT NULL
    DROP FUNCTION dbo.Nums
GO
-- Test Function to allow fast test data creation
CREATE FUNCTION [dbo].[Nums] (@m AS bigint)
RETURNS TABLE
AS
RETURN
WITH t0 AS (SELECT n = 1
            UNION ALL
            SELECT n = 1),
     t1 AS (SELECT n = 1
            FROM t0 AS a,
                 t0 AS b),
     t2 AS (SELECT n = 1
            FROM t1 AS a,
                 t1 AS b),
     t3 AS (SELECT n = 1
            FROM t2 AS a,
                 t2 AS b),
     t4 AS (SELECT n = 1
            FROM t3 AS a,
                 t3 AS b),
     t5 AS (SELECT n = 1
            FROM t4 AS a,
                 t4 AS b),
     results AS (SELECT n = ROW_NUMBER() OVER (ORDER BY n)
                 FROM t5)
SELECT n
FROM results
WHERE n <= @m
GO
CREATE TABLE dbo.CountExample
(OrderId int NOT NULL,
 OrderStatus char(1) NOT NULL)
GO

--Test data
INSERT INTO dbo.CountExample
SELECT n,
       CHAR(n % 27 + 64)
FROM dbo.Nums(1000) AS N
GO

Now that we have some test data and tables, we can take a look at what solutions are possible.

Solution 1:

The solution that I have seen from a lot of people is basically to run two queries, one for each Order Status, and then collect the results together.

Something along the lines of:

SELECT (SELECT COUNT(*)
        FROM dbo.CountExample AS CE
        WHERE OrderStatus = 'A') CountA,
       (SELECT COUNT(*)
        FROM dbo.CountExample AS CE
        WHERE OrderStatus = 'B') CountB

This delivers the correct result, but causes two separate queries to be run (one for each Order Status). There are variations of this solution, using sub-queries or CTEs, but I hope you get the idea that a separate COUNT() is required for each total that you want to calculate.

Solution 2:

The best way, that I know of, to achieve this would be to change the logic from a COUNT() to a SUM(). This sounds wrong at first, especially because the column Order Status is a char(1) and not an integer!

Take a look at how I have solved the problem with SUM():

SELECT SUM(CASE WHEN OrderStatus = 'A' THEN 1 ELSE 0 END) CountA,
       SUM(CASE WHEN OrderStatus = 'B' THEN 1 ELSE 0 END) CountB
FROM dbo.CountExample AS CE

Looking at the code, we can see that I have not just used SUM(), but also a CASE statement. CASE is one of my favourite constructs in T-SQL, as it allows you to perform logical processing of an entire set or only part of a set without filtering using a WHERE clause.

If you take a look at the execution plan, you will also see that the table is accessed once. This is an instant improvement over the “standard” solution of COUNT()-ing per Order Status and has the added bonus of never being noticeably more expensive, regardless of how many different Order Status totals are required.

So there you go.  COUNT() isn’t always the best way to count data in SQL Server.
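As an aside, the same single-pass trick also works with COUNT() itself: COUNT() ignores NULLs, and a CASE expression without an ELSE returns NULL for non-matching rows, so this variant produces the same totals from one table access:

```sql
SELECT COUNT(CASE WHEN OrderStatus = 'A' THEN 1 END) CountA,
       COUNT(CASE WHEN OrderStatus = 'B' THEN 1 END) CountB
FROM dbo.CountExample AS CE
```

Whether you prefer SUM() or COUNT() here is a matter of taste; the important part is pushing the conditional logic into the aggregate instead of the WHERE clause.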

Be prepared!

While I was on my recent trans-global sojourn I came to the conclusion that the boy scouts really had a valid point with their motto “Be Prepared”. Of course, I am not meaning it quite in the semi-military sense that Robert Baden-Powell meant, but being prepared before the large system roll-out really saved a lot of time, nerves and money.

The roll-out was a set of reporting servers in a transactional replication setup, pushing operational data from a central server to the outlying branch offices around the world. I spent a great deal of time preparing these systems for the roll-out; standardising the installations, scripting out each step required to set up users, roles, permissions, DB objects, linked servers, jobs etc. This long preparation period was very tedious and I would often lose the motivation to keep at it.

The final payoff for this long drawn-out process has been the actual roll-out at each location. I have been able to arrive at each office, totally jet-lagged and tired from the entire travel regime and basically sit and watch the scripts run through and see the systems come to life. Had I not been prepared, I would have been in a world of pain, trying to remember what needed doing and when whilst fighting off sleep and headaches.

As a side note: A former colleague/mentor told me to save every script I ever run. If you don’t, you are guaranteed to need that script that took you an hour to write the very day after you deleted it. This task has been made much easier thanks to the SSMS Tools Pack provided by the more than awesome Mladen Prajdić. His SSMS extension has saved me hours of time when I have written code and accidentally thrown it away, or when SSMS has crashed. Use his tool and donate to the cause!

So, before you start any project, always keep the Scouts’ motto in mind – “Be prepared”. Hard disk space is so cheap as to be practically free; how cheap is your time?