Tag Archives: code

Presenting: Presentation Mode!

As a presenter at events, I am constantly trying to improve the experience of showing information in slides and transitioning back and forth between slides and demos.

ZoomIt: An OK solution for a bad problem?

The most jarring aspect of this is making sure that demo code is visible to the audience. The fantastic ZoomIt allows a presenter to (surprise, surprise) zoom into portions of the screen and highlight/annotate code or information to the audience:


First of all, the act of zooming can be disorienting to the audience. There is a flurry of zoom and scrolling activity to get to where you want to on the screen. After this, the actual presentation of the zoomed content usually works nicely enough. However, the zoom out must occur before moving back into the PowerPoint slide deck to continue with the next portion of the presentation.

This has been the only way to give a consistent and clear overview to an audience, particularly when SSMS was being used for demos. The issue revolves around the fact that although the T-SQL code editor window can resize fonts, the remainder of the SSMS interface is set in a single font type and size.

Many of you may have noticed that Microsoft made a key change in their deployment strategy for SSMS when SQL Server 2016 was released: SSMS was decoupled from the core engine components and now follows a much shorter release cycle. Since SQL Server 2016 was released to market, there have been at least six versions of SSMS released. This is fantastic: we no longer have to wait for the next release of SQL Server (whether a full version or a service pack) for SSMS to get bug fixes or feature additions.

This is extremely important when we look at the issue around font sizes and types. Microsoft has paid attention, and with the current Release Candidate (RC) of SSMS 17 they included a very important release note entry…

Presentation Mode!

If we read the release notes, we see that there are three new tasks available via Quick Launch inside SSMS.

  • PresentEdit – This allows the fonts for presentation mode to be set
  • PresentOn – This turns on presentation mode
  • RestoreDefaultFonts – Reverts SSMS back to the default fonts after activating presentation mode

All three tasks are pretty easy to understand, although the final task highlights that a task to specifically turn off the presentation mode is currently missing (this is an RC after all).

The “Quick Launch” field can be found in the top right corner of SSMS 17.0 RC3 and begins searching as soon as we start to type in it:


By choosing “PresentEdit”, an XML file is opened in a new tab in SSMS, showing the options that we can change to make SSMS look different when presentation mode is activated.


We are presented with the option to choose the font family and font size for both the text editor and, more importantly, for the environment in general (menus, Object Explorer etc.). This is where we can play around and find the fonts that work best in our presentations.

Using the values in my screenshot and launching PresentOn made a huge difference in font readability inside SSMS. The image below shows SSMS on the left in “standard” mode and in presentation mode on the right.


The difference is quite clear: all environment fonts are much larger and easier to read in presentation mode. This is great for demoing SSMS during a presentation!

However, the biggest improvement is when we are querying data. In previous versions of SSMS the grid results were tiny when projected onto a wall. The only ways to see the results were either to return the results as text (which has the downside of running off the right side of the screen for larger result sets), or to use ZoomIt and give people motion sickness.

Now, with presentation mode on, the results grid is included in the font resizing:


Praise be to the spaghetti monster! No more motion sickness required, and attendees can concentrate their contempt on all the bullet points in the slide deck instead.

So if you are a presenter, or just want more control over the fonts in SSMS, your wait is almost over… or is over now, if you are brave enough to install the RC of SSMS 17 🙂

Happy font-changing

When COUNT() isn’t the only way to count

I have come across a situation a number of times in the past that seems to be one of those things that is so obvious when you see the solution, but impossible to see before the penny has dropped.

Imagine the following scenario:

You want to find the total number of orders with an Order Status of ‘A’ and the number of orders with an Order Status of ‘B’. This sounds like a simple enough request, and one I’m sure you have heard before.

Let’s start off with some test data.

--Test Structure
USE master;
GO
IF DB_ID('Sandbox') IS NULL
    CREATE DATABASE Sandbox;
GO
USE Sandbox;
GO
IF OBJECT_ID('dbo.CountExample') IS NOT NULL
    DROP TABLE dbo.CountExample;
IF OBJECT_ID('dbo.Nums') IS NOT NULL
    DROP FUNCTION dbo.Nums;
GO
-- Test Function to allow fast test data creation
CREATE FUNCTION [dbo].[Nums] (@m AS bigint)
RETURNS TABLE
AS
RETURN
WITH t0 AS (SELECT n = 1 UNION ALL
            SELECT n = 1),
     t1 AS (SELECT n = 1
            FROM t0 AS a,
                 t0 AS b),
     t2 AS (SELECT n = 1
            FROM t1 AS a,
                 t1 AS b),
     t3 AS (SELECT n = 1
            FROM t2 AS a,
                 t2 AS b),
     t4 AS (SELECT n = 1
            FROM t3 AS a,
                 t3 AS b),
     t5 AS (SELECT n = 1
            FROM t4 AS a,
                 t4 AS b),
     results AS (SELECT n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                 FROM t5)
SELECT n
FROM results
WHERE n <= @m;
GO

CREATE TABLE dbo.CountExample
(OrderId int NOT NULL,
OrderStatus char(1) NOT NULL);
GO

--Test data
INSERT INTO dbo.CountExample (OrderId, OrderStatus)
SELECT n,
       CHAR(n % 27 + 64)
FROM dbo.Nums (1000) AS N;
GO

Now that we have some test data and tables, we can take a look at what solutions are possible.

Solution 1:

The solution that I have seen from a lot of people has been to basically run two queries, one for each Order Status, and then collect these together to return the result.

Something along the lines of:

SELECT CountA,
       CountB
FROM  (SELECT COUNT(*) CountA
        FROM dbo.CountExample AS CE
        WHERE OrderStatus = 'A') CountA,
       (SELECT COUNT(*) CountB
        FROM dbo.CountExample AS CE
        WHERE OrderStatus = 'B') CountB

This delivers the correct result, but causes two separate queries to be run (one for each Order Status). There are variations of this solution, using sub-queries or CTEs, but I hope you get the idea that a separate COUNT() is required for each total that you want to calculate.

Solution 2:

The best way that I know of to achieve this is to change the logic from a COUNT() to a SUM(). This sounds wrong at first, especially because the column Order Status is a char(1) and not an integer!

Take a look at how I have solved the problem with SUM():

SELECT SUM(CASE WHEN OrderStatus = 'A' THEN 1 ELSE 0 END) CountA,
       SUM(CASE WHEN OrderStatus = 'B' THEN 1 ELSE 0 END) CountB
FROM dbo.CountExample AS CE

Looking at the code, we can see that I have not just used SUM(), but also a CASE expression. CASE is one of my favourite constructs in T-SQL, as it allows you to perform logical processing of an entire set, or only part of a set, without filtering via a WHERE clause.

If you take a look at the execution plan, you will also see that the table is accessed once. This is an instant improvement over the “standard” solution of COUNT()-ing per Order Status and has the added bonus of never being noticeably more expensive, regardless of how many different Order Status totals are required.

So there you go.  COUNT() isn’t always the best way to count data in SQL Server.

SSMS ToolsPack – Powershell Turboboost

I have been using the rather brilliant SSMSToolsPack from Mladen Prajdić recently and love the query execution history feature.

I like to keep my history around for a while – I do a lot of work that then has to be repeated later – and this tool lets me not worry about forgetting to save that important query I ran for someone weeks ago.

I ran into a little problem though.  All those query executions have to be stored somewhere; this is done by setting a folder to store the query text.

SSMSToolsPack stores the query text in .txt files, which can get out of control if you run enough queries.  They are stored in a folder per day; so today’s queries would be stored in the folder “2010-12-03”.

If you want to search through the history and have a great number of files and folders, the search can get very slow (I suspect this has to do with the directory and file traversal).  I am lucky to have a small SSD in my main machine, where I store source code and the execution history files/folders.  This speeds things up, but even then the search is sluggish (it takes about 30 seconds to index on my machine).

I took a quick look at these files and saw that they were basically all the same content-wise.  I tried combining the files to see if that would improve the performance of searching and, lo and behold, search was blazingly fast!

Being lazy, I whipped up a PowerShell script to make this easier/semi-automatic, and here it is:

Clear-Host
$path = "" #Set path here!
foreach ($folder in Get-ChildItem $path)
{
  if ($folder.PSIsContainer)
  {
    $target = Join-Path $folder.FullName "output.txt" #set output file
    foreach ($file in Get-ChildItem $folder.FullName)
    {
      if ($file.Name -ne "output.txt")
      { #concatenate content of all files except output
        Get-Content $file.FullName >> $target
        #Remove-Item $file.FullName #delete the file after processing
      }
    }
  }
}
It is nothing special, but if you use SSMSToolsPack and have experienced a similar slowdown, maybe it can help.

Presenting the Index Creator Script

Born of a need to script out indexes in a way similar to how SSMS creates them, I originally created a script to do just that. The original was very quick-n-dirty, as can be seen here: http://ask.sqlservercentral.com/questions/16646/create-script-for-indexes
I had not accounted for very much other than the indexes as they were. All options, schema information etc. were ignored, as we have nothing special at work and I really needed those indexes quickly.

Since posting on ASK, I have tinkered on and off with the script. I am now at a point where I think other people could really profit from it, now that it is no longer stuck to some hard-coded schemas etc.

Presenting the Index Creator Script v1.0! This script will go through the current database, finding all indexes (optionally system indexes too) and supply you with create index scripts.

It is clever enough to spot the difference between, and the usage of:

– Primary Keys
– Unique Constraints
– Clustered and Non-Clustered Indexes
– Filtered Indexes (produces the filter too)
– Included Columns
– Partitioned Tables/Indexes (although the partition schemes and functions are not produced – yet!)
– Data Compression (on a partition level if used – yes this is possible!)
– Fill Factor
– Index Padding
– Locking (Row and Page)

I have supplied two versions of the code: one for SQL 2005 and one for SQL 2008 and above. This is because SQL 2008 offers Data Compression, which is implemented in the indexes and partitions. Some of the script relies on this information and would not be backwards compatible.

It has been very interesting coding this script, as it has enlightened me about the index structures in SQL Server. For example, as of SQL 2005, regardless of edition, SQL Server creates indexes using partitions. Although partitioning cannot be used on editions lower than Enterprise/Developer Edition, all indexes are created with at least one partition. This makes sense: otherwise there would have to be separate storage structures depending upon edition. This way, the storage engine works the same regardless of edition; you just don’t get the option of creating multiple partitions on an edition lower than Enterprise/Developer. As soon as you migrate a database to Enterprise Edition, you get the possibility of splitting the indexes onto multiple partitions.
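You can see this for yourself in the catalog views. Here is a quick sketch showing that every index carries at least one row in sys.partitions, whatever the edition (the is_ms_shipped filter just hides system objects):

```sql
-- Every index, even on non-Enterprise editions, has at least one
-- partition, i.e. at least one row in sys.partitions.
SELECT o.name AS table_name,
       i.name AS index_name,
       p.partition_number,
       p.rows
FROM sys.partitions AS p
JOIN sys.indexes AS i
    ON i.object_id = p.object_id
   AND i.index_id  = p.index_id
JOIN sys.objects AS o
    ON o.object_id = p.object_id
WHERE o.is_ms_shipped = 0
ORDER BY o.name, i.index_id, p.partition_number;
```

On a non-partitioned table every index simply shows partition_number 1.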

Even better than that, I found out that indexes can be compressed per partition. I sort of knew this already, but writing the script made it even clearer: each partition of an index can use a different level of compression. This can be very interesting, especially if the costs of compression are high but the benefits in storage are high too. Think of a CPU-bound system where some partitions are accessed often and need a lower compression level to reduce CPU load, while other partitions are accessed rarely and can benefit from the higher compression ratio.
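As a sketch of what that looks like in practice on SQL 2008 and above (the table name, index name and partition numbers here are purely illustrative):

```sql
-- Rebuild a frequently accessed partition with lighter ROW compression
-- to keep the CPU cost of decompression down...
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = ROW);

-- ...and a rarely touched archive partition with PAGE compression
-- to maximise the storage savings.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 2
WITH (DATA_COMPRESSION = PAGE);
```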

I hope these scripts are of some benefit. If you have comments/questions/suggestions, please get in touch.

Index Creator Script – This is a zip file. Download it, change the file extension and open it in your favourite ZIP manager (damn you wordpress!). There are two .sql files in there (one for 2005 and one for 2008). Disclaimer – use at your own risk; I am not responsible if it breaks your PC/Server.

In reality, this script can’t break things, but you have been warned!

UPDATE: Thanks to @Fatherjack for the quick heads-up on a syntax error. Things should look good now though! 🙂

Scripting DB Objects using Powershell

I recently had to script out all the objects in our database at work so we could put it into TFS.  I am sure there are much better ways of doing this, but I wanted to learn Powershell and had heard about SMO via simple-talk.

This is what I came up with (sorry about the code formatting, I haven’t found out how to improve that on wordpress yet):

# define parameters
param (
    $server = "",
    $instance = "default",
    $database = "",
    $schema = "dbo",
    $basefolder = "C:\temp\Powershell\$database\SchemaObjects\",
    $objectoption = $(read-host "All Objects = 0, Tables = 1, Table Triggers = 2, Views = 3, Functions = 4, Stored Procedures = 5")
)

# trap errors
$errors = "C:\temp\Powershell\errors.txt"

trap {
    "______________________" | out-file $errors -append;
    "ERROR SCRIPTING TABLES" | out-file $errors -append;
    get-date | out-file $errors -append;
    "ERROR: " + $_ | out-file $errors -append;
    "`$server = $server" | out-file $errors -append;
    "`$instance = $instance" | out-file $errors -append;
    "`$database = $database" | out-file $errors -append;
    #throw "ERROR: See $errors"
    continue;
}

# load .NET assembly
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null

#Checks to see if the supplied Directory is there and creates it if not.
function MakeDirectory ($DirName)
{
    if (!(Test-Path -path $DirName))
    { New-Item $DirName -type directory | Out-Null }
}

MakeDirectory ($basefolder)

# Create Server Object using SMO
$srv = new-object Microsoft.SqlServer.Management.Smo.Server($server)

# Create Database Object
$db = $srv.databases[$database]

# SMO Scripter creation and options set
$scr = New-Object "Microsoft.SqlServer.Management.Smo.Scripter"
$scr.Server = $srv
$options = New-Object "Microsoft.SqlServer.Management.SMO.ScriptingOptions"
$options.AllowSystemObjects = $false
$options.IncludeDatabaseContext = $false
$options.IncludeIfNotExists = $false
$options.ClusteredIndexes = $true
$options.NoCollation = $true
$options.Default = $true
$options.DriAll = $true
$options.Indexes = $true
$options.NonClusteredIndexes = $true
$options.IncludeHeaders = $false
$options.ToFileOnly = $true
$options.Permissions = $true
$options.ScriptDataCompression = $true
$options.ScriptDrops = $false
$options.AppendToFile = $false

#Set options for SMO.Scripter
$scr.Options = $options

if ($objectoption -eq 0 -or $objectoption -eq 1)
{
    # script each table
    foreach ($table in $db.Tables | where {$_.IsSystemObject -eq $false})
    {
        $tablefolder = $basefolder + "Tables\";
        MakeDirectory ($tablefolder); #Check for folder, and create if needed
        $tablefile = $tablefolder + $table.Name + ".table.sql";
        $options.FileName = $tablefile;
        $scr.Options = $options;
        $scr.Script($table);
    }
}

if ($objectoption -eq 0 -or $objectoption -eq 2)
{
    # Script table triggers (go into tables then triggers)
    foreach ($table in $db.Tables)
    {
        foreach ($trigger in $table.Triggers | where {$_.IsSystemObject -eq $false})
        {
            $triggerfolder = $basefolder + "Tables\Triggers\";
            MakeDirectory ($triggerfolder); #Check for folder, and create if needed
            $triggerfile = $triggerfolder + $trigger.Name + ".trigger.sql";
            $options.FileName = $triggerfile;
            $scr.Options = $options;
            $scr.Script($trigger);
        }
    }
}

if ($objectoption -eq 0 -or $objectoption -eq 3)
{
    # script each view
    foreach ($view in $db.Views | where {$_.IsSystemObject -eq $false})
    {
        $viewfolder = $basefolder + "Views\";
        MakeDirectory ($viewfolder); #Check for folder, and create if needed
        $viewfile = $viewfolder + $view.Name + ".view.sql";
        $options.FileName = $viewfile;
        $scr.Options = $options;
        $scr.Script($view);
    }
}

if ($objectoption -eq 0 -or $objectoption -eq 4)
{
    # script each function
    foreach ($function in $db.UserDefinedFunctions | where {$_.IsSystemObject -eq $false})
    {
        $functionfolder = $basefolder + "Programmability\Functions\";
        MakeDirectory ($functionfolder); #Check for folder, and create if needed
        $functionfile = $functionfolder + $function.Name + ".function.sql";
        $options.FileName = $functionfile;
        $scr.Options = $options;
        $scr.Script($function);
    }
}

if ($objectoption -eq 0 -or $objectoption -eq 5)
{
    # script each stored procedure
    foreach ($procedure in $db.StoredProcedures | where {$_.IsSystemObject -eq $false})
    {
        $procedurefolder = $basefolder + "Programmability\Stored Procedures\";
        MakeDirectory ($procedurefolder); #Check for folder, and create if needed
        $procedurefile = $procedurefolder + $procedure.Name + ".proc.sql";
        $options.FileName = $procedurefile;
        $scr.Options = $options;
        $scr.Script($procedure);
    }
}

Controlling access to database objects

I have been playing around with database security recently and a question over at ASK regarding controlling access to database objects has prompted this post.

It is regarded as a best practice to lock down your SQL Server instances against unwanted access.  As SQL Server offers so many features, there are many facets of the system that need to be adjusted to reach the goal of a watertight system.

Removing user access to your tables is one of a number of ways of guarding your data.  If users cannot access the tables directly, they face an extra roadblock in accessing and breaking your data.  Once this access is revoked, you can go about granting access to these tables through other objects: views, functions and stored procedures that provide a layer of abstraction from the data and allow tighter control over user access.

This would also allow you to build up a data access layer (DAL) that would move the database design towards the more traditional programming techniques applied in object-oriented programming (OOP).  A DAL offers you the benefit of making structural design changes that are transparent to anyone or anything accessing the DAL, similar to interfacing between objects in OOP.

There are plenty of resources on this topic but this may give someone, somewhere a start off in the right direction.  Below is a test script that will create a test user, a table and a view that accesses the table.  The test user has access rights revoked to the table itself, but is allowed to access a subset of the table columns through the test view.

/* Create a test user without a login for this example only */

CREATE USER TestUser WITHOUT LOGIN ;
GO

/* Create a test table */

CREATE TABLE dbo.TestTable
(Col1 int NOT NULL
, Col2 int NOT NULL
, Col3 int NOT NULL) ;

/* Deny select rights to TestUser */

DENY SELECT ON dbo.TestTable TO TestUser ;
GO

/* Create a view that selects the first two columns of the test table */

CREATE VIEW dbo.TestView
AS
SELECT Col1, Col2 FROM dbo.TestTable ;
GO

/* Grant select rights to TestUser for the TestView */

GRANT SELECT ON dbo.TestView TO TestUser ;

/* Impersonate TestUser to inherit his access rights */

EXECUTE AS USER = 'TestUser' ;

/* Try selecting from the base table – fails */

SELECT * FROM dbo.TestTable ;

/* Try selecting from the TestView – success */

SELECT * FROM dbo.TestView ;

/* Revert back to your access rights */

REVERT ;

/* Tidy up */

DROP VIEW dbo.TestView ;

DROP TABLE dbo.TestTable ;

DROP USER TestUser ;


Let technology replace your ageing brain

After seeing a question on ASK (SQLServerCentral’s version of Stack Overflow) asking for help with code to extract index metadata, I took a look into my little box of coding tricks.  I had put something together a while back to rename indexes to fit a certain naming scheme, which almost fit the job.  I promptly posted my answer and the OP was surprised at the speed of the reply.

This proved to me again that keeping all the scripts you ever create is really important.  If you have written it, save it somewhere permanent.  Ideally you will keep these scripts on a network share or on the web, so that you can access them any time, anywhere.  I have learned the hard way that the little innocent script you wrote and threw away is going to be needed again.  This normally happens about a month or so later, and tools like SSMS Toolpack with its excellent Query Execution History can help, but not as well as a script collection.

I know that I will be updating my script collection and will post the scripts and a little note here as and when I get the time.

So remember, save your scripts and be prepared!