Dynamic SQL Queries In SQL Server

SQL Server allows you to execute SQL code stored in a variable. The simplest way is to use the EXEC command.

The second way is to use the sp_executesql stored procedure, which also lets you run code with parameters.

1. Simple query

Don’t forget to replace every single quote (‘) inside the string with two single quotes.
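
For example (the dbo.Clients table and its columns are only illustrative):

-- Build the statement in a variable; single quotes inside the literal are doubled.
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT ClientId, ClientName
             FROM dbo.Clients
             WHERE ClientName = ''O''''Brien''';

-- Either of these runs the dynamic statement:
EXEC (@sql);
EXEC sp_executesql @sql;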

2. Query with a parameter
You need to pass the list of parameters used inside the dynamic SQL query and the actual values for those parameters.

Note that the name of the parameter inside the query and the name of the variable holding the actual value can be different.
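
A sketch with the same hypothetical dbo.Clients table; @name is the parameter inside the query, @clientName is the variable with the actual value:

DECLARE @sql nvarchar(max) = N'SELECT ClientId, ClientName
                               FROM dbo.Clients
                               WHERE ClientName = @name';
DECLARE @clientName nvarchar(100) = N'John Smith';

EXEC sp_executesql
    @sql,
    N'@name nvarchar(100)',   -- list of parameters used inside the dynamic query
    @name = @clientName;      -- actual value; the variable name differs from @name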

3. Query with output parameter
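
For example, returning a count through an OUTPUT parameter (again with the illustrative dbo.Clients table):

DECLARE @sql nvarchar(max) = N'SELECT @cnt = COUNT(*) FROM dbo.Clients';
DECLARE @clientCount int;

EXEC sp_executesql
    @sql,
    N'@cnt int OUTPUT',        -- declare the parameter as OUTPUT in the parameter list
    @cnt = @clientCount OUTPUT;

SELECT @clientCount AS ClientCount;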

4. Query with table-valued parameter
The rules are the same as elsewhere in SQL Server: we need to create a table-valued type and pass it as a READONLY parameter.
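
A sketch, assuming an illustrative dbo.IntList type and the same dbo.Clients table:

-- The table type must exist before it can be used as a parameter.
CREATE TYPE dbo.IntList AS TABLE (Id int NOT NULL);
GO

DECLARE @sql nvarchar(max) = N'SELECT c.ClientId, c.ClientName
                               FROM dbo.Clients AS c
                               JOIN @ids AS i ON i.Id = c.ClientId';
DECLARE @clientIds dbo.IntList;
INSERT INTO @clientIds (Id) VALUES (1), (2), (3);

EXEC sp_executesql
    @sql,
    N'@ids dbo.IntList READONLY',  -- table-valued parameters must be READONLY
    @ids = @clientIds;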

5. Temporary tables with dynamic SQL
SQL Server only allows dynamic SQL to use temp tables that were created before the dynamic code runs. If you create a temp table inside a dynamic query, it will be inaccessible after that query finishes.
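
For example (illustrative table names):

-- Works: the temp table is created before the dynamic code runs.
CREATE TABLE #clients (ClientId int, ClientName nvarchar(100));
EXEC sp_executesql N'INSERT INTO #clients VALUES (1, N''John Smith''); SELECT * FROM #clients;';
SELECT * FROM #clients;   -- still accessible here

-- Does not work: a temp table created inside the dynamic batch disappears with it.
EXEC sp_executesql N'CREATE TABLE #temp (Id int); INSERT INTO #temp VALUES (1);';
SELECT * FROM #temp;      -- fails: Invalid object name '#temp'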

Important! You can pass more parameters to the sp_executesql procedure than are actually used in the dynamic query. This can be very helpful when you need to dynamically add joins or WHERE clauses to a query: you pass all the parameters but consume only some of them.

You can also download the full sample code Dynamic-SQL-queries.zip to play with it.

Calculating Size Of A Database Table

If you want to get statistics on the number of database pages used by a specific table in SQL Server, you can query the DMV sys.dm_db_partition_stats.

The following query shows the number of pages per table, broken down by heap/clustered/nonclustered index.
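
(A sketch of such a query; the exact columns in the original sample may differ.)

-- Pages and rows per table and per heap/clustered/nonclustered index.
SELECT
    OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
    OBJECT_NAME(ps.object_id)        AS table_name,
    i.name                           AS index_name,
    i.type_desc                      AS index_type,      -- HEAP / CLUSTERED / NONCLUSTERED
    SUM(ps.used_page_count)          AS used_pages,
    SUM(ps.reserved_page_count)      AS reserved_pages,
    SUM(ps.row_count)                AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id, i.name, i.type_desc
ORDER BY used_pages DESC;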

OBJECTPROPERTY Function In SQL Server

SQL Server has an excellent metadata function, OBJECTPROPERTY(), that lets you view many properties of database objects: for example, whether a table has a trigger, a clustered key, a check constraint, or a default. The full list is available in the MSDN article OBJECTPROPERTY (Transact-SQL).

For example, if you want to list all tables that have no primary key, you can do it with a simple query:
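
(A sketch using the TableHasPrimaryKey property.)

-- List all user tables that have no primary key.
SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name                   AS table_name
FROM sys.tables AS t
WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
ORDER BY schema_name, table_name;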

SQL Server Service Accounts

When you install SQL Server, you need to choose accounts for the SQL Server services. Here are some useful links that can help you make the right decision:

As an overall installation manual, you can use Jonathan Kehayias’ guide described in his post SQL Server Installation Checklist.

Pagination And Total Number Of Rows From One SELECT

If you have a business requirement to implement pagination in a user interface, you can use OFFSET/FETCH or ROW_NUMBER() to retrieve rows from SQL Server. But when you also need to show the total number of rows available in the database, it gets harder.

You could use two queries: one to count the total number of rows and a second to extract the page. But this is not a good idea. Why? First, it hurts performance because you run two queries; second, you can get inconsistent results because the data may change between the two queries.

Luckily, there are ways to do this in a single query. Because SQL Server cannot return rows and assign a value to a variable in the same query, we have to return the total count as an additional column in the result set.

1. Using COUNT(*) OVER()
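
A sketch of this approach, paging sys.objects by object_id with a page size of 10 (the table, ordering, and page size here are just for illustration):

-- One page of rows plus the total count in a single pass.
SELECT o.object_id,
       o.name,
       COUNT(*) OVER () AS total_rows   -- the same total is repeated on every row
FROM sys.objects AS o
ORDER BY o.object_id
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;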

It’s a very simple query to write. As for performance, I’ll compare all solutions later.

2. Using CTE to count all records
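
A sketch of this variant, with the same illustrative sys.objects/10-row setup:

-- Count all matching rows in a CTE, then attach the total to the page of rows.
WITH cnt AS
(
    SELECT COUNT(*) AS total_rows
    FROM sys.objects
)
SELECT o.object_id,
       o.name,
       cnt.total_rows
FROM sys.objects AS o
CROSS JOIN cnt
ORDER BY o.object_id
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;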

3. Extracting keys to temporary table

The idea is to extract the primary keys into a temporary table or CTE, get one page of keys, and then join it back to the original table. We need ROW_NUMBER() to assign an ordinal number to every row in order to get a page. There are two ways to implement the pagination.

3a. Extracting keys and filtering row numbers in WHERE
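
A sketch of this variant (same illustrative setup; a CTE plays the role of the key list):

-- Extract keys with row numbers, filter the page in WHERE, then join back.
WITH keys AS
(
    SELECT o.object_id,
           ROW_NUMBER() OVER (ORDER BY o.object_id) AS row_num,
           COUNT(*)     OVER ()                     AS total_rows
    FROM sys.objects AS o
)
SELECT o.object_id,
       o.name,
       k.total_rows
FROM keys AS k
JOIN sys.objects AS o ON o.object_id = k.object_id
WHERE k.row_num BETWEEN 1 AND 10
ORDER BY k.row_num;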

3b. Extracting keys and using OFFSET/FETCH
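
A sketch of this variant (same illustrative setup):

-- Extract keys and the total count, page the keys with OFFSET/FETCH, then join back.
WITH keys AS
(
    SELECT o.object_id,
           COUNT(*) OVER () AS total_rows
    FROM sys.objects AS o
    ORDER BY o.object_id
    OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY
)
SELECT o.object_id,
       o.name,
       k.total_rows
FROM keys AS k
JOIN sys.objects AS o ON o.object_id = k.object_id
ORDER BY o.object_id;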

Now I’ll compare these queries for IO statistics and query optimizer costs.

1. Using COUNT(*) OVER()
   Select all rows, IO statistics: Table ‘Worktable’. Scan count 3, logical reads 4728…; Table ‘sysschobjs’. Scan count 1, logical reads 25
   Select all rows, query cost: 15%
   Select with WHERE clause, IO statistics: Table ‘Worktable’. Scan count 3, logical reads 169…; Table ‘sysschobjs’. Scan count 1, logical reads 4
   Select with WHERE clause, query cost: 4%

2. Using CTE to count all records
   Select all rows, IO statistics: Table ‘sysschobjs’. Scan count 2, logical reads 26
   Select all rows, query cost: 11%
   Select with WHERE clause, IO statistics: Table ‘sysschobjs’. Scan count 2, logical reads 8
   Select with WHERE clause, query cost: 6%

3a. Extracting keys and filtering row numbers in WHERE
   Select all rows, IO statistics: Table ‘sysschobjs’. Scan count 2, logical reads 46
   Select all rows, query cost: 25%
   Select with WHERE clause, IO statistics: Table ‘sysschobjs’. Scan count 2, logical reads 28
   Select with WHERE clause, query cost: 35%

3b. Extracting keys and using OFFSET/FETCH
   Select all rows, IO statistics: Table ‘sysschobjs’. Scan count 2, logical reads 138
   Select all rows, query cost: 50%
   Select with WHERE clause, IO statistics: Table ‘Worktable’. Scan count 0, logical reads 0…; Table ‘Workfile’. Scan count 0, logical reads 0…; Table ‘sysschobjs’. Scan count 3, logical reads 31
   Select with WHERE clause, query cost: 55%

From my viewpoint, the second solution is the best because it reads the fewest pages and does not create a table in TempDB.

Debugging SQL Server CLR Function In Visual Studio 2015

This is a step-by-step instruction for debugging CLR code in Visual Studio. It is based on the MSDN article Debugging CLR Database Objects. As stated in the comments there, the process can differ between versions of Visual Studio. I’ve got Visual Studio 2015 Community Edition with Update 3.

1) Run Visual Studio with administrator rights. This is needed for VS to attach to the SQL Server process.
If you skip this, you might see an error later:
Unable to debug .NET code. Could not attach to SQL Server process on 'VS2015'. Operation not supported. Unknown error: 0x80004005.
2) Open SQL Server Object Explorer in Visual Studio.
3) Right-click the server and select Allow SQL/CLR Debugging.
4) Drill down to the CLR function, right-click it and select Debug Function…
5) In the Debug Function window, enter a value for the input parameter and press OK.
6) Visual Studio will generate a test script and enter debug mode.
7) Now you can press F11 to step into the test script code and the CLR function.

During the last step, Windows Firewall may ask you to allow the VS debugger to connect through the network. Please allow it.
[Screenshot: Windows Firewall prompt (clr-debugging-firewall)]

Visual Studio will also always ask you to attach to the SQL Server process in order to debug CLR code. Press Attach here.
[Screenshot: attach to SQL Server process dialog (clr-debugging-attach-sql-server-process)]

Creating SQL Server CLR Function In Visual Studio 2015

If you try to find an example of how to create CLR code for SQL Server, you may come across the MSDN article How to: Create and Run a SQL Server User-Defined Function by using Common Language Run-time Integration, but it describes the process for Visual Studio 2010. Here I’ll show how to do it in VS 2015.

1) Create or open a SQL Server Database Project.
2) Add a new item and choose the SQL CLR C# User Defined Function item type.
3) Write the code, as in this primitive example.

It’s important to use matching types on the SQL Server and C# sides: SqlDouble and double, respectively.
4) Publish the project.

Although the project may deploy successfully, you still need to enable CLR on the SQL Server instance. Run the following script:
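
-- Enable CLR integration on the instance.
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;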

If you don’t, you’ll get an error when you try to run the CLR function:

Msg 6263, Level 16, State 1, Line 1
Execution of user code in the .NET Framework is disabled. Enable "clr enabled" configuration option.

SQL Join With First Matching Rows. Choosing the Best Approach

It’s a very old task for SQL developers: write a query that joins two tables and picks only the first match from the second table for every row in the first table. In my case I need to combine the client name from the Clients table with his/her first phone number from the Phones table.

After some investigation I settled on three different queries.

1. Join with SELECT TOP 1 subquery
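
A sketch, assuming illustrative columns Clients(ClientId, ClientName) and Phones(PhoneId, ClientId, PhoneNumber):

SELECT c.ClientName, p.PhoneNumber
FROM dbo.Clients AS c
JOIN dbo.Phones AS p
    ON p.PhoneId =
       (SELECT TOP 1 p2.PhoneId          -- first phone per client
        FROM dbo.Phones AS p2
        WHERE p2.ClientId = c.ClientId
        ORDER BY p2.PhoneId);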

2. Using CROSS APPLY operator
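
The same assumed schema:

SELECT c.ClientName, p.PhoneNumber
FROM dbo.Clients AS c
CROSS APPLY
(
    SELECT TOP 1 p2.PhoneNumber          -- first phone per client
    FROM dbo.Phones AS p2
    WHERE p2.ClientId = c.ClientId
    ORDER BY p2.PhoneId
) AS p;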

3. Subquery with SQL Server Window function (ROW_NUMBER)
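
The same assumed schema again:

SELECT c.ClientName, p.PhoneNumber
FROM dbo.Clients AS c
JOIN
(
    SELECT p2.ClientId,
           p2.PhoneNumber,
           ROW_NUMBER() OVER (PARTITION BY p2.ClientId
                              ORDER BY p2.PhoneId) AS row_num
    FROM dbo.Phones AS p2
) AS p
    ON p.ClientId = c.ClientId
   AND p.row_num = 1;                    -- keep only the first phone per client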

If a client might not have a phone number, you need to make slight modifications to the code above: change JOIN to LEFT JOIN and CROSS APPLY to OUTER APPLY. In that case you’ll get the client name with a NULL instead of a phone number.

I won’t judge the simplicity or readability of the code; the query that looks shortest might not be the most efficient. We need to compare the query costs and choose the cheapest one.

And now SQL Server shows its magic: the relative query costs are 50%, 49%, and 1% (just look at the screenshot below).

[Screenshot: execution plan costs for the three queries (join with first matching rows)]

So the most effective one is the last query, which uses a join with a ranking subquery (SQL Server creates a worktable here). This query also reads the minimum number of pages to produce the result. You can switch on I/O statistics (run the SET STATISTICS IO ON command) and look at the Messages tab in SSMS. In my case I got the following output:
(70347 row(s) affected)
Table 'Phones'. Scan count 70713, logical reads 215349...
Table 'Clients'. Scan count 5, logical reads 833...
Table 'Worktable'. Scan count 0, logical reads 0...
Table 'Worktable'. Scan count 0, logical reads 0...

(70347 row(s) affected)
Table 'Phones'. Scan count 70708, logical reads 213139...
Table 'Clients'. Scan count 1, logical reads 761...

(70347 row(s) affected)
Table 'Phones'. Scan count 5, logical reads 2210...
Table 'Clients'. Scan count 5, logical reads 833...
Table 'Worktable'. Scan count 0, logical reads 0...

Tip: If you use a LEFT JOIN in the last query, don’t put the “row_num = 1” condition in the WHERE clause; put it only in the JOIN … ON clause. If you put it in the WHERE clause, SQL Server performs the left outer join and then filters the rows (all NULL values are rejected at that point), so you end up with the equivalent of an inner join.

HDD Cluster Size and SQL Server Performance

During my #sqlweek I was asked a few times which HDD parameters (sector or cluster sizes) are better for SQL Server installations. There’s a good article on TechNet that explains the Windows disk parameters in detail, so I don’t need to retell it; just read the article Disk Partition Alignment Best Practices for SQL Server thoroughly. The main points discussed are partition alignment, stripe unit size, and file allocation unit size (cluster size).

As for the HDD cluster size, the article says that “an appropriate value for most installations should be 65,536 bytes (that is, 64 KB) for partitions on which SQL Server data or log files reside. In many cases, this is the same size for Analysis Services data or log files, but there are times where 32 KB provides better performance. To determine the right size, you will need to do performance testing with your workload comparing the two different block sizes.”

SQL Server 2008R2 Causes Severe Error in Merge Output

I’m developing a solution that uses a MERGE statement with an OUTPUT clause. The development was done in a SQL Server Data Tools project with target platform SQL Server 2008. Everything worked while I was testing it on SQL Server 2014, but when I tried to deploy it on SQL Server 2008 R2 without any service pack, it failed. I got the following errors:

Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

I tried applying service packs (from SP1 to SP3), but that didn’t solve the problem.

So I took the stored procedure that made SQL Server dump and started cutting out pieces of code to find the exact spot. It was the MERGE command, and specifically the OUTPUT clause, which uses $action and the inserted and deleted columns. I checked the documentation for SQL Server 2008 R2; the statement was correct.

Finally I noticed a post on stackoverflow.com about a computed column in a CTE target. Wow, that’s my case: I really do use a CTE target where I filter only the first rows with a window function. Here is an abridged piece of the code:
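
(A simplified sketch of the pattern; dbo.TargetTable, dbo.SourceTable and the column names are illustrative, not the real ones.)

WITH target_cte AS
(
    SELECT *      -- SELECT * drags the computed row_num column into the MERGE target
    FROM
    (
        SELECT t.*,
               ROW_NUMBER() OVER (PARTITION BY t.ClientId
                                  ORDER BY t.ModifiedDate DESC) AS row_num
        FROM dbo.TargetTable AS t
    ) AS ranked
    WHERE ranked.row_num = 1
)
MERGE target_cte AS tgt
USING dbo.SourceTable AS s
    ON s.ClientId = tgt.ClientId
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = s.Amount
OUTPUT $action, inserted.ClientId, deleted.Amount;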

The problem was that I used SELECT * for simplicity, so the computed field row_num was also included in the target CTE of the MERGE command. When I replaced the * with a list of explicitly qualified fields, the error disappeared.

So, if you use the MERGE command with an OUTPUT clause in SQL Server 2008 R2 and your target is a common table expression (CTE), don’t include computed fields in the target CTE.