Recently I had to examine the contents of almost all tables in all databases to find the necessary tables and recreate the entity model, so I needed a script that can iterate through all tables in all databases on a given server.
This can be done with the help of the undocumented stored procedure sp_msforeachdb; examples of its use can be found via a Google search. Here is my script that lists all tables on a MS SQL Server instance.
DECLARE @Tables TABLE (
    DatabaseName sysname,
    SchemaName sysname,
    TableName sysname
);
INSERT INTO @Tables (DatabaseName, SchemaName, TableName)
EXEC sp_msforeachdb 'select ''?'', s.name, t.name from [?].sys.tables t inner join [?].sys.schemas s on t.schema_id = s.schema_id';
SELECT * FROM @Tables ORDER BY 1, 2, 3;
As I wrote in one of my previous posts (see here), SSDT leaves test data in tables after it executes an INSERT command. One method to delete this data is to use a special script in the post-test action. The other method is to run the unit test inside a transaction.
Obviously, we can use explicit BEGIN TRANSACTION and ROLLBACK commands in the test script, but that requires changing the script itself. SSDT offers a more elegant solution, as described on MSDN: we can declare the transaction in the C# code (the .cs file of the unit test).
To execute only one test from the test class in a transaction we should do the following:
- Right-click the unit test in Solution Explorer and select View Code.
- Find the declaration of your particular test
- Place the code from inside the function body into the following block:
using (TransactionScope ts = new TransactionScope(System.Transactions.TransactionScopeOption.Required))
{
    // ... inside-code of the function ...
}
The code will look like this:
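The resulting code in the original post was shown as a screenshot. A minimal sketch of such a wrapped test method (the method name here is a placeholder):

```csharp
[TestMethod()]
public void dbo_MyProcedureTest()
{
    using (TransactionScope ts = new TransactionScope(System.Transactions.TransactionScopeOption.Required))
    {
        // ... the original inside-code of the test method goes here ...
    }   // ts.Complete() is never called, so Dispose() rolls everything back
}
```

Because Complete() is never called on the scope, every change made by the test is rolled back when the using block ends.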
To execute all the tests of the class in a transaction, we should:
- Declare TransactionScope variable in a test class.
- Create a transaction in the Init() method.
- Dispose of the transaction in the Cleanup() method.
The code will look like this:
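The class-level version was also shown as a screenshot in the original; a sketch of the pattern (the class name and details are illustrative):

```csharp
[TestClass()]
public class SqlServerUnitTest : SqlDatabaseTestClass
{
    private System.Transactions.TransactionScope _ts;

    [TestInitialize()]
    public void TestInitialize()
    {
        base.InitializeTest();
        // every test of this class now runs inside this transaction
        _ts = new System.Transactions.TransactionScope(
            System.Transactions.TransactionScopeOption.Required);
    }

    [TestCleanup()]
    public void TestCleanup()
    {
        // Dispose() without Complete() rolls the transaction back
        _ts.Dispose();
        base.CleanupTest();
    }

    // ... generated test methods ...
}
```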
When we test SQL Server stored procedures or functions, there may be situations where one object (for example, a stored procedure) calls another (for example, a scalar function). In this case there is a dependency between the objects.
One method to test the functionality of dependent objects is to use fake objects, i.e. a stub function instead of the real one. In this post I'll show how to substitute a function with a fake.
I've created a very simple function that multiplies an integer value by 2.
CREATE FUNCTION [dbo].[StubFunction] (@param INT)
RETURNS INT
AS
BEGIN
    RETURN @param * 2;
END
I’ve created a simple unit test for that function in SQL Server Data Tools with the only test condition to check that 1 * 2 = 2.
My test passes. Now I want to replace my function with a fake that always returns the value 2.
This can be done in the pre-test section by renaming the real function and creating a stub.
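The pre-test script itself was shown as an image in the original. A sketch of such a script, where the backup name StubFunction_Original is my own choice:

```sql
-- move the real function out of the way
EXEC sp_rename '[dbo].[StubFunction]', 'StubFunction_Original';

-- create the fake; CREATE FUNCTION must be the only statement in its batch,
-- so it is executed through dynamic SQL
EXEC ('CREATE FUNCTION [dbo].[StubFunction] (@param INT) RETURNS INT AS BEGIN RETURN 2; END');
```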
There are two important things in this script:
- It does not check the existence of the new function name, and it does not delete it. If this new name already exists, the unit test will throw an error.
- You can't create a stub function with a direct CREATE FUNCTION command, because CREATE FUNCTION must be the only statement in its SQL batch. Therefore, I use a wrapper to execute the SQL command dynamically.
In the post-test section I safely delete the stub function and restore the original name.
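A sketch of such a post-test script, where StubFunction_Original is my own placeholder for the renamed original function:

```sql
-- safe-delete the stub and restore the original name
IF OBJECT_ID('[dbo].[StubFunction]', 'FN') IS NOT NULL
    DROP FUNCTION [dbo].[StubFunction];
EXEC sp_rename '[dbo].[StubFunction_Original]', 'StubFunction';
```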
The test passes. That’s all right!
Using fake objects allows us to test each database object separately, but it requires some effort from the developer.
Recently I faced a task: extract client names and phone numbers from an XML document and return them as a table, like a standard SELECT query. This can be done with the nodes() method.
This method shreds the XML document and extracts nodes from it. The description is available on MSDN. Let's look at an example of how it works.
DECLARE @xml XML;
-- the original sample document was not preserved; these values are illustrative
SET @xml = N'<Clients>
  <Client><ClientName>First Client</ClientName>
    <Phone><CountryCode>1</CountryCode><PhoneNumber>555-0101</PhoneNumber></Phone>
    <Phone><CountryCode>1</CountryCode><PhoneNumber>555-0102</PhoneNumber></Phone></Client>
  <Client><ClientName>Second Client</ClientName>
    <Phone><CountryCode>44</CountryCode><PhoneNumber>555-0103</PhoneNumber></Phone></Client>
</Clients>';

SELECT
    n.query('.').value('(/Client/ClientName/node())[1]','nvarchar(max)') AS ClientName,
    n.query('.').value('(/Client/Phone/PhoneNumber/node())[1]','nvarchar(max)') AS FirstPhoneNumber,
    n.query('.') AS ContactXML
FROM @xml.nodes('/Clients/Client') a(n);
The nodes() method extracts the Client nodes from the XML and produces 2 rows, from which we can get the values.
We can also extract the phone numbers from the Phone nodes. In this case we should use a common table expression (CTE) to extract the Client nodes first; then we can shred the Phone nodes.
WITH ClientsCTE AS (
    SELECT
        x.query('.').value('(/Client/ClientName/node())[1]','nvarchar(max)') AS ClientName,
        x.query('.').value('(/Client/Phone/PhoneNumber/node())[1]','nvarchar(max)') AS FirstPhoneNumber,
        x.query('.') AS ClientXML
    FROM @xml.nodes('/Clients/Client') a(x)
)
SELECT c.ClientName, ph.CountryCode, ph.PhoneNumber, ph.PhoneXML
FROM ClientsCTE c
CROSS APPLY (
    SELECT
        n.query('.').value('(/Phone/CountryCode/node())[1]','nvarchar(max)') AS CountryCode,
        n.query('.').value('(/Phone/PhoneNumber/node())[1]','nvarchar(max)') AS PhoneNumber,
        n.query('.') AS PhoneXML
    FROM c.ClientXML.nodes('/Client/Phone') p(n)
) ph;
The result is shown below.
A unit test in SQL Server Data Tools has a Scalar Value test condition. This test condition receives a dataset (result set) from the query; you can choose the cell from that dataset with the Row number and Column number properties.
If you want to check the value of a variable, you should use the following command:
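The command itself is not preserved in this copy; it is presumably a plain SELECT of the variable (the variable name here is a placeholder):

```sql
SELECT @myVariable;  -- produces a result set with exactly one row and one column
```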
This will return a dataset that contains only one cell, which means it has one row and one column. If you want to check a value from another cell, adjust the properties mentioned above.
I was wondering how SSDT interprets the value that we set in the Expected value property. When there's a comparison between numbers or strings, I don't even think about conversion problems.
But what happens if we compare dates? I have not found a calendar or any other tool to set a datetime value; the Properties pane has a mere textbox for the Expected value. It seems that SSDT gets a string.
So I decided to conduct an experiment to check how the conversion really works. I created a very simple test case, using the ISO format in CAST to avoid any conversion problems.
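The test case itself was shown as an image; it was presumably a query of this shape, here using the date that appears later in the post:

```sql
SELECT CAST('2014-08-15T08:20:15' AS datetime) AS DateValue;
```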
SQL Server accepts a number of string formats as a representation of a datetime. The full list is available on MSDN.
When I use ISO or unseparated format, the test passes. When I set the value as ‘8/15/2014 8:20:15’, it works too. But when I set the date as ‘15.08.2014 8:20:15’, the test fails.
That's because SSDT (Visual Studio) or SQL Server has locale or DATEFORMAT settings that differ from the dd.mm.yyyy format.
This experiment proves that SSDT really performs a conversion from string to the datetime type. To avoid conversion problems, always set the value of the Expected value property in ISO 8601 format: yyyy-mm-ddThh:mm:ss (for example, 2014-08-15T08:20:15).
I followed the steps described in the MSDN article How to: Create Test Conditions for the SQL Server Unit Test Designer and found some variations while repeating them in Visual Studio 2013.
1) in step 5, we can use a newer version of the DLL if we have a newer version of SQL Server installed on the local machine. The number 110 in the path corresponds to SQL Server 2012, and 120 to SQL Server 2014.
2) in step 8, we need to change the VS version number from 10.0 to 12.0, so you need to add these lines:
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v12.0\SSDT\Microsoft.Data.Tools.Schema.Sql.UnitTesting.targets" Condition="'$(VisualStudioVersion)' == ''" />
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.Sql.UnitTesting.targets" Condition="'$(VisualStudioVersion)' != ''" />
Hint: import of Microsoft.CSharp.targets is located at the end of the file.
3) in step 10, we need to include two namespaces to derive from the TestCondition class. Add these directives to your class:
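The directives themselves are not preserved in this copy. Judging by where TestCondition lives in the SSDT assemblies, they are most likely the following two (treat the exact namespaces as an assumption):

```csharp
using Microsoft.Data.Tools.Schema.Sql.UnitTesting;
using Microsoft.Data.Tools.Schema.Sql.UnitTesting.Conditions;
```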
4) in step 13, we need to copy the DLL from the bin\Debug or bin\Release folder to C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\Microsoft\SQLDB\TestConditions. You will see a readme.txt file there.
“SSDT now enables you to develop, debug and execute database unit tests interactively in Visual Studio…”
I was pleased when I read this in a post on the SSDT team blog.
But I was a little confused when I could not set a breakpoint in the unit test as I usually do in Visual Studio. Indeed, you can't click the grey sidebar to the left of the unit test code (marked with the red frame on the screenshot below). Here is what to do instead:
1) enable SQL debugging on test project:
- right-click test project in Solution Explorer, then click Properties
- click Debug
- check the option Enable SQL Server debugging
2) increase execution timeout:
- double-click app.config in Solution Explorer
- set CommandTimeout in the ExecutionContext line to a greater value (for example, 300 seconds)
- rebuild the test project to apply changes in app.config
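The relevant app.config fragment looks roughly like this; the section name may differ between SSDT versions, and the connection string is elided:

```xml
<SqlUnitTesting_VS2013>
  <ExecutionContext Provider="System.Data.SqlClient"
                    ConnectionString="..."
                    CommandTimeout="300" />
</SqlUnitTesting_VS2013>
```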
3) set breakpoint in stored procedure we are debugging. You can open it in your database project or via SQL Server Object Explorer (View -> SQL Server Object Explorer).
4) start the debugging:
- open Test Explorer;
- right-click the test
- click Debug Selected Tests
Immediately after that you will see the debug panels in Visual Studio, but it's too early to press any buttons to step into or over. You should wait a while as Visual Studio loads and runs all the actions before it reaches the breakpoint. During this time the breakpoint is shown as a white circle with a red rim.
When the execution stops at the breakpoint, the line is highlighted in yellow. Now you can step through the commands of the stored procedure being debugged.
SQL Server can hold many database objects with the same name, as long as they belong to different schemas. Let's look at how we can deal with this.
My simple test case:
- Create a new schema tst
- Create a new login test
- Create a new database user test for that login and assign the schema tst to it
- Make the user test the owner of the schema tst
- Create two tables with the same name in the dbo and tst schemas
- Insert different values into the corresponding tables
Script can be downloaded from db_object_names_resolution
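For reference, a sketch of such a setup script; the password and the column definition are my own placeholders:

```sql
CREATE SCHEMA tst;
GO
CREATE LOGIN test WITH PASSWORD = N'Str0ng_Placeholder!1';
CREATE USER test FOR LOGIN test WITH DEFAULT_SCHEMA = tst;
ALTER AUTHORIZATION ON SCHEMA::tst TO test;
GO
CREATE TABLE dbo.Clients (ClientName nvarchar(50));
CREATE TABLE tst.Clients (ClientName nvarchar(50));
INSERT INTO dbo.Clients VALUES (N'dbo');
INSERT INTO tst.Clients VALUES (N'tst');
```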
Now we can check the rows in both tables. Everything is fine: we see exactly the values that were inserted earlier. Pay attention to the fact that we used schema names when accessing the tables.
This time we run a SELECT command without the schema name. What should we see: 'dbo' or 'tst'?
The answer is ‘dbo’.
Try changing the execution context to the user test and running the same command.
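In T-SQL the context switch can be sketched like this:

```sql
EXECUTE AS USER = 'test';
SELECT * FROM Clients;   -- no schema name specified
REVERT;
```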
We’ve just got the results from the table in tst schema. Why?
If you look at the User Mapping in the Login Properties window, you see the user and default schema for that particular login.
Do you understand what happens?
When I connect to SQL Server as Administrator, I run the query as the user dbo, and my default schema is dbo. The results are taken from [dbo].[Clients].
When I connect as test, I run the query as the user test, and the default schema for that user is tst. The results are taken from [tst].[Clients].
The next question: what happens if the user test runs a query against a table in the dbo schema without specifying the schema name? There are two possible answers: the query returns the results from that table, or it fails. What do you think?
To sum up, I would recommend using unique names for database objects even if they belong to different schemas.
Hint: the first attempt to resolve a database object's name is made in the user's default schema, then in the dbo schema.
When I explored negative unit testing (see here), I encountered a new property, MatchFirstError, in ExpectedSqlException.
According to MSDN, the value true means that the expected severity, state, and error number must match the first SqlError in the exception; false means they may match any SqlError in the exception.
I was wondering in which situations a SQL error might not be the first one. If it's not the first, it can be the second, and so on. The next question: how can I raise two errors?
When SQL Server executes T-SQL commands, an error interrupts the batch and raises an exception, so this error will be the first and only one.
But when we use TRY...CATCH blocks, there can be errors both in the try and in the catch block, so there can be multiple errors. This can be checked in a very simple way.
We execute code that tries to delete non-existing stored procedures in both the try and the catch block.
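The code itself was shown as a screenshot; one plausible shape of it, with all object names invented here, is:

```sql
CREATE PROCEDURE dbo.TwoErrorsDemo
AS
BEGIN
    BEGIN TRY
        DROP PROCEDURE dbo.DoesNotExist1;   -- fails, control jumps to CATCH
    END TRY
    BEGIN CATCH
        -- re-raise the caught error, then fail once more
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
        DROP PROCEDURE dbo.DoesNotExist2;   -- raises a second error
    END CATCH
END
GO
EXEC dbo.TwoErrorsDemo;
```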
The result is two(!) errors from one stored procedure.
Let's return to the MatchFirstError property. The default value (true) means that the expected error must be the first one in the exception: either the only error or the cause of the consequent errors (in other words, it must not have been raised by another error).
In one of my previous posts I faced an exception that occurred during a unit test. Placing an appropriate command to disable constraints helped to avoid the exception. But what should we do if we want to check that T-SQL code really fires an exception? The answer is very simple: we need to write a test that handles the exception. This technique is called "negative testing".
I will use the same example that I described in my previous post.
- I change the Pre-test section to enforce constraints. Just place this command:
ALTER TABLE Clients CHECK CONSTRAINT ALL;
- open the C# code of unit test: right-click the test in Solution Explorer, then click View Code
- find the definition of our test (search for string dbo_CreateClientTest)
- change the line in square brackets above the definition by adding an ExpectedSqlException attribute
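The attribute line was shown as a screenshot; it would look something like this (547 is the typical constraint-violation error number, but the real values should come from your failed test output):

```csharp
[TestMethod()]
[ExpectedSqlException(MessageNumber = 547, Severity = 16, MatchFirstError = true)]
public void dbo_CreateClientTest()
{
    // the generated test body stays unchanged
}
```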
The message number and severity code were taken from the output of the failed test that was run earlier. All the properties that can be used in the ExpectedSqlException declaration are explained here.
Note: the test will work even if you omit all the properties, but using the exact values gives you a better level of granularity to check for the right kind of exception.
Now we can run our test. This time we see a green mark – our test passes.