As I wrote in one of my previous posts (see here), SSDT leaves test data in tables after executing INSERT commands. One method to delete this data is to use a special script in the post-test action. Another is to run the unit test inside a transaction.
Obviously, we can wrap the test script in explicit BEGIN TRANSACTION and ROLLBACK commands, but that requires changing the test script itself. SSDT offers another, more graceful solution, described in MSDN: we can declare the transaction in the C# code (the .cs file of the unit test).
To execute only one test from the test class in a transaction we should do the following:
- Right-click the unit test in Solution Explorer and select View Code.
- Find the declaration of your particular test
- Place the code from inside the function body into the following block:
using (TransactionScope ts = new TransactionScope(System.Transactions.TransactionScopeOption.Required))
.... inside-code of the function
The code will look like this:
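A minimal sketch of the wrapped test method (the method name and the commented-out body are illustrative; yours will match the test you opened):

```csharp
using System.Transactions;

[TestMethod()]
public void dbo_CreateClientTest()
{
    using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
    {
        // ... the original body of the test method goes here, e.g.:
        // SqlDatabaseTestActions testActions = this.dbo_CreateClientTestData;
        // ... execution of pre-test, test, and post-test actions ...
    }
    // ts.Complete() is never called, so the transaction
    // is rolled back when the scope is disposed
}
```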
To execute all tests of the class in a transaction we should do:
- Declare TransactionScope variable in a test class.
- Create a transaction in the Init() method.
- Dispose of the transaction in the Cleanup() method.
The code will look like this:
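A sketch of the class-level variant (the class name is illustrative; in the generated code the initialize/cleanup methods are marked with TestInitialize and TestCleanup attributes):

```csharp
using System.Transactions;

[TestClass()]
public class SqlServerUnitTest1 : SqlDatabaseTestClass
{
    private TransactionScope _ts;

    [TestInitialize()]
    public void TestInitialize()
    {
        // create the transaction before any test action runs
        _ts = new TransactionScope(TransactionScopeOption.Required);
        base.InitializeTest();
    }

    [TestCleanup()]
    public void TestCleanup()
    {
        base.CleanupTest();
        // no Complete() call, so everything the test did is rolled back
        _ts.Dispose();
    }

    // ... test methods ...
}
```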
When we test SQL Server stored procedures or functions, there may be situations where one object (for example, a stored procedure) calls another object (for example, a scalar function). In this case we have a dependency between objects.
One of the methods to test the functionality of dependent objects is to use fake objects, i.e. to use a stub function instead of the real one. In this post I'll show how to substitute a function with a fake.
I've created a very simple function that multiplies an integer value by 2.
CREATE FUNCTION [dbo].[StubFunction] (@param INT)
RETURNS INT
AS
BEGIN
    RETURN @param * 2;
END
I've created a simple unit test for that function in SQL Server Data Tools, with a single test condition checking that 1 * 2 = 2.
My test passes. Now I want to replace the function with a fake that always returns the value 2.
This can be done in the pre-test section by renaming the real function and creating a stub.
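A sketch of such a pre-test script (the _Original suffix is my own naming convention):

```sql
-- Pre-test: put the real function aside and create a stub with the same name
EXEC sp_rename 'dbo.StubFunction', 'StubFunction_Original';

-- CREATE FUNCTION must be the only statement in its batch,
-- so the stub is created via dynamic SQL
EXEC ('CREATE FUNCTION dbo.StubFunction (@param INT) RETURNS INT
       AS BEGIN RETURN 2; END');
```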
There are two important things in this script:
- It does not check whether the new function name already exists, and it does not delete it. If the new name already exists, the unit test throws an error.
- You can't create the stub with a direct CREATE FUNCTION command, because CREATE FUNCTION must be the only statement in its batch. Therefore, I use a wrapper to execute the SQL command dynamically.
In the post-test section I safely drop the stub function and rename the original function back.
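The matching post-test script could look like this (again, the _Original suffix is my convention):

```sql
-- Post-test: drop the stub if it exists and restore the original name
IF OBJECT_ID('dbo.StubFunction', 'FN') IS NOT NULL
    DROP FUNCTION dbo.StubFunction;
EXEC sp_rename 'dbo.StubFunction_Original', 'StubFunction';
```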
The test passes. That’s all right!
Using fake objects allows us to test each database object separately, but it requires some extra effort from the developer.
A unit test in SQL Server Data Tools has a Scalar Value test condition. This condition receives a result set from the query, and you can choose a cell from that result set with the Row number and Column number properties.
If you want to check the value of a variable, you should use the following command:
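For example (the variable name and value here are illustrative):

```sql
DECLARE @result INT = 2;   -- hypothetical variable under test
SELECT @result;            -- returns a result set with exactly one cell
```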
This returns a result set that contains only one cell: one row and one column. If you want to check a value from another cell, adjust the properties mentioned above.
I was wondering how SSDT interprets the value that we set in the Expected value property. When the comparison is between numbers or strings, I don't even think about conversion problems.
But what happens if we compare dates? I have not found a calendar or any other tool for setting a datetime value; the Properties pane has a mere textbox for the Expected value. It seems that SSDT receives a string.
So I decided to conduct an experiment to check how the conversion really works. I created a very simple test case, using the ISO format in the CAST to avoid any conversion problem.
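The test query might look like this (the date value is my example; a Scalar Value condition then compares the returned cell with the Expected value):

```sql
SELECT CAST('2014-08-15T08:20:15' AS DATETIME);
```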
SQL Server accepts a number of string formats as a representation of a datetime. The full list is available in MSDN.
When I use the ISO or unseparated format, the test passes. When I set the value as '8/15/2014 8:20:15', it works too. But when I set the date as '15.08.2014 8:20:15', the test fails.
That's because SSDT (Visual Studio) or SQL Server uses locale or DATEFORMAT settings that do not match the dd.mm.yyyy format.
This experiment proves that SSDT really performs a conversion from string to a datetime type. To avoid conversion problems, always set the Expected value property in ISO 8601 format, for example 2014-08-15T08:20:15.
I followed the steps described in the MSDN article How to: Create Test Conditions for the SQL Server Unit Test Designer and found some variations while repeating them in Visual Studio 2013.
1) In step 5 we can use a newer version of the dll if a newer version of SQL Server is installed on the local machine. The number 110 in the path means SQL Server 2012; 120 means SQL Server 2014.
2) In step 8 we need to change the VS version number from 10.0 to 12.0, so you need to add these lines:
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v12.0\SSDT\Microsoft.Data.Tools.Schema.Sql.UnitTesting.targets" Condition="'$(VisualStudioVersion)' == ''" />
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.Sql.UnitTesting.targets" Condition="'$(VisualStudioVersion)' != ''" />
Hint: the import of Microsoft.CSharp.targets is located at the end of the file.
3) In step 10 we need to include two namespaces to derive from the TestCondition class. Add these using directives to your class file:
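Assuming the namespaces have not moved in your SSDT version, the directives are:

```csharp
using Microsoft.Data.Tools.Schema.Sql.UnitTesting;
using Microsoft.Data.Tools.Schema.Sql.UnitTesting.Conditions;
```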
4) In step 13 we need to copy the dll from the bin\Debug or bin\Release folder to C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\Microsoft\SQLDB\TestConditions. You will see a readme.txt file there.
“SSDT now enables you to develop, debug and execute database unit tests interactively in Visual Studio…”
I was pleased when I read this in a post on the SSDT team blog.
But I was a little confused when I could not set a breakpoint in the unit test as I always do in Visual Studio. You can't click on the grey sidebar to the left of the unit test code (shown in the red frame on the screenshot below).
1) Enable SQL Server debugging on the test project:
- right-click the test project in Solution Explorer, then click Properties
- click Debug
- check the option Enable SQL Server debugging
2) Increase the execution timeout:
- double-click app.config in Solution Explorer
- set CommandTimeout in the ExecutionContext line to a greater value (for example, 300 seconds)
- rebuild the test project to apply the changes in app.config
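The relevant line in app.config looks roughly like this (the connection string is elided; attribute layout may differ in your project):

```xml
<ExecutionContext Provider="System.Data.SqlClient"
                  ConnectionString="..."
                  CommandTimeout="300" />
```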
3) Set a breakpoint in the stored procedure being debugged. You can open it in your database project or via SQL Server Object Explorer (View -> SQL Server Object Explorer).
4) Start debugging:
- open Test Explorer
- right-click the test
- click Debug Selected Tests
Immediately after that you will see the debug panels in Visual Studio. But it's too early to press any buttons to step into or over: you should wait some time while Visual Studio loads and runs all the preceding actions before it reaches the breakpoint. During this time the breakpoint is shown as a white circle with a red rim.
When execution stops at the breakpoint, the line is highlighted in yellow. Now you can step through the commands of the stored procedure being debugged.
When I explored negative unit testing (see here), I encountered a new property, MatchFirstError, in ExpectedSqlException.
According to MSDN, the value true means that the expected severity, state, and error number must match the first SqlError in the exception; false means they may match any SqlError in the exception.
I was wondering in which situations a SQL error might not be the first. If it's not the first, it can be the second and so on. The next question: how can I raise two errors?
When SQL Server executes T-SQL commands, an error interrupts the batch and raises an exception, so this error is the first and the only one.
But when we use TRY...CATCH blocks, errors can occur both in the TRY and in the CATCH block, so there can be multiple errors. This can be checked in a very simple way.
We execute code that tries to delete non-existing stored procedures in both the TRY and the CATCH block.
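A sketch of such a procedure (the procedure names are made up; neither NonExistingProc exists, so each DROP raises its own error):

```sql
CREATE PROCEDURE dbo.RaiseTwoErrors
AS
BEGIN TRY
    DROP PROCEDURE dbo.NonExistingProc1;   -- first error, raised in TRY
END TRY
BEGIN CATCH
    DROP PROCEDURE dbo.NonExistingProc2;   -- second error, raised in CATCH
END CATCH
```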
The result is two(!) errors from one stored procedure.
Let's return to the MatchFirstError property. The default value (true) means that the expected error must be the only exception or the cause of the subsequent exceptions (in other words, it must not be raised as a consequence of another error).
In one of my previous posts I faced an exception that occurred during a unit test. Placing an appropriate command to disable constraints helps to avoid the exception. But what should we do if we want to check that T-SQL code really throws an exception? The answer is very simple: we need to write a test that handles the exception. This technique is called "negative testing".
I will use the same example that I described in my previous post.
- I change the Pre-test section to enforce constraints. Just place this command:
ALTER TABLE Clients CHECK CONSTRAINT ALL;
- open the C# code of the unit test: right-click the test in Solution Explorer, then click View Code
- find the definition of our test (search for the string dbo_CreateClientTest)
- change the line in square brackets above the definition by adding an ExpectedSqlException attribute
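The decorated test definition might look like this (the message number and severity here are illustrative; take the real values from the output of your failed test run):

```csharp
[TestMethod()]
[ExpectedSqlException(MessageNumber = 547, Severity = 16, MatchFirstError = true)]
public void dbo_CreateClientTest()
{
    SqlDatabaseTestActions testActions = this.dbo_CreateClientTestData;
    // ... execution of pre-test, test, and post-test actions ...
}
```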
The message number and severity code were taken from the output of the failed test run earlier. All the properties that can be used in the ExpectedSqlException declaration are explained here.
Note: the test will work even if you omit all the properties, but using exact values gives you finer control over which kind of exception passes the test.
Now we can run our test. This time we see a green mark – our test passes.
When we run a piece of T-SQL code, it can depend on other T-SQL code, such as a function. For example, a stored procedure may query data with a single SELECT command; in that command you can use scalar functions in the SELECT clause, table-valued functions or views in the FROM clause, scalar functions in the WHERE clause (a very bad habit, but possible), etc.
Our T-SQL code can also be affected by other database objects such as constraints and triggers. When we insert or update data, these objects can prevent incorrect values from being entered, so our command would fail.
Let's look at how they impact database unit testing.
I modify the test case I created earlier in this post and add a simple constraint.
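The constraint name matches the one disabled below; the minimum-length rule of 10 characters is inferred from the example, so treat the exact definition as a sketch:

```sql
ALTER TABLE Clients ADD CONSTRAINT CK_Clients_ClientName_Length
    CHECK (LEN(ClientName) >= 10);
```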
My test passed because ClientName ('John Smith') consists of 10 characters. Now I change it to something shorter, and the test fails.
There's no error in the stored procedure I've been testing; it's a violation of a constraint. So to make this test pass (that is, to test the functionality of the stored procedure itself), I should prevent the constraint from being applied during the test. It can be done in the following manner:
- Choose the Pre-test item in the dropdown list shown below.
- Click the link Click here to create in the center of Test Designer.
- Write T-SQL code to disable the constraint.
ALTER TABLE Clients NOCHECK CONSTRAINT CK_Clients_ClientName_Length;
Alternatively, we can disable all constraints in a table with one command:
ALTER TABLE Clients NOCHECK CONSTRAINT ALL;
Now our test passes.
But don't forget to re-enable the constraint in the Post-test section of the unit test.
In my previous research on database unit testing in SQL Server Data Tools, I found that data inserted into tables during a test is not deleted automatically. You have to create a cleanup action manually.
- Choose the Post-test item in the dropdown list shown below.
- Click the link Click here to create in the center of Test Designer.
- Write T-SQL code to delete test data.
I truncate the Clients table and run my test. The test passes and there's no test data left in the Clients table, so I can run my test as many times as needed.
It's also possible to include test conditions in the post-test phase. Wonderful: I'll check that the test data is deleted successfully.
- Add a simple SELECT command to query rows with the test guid.
- Add a test condition to check that no data is returned. This can be done with a Row Count = 0 condition or an Empty ResultSet condition. I chose the second.
The whole post-test action looks like this:
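A sketch of the post-test script (in the real script the test guid is the literal value used in the main test):

```sql
-- Post-test: remove test data, then verify it is gone
TRUNCATE TABLE Clients;

-- this result set feeds the Empty ResultSet test condition
SELECT *
FROM Clients
WHERE ClientId = '...';   -- the test guid from the main test script
```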
The unit test has passed. If you are in doubt whether all the actions were really performed, you can check the output of the test: click the Output link at the bottom of the Test Explorer pane.
I have a very simple database for experiments with SQL Server unit tests. ClientsDB database contains just one table Clients with no records:
and one stored procedure, CreateClient:
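Reconstructed from the test described below, the schema might look like this (column names, types, and sizes are my assumptions):

```sql
CREATE TABLE dbo.Clients
(
    ClientId   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    ClientName NVARCHAR(100)    NOT NULL
);
GO

CREATE PROCEDURE dbo.CreateClient
    @ClientId   UNIQUEIDENTIFIER,
    @ClientName NVARCHAR(100)
AS
INSERT INTO dbo.Clients (ClientId, ClientName)
VALUES (@ClientId, @ClientName);
```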
I have created a database project ClientsDatabase and a SQL Server unit test project ClientsDBTestProject to test my stored procedure. The autogenerated test (slightly formatted for readability) is shown below.
To create a simple test we must:
- change the assignments in the first SELECT command to set the test parameters, such as the client guid and name;
- delete the last SELECT command, because the stored procedure does not return any value (there's no RETURN statement in the SP);
- insert a SELECT command before the EXECUTE command to check that there's no row with the test guid in the Clients table;
- add a SELECT command after the EXECUTE command to check that a row with the test guid has been inserted into the Clients table;
- delete the autogenerated test condition inconclusiveCondition1;
- add the first test condition with type Row Count. Set the Row Count property to 0, and check that the ResultSet property is 1;
- add the second test condition with type Row Count. Set the Row Count property to 1 and change ResultSet to 2.
Finally our unit test should look like this:
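A sketch of the resulting test script (variable names follow the autogenerated template; the guid and name values are illustrative):

```sql
DECLARE @ClientId UNIQUEIDENTIFIER, @ClientName NVARCHAR(100);
SELECT @ClientId = NEWID(), @ClientName = 'John Smith';

-- ResultSet 1: no row with the test guid yet (Row Count = 0)
SELECT ClientId FROM Clients WHERE ClientId = @ClientId;

EXECUTE dbo.CreateClient @ClientId, @ClientName;

-- ResultSet 2: the row has been inserted (Row Count = 1)
SELECT ClientId FROM Clients WHERE ClientId = @ClientId;
```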
Now we are ready to start our unit test.
- Open Test Explorer (Test -> Windows -> Test Explorer)
- Right-click dbo_CreateClientTest, then choose Run Selected Tests, or simply click Run All because we have only one unit test.
- If the test completes successfully, we see a green mark near the test name.
Our test is working, everything seems OK. But…
Check the data in the Clients table (thanks to SSDT, this can now be done right from Visual Studio, or you can use SQL Server Management Studio).
And now we see our test record. Is it funny? I think not. The test data is not deleted after the test has been executed; therefore, the next time we run the test it would fail. Don't forget about it!
It’s not a tragedy because SSDT has capabilities to solve this issue. I’ll write a post about it later.