This is a step-by-step guide to debugging CLR code in Visual Studio. It is based on the MSDN article Debugging CLR Database Objects. As stated in the comments there, the process can differ between versions of Visual Studio. I'm using Visual Studio 2015 Community Edition with Update 3.
1) Run Visual Studio with administrator rights. This is needed for VS to attach to the SQL Server process.
If you skip this, you might see an error later:
Unable to debug .NET code. Could not attach to SQL Server process on 'VS2015'. Operation not supported. Unknown error: 0x80004005.
2) Open SQL Server Object Explorer in Visual Studio.
3) Right-click the server and select Allow SQL/CLR Debugging.
4) Drill down to the CLR function, right-click it and select Debug Function…
5) In the Debug Function window, enter a value for the input parameter and press OK.
6) Visual Studio will generate a test script and enter debug mode.
7) Now you can press F11 to step into the test script code and the CLR function.
During the last step Windows Firewall may ask you to allow the VS debugger to connect through the network. Please allow it.
Also, Visual Studio will always ask you to attach to the SQL Server process in order to debug CLR code. Press Attach here.
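For reference, the generated test script is essentially a call to the function with the value you entered. A minimal sketch of what it runs (the function name ClrFunction and the parameter are hypothetical):

```sql
-- Roughly what the generated test script does (names are hypothetical)
DECLARE @value int = 5;
SELECT dbo.ClrFunction(@value) AS Result;
```

Stepping with F11 from this script takes you straight into the C# body of the function.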
If you look for an example of how to create CLR code for SQL Server, you may encounter the MSDN article How to: Create and Run a SQL Server User-Defined Function by using Common Language Run-time Integration, but it describes the process for Visual Studio 2010. Here is how to do it in VS 2015.
1) Create or open SQL Server Database Project.
2) Add new item and choose SQL CLR C# User Defined Function type.
3) Write the code, as in this primitive example:
public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction]
    public static SqlDouble ClrFunction(SqlInt32 value)
    {
        double x = value.IsNull ? 0 : (double)value * 2.0;
        return new SqlDouble(x);
    }
}
It’s important to use correct types for SQL Server and C# – SqlDouble and double respectively.
4) Publish the project.
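Under the hood, publishing registers the assembly and creates a T-SQL wrapper for the function. It looks roughly like this (the assembly name comes from your project name and is hypothetical here):

```sql
-- T-SQL wrapper that publishing creates for the CLR function
-- (SqlDouble maps to float, SqlInt32 to int)
CREATE FUNCTION dbo.ClrFunction (@value int)
RETURNS float
AS EXTERNAL NAME [MyDatabaseProject].[UserDefinedFunctions].[ClrFunction];
```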
Even if the project deploys successfully, you still need to enable CLR on the SQL Server instance. Run the following script:
sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'clr enabled', 1;
RECONFIGURE;
If you don't do that, you'll get an error when you try to run the CLR function:
Msg 6263, Level 16, State 1, Line 1
Execution of user code in the .NET Framework is disabled. Enable "clr enabled" configuration option.
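After enabling CLR, the function can be called like any scalar function. Since the ClrFunction above doubles its input:

```sql
SELECT dbo.ClrFunction(21) AS Result;  -- returns 42 (a NULL input yields 0)
```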
I work both with SSDT database projects and with .NET projects based on Entity Framework Code First models. Here are a few words about how these two types of projects handle deployment.
1. SSDT database project
You write Transact-SQL code to create tables, stored procedures, functions and other SQL Server database objects in a declarative style. That means you describe how each object should look in its final state. Don't worry about changes and ALTER commands – Visual Studio handles them during deployment.
In SSDT database project terminology, deployment is called “publishing”. To deploy your database you should find the Publish command 🙂
During publishing, Visual Studio compares the actual state of the database with the state it should have according to your project, and then generates the ALTER script.
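For example, suppose the project declares a table like the following (all names are hypothetical), and the already-deployed database is missing one column:

```sql
-- Declarative definition in the project:
CREATE TABLE dbo.Customer (
    Id   int           NOT NULL PRIMARY KEY,
    Name nvarchar(100) NULL
);

-- If the target database lacks the Name column, the generated
-- publish script will contain something like:
-- ALTER TABLE dbo.Customer ADD Name nvarchar(100) NULL;
```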
2. Code First Migration
You write your model in declarative style too but as a C# class.
To create the database you use the NuGet Package Manager Console to run two commands: Add-Migration and Update-Database.
During migration, Visual Studio looks into the previous migration classes, computes the difference, and creates a new class containing the changes.
Update-Database generates T-SQL commands according to the code in the migration class. The info about applied migrations is saved in the database in the dbo.__MigrationHistory table.
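You can peek at that history table yourself. In EF6 the default table lives in the dbo schema; a quick check looks like this:

```sql
-- List the migrations that have been applied to the database
SELECT MigrationId, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;
```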
The main difference between these two deployment styles is that an SSDT database project produces its change script from the actual database, whereas Code First Migrations do not. So if you use migrations, change your tables only through migrations – direct changes to the database are not taken into account by the migration process.
While working with an Entity Framework Code First model, I faced the need to extend the database with some hand-written T-SQL code. In this scenario the database (primarily, the database tables) is created by an EF migration, while stored procedures, functions and views are created in a separate database project.
Now SSDT needs to know what tables will be in my database in order to resolve references correctly and to let me write queries. Unfortunately, SSDT has no capability to connect to an existing database and let you write code against it. The only option is to add a reference either to an SSDT database project or to a Data-tier Application (DACPAC file). I will use the latter option.
- Create new database project in your solution.
- Extract your database as a DACPAC file (after a successful migration of the EF model, of course).
- Right-click References node inside database project in Solution Explorer.
- Choose Add Database Reference…
- Choose Data-tier Application (.dacpac) and select the file you’ve extracted in step 2.
- Select “Same database” option in Database location dropdown list.
There are a couple of tricks you can do to make your life a bit easier.
- You can write a script that drops the stored procedures and extracts a “clean” DACPAC (tables only). This can be done, for example, with a PowerShell script (see my previous post Extracting DACPAC From SQL Server Database With PowerShell).
- By default SSDT copies the DACPAC file from its original folder into your project. But I want the DACPAC to stay in one place, in the folder with the PowerShell extractor. In that case, go to the Properties of the newly created database reference and change the Copy Local parameter to False.
- In addition, if you're tired of warnings about unresolved references, you can switch them off by checking a box in the Add Database Reference dialog or, in the Properties window, by setting the Suppress Reference Warnings parameter to True.
SQL Server Data Tools (SSDT), the database component of Visual Studio, has a couple of comparison tools for schema and data comparison. They have some default behavior for saving a comparison and choosing a table key. It's quite obvious how they work, but I wanted to test some features.
1. In schema comparison I can uncheck some database objects to exclude them from being applied to the target database. Then I can save the comparison to a file. What happens when I open the saved comparison after the source or target schema has changed?
The question is whether SSDT saves all database object names in the file or not. The answer: SSDT saves only the check/uncheck marks. If your schema has changed, the comparison will show you the exact changes, and previously unchecked items will remain unchecked.
2. Data comparison needs a primary key to match up table rows and see the differences between them. That's reasonable. If a table doesn't have a primary key, it won't be shown for comparison. What else can be used for matching? A unique key?
In fact, SSDT needs keys in both tables. It can use a primary key or a unique key, but not a combination of a primary key in one table and a unique key in the other. If a table has both a primary key and a unique key, SSDT will choose the primary key. That's reasonable too, because a primary key does not allow null values, so it's a bit stricter than a unique key.
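To illustrate, here is a hypothetical table with both kinds of keys; data comparison would match rows by PK_Product, not by UQ_Product_Code:

```sql
CREATE TABLE dbo.Product (
    Id   int           NOT NULL CONSTRAINT PK_Product PRIMARY KEY,
    Code nvarchar(20)  NOT NULL CONSTRAINT UQ_Product_Code UNIQUE,
    Name nvarchar(100) NULL
);
```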
I’ve found an interesting bug in SQL Server Data Tools, version 12.0.50717.0.
I have a database project. After I published the project to SQL server, I wanted to examine the code via SQL Server Object Explorer. I double-clicked the stored procedure, and SSDT showed me the code (at this time it was identical to the code in database project).
Then I closed the tab and opened the code of the same stored procedure in the database project, where I made a slight correction.
Now I return to SQL Server Object Explorer and double-click the SP again. But I see the modified code from the database project. Why? I expected the Explorer to show the code that is on the server.
To be sure, I also checked the code on the SQL Server via SQL Server Management Studio. The code on the server really differs from the code in the database project. So, it's just a bug in SSDT 🙁
Update August 19, 2015
It’s not a bug, it’s just a feature of SSDT 🙂
It depends on the connection that is set in the Debug tab of the database project's Properties window. When you open an object in SQL Server Object Explorer from that same database (the one from the Debug tab), SSDT shows the code from the database project. If it is another database (different from the Debug tab), SSDT shows the code from the server. In the latter case it also shows a different tab name (something like dbo.SP_Name) and an Update button.
When you create a database unit test project, you need to set up connections to your database. I have always used only one connection to run the code and the test conditions. But I was curious: what is the second connection for? Where does that validation occur?
I've now spent some time checking when this validation connection is really used. I created a simple unit test and placed a simple command that checks the user name into every part of the unit test:
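The command in each phase was a simple check of the current login, something like this (the exact function used is my assumption):

```sql
SELECT SUSER_SNAME() AS CurrentLogin;
```

Running it in every phase shows which of the two connections executes that phase.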
Here are the results of my checks.
| Part of the unit test | Connection used |
| --- | --- |
| Pre-test | 2 (“validation”) |
| Test | 1 (“execution”) |
| Post-test | 2 (“validation”) |
As you can see, only the test execution phase uses connection 1 (“execution”), whereas all the other phases use connection 2 (“validation”) if you use two connections. If you use only one connection, all the steps run under that single username.
It's a bit strange that SSDT uses the term “validation” for the connection that runs the pre- and post-test steps. But that's beyond our control 🙂
When you work with SSDT comparison tools like data or schema comparison, you may enter many database connections, so your list of connections ends up looking like the one in the picture below. But there's no button to delete unnecessary connections.
I saw a piece of advice: choose a connection, click Edit and change it to an already existing connection. Visual Studio will store only one of them, so the unnecessary connection will be deleted. Fine! But Visual Studio needs to open the connection before you can edit it, so if the connection is unavailable, VS won't open it.
There's a more radical method via the Windows registry. Open the branch HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\12.0\SSDT\ConnectionMruList and delete the ConnectionN and ConnectionNameN records there.
I've received a note that SQL Server Data Tools has some problems when working with references to another database or server in a database project. I wanted to find out what really happens in SSDT when I import a database with dependencies. I'm using Visual Studio 2013 with Update 4.
I created a database with a stored procedure that uses the following things in its queries:
- linked server;
- four-part name;
- synonym.
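A sketch of what these reference forms look like in a query (all server, database and table names are hypothetical):

```sql
-- Four-part name via a linked server:
SELECT * FROM MyLinkedSrv.OtherDb.dbo.Orders;

-- The same remote table hidden behind a synonym:
CREATE SYNONYM dbo.OrdersSyn FOR MyLinkedSrv.OtherDb.dbo.Orders;
SELECT * FROM dbo.OrdersSyn;
```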
The import of the database into the SSDT database project finished successfully. I noticed that the linked server was not imported. I think that's because a linked server is a server-level object, not a database one. But you can add a linked server to your database project manually.
When I open the stored procedure which queries the linked server, I see a warning about an unresolved reference.
The same warning appears when I open a procedure with a four-part name reference. As for the last case, with a synonym, SSDT does not show any warning. But in all cases we get some difficulties/features:
- IntelliSense does not work on those objects;
- the build passes with a lot of SQL71502 or SQL71562 warnings about unresolved references;
- the build passes even if I've made an error in the name of a referenced table/field/function, etc.;
- errors are discovered only in the deployment phase.
To sum up:
- we can use any form of reference to another database: via a linked server, four-part name notation or a synonym;
- a linked server is not imported into the database project automatically;
- SSDT is unaware of the objects behind these references, which leads to late discovery of errors.
During my session on 24 Hours of PASS 2015 Russian Edition I was asked about the Import database wizard in SQL Server Data Tools. The particular option is “Maximum files per folder”.
This option allows you to split the whole set of database objects into separate folders, each containing at most the specified number of object files. The dropdown list contains three values: 100, 500 and 1000. You can enter a value manually, but the minimum is 20 (SSDT auto-corrects it if you enter a quantity less than 20).
As a result you will get the following structure in your database project.