Offline Help for SQL Server 2014

To use Help in SQL Server 2014 in offline mode, perform the following steps:

1) install the Documentation Components – at a minimum you need the Help Viewer to see the content of the help library

Documentation components

2) download the installer from the Microsoft page Product Documentation for Microsoft SQL Server 2014 for firewall and proxy restricted environments

3) run the installer SQLServer2014Documentation_August2014_EN.exe

4) unzip the files to a folder (I prefer to create a folder manually and unzip the files there)

5) launch Help Settings

This can be done from the Windows Start menu (Microsoft SQL Server 2014 -> Documentation & Community -> Manage Help Settings) or from SQL Server Management Studio (Help -> Manage Help Settings)

6) click Install content from disk in the Help Library Manager window

Install content from disk

7) locate the folder to which you unzipped the files at step 4, select the file HelpContentSetup.msha, then press Open

8) press Next in the Help Library Manager

9) click the links in the Actions column to select the desired sections, then press Update

Help sections

10) after you see the message “Finished updating”, press the Finish button, then press Exit

11) now you can delete the folder with the unzipped files

Tip: to check that you are really using the local help, start Manage Help Settings again, click the first link, Choose online or local help, and ensure the setting is correct.

Local help setting

Implicit Database Upgrade and Internal Database Version Numbers

When you bring a database from a previous version to a SQL Server instance of a higher version, the database is automatically upgraded to the version of the instance. This happens when:

  1. attaching a database from an older version;
  2. restoring a database from an older version;
  3. and, of course, performing an in-place upgrade of SQL Server.

For example, when I attach a database from SQL Server 2008 R2 to SQL Server 2014, I get these messages about the database upgrade.
Converting database 'test' from version 661 to the current version 782.
Database 'test' running the upgrade step from version 661 to version 668.
Database 'test' running the upgrade step from version 668 to version 669.
...
Database 'test' running the upgrade step from version 781 to version 782.
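
For reference, here is a minimal sketch of the attach that produces such messages (the file paths are hypothetical):

-- attach a database copied from an older instance;
-- the upgrade steps are printed to the Messages window during the attach
CREATE DATABASE [test]
ON (FILENAME = N'C:\Data\test.mdf'),
   (FILENAME = N'C:\Data\test_log.ldf')
FOR ATTACH;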

The version numbers you see in the Messages window are internal database version numbers. A list of these numbers can be found in this post.
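
You can also query the internal version of a database directly:

-- returns the internal database version number, e.g. 782 for SQL Server 2014
SELECT DATABASEPROPERTYEX('test', 'Version') AS internal_version;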

Be aware that this is a one-way upgrade. There is no way to downgrade the database, nor to open it in a SQL Server instance of the original version or of any intermediate version prior to the upgraded one. This means that if your database was created in an older version of SQL Server and you then opened it in SQL Server 2014, you can never open it again in any SQL Server version from 2005 through 2012. The error message is self-explanatory.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'test'. CREATE DATABASE is aborted.
Msg 948, Level 20, State 1, Line 1
The database 'test' cannot be opened because it is version 782. This server supports version 663 and earlier. A downgrade path is not supported.

Another issue is that you can NOT prevent SQL Server from upgrading the database. If you make the primary database file read-only in order to get a read-only database and try to open it in a higher version, you’ll get error 3415.
Msg 3415, Level 16, State 2, Line 3
Database 'test' cannot be upgraded because it is read-only, has read-only files or the user does not have permissions to modify some of the files. Make the database or files writeable, and rerun recovery.

Ghost Transaction Log File

I’ve encountered a very intriguing case with transaction log files in SQL Server 2014. Maybe it is present in earlier versions as well.

I open the database properties, click Files, and see two transaction log files. But when I look at the folder in Windows, I see only one. Funny, isn’t it?

Ghost log file

Now I’ll tell you how it happened.

I created a database with a fixed-size transaction log file, then added a second log file. I inserted some data and performed a few actions so that the second log file would be truncated. After that I removed the second log file.

As a result, the physical log file was indeed deleted, but the file is still present in SQL Server. When you look at sys.database_files you can see that the state of the second log file differs from the first.

Ghost log file state

A script to reproduce this case: ghost_transaction_log_file
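
In case the download is unavailable, here is a rough sketch of the repro (file paths are hypothetical, and I can’t guarantee it reproduces the ghost file on every build):

-- create a database and add a second log file
CREATE DATABASE ghost_test;
ALTER DATABASE ghost_test SET RECOVERY FULL;
ALTER DATABASE ghost_test ADD LOG FILE
    (NAME = ghost_test_log2, FILENAME = N'C:\Data\ghost_test_log2.ldf');
BACKUP DATABASE ghost_test TO DISK = N'C:\Backup\ghost_test.bak';
-- ... generate some log records here, so the second file gets cleared ...
BACKUP LOG ghost_test TO DISK = N'C:\Backup\ghost_test.trn';
-- remove the second log file and compare the file list with the disk
ALTER DATABASE ghost_test REMOVE FILE ghost_test_log2;
SELECT name, type_desc, state_desc FROM ghost_test.sys.database_files;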

Missing Performance Monitor Counters for SQL Server 2014

I’ve encountered a strange situation in which I could not find the SQLServer:Databases group of counters in Windows Performance Monitor.

Missing PerfMon counters

It happened on my test system with SQL Server 2014. I checked a production SQL Server 2008 R2 – everything was OK there.

The key to solving this problem was found in the Windows Application log: error 8310.

Error 8310

A simple Google search for the error message led me to a description of this particular problem. It occurs in SQL Server 2014 installations and is corrected by installing Cumulative Update 2. I installed CU7, and now it works.
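
By the way, you can check from T-SQL whether the counters are registered at all:

-- if this returns no rows, the SQLServer:Databases counters are missing
SELECT DISTINCT object_name
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%';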

You can check all CUs in KB2936603 SQL Server 2014 build versions

The latest Service Pack can be checked in KB2958069 How to obtain the latest service pack for SQL Server 2014

Unlimited Size of the Transaction Log File. Really?

When you create a new database in SQL Server and select Unlimited for the log file size, it seems that the log is truly unlimited, i.e. it can grow and grow until the free space on the disk is exhausted.

Unlimited log file size

But when you look at the database file properties, you’ll see another value: SSMS shows 2,097,152 MB. That’s merely 2 TB.

The DMV shows yet another value.

Log file max size

It’s the maximum number of 8 KB pages: 268,435,456 pages × 8 KB = 2 TB. So it’s 2 TB too.

This fact is described in the MSDN article Maximum Capacity Specifications for SQL Server. So a transaction log file can be as large as 2 TB – and no larger!

There’s another curious fact. According to the above-mentioned article, a data file can be at most 16 TB. At the same time, SSMS shows an unlimited size and sys.database_files shows -1.
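
Both values are easy to check in one query (run it in the context of your database):

-- max_size is stored in 8 KB pages; 268435456 pages = 2 TB,
-- and -1 means "unlimited" (in practice 16 TB for a data file)
SELECT name, type_desc, max_size,
       CASE WHEN max_size = -1 THEN NULL  -- unlimited
            ELSE CAST(max_size AS bigint) * 8 / 1024 / 1024 END AS max_size_GB
FROM sys.database_files;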

SQL Server Transaction Log Behavior. Gaps Between Active VLFs

After watching Paul Randal’s webcast Transaction Log Performance on the PASS Virtual Chapter, I asked myself how SQL Server appends new virtual log files (VLFs) and wraps around when some of the VLFs become free for reuse.

Guess #1:

I have committed and uncommitted transactions that were written sequentially to the transaction log of a newly created database, as shown in picture 1.

Picture 1. Committed and uncommitted transactions

I expect that after a transaction log backup the VLFs corresponding to committed transactions will be inactive (they will have status 0 in DBCC LOGINFO).
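
The VLF layout can be inspected like this (DBCC LOGINFO is undocumented but widely used; Status 2 means an active VLF, 0 an inactive one):

DBCC LOGINFO ('test_db');  -- one row per VLF; the database name is hypothetical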

Picture 2. Expected VLF structure

Test

I inserted some data into a newly created table. The VLF structure is shown below.

VLF after transaction 1

VLF #1 is occupied only by transaction 1.

I started a transaction in a second connection and inserted data.

VLF after transaction 2.1

VLF #3 is occupied only by the uncommitted transaction 2.1.

In the first connection I ran transaction 3.

VLF after transaction 3

VLF #5 is occupied only by transaction 3.

In the second connection I ran transaction 2.2 without a commit.

VLF after transaction 2.2

So all eight VLFs, from 1 to 8, are active. I expect that the VLFs occupied only by transactions 1 and 3 (i.e. VLFs #1 and #5) will become inactive after the transaction log backup. But…

VLF after log backup

Only VLF #1 has become inactive and can be reused. VLF #5 is still active.

After I committed transaction 2 and made a transaction log backup, all VLFs except #8 became inactive.

VLF after commit and log backup

Conclusion:

At first glance it seems that inactive VLFs can appear only at the beginning of the transaction log file. SQL Server does not leave a gap of inactive VLFs between active VLFs, even when such a VLF is occupied only by a committed transaction that was actually written to the log backup.

Nevertheless, I’ve found a SharePoint config database with gaps between active VLFs.

VLF gaps

The next interesting thing to investigate is how SQL Server fills inactive VLFs. That will be covered in one of my future posts.

Reading Data from DBF files to SQL Server

There was a need to import an address dictionary into SQL Server. The major problem was that the source files were DBF files (dBASE format). I searched Google for examples of using the OPENROWSET function with DBF and found some, but none of them worked.
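
For the record, here is the typical pattern from those examples (the folder and file names are hypothetical; it failed in my environment as well, most likely because the 32-bit Jet provider is not available to a 64-bit SQL Server):

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'dBASE 5.0;Database=C:\DBF\',
                'SELECT * FROM addresses');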

The only thing that worked was a SQL Server Integration Services (SSIS) package. In its simplest form it consists of two connections (one to the DBF files, the other to SQL Server) and a Data Flow task. The latter contains an OLE DB Source and a SQL Server Destination.

The most important part is how to connect to the DBF files. First, you need to create a Connection Manager. It’s quite simple.

SSIS connection to dBase - 1

  1. Choose Native OLE DB\Microsoft Jet 4.0 OLE DB Provider.
  2. Enter the path to the directory where your DBF files are placed in the “Database file name” field.
  3. Leave User name and Password as is. I think OLE DB doesn’t use them when connecting.
  4. Click the All button on the left sidebar.
  5. Finally, enter “dBASE 5.0” in the “Extended Properties” field.

SSIS connection to dBase - 2

A particular DBF file is selected in the OLE DB Source component. SSIS automatically retrieves the list of all DBF files from the folder that you entered in the Connection Manager.

SSIS connection to dBase - 3

Problem With Duplicate Names of Database Objects And Execution Context in SQL Server

SQL Server has the capability to create many database objects with the same name, as long as they belong to different schemas. Let’s look at how to deal with this.

My simple test case:

  1. Create a new schema tst
  2. Create a new login test
  3. Create a new database user test for that login and assign the schema tst to it
  4. Make the user test the owner of the schema tst
  5. Create two tables with the same name in the dbo and tst schemas
  6. Insert different values into the corresponding tables

The script can be downloaded from db_object_names_resolution
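
In case the download is unavailable, a minimal sketch of these steps (the password is a placeholder):

CREATE SCHEMA tst;
GO
CREATE LOGIN test WITH PASSWORD = 'Str0ng#Pass1';
CREATE USER test FOR LOGIN test WITH DEFAULT_SCHEMA = tst;
ALTER AUTHORIZATION ON SCHEMA::tst TO test;
GO
CREATE TABLE dbo.Clients (Id int, FirstName varchar(50));
CREATE TABLE tst.Clients (Id int, FirstName varchar(50));
INSERT INTO dbo.Clients VALUES (1, 'from dbo');
INSERT INTO tst.Clients VALUES (1, 'from tst');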

Now we can check the rows in both tables. Everything is alright: we see exactly the same values that were inserted earlier. Note that we use schema names when accessing the tables.
explicit schema names

This time we run a SELECT command without the schema name. What should we see: ‘dbo’ or ‘tst’?
The answer is ‘dbo’.

implicit dbo

Now try to change the execution context to user test and run the same command.
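
A sketch of the context switch from the administrator’s connection:

EXECUTE AS USER = 'test';
SELECT * FROM Clients;   -- now resolves to [tst].[Clients]
REVERT;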

implicit tst

We’ve just got the results from the table in the tst schema. Why?
If you look at the User Mapping page in the Login Properties window, you’ll see the user and the default schema for that particular login.

user mapping

Do you see what happens?
When I connect to SQL Server as Administrator, I run the query as user dbo with the default schema dbo. The results are taken from [dbo].[Clients].
When I connect as test, I run the query as user test, whose default schema is tst. The results are taken from [tst].[Clients].

The next question: what happens if the user test runs a query against the table in the dbo schema without specifying the schema name? There are two possible answers: the query will return the results from that table, or it will fail. What do you think?

To sum up, I would recommend using unique names for database objects even if they belong to different schemas.

Hint: a database object’s name is resolved first in the user’s default schema, then in the dbo schema.

NULL Values In WHERE Predicate of SQL Queries (IN, NOT IN And EXISTS). How to Avoid Wrong Results

As you know, SQL Server can hold NULL values in table columns. A NULL usually means that the value of the column is unknown. For example, how would you answer the question about which floor your friend lives on: the 1st or the 2nd, maybe the 5th or the N-th, or would you just say “I’m afraid I don’t know exactly”? NULL values cover exactly this situation.

But when we try to match values, we should be cautious about NULL values. Let’s look at the following examples.
I create two tables with sample data in tempdb.

data
So we have two matches by first name, and one row in the Clients table has no match. You can find the matches with the help of subqueries introduced by IN or EXISTS.
matched rows
OK! We have the correct result. Now we want to find the mismatches. Just add NOT before the subquery.
unmatched rows with null
OK! It works correctly.
Now I insert a new row into the Clients table with no first name (a NULL value in that column).
data with null
The match queries find exactly the same rows as shown above, but the results of the mismatch queries are different.
unmatched rows with null
It’s somewhat surprising that the first query, with NOT IN, could not find the row with the NULL value. To include NULL values you can add an extra comparison like OR FirstName IS NULL to the WHERE clause.
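A condensed sketch of the effect (the second table name, Employees, is my assumption; the screenshots may use another one):

-- NOT IN misses the Clients row whose FirstName is NULL:
SELECT * FROM Clients c
WHERE c.FirstName NOT IN (SELECT e.FirstName FROM Employees e);

-- NOT EXISTS returns it, because it compares row by row:
SELECT * FROM Clients c
WHERE NOT EXISTS (SELECT 1 FROM Employees e
                  WHERE e.FirstName = c.FirstName);

-- the workaround for NOT IN:
SELECT * FROM Clients c
WHERE c.FirstName NOT IN (SELECT e.FirstName FROM Employees e)
   OR c.FirstName IS NULL;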
The T-SQL code snippets can be downloaded from null_values_in_where_predicate
To sum up, I would recommend the following:
  1. Don’t use nullable columns unless you really need them
  2. Test your queries thoroughly for NULL/NOT NULL matches

Assigning Value To Variable in SQL Server: One Big SELECT vs Many Small SELECT/SET

In my previous post I compared one big DECLARE statement with many small ones. The result was that there is no significant difference between the two approaches. This time I wonder whether assignment is sensitive to how it is implemented, that is, whether you use one big SELECT command or many small SELECT or SET commands.

First, I would clarify that there’s no difference to SQL Server in how you write a single assignment – with a SELECT or a SET command. Just look at the execution plan – each command implies an ASSIGN operation.

select and set
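
A trivial pair to compare in the plans:

DECLARE @x int;
SET @x = 1;      -- one ASSIGN operator in the plan
SELECT @x = 2;   -- exactly the same ASSIGN operator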

For the experiment I use the stored procedure from the previous test and write a new one that implements all the assignments in a single SELECT command. If we look at the estimated execution plans, we’ll see that the first stored procedure, OneBigDeclare, has 30 ASSIGN operations, while the stored procedure OneBigDeclareWithBigSelect has only one ASSIGN operation. So the second SP will probably be faster.
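
A sketch of the difference (three variables instead of the 30 used in the real procedures):

DECLARE @a int, @b int, @c int;
-- many small assignments: an ASSIGN operation per statement
SET @a = 1;
SET @b = 2;
SET @c = 3;
-- one big SELECT: a single ASSIGN operation for all the variables
SELECT @a = 10, @b = 20, @c = 30;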

single and composite assignment

All source code can be downloaded from assignment_execution_test

My results
Average execution time of a cycle:
OneBigDeclare – 77.98ms
ManySmallDeclare – 140.22ms

Great! One big assignment is almost twice as fast as 30 single assignments.