Monthly Archives: May 2011

RangeS-U – Key-Range Locks in SERIALIZABLE isolation level – (part 2)

In this part I will continue describing the other types of Key-Range locks.

As a reference I will use the same table created in part 1 – range_lock.

RangeS-U Locks

RangeS-U locks are acquired on index key values from a specific range when the following conditions are met:

  • The transaction contains an UPDATE statement.
  • A WHERE clause specifies a condition that limits the rows being updated, and that condition includes at least one column from the index. This defines the range of key value(s) to lock – the key value(s) that refer to the rows being updated.
  • The UPDATE modifies one or more columns, whether or not those columns are part of the index definition.

I hope the explanation will be more straightforward with the example below. I will use a larger range of values, meaning an inequality operation; an equality operation would acquire the same types of locks.

Start Serializable Transaction

Open a new SQL session and issue the query below without committing the transaction.
Until the end of this post this session will be referred to as session 55.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE 
go
begin transaction
go

update range_lock set sname ='surname' where rname between 'anna' and 'arlen'

Analyze the Query Execution Plan

RangeS-U - Execution Plan
You will see a Clustered Index Update and an Index Seek on ix_rname (the rname column) – the seek defines the range of key values to lock.

The update on PK_range_locks happens because here the Primary Key is a Clustered Index, which orders the data rows in the table based on its (Primary Key) key values. The update is made on a column other than the defined Primary Key column. In a Clustered Index, the data rows themselves form the leaf level of the index, so any update of a column other than the Primary Key results in an update of the leaf level of the Clustered Index.

Check Held Locks

Execute:

sp_lock 55

Here is what you would get:

RangeS-U for inequality operation

I see 3 Exclusive (X) locks held on IndId = 1 (the Primary Key), which are needed for the Clustered Index update, and RangeS-U locks acquired on IndId = 2 (ix_rname) on four key values. Going back to part 1 and looking at the table rows and their order, there are 3 rows between “anna” and “arlen”, not 4. The fourth lock is on the next key value after “arlen”: “barry”.
As the name suggests, Shared – Update locks mean that:

  • Other transactions that need to SELECT the same rows inside the range are permitted.
  • Other transactions that update any of the key values from the range are blocked by the first transaction until it is committed or rolled back.

Create Blockage

Open a second SQL session (referred to here as session 58) and execute:

begin transaction 

select rname from range_lock where rname = 'barry'

This statement will work fine. Rollback this transaction.

Inside the same session execute:

begin transaction 

update range_lock set rname = 'barri' where
rname = 'barry'  

You will see that this transaction will be blocked by the first one.

Session 58 is blocked by session 55
Check Locked resources on both transactions

To figure out why the update on the rname column for the “barry” row is blocked by the first update on the sname column for the rows between “anna” and “arlen”, check this out:

Execute:

sp_lock 55 
go
sp_lock 58
go
RangeS-U for inequality operation

I see one WAIT lock held by the second transaction on the same resource on which the first transaction holds a RangeS-U lock. Clearly we are speaking about the same row: “barry”.

Microsoft explains WAIT locks as:

WAIT – Lock is blocked by another process.

Execute ROLLBACK on both sessions (55 and 58) to preserve the initial image of the table.
This is all about RangeS-U locks. In part 3, I will detail RangeX-X locks and RangeX-X combined with RangeS-U locks.

RangeS-S – Key-Range Locks in SERIALIZABLE isolation level (part 1)

Overview
 

The locking mechanism in SQL Server is by far one of the most complex mechanisms, and hard to explain. Blockages and locks are a DBA’s daily problems when confronting concurrent processes that access the same records of the same table. In order to have a high-performance system, a very important aspect to consider is the isolation of concurrent operations. In high-activity systems like web sites, ERPs or online shops there is a huge amount of concurrent transactions that select data from the same tables, modify data and insert new data. In order to isolate some processes, the isolation level of transactions must be changed. The transaction isolation level controls the default transaction locking behavior for all SQL statements issued by a connection.

READ COMMITTED is the default isolation level and the recommended one for most processes. But this level is not always enough for an ACID transaction, because the data can be changed before the end of the transaction, resulting in non-repeatable reads or phantom reads. To completely isolate a transaction that begins with BEGIN TRANSACTION, you can SET TRANSACTION ISOLATION LEVEL to SERIALIZABLE inside a stored procedure. This does not affect other transactions and their isolation levels.
Also, if you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns control the isolation level is reset to the level in effect when the object was invoked. For example, if you set REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns control to the batch.
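The scoping rule above can be sketched as follows (a minimal illustration; the procedure name is my own, and DBCC USEROPTIONS is just one way to inspect the level currently in effect):

```sql
-- Hypothetical procedure that raises the isolation level internally.
CREATE PROCEDURE usp_serializable_work
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- ... work done under SERIALIZABLE ...
END
GO

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
EXEC usp_serializable_work;
GO
-- Back in the batch, the session runs under REPEATABLE READ again;
-- DBCC USEROPTIONS lists the isolation level currently in effect.
DBCC USEROPTIONS;
```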

The SERIALIZABLE isolation level is equivalent to the HOLDLOCK hint and has the same behavior when dealing with locks.

When the SERIALIZABLE isolation level is set, a range lock is placed on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels, and also the least explained on the web, which is mainly the reason I want to write about the locking types inside a SERIALIZABLE transaction.

Key-Range Lock Modes
 

In the SERIALIZABLE isolation level Key-Range locks are used. Serializability implies that any query executed inside a transaction must obtain the same set of data from the beginning until the end of the transaction. The set of data locked inside a serializable transaction is based on a range of key values of the index that covers the rows extracted. It is assured that the values from the locked range will not be modified, and that no new values will be inserted into the same range, by other concurrent transactions. Any attempt of a second transaction to insert, update, or delete any row within the same range as the first one requires a modification to the index. Thus, the second transaction is blocked until the first transaction completes, because key-range locks cover the index entries.

There are 4 types of locks:

  1. RangeS-S – Shared range, shared resource lock; serializable range scan.
  2. RangeS-U – Shared range, update resource lock; serializable update scan.
  3. RangeX-X – Exclusive range, exclusive resource lock; used when updating a key in a range.
  4. RangeI-N – Insert range, null resource lock; used to test ranges before inserting a new key into an index.
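While a serializable transaction is open, these lock modes can be observed directly; the sketch below uses the sys.dm_tran_locks DMV (available from SQL Server 2005 on) as an alternative to the sp_lock calls shown later in this post:

```sql
-- Show key locks held by a given session, including Key-Range modes
-- such as RangeS-S and RangeS-U (replace 55 with your session id).
SELECT resource_type,
       resource_description,   -- hash of the locked key value
       request_mode,           -- e.g. RangeS-S, RangeS-U, RangeX-X, RangeI-N
       request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = 55
  AND  resource_type = 'KEY';
```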

There are a couple of basic things to know about key-range locking that are not mentioned in the MSDN documentation:

  1. In equality operations (“=”, “IN”) – Key-Range locks are not held on key values that are found inside a Unique Index, even if the transaction is issued in the SERIALIZABLE isolation level.

    Example:
    If the ix_rname index created on the range_lock table were Unique, the following statement would not require Key-Range locks; Shared locks would be enough.

    select rname from range_lock where rname = 'anna'
    
  2. In equality operations (“=”, “IN”) – Key-Range locks are held on the “next” key value when the requested key value is not found inside the Index, whether it is Unique or Non-Unique.

    Example:
    Whether the ix_rname index created on the range_lock table is Unique or Non-Unique, the following statement will hold a Key-Range lock on the key value that follows the requested one, because the value from the equality operation is not found inside the index.

    In this example the “annabella” value does not exist, so a Key-Range lock will be put on the next value from the list: “antony”.

    select rname from range_lock where rname = 'annabella'
    
  3. In inequality operations (“>”, “<”, BETWEEN, LIKE, “<>”) – Key-Range locks are held on all key values (found) from the specified range plus the “next” key value outside the range. This is true for both Unique and Non-Unique Indexes.

    Example:
    Whether the ix_rname index created on the range_lock table is Unique or Non-Unique, the following statement will hold Key-Range locks on all key values in the range when using BETWEEN, and on the “next” key value outside the range.

    In the example below, RangeS-S locks will be acquired on all values between “antony” and “barry”. “annabella” is missing, so the starting value of the range will be the next value in alphabetical order after “annabella”. An additional RangeS-S lock will be put on the value after “barry”: “benedict”.

    select rname from range_lock where rname between 'annabella' and 'barry'
    

RangeS-S Locks
 

I will explain each type of locking through examples. In this part I will go through Key-Range Shared – Shared locks. In order not to repeat them in the following parts, I will describe the prerequisites for my tests only here.
I need to create a table and insert a few rows into it. The table will have a Clustered Primary Key and a Non-Clustered Index on one of its three columns.

Here you go:

create table range_lock 
(rid int NOT NULL IDENTITY (1, 1),
rname nvarchar (20),
sname nvarchar (20))

ALTER TABLE range_lock ADD PRIMARY KEY (rid);

CREATE NONCLUSTERED INDEX [ix_rname] ON [dbo].[range_lock] 
(
[rname] ASC
)

Populate the table with names. The rows can be retrieved in alphabetical order thanks to the ix_rname index.

INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('anna')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('antony')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('angel')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('ARLEN')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('BARRY')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('BENEDICT')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('BILL')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('BRYCE')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('CAROL')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('CEDRIC')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('CLINT')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('DARELL')
INSERT INTO [recovery_test_2].[dbo].[range_lock]
           ([rname]) VALUES ('DAVID')

A simple select in the range_lock table will show the inserted values ordered by rname column ascending.

Rows ordered in ascending order by the rname column.


Start Serializable Transaction

Open a new SQL session and issue the query below without committing the transaction.
Until the end of this post this session will be referred to as session 55.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE 
go
Begin Transaction
go
select rname from range_lock where rname = 'anna'

Analyze the Query Execution Plan

RangeS-S locks - Execution Plan
You will see an Index Seek on the ix_rname index, because ix_rname contains the rname column needed in the SELECT. An Index Seek touches only the rows that qualify.

IMPORTANT!!!

If I issued a SELECT on all table columns, the Query Optimizer would perform lookups into the Clustered Index (the Primary Key) to retrieve the rid and sname values. This is not optimal and should be avoided. To avoid it, the rid and sname columns should be added as included columns inside the ix_rname index.
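Such a covering index could be created like this (a sketch; the index name is my own choice):

```sql
-- Covering index: rname is the key and sname is carried at the leaf
-- level; rid (the clustered key) is present in every non-clustered
-- index automatically, so a SELECT of all three columns needs no lookup.
CREATE NONCLUSTERED INDEX ix_rname_covering
ON dbo.range_lock (rname ASC)
INCLUDE (sname);
```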

Check Held Locks

Execute:

sp_lock 55

Here is what you would get:

RangeS-S for equality operation - Non-Unique Index

It is obvious that RangeS-S locks are acquired on IndId = 2 (ix_rname) for two key values.

Remember what I wrote earlier about equality operations and Unique Indexes – that Range locks are not held? Here we deal with a Non-Unique Index, so it is quite normal that RangeS-S locks are held. The question that arises is: why are two RangeS-S locks acquired when the statement only retrieves data about “anna”?
That is because Range locks are acquired on the key value from the “=” operator and on the next key value. This ensures that no row can be inserted, updated or deleted between the requested key and the one after it. It means that the “anna” and “antony” rows become locked if other transactions try to modify them. Let’s see that.

If you look above at the range_lock table content you will see that “antony” comes after “anna”. Next I will try to delete the “antony” row from another transaction running under the default isolation level and see what happens.

Create Blockage

Open a second SQL session (referred to here as session 58) and execute:

begin transaction
go
delete from range_lock where rname ='antony'

The second transaction will wait for the first one to be committed.
 
Execute sp_who2 active to make sure session 58 is blocked by session 55.

58 blocked by 55


Check Locked resources on both transactions

To figure out why the delete operation on “antony” row is blocked by the first select on “anna” row check this out:

Execute:

sp_lock 55 
go
sp_lock 58
go

RangeS-S and CNVT locks

I see one CNVT lock held by the second transaction on the same resource on which the first transaction holds a RangeS-S lock. Clearly we are speaking about the same row: “antony”.

Microsoft explains CNVT locks as:

CNVT – Lock is being converted to another lock. A lock being converted to another lock is held in one mode but is waiting to acquire a stronger lock mode (for example, update to exclusive). When diagnosing blocking issues, a CNVT can be considered similar to a WAIT.

Execute ROLLBACK on both sessions (55 and 58) to preserve the initial image of the table.
This is all about RangeS-S locks. In parts 2 and 3 I will detail more complex locks such as RangeS-U and RangeX-X.

Spring in Zakopane

Zakopane (Poland) is mostly known for winter sports and the great views while skiing or climbing the high Tatra mountains. Many winter events were organized here, like the Nordic World Ski Championships in 1929, 1939, and 1962; the winter Universiades in 1956, 1993, and 2001; the biathlon World Championship; several ski jumping world cups; and several Nordic combined, Nordic and Alpine European Cups. It hosted the Alpine World Ski Championships in 1939, the first outside the Alps and the last official world championships prior to World War II.

Most tourists are keen to come here between December and February for winter holidays in order to ski. There are plenty of great ski trails and ski jumps, for beginners as well as for very experienced skiers.

But Zakopane is not only a winter getaway; it is perfect in spring and summer as well. I visited Zakopane in the middle of April, when the weather reached the highest temperatures of that period: 15-22 degrees Celsius. I can say I was fascinated by this little town in the south of Poland. The mountains are absolutely spectacular, covered in snow at the highest peaks and green at their base. There are a lot of unique places, forests and parks where you can lose yourself and spend the greatest vacation surrounded by the most beautiful mountains I have ever seen.

Zakopane would certainly not be what it is if it were not so close to the Tatra, which many consider the most beautiful mountains in the world. The Tatra Mountains are located in the middle of the massive arch of the Carpathians, mountains which spread over a distance of more than a thousand kilometres and pass through several countries. The Tatra are the highest part of the Carpathian arch.
The highest peak is Gerlach (2,654 m above sea level), located on the Slovak side. On the Polish side the highest peak is Rysy (2,499 m). I will highlight the top excursions that can be done in Zakopane and around it.

 
1. Kasprowy Wierch – cable car (half a day)

The highest peak that can be visited by cable car is Kasprowy Wierch (1,987 m), very popular among tourists. The mountain delineates the border between Poland and Slovakia, and it is possible to cross into either country, weather permitting. As both countries are part of the Schengen zone, this is neither controlled nor forbidden. Kasprowy Wierch is also known as the best place in Poland for advanced skiers and snowboarders. At this time of year (April) it is still possible to ski.

Tatra mountains

Kasprowy Wierch peak

Kasprowy Wierch - meteorological observatory

Kasprowy Wierch

Kasprowy Wierch - a view on Tatra mountain chain

Kasprowy Wierch - a view from the cable car

2. Morskie Oko (one day)

One of the most popular destinations in the vicinity of Zakopane is Morskie Oko, the largest lake in the mountains, at a distance of about 25 km from Zakopane. You can start your trip by car or bus from Zakopane to Lysa Polana. At Lysa Polana, however, you have to park the car and cover the remaining 8 km on foot or by horse-drawn carriage. This place owes its popularity to the incredible views over the Tatra mountains. The most captivating peak is Mnich, which literally means ‘the Monk’; it is called that because its sharply pointed peak resembles a monk’s habit.

Morskie Oko

3. Zakopane walks

It is very relaxing to walk the streets of Zakopane when there is no crowd. In April there were few tourists, and it was just what I needed after a year of busy days. The streets, beginning with the main pedestrian zone of Zakopane – Krupowki street – are lined with so many temptations that you don’t know what to buy first. There are a lot of shops and restaurants. Personally, I don’t like to spend money on little souvenirs, as they have no real use for me. As an alternative, I suggest spending each evening in a new restaurant. Let’s say that I come from Romania, where the cooking art is highly appreciated, but I have never tasted a better steak of any kind (chicken, pork, lamb) in my life. I am so in love with the Polish “shashlik” that I will most surely come back to the same place and repeat the experience.

Zakopane

Zakopane - Krupowki street

Zakopane - here I ate the best steak from my life

Zakopane - cheese specialities

Zakopane - Sabala restaurant, very popular in Zakopane

Zakopane - Church on Krupowki street

4. Krakow (one day)

Krakow is the second most important city in Poland. Situated in the southeast of the country, not far from the Tatra Mountains and right on the Wisla (Vistula) River, it has one of the best-preserved medieval city centres in Europe. There are dozens of churches that cover almost every architectural period, surrounded by monasteries and abbeys. Krakow is surely a must-see city, because it can be seen as a testimony of a bygone era. Points of interest: Rynek Glowny and Wawel (cathedral, university, museum & royal palace), the former residence of Polish kings.

Krakow - Wawel, the Royal Castle from the outside

Krakow - Wawel, the Royal Castle from the inside

5. Extreme adventures (ATV & Rafting)

Now the fun begins. For ATV freaks there are many companies that organize excursions around Zakopane, for beginners and for experienced ATV riders. You will get dirty, but you will be so happy 🙂

Also, rafting is possible in summer on the Dunajec river. You can go with a wooden boat, which in my opinion is not as fun as real rafting with inflatable boats, as there are no waves or sharp descents.

I didn’t have the chance to do either of these because it was still too cold for getting wet and dirty.

Configure MSDTC for Distributed Transactions

A few days ago, inside a software project, I had to enable and configure MSDTC (Distributed Transaction Coordinator) between 2 machines: a Web Server using NServiceBus to process messages in a transaction, and a SQL Server.
I encountered some issues, so I would like to detail the order of a few configuration steps, as well as the testing methods that should be done/checked before MSDTC is fully functional.

 
Step 1: Network connectivity

Make sure both servers can ping each other by machine name, because MSDTC uses NetBIOS names to establish a connection.

Start a command prompt window and execute on both servers:

  • ping [SERVER_NAME1]
  • ping [SERVER_NAME2]

Step 2: Enable MSDTC on both servers.

There are pictures for Windows Server 2008 and Windows Server 2003 because the MSDTC location changes from one Windows edition to another. The options to check are the same though.

  1. Open Component Services:
  2. Component Services

  3. Access DTC Properties
  4. Windows 2003 - MSDTC location

    Windows 2008 - MSDTC location


  5. Enable MSDTC according to the options selected below.
    Check only the red check boxes and click Apply.
     
  6. Windows 2008 - MSDTC properties

    A warning message will pop up to inform you that the Distributed Transaction Coordinator Windows service will be started (or restarted).
    MSDTC restart

  7. Set the startup type of the Distributed Transaction Coordinator Windows service to Automatic.

If you don’t have firewalls that prohibit external access to the machines’ ports, then you can stop here with the MSDTC configuration.
MSDTC will function fine with only the basic configuration (Step 1 & Step 2) when there is no firewall involved in external network communication. What do you do when there is a firewall? In most network configurations a firewall is mandatory for securing external access. If you follow all the steps detailed below, you should be able to make MSDTC work without problems.

Step 3: Restrict MSRPC dynamic port allocation.
 
The MSDTC service depends on the RPC protocol, which stands in front of every Windows process that uses RPC. When you deal with MSDTC, you also have to consider the RPC access points. When the MSRPC protocol is left with its default settings, all depending services, including MSDTC, are free to use a dynamically allocated port in the range 1024-65535. Basically, it means that each restart of MSDTC can result in a different port number. Fortunately you can restrict this port range, which means that instead of creating a firewall rule that opens all ports from 1024 to 65535, you only need to open the restricted range configured in the RPC settings.

There is one thing to be considered though:
 
There can be up to 100 services that depend on RPC and will be affected by this change, so make the range not too small and not too big. From a little reading on the internet, 50-100 ports seem to be a reasonable minimum for the RPC-dependent services to function, but it depends on each machine and how many processes depend on RPC. If you want to find out which they are, look at the RPC service’s Dependencies tab and count the active ones.

RPC_Service

Perform these steps on both machines in order to configure a different port range. The port range does not have to be the same on both machines.

  1. Open the Component Services properties window.
  2. Component Services Properties

  3. Access the Default Protocols tab and insert a new port range.
  4. Change Port Range

Next, in order to be able to start a distributed transaction through the MSDTC service, both participating servers must be configured to trust each other by allowing access to each other’s ports used by the MSDTC service.

Step 4: Add Firewall Inbound rules

on SERVER 1: allow inbound requests to the configured port range.
on SERVER 2: allow inbound requests to the configured port range.
This enables communication between SERVER 1 and SERVER 2 through the MSDTC service.

SERVER 1 will be able to access MSDTC allocated port from SERVER 2 and SERVER 2 will be able to access MSDTC allocated port from SERVER1.

Step 5: Restart both machines

You need to restart both machines where the port range was changed in order for these modifications to take effect.

 
Step 6: Test MSDTC connectivity

After you have done all of the above, you might want to test whether a distributed transaction can be initialized and committed. There are 2 important Microsoft troubleshooting tools that I used and that worked for me: DTCping and DTCtester. I will not detail the testing steps because all of them are covered in this post at step 2 and step 4.

Top 5 Steps To Determine Bad Performance of Queries

Today I will talk about the performance of SQL queries. Query performance is a problem that always has multiple causes, and seen from different angles it yields different solutions. Personally, I have a sort of rule when confronting bad queries: before coming up with a solution to improve the execution of a specific query, I look at the problem from every possible angle and make sure everything is covered according to performance best practices.

First, when does a performance problem occur? Basically there are a few hints:

  • Application users complain about slow-running processes.
  • Developers report bad performance for some queries and need a DBA’s analysis before going to Production with the code.
  • Any of the following resources frequently shows high usage: CPU, Disk, Memory.
  • Frequent blockages occur due to long-running or high-I/O queries.

Next I will go through the most important steps to take when handling a bad query.

1. Analyze Execution Query Plan

Checking the Execution Plan of a query is the first thing to do, because it will give you a lot of information about how the statements are executed and provide estimates of the resource requirements for the statements, as well as their order. It outlines the query plan chosen for a specific query. Do it in one of two ways:

  • From SQL Server Management Studio: at runtime click on “Include Actual Execution Plan”, or without executing the query click on “Display Estimated Execution Plan”.
  • Issue SET SHOWPLAN_ALL ON at runtime. When SET SHOWPLAN_ALL is ON, SQL Server returns execution information for each statement without executing it. This will detail the exact order of each statement.
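The second option can be sketched like this (using the range_lock table from the key-range posts as a stand-in query):

```sql
-- Ask SQL Server for the estimated plan rows instead of executing.
SET SHOWPLAN_ALL ON;
GO
select rname from range_lock where rname = 'anna';
GO
-- Turn it back off before running statements for real.
SET SHOWPLAN_ALL OFF;
GO
```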

Next, look inside the query plan for statements whose Query Cost relative to the batch is higher than that of the rest of the statements.

Pay attention to following events:

  • Index / Table Scan – shows the need for an index update or index creation. Check that every join or filtering (=, >, <) operation has an index on the column(s) used to filter data. For better performance, include the columns from the select statement in the index structure as included columns; this avoids Key Lookups on the Primary Key. Sometimes the correct index is being used but an Index Scan is still performed: this can happen if the number of rows returned is bigger than half of the total rows of the table, in which case SQL Server will choose a scan rather than an Index Seek.
  • Index Create hints – double check the index hint syntax and make sure you don’t already have an index that can be updated with additional columns.
  • Use of the CONVERT_IMPLICIT function in Index Scan operations – in compare, equality or inequality operations use the same datatypes. Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. SQL Server uses this function to transform the value to another datatype before performing the actual operation, which can take a lot of unnecessary I/O. I saw many Stored Procedures with declared variables of varchar datatype that were being compared against nvarchar table columns; same with int / bigint. This is not needed and is a frequent programming error that has to be fixed.
  • Index Update – a Clustered Index update when an index key is modified is normal, but Non-Clustered Index updates should be avoided, since every index update puts an extra load on SQL Server. Having too many indexes leads to many index updates. Check the current indexes on the specific table and decide whether you really need all of them. Remember that indexes are good for reading data, and bad for data modifications.
  • Nested Loops and Key Lookups – show the need for covering indexes.
  • Sorts – try to use indexes instead, which can avoid the sort operation.
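The implicit-conversion point can be illustrated with a sketch; the table here is hypothetical, chosen so that the indexed column is varchar and the probing variable is nvarchar (the direction in which the column itself gets converted, due to datatype precedence):

```sql
-- Hypothetical table: code is varchar and indexed.
CREATE TABLE dbo.conv_demo (code varchar(20) NOT NULL);
CREATE NONCLUSTERED INDEX ix_code ON dbo.conv_demo (code);

-- Anti-pattern: nvarchar has higher datatype precedence, so the plan
-- applies CONVERT_IMPLICIT to the code column and scans the index.
DECLARE @p nvarchar(20) = N'abc';
SELECT code FROM dbo.conv_demo WHERE code = @p;

-- Fix: match the column's datatype; the predicate can now seek.
DECLARE @p2 varchar(20) = 'abc';
SELECT code FROM dbo.conv_demo WHERE code = @p2;
```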

2. Analyze the query with the Tuning Advisor

This is an effortless double-check measure for a query after the execution plan is analyzed. Create an .sql file with the query code and load it into the Tuning Advisor. It will create a report if it considers that there are missing indexes or statistics that could improve performance. The Tuning Advisor reports hints along with the syntax to create the indexes, and shows an improvement percentage if the suggested modifications are made.

3. Use Profiler to track query compilation/re-compilation frequency.

Use Profiler to see if your Query has many compilations or re-compilations that can delay the query execution. I wrote a special post about this particular step that can be read at this link.

4. Avoid Parameter Sniffing

This applies to Stored Procedures with input parameters only. Parameter sniffing is the process whereby SQL Server creates an optimal plan for a stored procedure by using the calling parameters that are passed the first time the stored procedure is executed. When a Stored Procedure is executed for the very first time, the Query Optimizer compiles the query and creates an optimal execution plan that best suits the passed parameters. In order to save re-compilation time, the execution plan will be reused at subsequent executions of the procedure if it is still in the cache.

In theory, there is no worry if each subsequent procedure call is done with the same parameter values. The difference appears when a call of the procedure is not the same as the compiled one, in terms of the parameter values passed as well as the number of rows returned.

So, if you have a Stored Procedure that runs slowly, the first thing to do is to check whether the execution plan is optimal for the result set returned.

  • Tune the procedure’s body script in SSMS (SQL Server Management Studio)

Execute the procedure's body script inside SSMS with a set of predefined variable values – the same ones encountered at the slow execution of the stored procedure with parameters. Before running, enable "Include Actual Execution Plan" so that you can see the generated plan and have a starting point for comparison. If the duration is considerably smaller than in previous executions, then you should spend some time analyzing this behavior.
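A simple way to time such a test run in SSMS is to wrap the body script in STATISTICS TIME/IO. The sketch below reuses the AdventureWorks query from the example later in this post, with `@FIRST` standing in for the slow call's parameter value:

```sql
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
GO
-- Procedure body executed as a plain script, parameter replaced by a variable.
DECLARE @FIRST int = 2;  -- same value seen in the slow procedure call
SELECT name, [ProductSubcategoryID]
FROM AdventureWorks.production.product
WHERE [ProductSubcategoryID] = @FIRST;
GO
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

The Messages tab will then show CPU time, elapsed time, and logical reads to compare against the slow executions.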

  • Tune the long running procedure execution with Profiler.

Inside a Profiler trace add these events:

Profiler Events to display Execution Plan

Filter by Database Name to find the procedure call more easily and click Run. After you have captured some results, check the Execution Plan in the Showplan All event for the SPID showing your procedure call. Compare this Execution Plan to the one from SSMS.

  • Avoid Parameter Sniffing

To avoid parameter sniffing, the stored procedure can be written in a different way with identical output. Instead of using the calling parameters that are passed at execution, replace them with local variables that receive the values of the calling parameters. This little correction leads the Query Optimizer to rely on table statistics and create a Query Plan based on an average of the real values passed as parameters.

A second option would be to use WITH RECOMPILE in the procedure definition. SQL Server will not cache the query plan for this procedure, forcing it to be compiled each time it is executed.
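Applied to the example procedure used later in this post, the option sits between the parameter list and AS (the procedure name here is illustrative):

```sql
CREATE PROCEDURE [PROCEDURE_WITH_RECOMPILE] (@FIRST int)
WITH RECOMPILE  -- plan is never cached; compiled fresh on every execution
AS
SELECT name, [ProductSubcategoryID]
FROM AdventureWorks.production.product
WHERE [ProductSubcategoryID] = @FIRST;
GO
```

The trade-off is paying the compilation cost on every call, so this fits procedures whose optimal plan genuinely changes between calls.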

Or you could execute the procedure with this hint:

EXEC Procedure 'value' WITH RECOMPILE

The compilation will be done only this time, and the new execution plan applies only to this particular call of the procedure.

  • Check this example of handling Parameter Sniffing on AdventureWorks database:

Procedure containing parameter sniffing:

CREATE PROCEDURE [DBO].[PROCEDURE_PARAM_SNIFFING] (@FIRST int)
AS
select name, [ProductSubcategoryID] from AdventureWorks.production.product where [ProductSubcategoryID] = @FIRST
GO

Execute from 2 different sessions:

EXECUTE [PROCEDURE_PARAM_SNIFFING] 2
EXECUTE [PROCEDURE_PARAM_SNIFFING] 7

The call of [PROCEDURE_PARAM_SNIFFING] with different parameters will use the same Execution Plan and scan a range of 43 rows, even though the second execution returns only one row.

Execution Plan – Parameter Sniffing

Procedure re-written to avoid parameter sniffing:

CREATE PROCEDURE [PROCEDURE_WITHOUT_PARAM_SNIFF] (@FIRST int)
AS
DECLARE @LOC_FIRST int = @FIRST
select name, [ProductSubcategoryID] from AdventureWorks.production.product where [ProductSubcategoryID] = @LOC_FIRST
go

Execute from 2 different sessions:

EXECUTE [PROCEDURE_WITHOUT_PARAM_SNIFF] 2
EXECUTE [PROCEDURE_WITHOUT_PARAM_SNIFF] 7

The call of [PROCEDURE_WITHOUT_PARAM_SNIFF] with different parameters will still use the same Execution Plan, but the scan range will represent an average (7.97) over all values grouped by ProductSubcategoryID. The estimated number of rows is not related to the first compilation of this procedure; rather, it is calculated from the table statistics.

Execution Plan – Avoid Parameter Sniffing

5. Check Query Parallelism

This applies to multi-processor servers. Quoting MSDN: SQL Server provides parallel queries to optimize query execution and index operations for computers that have more than one microprocessor (CPU). Because SQL Server can perform a query or index operation in parallel by using several operating system threads, the operation can be completed quickly and efficiently. During query optimization, SQL Server looks for queries or index operations that might benefit from parallel execution.

The SQL Server configuration option that manages the number of CPUs for the SQL Server process is called max degree of parallelism and is set at server level. With this option enabled (via sp_configure), the SQL Engine will use multiple threads to manage the so-called parallel query execution plan. This can bring big performance improvements for complex transactions that need to be processed all at once in multiple threads, taking advantage of the powerful CPUs in place. But sometimes a simple query, which does not need to be divided into threads, takes longer with parallelism enabled than without it. It is all about testing and figuring out which option is best.
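Changing the setting at server level looks like the following sketch; the value 4 is only an example and should come from your own testing:

```sql
-- 'max degree of parallelism' is an advanced option, so advanced
-- options must be made visible first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example value only: limit parallel plans to 4 CPUs server-wide.
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```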

The degree of parallelism can be overridden for individual query executions if the MAXDOP hint is used. MAXDOP can be used to reduce the number of CPUs used in multi-processing. I found a nice explanation from Pinal Dave about MAXDOP that can help you get started with the analysis: SQL Server Maxdop settings.
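At statement level the hint goes in an OPTION clause; reusing the AdventureWorks query from the earlier example:

```sql
-- Force a serial plan for this statement only, regardless of the
-- server-wide 'max degree of parallelism' setting.
SELECT name, [ProductSubcategoryID]
FROM AdventureWorks.production.product
WHERE [ProductSubcategoryID] = 2
OPTION (MAXDOP 1);
```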

To conclude this post, the top 5 steps in analyzing a query's bad performance before heading to fast solutions are:

1. Analyze Execution Query Plan
2. Analyze the query with Tuning Advisor
3. Use Profiler to track query compilations/re-compilations frequency.
4. Avoid Parameter Sniffing
5. Check Query Parallelism