Spring in Zakopane

Zakopane (Poland) is mostly known for winter sports and for the great views while skiing or climbing in the high Tatra mountains. Many winter events have been organized here, such as the Nordic World Ski Championships in 1929, 1939, and 1962; the Winter Universiades in 1956, 1993, and 2001; the Biathlon World Championship; several Ski Jumping World Cups; and several Nordic Combined, Nordic, and Alpine European Cups. It also hosted the Alpine World Ski Championships in 1939, the first held outside the Alps and the last official world championships before World War II.

Most tourists are keen to come here between December and February for winter holidays in order to ski. There are plenty of great ski trails and ski jumps, for beginners as well as for very experienced skiers.

But Zakopane is not only a winter getaway; it is perfect in spring and summer as well. I visited Zakopane in the middle of April, when the weather reached its highest temperatures for that period: 15-22 degrees Celsius. I can say I was fascinated by this small town in the south of Poland. The mountains are absolutely spectacular, covered in snow at the highest peaks and green at their base. There are a lot of unique places, forests, and parks where you can lose yourself and spend the greatest vacation surrounded by the most beautiful mountains I have ever seen.

Zakopane would certainly not be what it is if it were not so close to the Tatra, which many consider the most beautiful mountains in the world. The Tatra Mountains lie in the middle of the massive arch of the Carpathians, a range that spreads over a distance of more than a thousand kilometres and passes through several countries. The Tatra are the highest part of the Carpathian arch.
The highest peak is Gerlach (2,654 m above sea level), located on the Slovak side. On the Polish side the highest peak is Rysy (2,499 m). Below I will highlight the top excursions that can be done in and around Zakopane.

1. Kasprowy Wierch – cable car (half a day)

The highest peak that can be visited by cable car is Kasprowy Wierch (1,987 m), very popular among tourists. The mountain delineates the border between Poland and Slovakia, and it is possible to cross into either country, weather permitting. As both countries are part of the Schengen zone, the crossing is neither controlled nor forbidden. Kasprowy Wierch is also known as the best place in Poland for advanced skiers and snowboarders. At this time of year (April) it is still possible to ski.

Tatra mountains

Kasprowy Wierch peak

Kasprowy Wierch - meteorological observatory

Kasprowy Wierch

Kasprowy Wierch - a view on Tatra mountain chain

Kasprowy Wierch - a view from the cable car

2. Morskie Oko (one day)

One of the most popular destinations in the vicinity of Zakopane is Morskie Oko, the largest lake in the mountains, at a distance of about 25 km from Zakopane. You can start your trip by car or bus from Zakopane to Lysa Polana. At Lysa Polana, however, you have to park the car and cover the remaining 8 km on foot or by horse-drawn carriage. The place owes its popularity to the incredible views over the Tatra mountains. The most captivating peak is Mnich, which literally means ‘the Monk’: its sharply pointed summit resembles a monk’s hooded habit.

Morskie Oko


3. Zakopane walks

It is very relaxing to walk along Zakopane’s streets when there is no crowd. In April there were few tourists, and it was just what I needed after a year of busy days. The streets, beginning with Zakopane’s main pedestrian zone, Krupowki street, are lined with so many temptations that you don’t know what to buy first. There are a lot of shops and restaurants. Personally, I don’t like to spend money on little souvenirs, as they have no real use for me. As an alternative, I suggest spending each evening in a new restaurant. Let’s say that I come from Romania, where the art of cooking is highly appreciated. Even so, I have never tasted a better steak of any kind (chicken, pork, lamb) in my life. I am so in love with the Polish “shashlik” that I will most surely come back to the same place and repeat the experience.



Zakopane - Krupowki street


Zakopane - here I ate the best steak from my life

Zakopane - cheese specialities

Zakopane - Sabala restaurant, very popular in Zakopane

Zakopane - Church on Krupowki street

4. Krakow (one day)

Krakow is the second most important city in Poland. Situated in the southeast of the country, on the Wisla (Vistula) River not far from the Tatra Mountains, it has one of the best-preserved medieval city centres in Europe. There are dozens of churches that cover almost every architectural period, surrounded by monasteries and abbeys. Krakow is surely a must-see city, because it stands as a testimony of a bygone era. Points of interest: Rynek Glowny and Wawel (cathedral, university, museum & royal palace), the former residence of Polish kings.

Krakow - Wawel, the Royal Castle, from the outside

Krakow - Wawel, the Royal Castle, from the inside

5. Extreme adventures (ATV & Rafting)

Now the fun begins. For ATV freaks, there are many companies that organize excursions around Zakopane, both for beginners and for experienced ATV riders. You will get dirty, but you will be so happy :)

Also, rafting is possible in summer on the Dunajec river. You can go on a wooden boat, which in my opinion is not as much fun as real rafting with inflatable boats, as there are no waves or sharp descents.

I didn’t have the chance to do either of these, because it was still too cold for getting wet and dirty.

Configure MSDTC for Distributed Transactions

A few days ago, on a software project, I had to enable and configure MSDTC (Microsoft Distributed Transaction Coordinator) between two machines: a web server using NServiceBus to process messages in a transaction, and a SQL Server.
I encountered some issues, so I would like to detail the order of a few configuration steps, as well as testing methods that should be done/checked before MSDTC is fully functional.

Step 1: Network connectivity

Make sure both servers can ping each other by machine name, because MSDTC uses NetBIOS to establish a connection.

Start a command prompt window and execute on both servers:

  • ping [SERVER_NAME1]
  • ping [SERVER_NAME2]

Step 2: Enable MSDTC on both servers.

There are pictures for Windows Server 2008 and Windows Server 2003 because the MSDTC location changed from one Windows edition to another. The options to check are the same, though.

  1. Open Component Services.
     Component Services

  2. Access the DTC Properties.
     Windows 2003 - MSDTC location
     Windows 2008 - MSDTC location

  3. Enable MSDTC according to the options selected below. Check only the red check boxes and click Apply.
     Windows 2008 - MSDTC properties

     A warning message will pop up to inform you that the Distributed Transaction Coordinator Windows service will be started (or restarted).
     MSDTC restart

  4. Set the startup type of the Distributed Transaction Coordinator Windows service to Automatic.

If you don’t have firewalls that prohibit external access to the machines’ ports, then you can stop here with the MSDTC configuration.
MSDTC will function fine with only the basic configuration (Step 1 & Step 2) when no firewall is involved in external network communication. What do you do when there is a firewall? In most network configurations it is mandatory for securing external access. If you follow all the steps detailed below, you should be able to make MSDTC work without problems. Take a look at the next steps.

Step 3: Restrict MSRPC dynamic port allocation.
The MSDTC service depends on the RPC protocol, which stands in front of every Windows process that uses RPC. When you deal with MSDTC, you also have to consider the RPC access points. When the MSRPC protocol is left at its default setting, every depending service, including MSDTC, is free to use a dynamically allocated port in the range 1024-65535. Basically, this means that each restart of MSDTC can result in a different port number. Fortunately, you can restrict this port range: instead of creating a firewall rule that opens all ports from 1024 to 65535, you only need to open the restricted range configured in the RPC settings.

There is one thing to consider, though: there can be up to 100 services that depend on RPC, and all of them will be affected by this change. Don’t make the range too small... or too big. From a little reading on the internet, 50-100 ports seems to be the minimum for RPC-dependent services to keep functioning, but again it depends on each machine and how many processes depend on RPC. If you want to find out which services these are, look at the RPC service’s Dependencies tab and count the active ones.


Perform these steps on both machines to configure the new port range. The port range does not have to be the same on both machines.

  1. Open the Component Services properties window.
     Component Services Properties

  2. Access the Default Protocols tab and insert a new port range.
     Change Port Range
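If you prefer scripting over the Component Services UI, the same restriction can be applied through the registry keys Microsoft documents for RPC dynamic port allocation. A sketch; the 5000-5200 range is only an example value, size it for your own machine:

```shell
REM Restrict RPC dynamic ports to 5000-5200 (a reboot is required to take effect)
reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v Ports /t REG_MULTI_SZ /d "5000-5200" /f
reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v PortsInternetAvailable /t REG_SZ /d Y /f
reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v UseInternetPorts /t REG_SZ /d Y /f
```

This edits the same setting as the Default Protocols tab, so use one method or the other, not both.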

Next, in order to be able to start a distributed transaction through the MSDTC service, both participating servers must be configured to trust each other by allowing access to each other’s ports used by the MSDTC service.

Step 4: Add Firewall Inbound rules

on SERVER 1: to allow inbound requests to the port range configured.
on SERVER 2: to allow inbound requests to the port range configured.
This will enable the communication between SERVER 1 and SERVER 2 through MSDTC service.

SERVER 1 will be able to access the MSDTC allocated port on SERVER 2, and SERVER 2 will be able to access the MSDTC allocated port on SERVER 1.
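On Windows Server 2008 these inbound rules can also be added from an elevated command prompt. A sketch, assuming a hypothetical restricted range of 5000-5200 from Step 3; adjust the range and rule names to your setup:

```shell
REM Run on each server: allow the RPC endpoint mapper (port 135)
netsh advfirewall firewall add rule name="RPC endpoint mapper" dir=in action=allow protocol=TCP localport=135
REM ... and the restricted RPC dynamic port range used by MSDTC
netsh advfirewall firewall add rule name="MSDTC RPC range" dir=in action=allow protocol=TCP localport=5000-5200
```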

Step 5: Restart both machines

You need to restart both machines where the port range was changed in order for these modifications to take effect.

Step 6: Testing MSDTC connectivity
After you have done all of the above, you might want to test whether a distributed transaction can be initialized and committed. There are two important Microsoft troubleshooting tools that I used and that worked for me: DTCPing and DTCTester. I will not detail the testing steps, because all of them are covered in this post at step 2 and step 4.

Top 5 Steps To Determine Bad Performance of Queries

I will talk today about the performance of SQL queries. Query performance is a problem that always has multiple causes, and seen from different angles it yields different solutions. Personally, I have a sort of rule when confronting bad queries: before coming up with a solution to improve the execution of a specific query, I look at the problem from every possible angle and make sure everything is covered according to performance best practices.

First, when does a performance problem occur? Basically, there are a few hints:

  • Application users complain about slow-running processes.
  • Developers report bad performance for some queries and need a DBA analysis before the code goes to Production.
  • Any of the following resources is frequently at high usage: CPU, disk, memory.
  • Frequent blockages occur due to long-running or high-I/O queries.

Next, I will go through the most important steps to take when handling a bad query.

1. Analyze Execution Query Plan

Checking the execution plan of a query is the first thing to do, because it gives you a lot of information about how the statements are executed and provides estimates of the resource requirements for the statements, as well as their order. It outlines the query plan chosen for a specific query. You can do it in two ways:

  • From SQL Server Management Studio: at runtime click on “Include Actual Execution Plan”, or, without executing the query, click on “Display Estimated Execution Plan”.
  • Issue SET SHOWPLAN_ALL ON at runtime. When SET SHOWPLAN_ALL is ON, SQL Server returns execution information for each statement without executing it. This details the exact order of each statement.
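A minimal example of the second option (the table and column names here are hypothetical):

```sql
SET SHOWPLAN_ALL ON;
GO

-- Not executed: SQL Server returns one row per plan step instead of running the query
SELECT c1 FROM t1 WHERE c2 = 10;
GO

SET SHOWPLAN_ALL OFF;
GO
```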

Next, look inside the query plan for statements whose query cost relative to the batch is higher than the rest of the statements.

Pay attention to following events:

  • Index / Table Scan – shows the need for an index update or index creation. Check that every join or filtering (=, >, <) operation has an index on the column(s) used to filter data. For better performance, include the columns from the SELECT statement in the index structure as included columns; this avoids key lookups. Sometimes the correct index is used but an index scan is still performed: this can happen if the number of rows returned is bigger than about half the total rows in the table, in which case SQL Server chooses a scan over an index seek.
  • Index create hints – double-check the index hint syntax and make sure you don’t already have an index that could be updated with additional columns.
  • Use of the CONVERT_IMPLICIT function in index scan operations – in comparison, equality, or inequality operations, use the same datatype on both sides. Implicit conversions are conversions that occur without specifying either the CAST or CONVERT function. SQL Server uses this function to transform the value to another datatype before performing the actual operation, which can cost a lot of unnecessary I/O. I have seen many stored procedures in which variables declared as varchar were compared against nvarchar table columns; the same goes for int / bigint. This is not needed and is a frequent programming error that has to be fixed.
  • Index update – a clustered index update when an index key is modified is normal, but non-clustered index updates should be avoided, since every index update puts an extra load on SQL Server. Having too many indexes leads to many index updates. Check the current indexes on the specific table and decide whether you really need all of them. Remember that indexes are good for reading data and bad for data modifications.
  • Nested Loops and Key Lookups – show the need for covering indexes.
  • Sorts – try to use indexes instead, which can avoid the sort operation.
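To illustrate the CONVERT_IMPLICIT point above, a sketch on a hypothetical table (assume t1.c2 is varchar(50) with a non-clustered index):

```sql
DECLARE @nv nvarchar(50)
DECLARE @v  varchar(50)
SET @nv = N'abc'
SET @v  = 'abc'

-- nvarchar has higher datatype precedence, so the indexed varchar column is
-- wrapped in CONVERT_IMPLICIT and the index can no longer be seeked efficiently
SELECT c1 FROM t1 WHERE c2 = @nv

-- matching datatypes: a plain index seek is possible
SELECT c1 FROM t1 WHERE c2 = @v
```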

2. Analyze the query with the Tuning Advisor

This is an effortless double-check for a query after the execution plan has been analyzed. Create an .sql file with the query code and load it into the Tuning Advisor. SQL Server will create a report if it considers that there are missing indexes or statistics that could improve performance. The Tuning Advisor reports hints, along with the syntax to create the indexes, and shows an improvement percentage if the suggested modifications are applied.

3. Use Profiler to track query compilation/recompilation frequency.

Use Profiler to see whether your query has many compilations or recompilations that can delay query execution. I wrote a separate post about this particular step, which can be read at this link.

4. Avoid Parameter Sniffing

This applies only to stored procedures with input parameters. Parameter sniffing is the process whereby SQL Server creates an optimal plan for a stored procedure by using the calling parameters passed the first time the procedure is executed. When a stored procedure is executed for the very first time, the Query Optimizer compiles the query and creates an execution plan that best suits the passed parameters. To save recompilation time, this execution plan is reused on subsequent executions of the procedure, as long as it is still in the cache.

In theory, there is no problem if each subsequent procedure call uses the same parameters and values. The difference appears when a call of the procedure does not match the compiled one in terms of the parameters passed or the number of rows returned.

So, if you have a stored procedure that runs slowly, the first thing to do is check whether the execution plan is optimal for the result set returned.

  • Tune the procedure’s body script in SSMS (SQL Server Management Studio)

Execute the procedure’s body script inside SSMS with a set of predefined variable values – the same ones encountered at the slow execution of the stored procedure with parameters. At runtime, click “Include Actual Execution Plan” so that you can see the generated plan and have a comparison starting point. If the duration is considerably smaller than in previous executions, then you should spend some time analyzing this behavior.

  • Tune the long-running procedure execution with Profiler.

Inside a Profiler trace add these events:

Profiler Events to display Execution Plan
Filter by Database Name to find the procedure call more easily, and click Run. After you have caught some results, check the execution plan in the Showplan All event for the SPID of your procedure call. Compare this execution plan to the one from SSMS.

  • Avoid Parameter Sniffing

To avoid parameter sniffing, the stored procedure can be rewritten in a different way with identical output. Instead of using the calling parameters passed at execution time directly, assign them to local variables and use those in the query. This little correction leads the Query Optimizer to rely on the column statistics and create a query plan based on an average of the real values passed as parameters.

A second option is to use WITH RECOMPILE in the procedure body. SQL Server will not cache the query plan for this procedure, forcing it to be compiled each time it is executed.

Or you could execute the procedure with this hint:

EXEC Procedure 'value' WITH RECOMPILE

The compilation will be done this time only, and for this procedure only. The new execution plan created applies only to this call of the procedure.

  • Check this example of handling parameter sniffing on the AdventureWorks database:

Procedure containing parameter sniffing:

select name, [ProductSubcategoryID] from AdventureWorks.production.product where [ProductSubcategoryID] = @FIRST
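The statement above runs inside the procedure body; the full definition is not shown in the post, but it presumably looks something like this (a sketch, assuming @FIRST is an int parameter):

```sql
CREATE PROCEDURE PROCEDURE_PARAM_SNIFFING (@FIRST int)
AS
BEGIN
    -- The plan is optimized for whatever value of @FIRST is sniffed
    -- at the first execution, then reused for later calls
    SELECT name, [ProductSubcategoryID]
    FROM AdventureWorks.production.product
    WHERE [ProductSubcategoryID] = @FIRST
END
```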

Execute from 2 different sessions:


The call of [PROCEDURE_PARAM_SNIFFING] with different parameters will use the same execution plan and scan a range of 43 rows, even though the second execution returns only one row.


Execution Plan – Parameter Sniffing
Procedure re-written to avoid parameter sniffing:

select name, [ProductSubcategoryID] from AdventureWorks.production.product where [ProductSubcategoryID] = @LOC_FIRST
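Again the full definition is not shown; a sketch of the rewrite, assuming the same int parameter copied into the local variable @LOC_FIRST:

```sql
CREATE PROCEDURE PROCEDURE_WITHOUT_PARAM_SNIFF (@FIRST int)
AS
BEGIN
    -- Copying the parameter into a local variable hides the sniffed value,
    -- so the optimizer estimates from average column statistics instead
    DECLARE @LOC_FIRST int
    SET @LOC_FIRST = @FIRST

    SELECT name, [ProductSubcategoryID]
    FROM AdventureWorks.production.product
    WHERE [ProductSubcategoryID] = @LOC_FIRST
END
```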

Execute from 2 different sessions:


The call of [PROCEDURE_WITHOUT_PARAM_SNIFF] with different parameters will still use the same execution plan, but the scan range will represent an average (7.97) over all values grouped by ProductSubcategoryID. The estimated number of rows is not related to the first compilation of this procedure; rather, it is calculated from the table’s statistics.


Execution Plan – Avoid Parameter Sniffing
5. Check Query Parallelism

This applies to multi-processor servers. Quoting MSDN: SQL Server provides parallel queries to optimize query execution and index operations for computers that have more than one microprocessor (CPU). Because SQL Server can perform a query or index operation in parallel by using several operating system threads, the operation can be completed quickly and efficiently. During query optimization, SQL Server looks for queries or index operations that might benefit from parallel execution.

The SQL Server configuration option that manages the number of CPUs available to the sqlserver process is called max degree of parallelism and is set at the server level. With this option enabled (via sp_configure), the SQL engine uses multiple threads to manage the so-called parallel query execution plan. It can bring a big performance improvement for complex transactions that need to be processed all at once in multiple threads, taking advantage of the powerful CPUs in place. But sometimes a simple query, which does not need to be divided into threads, takes longer with parallelism enabled than without it. It is all about testing and figuring out which option is best.

The degree of parallelism can be overridden for individual query executions with the MAXDOP hint. MAXDOP can be used to reduce the number of CPUs used in multi-processing. I found a nice explanation by Pinal Dave about MAXDOP that can help to start the analysis: SQL Server Maxdop settings.
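For example, to keep a single statement on one CPU regardless of the server-wide setting (the query itself is just an illustration):

```sql
SELECT name, [ProductSubcategoryID]
FROM AdventureWorks.production.product
ORDER BY name
OPTION (MAXDOP 1)  -- applies to this statement only; the server-wide setting is untouched
```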

To conclude this post, the top 5 steps in analyzing a query’s bad performance, before rushing to quick solutions, are:

1. Analyze Execution Query Plan
2. Analyze the query with the Tuning Advisor
3. Use Profiler to track query compilation/recompilation frequency.
4. Avoid Parameter Sniffing
5. Check Query Parallelism

Transfagarashan – a magical road across the mountains

I will start the travel category with a unique place located where I live: in Romania. A beautiful and breathtaking panorama situated at 2,042 m altitude.

In order to reach this place you don’t have to be a trained mountain climber. Pack your bags, gather a few friends, find a car, and start the adventure. There is an actual paved road that winds across the mountain from one side to the other like a meander. Many tourists from all over the world come to see this place every summer. It is worth spending at least 3 days here enjoying the sights.

It is called the Transfagarashan. It links two historical Romanian regions: Muntenia and Transilvania.

The road is not open to tourists the entire year because of bad weather and snow. It is reachable by car only from July to October each year. At the top you will find “Balea Lac”, a lake in the heart of the mountains with a splendid view over the Fagaras mountains – the highest mountain chain in Romania, part of the Southern (Meridional) Carpathians.

In winter, basically between 1 November and 30 June, the sector between km 104 (“Piscul Negru”) and km 130 (“cabana Balea Cascada”) is closed to road traffic, and you can get there only by cable car from “cabana Balea Cascada”. Near lake “Balea”, to the south-east, there is another lake, “Capra”, which can be reached only by a mountain route.

Portion of the road closed in winter time


Glacial Lake "Capra" at 2230 m altitude

A view on the meander


Lake "Balea" at the top of the "Transfagarashan" road - at 2,000 m altitude
Check Tables & Indexes size for your Database

Before accepting that your database is big simply because it needs all the data inside it, and buying extra storage to overcome a potential space problem, you can find out whether the existing space can be reduced by removing unused objects and indexes.

First, find out which are the biggest tables:


DECLARE @query varchar(100)
DECLARE @schema as sysname
DECLARE @objectname as sysname

IF (object_id('tempdb..#Table_Size') IS NOT NULL)
    DROP TABLE [dbo].[#Table_Size]

CREATE TABLE #Table_Size
  ( TableName varchar(50),
    [RowCount] int,
    Table_Size varchar(50),
    Data_Space_Used varchar(50),
    Index_Space_Used varchar(50),
    Unused_Space varchar(50) )

DECLARE Object_Cursor CURSOR FOR
SELECT s.name, o.name FROM sysobjects o INNER JOIN sys.schemas s ON o.uid = s.schema_id
WHERE xtype = 'U'

OPEN Object_Cursor
FETCH NEXT FROM Object_Cursor INTO @schema, @objectname
WHILE @@FETCH_STATUS = 0
BEGIN
    -- sp_spaceused returns one row per table, matching the #Table_Size layout
    SET @query = 'sp_spaceused ''[' + @schema + '].[' + @objectname + ']'''
    INSERT INTO #Table_Size EXEC(@query)
    FETCH NEXT FROM Object_Cursor INTO @schema, @objectname
END
CLOSE Object_Cursor
DEALLOCATE Object_Cursor

-- strip the trailing ' KB' from the sp_spaceused output and convert to MB
SELECT TableName, [RowCount],
    convert(NUMERIC(12,2), left(ltrim(rtrim(Table_Size)), len(ltrim(rtrim(Table_Size)))-2))/1024 [Table_Size (MB)],
    convert(NUMERIC(12,2), left(ltrim(rtrim(Data_Space_Used)), len(ltrim(rtrim(Data_Space_Used)))-2))/1024 [Data_Space_Used (MB)],
    convert(NUMERIC(12,2), left(ltrim(rtrim(Index_Space_Used)), len(ltrim(rtrim(Index_Space_Used)))-2))/1024 [Index_Space_Used (MB)],
    convert(NUMERIC(12,2), left(ltrim(rtrim(Unused_Space)), len(ltrim(rtrim(Unused_Space)))-2))/1024 [Unused_Space (MB)]
FROM #Table_Size
ORDER BY convert(int, left(ltrim(rtrim(Table_Size)), len(ltrim(rtrim(Table_Size)))-2)) DESC

After you have the list of the biggest tables, check their indexes. During my DBA work I have often found unused indexes, or duplicate indexes with the same definition, which not only occupy disk space for nothing but also worsen the performance of every DML query, since all of them must be kept up to date. For a single INSERT, imagine updating 4 indexes. Sometimes these can be re-designed.


DECLARE @query varchar(100)
DECLARE @index_id INT
DECLARE @index_size BIGINT
DECLARE @name as NVARCHAR(500)
DECLARE @indextype as NVARCHAR(256)
DECLARE @TableName AS NVARCHAR(256)
SET @TableName = 'TABLENAME'   -- put the table you want to inspect here

IF (object_id('tempdb..#Index_Table') IS NOT NULL)
    DROP TABLE [dbo].[#Index_Table]

CREATE TABLE #Index_Table
 (  Table_Name      varchar(250),
    Index_ID        varchar(250),
    Index_Name      varchar(250),
    IndexType       varchar(250),
    Index_Size_MB   INT )

DECLARE Index_Table CURSOR FOR
SELECT index_id, name FROM sys.indexes WHERE object_id = OBJECT_ID(@TableName)

OPEN Index_Table
FETCH NEXT FROM Index_Table INTO @index_id, @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @index_size = 0
    -- sum the size over all levels of the index returned by DETAILED mode
    SELECT @index_size = @index_size + (avg_record_size_in_bytes * record_count)
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(@TableName), @index_id, NULL, 'DETAILED')
    SET @index_size = @index_size / 1048576   -- bytes to MB
    SELECT @indextype = (index_type_desc)
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(@TableName), @index_id, NULL, 'DETAILED')
    INSERT INTO #Index_Table VALUES (@TableName, @index_id, @name, @indextype, convert(int, @index_size))
    FETCH NEXT FROM Index_Table INTO @index_id, @name
END
CLOSE Index_Table
DEALLOCATE Index_Table

SELECT * FROM #Index_Table ORDER BY Index_Size_MB DESC

Trace DDL & DCL operations in SQL Server Log – fn_trace_gettable

Did you know that, starting with SQL Server 2005, all versions of SQL Server include a default trace of all DDL and DCL modifications done on all DB objects?

I was inspired by Jonathan Kehayias’ blog to dig deeper and check other useful output from the default trace.

Let me remind you what Data Definition Language (DDL) and Data Control Language (DCL) statements are:
DDL statements are used to define the database structure or schema. Some examples:

  • CREATE – to create objects in the database
  • ALTER – alters the structure of the database
  • DROP – delete objects from the database
  • TRUNCATE – remove all records from a table, including all space allocated for the records
  • COMMENT – add comments to the data dictionary
  • RENAME – rename an object

DCL statements are used to manage users’ permissions. Some examples:

  • GRANT – gives users access privileges to the database
  • REVOKE – withdraw access privileges given with the GRANT command

For example, I am pretty sure that at least once you wanted to find out whether somebody with access to the Production databases altered a certain procedure (by create and drop), deleted an object, or created one without letting you know, or even by mistake. Likewise, granting rights to logins just because some people are able to do so is a very bad habit you may have had to deal with at some point. Windows administrators or DEV team leaders can take advantage of their high privileges and go over the DBAs’ heads in some urgent matter.

Asking around didn’t bring you any answer, right? Perhaps the person who did it doesn’t even know, or maybe it was part of a faulty deployment.

The modification may have caused errors or lost permissions... In any case, it is very useful to know right away that this kind of modification took place when you or your team didn’t do it.

To see which events are logged, query the fn_trace_geteventinfo(trace_id) function, joining it with the sys.trace_categories and sys.trace_events tables:

SELECT DISTINCT tge.eventid, tc.name AS CategoryName,te.name AS EventName 
FROM fn_trace_geteventinfo(1) AS tge INNER JOIN sys.trace_events AS te ON tge.eventid = te.trace_event_id 
INNER JOIN sys.trace_categories AS tc ON te.category_id = tc.category_id
ORDER BY CategoryName, EventName

If you are interested in collecting all the information from the last week, displaying all created/deleted/altered objects as well as db roles or server roles assigned to new logins not by you, then you can issue the following SQL code (tested on SQL 2008 only):



IF (object_id('tempdb.dbo.#ddl_stmt') IS NOT NULL)
   DROP TABLE tempdb.dbo.#ddl_stmt

CREATE TABLE tempdb.dbo.#ddl_stmt
     ( HostName nvarchar(250) null,
       ApplicationName nvarchar(250) null,
       NTUserName nvarchar(250) null,
       NTDomainName nvarchar(250) null,
       LoginName nvarchar(250) null,
       SPID int null,
       EventClass int null,
       Name nvarchar(250) null,
       EventSubClass int null,
       TEXTData nvarchar(250) null,
       StartTime datetime,
       ObjectName nvarchar(250) null,
       DatabaseName nvarchar(250) null,
       TargetLoginName nvarchar(250) null,
       TargetUserName nvarchar(250) null )

-- locate the default trace file
DECLARE @filename VARCHAR(255)
SELECT @filename = SUBSTRING(path, 0, LEN(path)-CHARINDEX('\', REVERSE(path))+1) + '\Log.trc'
FROM sys.traces
WHERE is_default = 1;

DECLARE @starttime datetime = getdate() - 7

-- 46 = Object:Created, 47 = Object:Deleted, 164 = Object:Altered;
-- the remaining event classes cover login/role audit events
INSERT INTO tempdb.dbo.#ddl_stmt
     ( [HostName], [ApplicationName], [NTUserName], [NTDomainName], [LoginName],
       [SPID], [EventClass], [Name], [EventSubClass], [TEXTData], [StartTime],
       [ObjectName], [DatabaseName], [TargetLoginName], [TargetUserName] )
SELECT gt.HostName, gt.ApplicationName, gt.NTUserName, gt.NTDomainName, gt.LoginName,
       gt.SPID, gt.EventClass, te.name, gt.EventSubClass,
       convert(nvarchar(250), gt.TextData), gt.StartTime,
       gt.ObjectName, gt.DatabaseName, gt.TargetLoginName, gt.TargetUserName
FROM [fn_trace_gettable](@filename, DEFAULT) gt
JOIN master.sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE gt.EventClass in (164, 46, 47, 108, 110, 152)
  and gt.StartTime >= @starttime
  and gt.LoginName not in ('NT AUTHORITY\NETWORK SERVICE')

select * from tempdb.dbo.#ddl_stmt

How to find out which Queries have non-reusable Query Plans?

Following a post on Stack Exchange asking how to check whether the query plan of an SQL statement was compiled/recompiled or cached for the first time, I was driven to write a more detailed explanation on this.

This query will update 1000 rows.

Update t1 set c1 = 10

If I execute it for the first time, SQL Server needs to create a query plan for it, compile it, and store it in the plan cache. This plan will be used for subsequent executions. Plan re-use is desirable for OLTP workloads, because re-creating the same plan (for similar or identical transactions) is a waste of CPU resources. What if the plan is not cached long enough, and each time we execute the UPDATE, SQL Server creates the same query plan and compiles it over and over again? This is a performance problem that must be analyzed. How do we find the queries with many query plan compilations?

Profiler is my best friend, along with Performance Monitor, when analyzing poor queries.

To find out how many compilations SQL Server is doing, you can monitor the SQLServer: SQL Statistics: SQL Compilations/sec counter. This counter measures how many compilations SQL Server performs per second. For a better understanding, also monitor the SQLServer: SQL Statistics: Batch Requests/sec and SQLServer: SQL Statistics: SQL Re-Compilations/sec counters.

Performance Monitor – SQL Compilations counters

It is generally accepted that if the SQL Compilations/sec counter measures over 100 compilations per second, you may be experiencing unnecessary compilation overhead.

Some of the reasons why Query Compilation/Re-compilation occurs:

– Changes in Object Schema

– Index rebuild, drop, create

– Update Statistics

– Stored Procedure alter – can’t use the old Query Plan.

Batch Requests/sec should also be compared with SQL Compilations/sec. Say I have 500 batches per second and a compilation rate of 200 per second – that is close to 50% of all batches. So it is good to keep the compilation rate as low as possible.

So, the rate only shows that the system is high on compilations – but how do we know which queries are not cached?

Profiler can be used to find this out. I will explain here how to extract data from Profiler while testing the performance of a known SQL statement, but you will probably want to find which query is getting compiled over and over again. The results can be overwhelming and hard to filter, so I suggest you store them in a table in a different database (I use a different SQL instance as well) than the one you are monitoring. This will allow you to select all the events for a single SPID. The SPID is easy to extract from Profiler.
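For example, once the trace is saved to a table, the events of one session can be pulled out with a simple query. A sketch, assuming the hypothetical table name `TraceResults` (use whatever destination table you chose in Profiler's "Save to table" dialog):

```sql
-- All events captured for one session, in order of occurrence.
-- dbo.TraceResults is a placeholder for the trace destination table.
SELECT StartTime, EventClass, TextData, Duration, SPID
FROM dbo.TraceResults
WHERE SPID = 53          -- the SPID identified in the trace
ORDER BY StartTime;
```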

Below are the steps to catch the poor queries whose Query Plan is compiled more often than it needs to be.


To see the IO and execution time, enable statistics output:

set statistics time on
set statistics io on


Start a new Profiler trace and include the following events:

  • Performance
    • Performance Statistics – Indicates that a compiled plan was cached for the first time, re-compiled or evicted from plan cache.
  • Stored Procedures
    • RPC:Completed
    • SP:CacheHit – Indicates when a procedure is found in cache.
    • SP:CacheInsert – Indicates when an item is inserted into procedure cache.
    • SP:CacheMiss – Indicates when a Stored Procedure is not found in the procedure cache.
    • SP:CacheRemove – Indicates when an item is removed from procedure cache.
    • SP:Recompile – Indicates that the Stored Procedure is recompiling.
  • TSQL
    • SQL:BatchCompleted
    • SQL:BatchStarting

Check Show All Columns and select each column for the Performance Statistics event. Click Run.


For the purpose of this post I removed all elements from the plan cache for testing purposes (never do this in Production!):

DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS

After starting Profiler, I will run the query below:

set statistics time on
set statistics io on
Update t1 set c1 = 10


Let it run for some time (10 min) and then identify the SPID of the rows where the Performance Statistics, SP:CacheMiss and SP:CacheInsert events are displayed. Filter the SPIDs that contain all three events.

You have now identified the SPIDs of the processes whose Query Plan was not found in the cache and was therefore compiled and inserted into the cache.
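To confirm that the plan really landed in the cache, it can also be looked up directly. A minimal sketch, assuming the Update statement from above (the `LIKE` filter is just an illustrative way to locate it):

```sql
-- Find the cached plan for the Update and how often it has been reused.
SELECT cp.usecounts, cp.objtype, cp.cacheobjtype, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%Update t1 set c1%';
```

On a second execution, usecounts should increase instead of a new plan appearing.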


Execution Output:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 90 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
Table 't1'. Scan count 1, logical reads 17738, physical reads 3, read-ahead reads 1000, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 234 ms,  elapsed time = 2454 ms.

(1000 row(s) affected)

Profiler Output:

Profiler – Stored Procedure not in cache

The first indication that the query is not cached is given by the first occurrence of the Performance Statistics event class, which has a duration of 86 ms used to compile the statement within an ad hoc query (EventSubClass: 2 – Batch:Plan). To see all EventSubClass descriptions, look at the msdn documentation. Afterwards, SQL checks whether the statement exists in the procedure cache – SP:CacheMiss shows that it is not there. The statement is then inserted into the procedure cache: SP:CacheInsert. The actual duration of the statement is 2544 ms. In the SQL statistics we can see a small difference between the compilation time reported by IO statistics (90 ms) and by Profiler (86 ms).


I will execute the Update again, now that it is cached, to see the difference.

Profiler – Stored Procedure found in cache

There is no longer a Performance Statistics event class occurrence – so the compiled plan was found in the cache. Moreover, I see a new event, SP:CacheHit, which confirms that the Query Optimizer found the statement in the procedure cache. The difference in Duration is also obvious: 1223 ms versus 2544 ms. Same reads and writes, which is normal. CPU usage: 171 ms versus 234 ms.

So fewer physical resources are used when the query plan does not have to be compiled again. This is an important fact when designing web applications as well as when writing SQL code. The DBA should be able to report excessive SQL compilations and work with the development team to cache as much as possible.
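One way to quantify this across a whole instance is to count how many cached plans were compiled but never reused. A sketch, assuming SQL Server 2005 or later (a large number of single-use ad hoc plans is a common sign of poor plan re-use):

```sql
-- Plans that were compiled, cached and then never reused,
-- with the cache memory they occupy.
SELECT objtype,
       COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS cache_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype
ORDER BY single_use_plans DESC;
```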


Handling SQL Deadlocks – One Resource Conflict

From a DBA's point of view, Deadlocks occurring in a Production environment are probably the trickiest and most time-consuming problems to fix. Every time I came across one, it took me a while to understand where it was coming from, why, and how to avoid that particular Deadlock. This post will go through some types of deadlocks and how to deal with them.

Here I will analyze a Deadlock caused by an After Insert trigger that acts like a constraint.

Usually people see the Deadlock error only after the Deadlock has already happened – thrown by client applications or DB monitoring tools (e.g. SQL Diagnostic Manager from Idera).

1. Provoke Deadlock:

a) Create a table with a Primary Key and a Trigger on it that checks that the same Security Number is not inserted for a new client.

CREATE TABLE [dbo].[Clients](
 [clientid] [int] NOT NULL, [name] [nvarchar](50) NULL, [SecurityNR] [int] NULL,
 CONSTRAINT [PK_Clients] PRIMARY KEY CLUSTERED ([clientid] ASC)) ON [PRIMARY]
GO
CREATE TRIGGER [dbo].[SecurityNR] ON [dbo].[Clients] AFTER INSERT
AS
DECLARE @SecNR int
SET @SecNR = (SELECT SecurityNR FROM Inserted)
IF EXISTS (SELECT COUNT(1)
           FROM Clients c
           WHERE c.SecurityNR = @SecNR HAVING COUNT(1) > 1)
   RAISERROR ('This client''s SecurityNR is assigned to another Client', 16, 1)

Let’s assume that there are many concurrent transactions running on the server, each inserting one or more new clients inside one transaction.

b) Start a Transaction and Insert one client and do not commit it.

BEGIN TRANSACTION
INSERT INTO [test2].[dbo].[clients]
VALUES (3,'Chris',12345680)

c) Start a second Transaction and Insert one client and do not commit it.

BEGIN TRANSACTION
INSERT INTO [test2].[dbo].[clients]
VALUES (4,'Lya',12345682)

At this point the second Transaction will be blocked by the first one.

d) Return to the first transaction, insert another client and commit the transaction.

INSERT INTO [test2].[dbo].[clients]
VALUES (5,'Mary',12345681)
COMMIT TRANSACTION

Now, the Deadlock will occur.

Transaction (Process ID 56) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

2. Start Debugging:

Look at the error you caught. If you have a DB monitoring tool that captures deadlocks (I have worked with Idera and I recommend it), then you will probably already have some information about the deadlocked processes.

Alert displayed in IDERA:

A deadlock occurred on server [SERVER NAME]. 

The deadlock victim was spid 58 with application name '[APPLICATION NAME]' 
by user '[USER NAME]' on host '[MACHINE NAME]'.
Last command issued:
Proc [Database Id = 13 Object Id = 1157579162]

Whether or not you have a monitoring tool, the next step is to activate the deadlock trace flag and analyse its output in the SQL Server Log after you encounter another deadlock:

-- SQL 2005 - 2008: 
DBCC TRACEON (1222, -1)

-- SQL 2000 :
DBCC TRACEON (1204, -1)
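To verify that the flag is actually active (for example after an instance restart, which clears trace flags enabled this way), its status can be queried:

```sql
-- Check whether trace flag 1222 is enabled globally (-1).
DBCC TRACESTATUS (1222, -1)
```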

3. Collect Deadlock Trace Output

Here is the important info that must be extracted from the Log file.

A: deadlock-list

  1. deadlock victim=process1816da088

B: process-list

    1. process id = process1816da088 waitresource = KEY: 7:72057594040483840 (03000d8f0ecc) lockMode = S spid = 53 clientapp = Microsoft SQL Server Management Studio – Query loginname = irina-VAIO\irina

       frame procname = test2.dbo.SecurityNR
       IF EXISTS (SELECT count (1) FROM Clients c where c.SecurityNR = @SecNR
       having count (1) >1)

       frame procname = adhoc
       INSERT INTO [test2].[dbo].[clients]([clientid],[name],[SecurityNR])

    2. process id = process1816da508 waitresource = KEY: 7:72057594040483840 (0400b4b7d951) lockMode = S spid = 54 clientapp = Microsoft SQL Server Management Studio – Query loginname = irina-VAIO\irina

       frame procname = test2.dbo.SecurityNR
       IF EXISTS (SELECT count (1) FROM Clients c where c.SecurityNR = @SecNR
       having count (1) >1)

       frame procname = adhoc
       INSERT INTO [test2].[dbo].[clients]([clientid],[name],[SecurityNR])

C: resource-list

  1. keylock dbid = 7 objectname = test2.dbo.Clients indexname = PK_Clients mode = X
     • owner id = process1816da508 mode = X
     • waiter id = process1816da088 mode = S requestType = wait
  2. keylock dbid = 7 objectname = test2.dbo.Clients indexname = PK_Clients mode = X
     • owner id = process1816da088 mode = X
     • waiter id = process1816da508 mode = S requestType = wait

4. Analyze Deadlock Trace Output

At A: deadlock-list I see the deadlock victim – process1816da088.
Lower, at B: process-list, I can see the SQL statements of the two processes caught in the deadlock:

    • process1816da088 – spid 53 – executed:


INSERT INTO [test2].[dbo].[clients]([clientid],[name],[SecurityNR]) values(4,'Lya',12345682)
IF EXISTS (SELECT count (1) FROM Clients c where c.SecurityNR = @SecNR having count (1) >1)
    • process1816da508 – spid 54 – executed:


INSERT INTO [test2].[dbo].[clients]([clientid],[name],[SecurityNR]) values(5,'Mary',12345681)
IF EXISTS (SELECT count (1) FROM Clients c where c.SecurityNR = @SecNR having count (1) >1)

Lower at C:resource-list:

process1816da508  has X (exclusive) lock on PK_Clients
process1816da088 waits for S (Shared) lock on PK_Clients
From these lines I get the first resource demanded by both processes: the primary key PK_Clients of the Clients table.

The second resource demanded by both processes also appears to be the primary key PK_Clients of the Clients table. It can be extracted from the other two locks held:

process1816da088 has X (exclusive) lock on PK_Clients
process1816da508 waits for S (Shared) lock on PK_Clients

How can this be possible? All documentation describes a deadlock as a conflict over at least two different resources.

Now I have the same resource on which each of the concurrent processes holds an X lock and claims an S lock. The only logical explanation lies in the locking behaviour: with Row Locks (the default locking level) and no Clustered Index, the transactions would not block each other, because each one modifies a different row of the table. The presence of the PK changes the picture a little, but not completely.

After each Insert, SQL Server has to perform a Clustered Index Insert with the new value of the ClientID column. So it requests an exclusive Key Lock on PK_Clients pointing to a different row of the Clustered Index, which still allows other concurrent transactions to update the index. The first conclusion, then, is that the deadlock happens inside the trigger.

After each Insert the trigger is fired, and the select inside the trigger does an Index Scan on PK_Clients, thereby claiming Shared locks. How do I know this? Looking at the execution plan of the trigger code, I see a Clustered Index Scan used to extract the SecurityNR value.

Shared locks read only committed data. This means the second transaction must wait until the first transaction commits its row. But the first transaction has one more Insert coming, with exactly the same behaviour – only now it claims a Shared lock on the PK_Clients row already exclusively locked by the second transaction (the row the second transaction inserted).

This is how the locking occurred in my example:

Deadlock schema


5. Fix the Problem

There are several ways to fix this problem:

Avoid the Index Scan on PK_Clients when the trigger is fired: create an index on the SecurityNR column. The trigger will then perform an Index Seek on the new NonClustered Index instead of touching PK_Clients.
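A sketch of this first fix (the index name is my own choice):

```sql
-- Lets the trigger's lookup on SecurityNR use an Index Seek
-- instead of scanning the clustered index PK_Clients.
CREATE NONCLUSTERED INDEX IX_Clients_SecurityNR
ON dbo.Clients (SecurityNR);
```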

Putting NOLOCK hints on the select inside the trigger would fix the deadlock as well, but this is very risky because you can read dirty data and end up with corrupt data.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE – an option set per session/transaction. This is the highest isolation level and will prevent concurrent inserts, causing a lot of blocking. More on this here.

SQL 2005/2008 – Configure Memory (part 2)

How can we be sure that SQL Server is low on memory? And what should the max Memory value be set to?

Here is a short guide on how to analyze memory needs for an Instance of SQL Server:
Use Perfmon counters to understand SQL Server memory needs
Assuming you have not yet set the maximum SQL Server Memory limit, start Perfmon and include the three counters below while the system is under its heaviest use. A stress test is good enough to catch the highest/lowest values.

  1. Memory: Available MBytes – reports the available physical memory. This gives a good indication of whether you are running low on physical memory. If the machine is running low, paging is either already happening or will soon begin. This is useful data for diagnosing OOM issues. If this counter is greater than 10% of the actual RAM in your machine, you probably have more than enough RAM and don’t need to worry.
  2. Process(sqlservr): Private Bytes – indicates the amount of memory being used that can’t be shared with other processes. Monitor this if you want to see how much memory the SQL Server process uses. This counter also gives a good indication of how your application impacts the entire system: using a lot of private bytes has a big impact on the machine, since that memory cannot be shared with other processes.
  3. SQLServer: Buffer Manager – Page Life Expectancy – normal value: >= 300 sec. According to Microsoft, 300 seconds is the minimum target for page life expectancy. If the buffer pool flushes your pages in less than 300 seconds, you probably have a memory problem.
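Page Life Expectancy can also be sampled from within SQL Server, which is handy for periodic collection. A minimal sketch (on NUMA machines each node additionally reports its own value under the Buffer Node object):

```sql
-- Current Page Life Expectancy, in seconds.
SELECT object_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```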

a) SQL is short on memory
First indication that SQL/Machine is low on memory will be that SQL process will assign as much Memory as it finds free and¬†available on the system – until the system will run out of memory. So check the First counter from the list.¬†It should be over 10% of Total RAM. If it’s not, look at the third counter to see whether SQL is really hungry for memory or just very greedy. If Page Life Expectancy is above 300 sec, but the SQL still requires more memory – than it is the time to limit the SQL Memory so that at least 1 GB is left for other OS processes. If, at contrary Available MB is very low and Page Life Expectancy counter is below 300 sec, than you actually are forcing the system to cope with less memory than it needs. Add few GB to RAM and check again.
b) SQL has sufficient memory
The second case is the¬†easiest¬†meaning that SQL didn’t grab all of available memory and the system still has enough memory for its own processes. I usually take the value from Private Bytes and add few GB (1-3) to set maximum Memory limit on SQL engine setting.

In short, Page Life Expectancy is a very good indication that SQL Server has less memory than it would be comfortable with, and I suggest collecting this value periodically along with the other counters mentioned.

Basically, when PLE drops under 300 sec, disk activity will grow (more paging will occur) in order to fetch pages that should still be in the buffer pool. At that point CPU usage will also grow, to manage the high disk activity. So don’t jump to conclusions if you see high disk activity or increasing CPU: memory, disk and CPU are very tightly related.

In addition you could add these counters to check CPU and Disk activity:

  • Processor(_Total): % Processor Time – determines the percentage of time the processor is busy by measuring the percentage of time the thread of the Idle process is running and then subtracting that from 100 percent. If you begin to see values of 70 percent or more for the % Processor Time counter, investigate your processor’s activity further, as follows:
    1. Examine System: Processor Queue Length.
    2. Identify the processes that are running when the Processor: % Processor Time and System: Processor Queue Length values are highest.
  • PhysicalDisk(_Total): Current Disk Queue Length – used to determine the number of I/O requests queued for service. Notice that this might overstate the true length of the queue, because the counter includes both queued and in-service requests. If the value of Current Disk Queue Length exceeds twice the number of spindles, you are likely developing a bottleneck. With a volume set, a queue that is never shorter than the number of active physical disks indicates that you are developing a bottleneck. Normally, for a disk with two spindles the value should be at most 2.