Category Archives: Management Tools

sysprocesses table – a handy tool for monitoring when you are not there

There are several native SQL queries you can run to collect information about the server when you are facing a bottleneck, namely:

  • sp_who2 active – check active processes and whether any of them are blocked
  • dbcc inputbuffer (spid) – see what the process behind a specific spid number is doing. This command is similar to using the ::fn_get_sql function, which converts the sql_handle into human-readable text.
  • querying sys.dm_exec_requests and the other related DMVs – a quick sketch follows below
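For example, a minimal sketch along these lines (assuming SQL 2005 or later; adjust the filters to your needs) lists the active requests, who is blocking them and the exact statement being executed:

-- Minimal sketch: active requests, their blockers and the currently executing statement.
-- Assumes SQL Server 2005+; add filters (e.g. to exclude system sessions) as needed.
SELECT  r.session_id,
        r.blocking_session_id,
        r.status,
        r.command,
        r.wait_type,
        r.cpu_time,
        DB_NAME(r.database_id) AS database_name,
        SUBSTRING(t.[text], (r.statement_start_offset/2) + 1,
            ((CASE r.statement_end_offset WHEN -1 THEN DATALENGTH(t.[text])
              ELSE r.statement_end_offset END - r.statement_start_offset)/2) + 1) AS current_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id <> @@SPID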

But all of these practically present, in a friendlier form, the output of the master.dbo.sysprocesses table.
This table has a lot of useful information, but some of it may not be used to its full value by everyone. I want to show how you can use most of it and really understand its contents. I will not speak about the obvious columns displayed by the sp_who2 procedure call, such as: SPID, Status, Login, BlkBy, DBName, Command, CPUTime, DiskIO, LastBatch, ProgramName. These are all very nice and give you a fast overview of your busy system if executed when the problem is noticed. I personally use it every time somebody complains about a heavy workload.

But what if the workload happens when you are not at work? Or when you are not in front of a computer receiving alerts? Or maybe you found that sometime between 1 AM and 4 AM, during the off hours, your system gets very busy, and you don’t have an extra, expensive tool (Diagnostic Manager, Spotlight, Redgate etc.) installed that will show you the next day what really happened while your system was overused? So how do you explain to your IT director what happened and what can be done to avoid such problems in the future?

Here I will show you what you can do. It is free and uses only SQL tools, meaning the output from sysprocesses.
So, basically, when you execute sp_who2 you will not see the underlying query that is behind a specific SPID. I bet you know that this information is held inside sysprocesses under the same SPID number, but in binary form, something like: 0x01000100099BA83AA0E4BEAD3100000000000000. You probably also know that you can retrieve the SQL text using the above-mentioned ::fn_get_sql function. There are 2 other columns that can give you the exact statement when the statement that is executing and causing problems is inside a stored procedure; then you’ll know what part of your stored procedure needs more attention. These are stmt_start and stmt_end. They mark the starting and ending offsets of the current SQL statement for the specified sql_handle.
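For instance, a quick ad-hoc check could look like this (the SPID value 56 is purely an example; replace it with the one you are investigating):

-- Quick ad-hoc sketch: resolve the binary sql_handle of one SPID into readable text.
-- The SPID value is only an example.
declare @handle binary(20)
select @handle = sql_handle from master.dbo.sysprocesses where spid = 56
select dbid, objectid, [text] from ::fn_get_sql(@handle)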

What I am trying to say is that you could create a job in SQL Server Agent that starts at a specified hour and executes a stored procedure containing a time-based cursor (from 1 AM to 4 AM) with a waiting delay of 3-4 seconds, which inserts the output from sysprocesses into a user table, available for you to query and see the history of all queries. Be careful, though, when scripting the insert logic, because a given SPID number can execute one query and, after completion, be taken over by another thread in another database. In order not to end up with a bunch of duplicate values that make the history table very hard to query, my advice is to first insert all SPID numbers found in sysprocesses that don’t yet exist in your table, then start the cursor over the existing SPID numbers from your table and insert only the rows where, for the same SPID, the values changed in: status, loginame, hostname, cmd, dbid or blocked from sysprocesses.

Example (a skeleton – it MUST BE FILLED in with the right values; one possible way to complete the cursor portion is shown right after it):


/*
CREATE TABLE [dbo].[your_table](
	[date] datetime,
	[spid] [smallint] NOT NULL,
	[kpid] [smallint] NOT NULL,
	[blocked] [smallint] NOT NULL,
	[waittype] [binary](2) NOT NULL,
	[waittime] [bigint] NOT NULL,
	[lastwaittype] [nchar](32) NOT NULL,
	[waitresource] [nchar](256) NOT NULL,
	[dbid] [smallint] NOT NULL,
	[uid] [smallint] NULL,
	[cpu] [int] NOT NULL,
	[physical_io] [bigint] NOT NULL,
	[memusage] [int] NOT NULL,
	[login_time] [datetime] NOT NULL,
	[last_batch] [datetime] NOT NULL,
	[ecid] [smallint] NOT NULL,
	[open_tran] [smallint] NOT NULL,
	[status] [nchar](30) NOT NULL,
	[sid] [binary](86) NOT NULL,
	[hostname] [nchar](128) NOT NULL,
	[program_name] [nchar](128) NOT NULL,
	[hostprocess] [nchar](10) NOT NULL,
	[cmd] [nchar](16) NOT NULL,
	[nt_domain] [nchar](128) NOT NULL,
	[nt_username] [nchar](128) NOT NULL,
	[net_address] [nchar](12) NOT NULL,
	[net_library] [nchar](12) NOT NULL,
	[loginame] [nchar](128) NOT NULL,
	[context_info] [binary](128) NOT NULL,
	[sql_handle] [binary](20) NOT NULL,
	[stmt_start] [int] NOT NULL,
	[stmt_end] [int] NOT NULL,
	[request_id] [int] NOT NULL
) ON [PRIMARY]

*/
declare @datefrom as datetime 
declare @dateto as datetime
set @datefrom = getdate () 
set @dateto = dateadd(second, 120, @datefrom) -- the loop will run for 2 minutes

WHILE @datefrom < @dateto 

BEGIN

waitfor delay '00:00:03'

Insert into your_table ([date], column_list) select getdate(), column_list from sysprocesses
where SPID not in (select SPID from your_table) -- you can add filters for background processes 

DECLARE spid_cursor CURSOR FOR 
select SPID from sysprocesses where SPID in (select SPID from your_table) 


[...] -- open, fetch, while clauses 

Insert into your_table ([date], column_list) 
select getdate(), column_list 
    from sysprocesses a 
    where spid = @spid and 
    (a.status not in (select status from your_table b where b.spid = @spid) or 
    [...] -- the rest of the columns that must be compared 

[...] -- fetch next, close, deallocate clauses 

set @datefrom = GETDATE ()
END
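For reference, here is one possible way of fleshing out the cursor portion marked with [...] above. It is only a sketch: it assumes the history table is called your_table (exactly as in the commented CREATE TABLE script) and it compares only the columns mentioned earlier (status, loginame, hostname, cmd, dbid, blocked); extend the comparison list to whatever suits you.

-- One possible implementation of the [...] cursor section above (a sketch, not the only way).
-- Assumes the history table is named your_table, with the columns from the commented script.
DECLARE @spid smallint

DECLARE spid_cursor CURSOR LOCAL FAST_FORWARD FOR
    select SPID from sysprocesses where SPID in (select SPID from your_table)

OPEN spid_cursor
FETCH NEXT FROM spid_cursor INTO @spid

WHILE @@FETCH_STATUS = 0
BEGIN
    -- insert a new history row only if something relevant changed for this SPID
    INSERT INTO your_table ([date], spid, kpid, blocked, waittype, waittime, lastwaittype, waitresource,
        dbid, uid, cpu, physical_io, memusage, login_time, last_batch, ecid, open_tran, status, sid,
        hostname, program_name, hostprocess, cmd, nt_domain, nt_username, net_address, net_library,
        loginame, context_info, sql_handle, stmt_start, stmt_end, request_id)
    SELECT getdate(), a.spid, a.kpid, a.blocked, a.waittype, a.waittime, a.lastwaittype, a.waitresource,
        a.dbid, a.uid, a.cpu, a.physical_io, a.memusage, a.login_time, a.last_batch, a.ecid, a.open_tran,
        a.status, a.sid, a.hostname, a.program_name, a.hostprocess, a.cmd, a.nt_domain, a.nt_username,
        a.net_address, a.net_library, a.loginame, a.context_info, a.sql_handle, a.stmt_start, a.stmt_end, a.request_id
    FROM sysprocesses a
    WHERE a.spid = @spid
      AND NOT EXISTS (select 1 from your_table b
                      where b.spid = a.spid
                        and b.status = a.status and b.loginame = a.loginame and b.hostname = a.hostname
                        and b.cmd = a.cmd and b.dbid = a.dbid and b.blocked = a.blocked)

    FETCH NEXT FROM spid_cursor INTO @spid
END

CLOSE spid_cursor
DEALLOCATE spid_cursor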

But there is one catch, of course. sql_handle contains the MD5 hash of the entire batch text and is stored in the SQL Manager Cache (SQLMGR). So you could make an easy cursor that inserts the output from “select * from sysprocesses”, but the actual SQL query might or might not still be in cache by the time you arrive at work and analyze the data.
What you need to do is convert the sql_handle into readable SQL text as it is collected. One way to do it is to create 2 user functions (one for the statement, one for the stored procedure name). These will serve as the formulas for 2 new computed columns inside your table.

create FUNCTION [dbo].[text_handle] (@handle binary(20), @start int, @end int)
RETURNS nvarchar (max) 
AS
BEGIN
declare @text nvarchar(max) 
set @text = (SELECT  substring(text, (@start + 2)/2, 
CASE @end WHEN -1 THEN (datalength(text)) ELSE (@end -@start + 2)/2 END) 
FROM ::fn_get_sql(@handle))
return @text
END
go

create FUNCTION [dbo].[proc_handle] (@handle binary(20), @db int) 
RETURNS nvarchar (max) 
AS
BEGIN
declare @proc nvarchar(max) 
set @proc = (SELECT 'DB:' + convert (nvarchar(4),@db) + ', SP: ' +  convert (nvarchar(max),text) FROM ::fn_get_sql(@handle))
return @proc
END
go 

ALTER TABLE your_table
    ADD sql_text AS dbo.text_handle (sql_handle,stmt_start,stmt_end)

ALTER TABLE your_table	
    ADD sql_proc AS dbo.proc_handle (sql_handle,dbid)
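Once the job has been running for a while, a simple review query against the collected history could look something like this (a sketch; the date window below is just an example):

-- Sketch: review the collected activity for the off-hours window, newest first.
select [date], spid, dbid, blocked, status, loginame, hostname, cmd, sql_proc, sql_text
from your_table
where [date] between '2014-01-01 01:00' and '2014-01-01 04:00'  -- example window, adjust as needed
order by [date] desc, spid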

That should be enough for you to start narrowing down the problem, and maybe you will want to collect some perfmon counters as well, because a bottleneck usually doesn’t come from a single source. There might be disk pressure, CPU pressure or a memory leak that makes the SQL processes run poorly.

Collect Backup & Database Size over time & forecast

There are frequent cases when you have to create a graph of backup size growth over time, for example to estimate growth in order to plan buying more space. If you don’t have any collection scripts in place to gather the sizes of databases or backups, it is very simple to query the msdb system database.

msdb stores each backup command along with the database size, backup size, compressed backup size, file location and much other useful information.

The only case in which you cannot rely completely on msdb information is when you migrated user databases from a different server and chose to move them to a newly installed SQL Server with new system databases. In this case, you will not find any information in msdb about backups older than the migration date. You will have to recover that information from older msdb backups – if you archived them, of course.

  • Here is one script that you can use to get the sum of backup sizes for all databases that have been backed up on your server during the last year.

    select convert (date,backup_start_date) Backup_Date,
    sum(compressed_backup_size/1073741824) as Sum_Compressed_Backup_Size_GB, 
    sum(backup_size/1073741824) as Sum_Backup_Size_GB,COUNT(database_name)
    Total_DB_count
    from msdb.dbo.backupset 			
    where	[type] in ('D','I','L') 
    and backup_start_date > GETDATE ()-365 
    group by convert (date,backup_start_date)
    order by convert (date,backup_start_date)
    
  • This one got me the information I was looking for, namely: the sum of all databases’ full backups plus the differential backups taken since the last full backup. If you have, for example, the full backup scheduled on Saturday, then you will want to know the differentials done after it, in order to get the exact size needed per database to be restored if the need arises. Adapt the @dw_[…] variable to your needs.

    declare @dw_sunday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =1)
    declare @dw_monday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =2)
    declare @dw_tuesday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =3)
    declare @dw_wednesday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =4)
    declare @dw_thursday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =5)
    declare @dw_friday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =6)
    declare @dw_saturday date = (select max(CONVERT (date, backup_start_date)) from msdb..backupset where DATEPART (dw,backup_start_date) =7)
    
    ;with a as 
    (SELECT 
       CONVERT (date,backup_start_date) as data, 
        database_name, 
        sum(compressed_backup_size/1048576) as [Backup_total(MB) Full]
        FROM msdb..backupset 
        where	[type] in ('D') and 
    convert (date,backup_start_date) = @dw_saturday
    group by  database_name,  CONVERT (date,backup_start_date)
    ) ,
    b as 
    (SELECT 
        convert (date,backup_start_date) as data, 
        b.database_name, 
        sum(compressed_backup_size/1048576) as [Backup_total(MB) Diff]
        FROM msdb..backupset b
        inner join a on a.database_name = b.database_name and convert (date,backup_start_date) > CONVERT(date,a.data) 
        where	[type] in ('I') 
    group by  b.database_name,  CONVERT (date,backup_start_date))
    
    select a.database_name,
    		a.data as full_data, 
    		b.data as diff_data, 
    		a.[Backup_total(MB) Full], 
    		b.[Backup_total(MB) Diff]	
     from a, b
     where 
     a.database_name = b.database_name and
     b.data > a.data 
     order by 1,2,3
    
    
  • If you have Full backups done once per week (like I do) and Differential + Log backups daily, then you will want to know the backup size per week, in order to be able to do a forecast for a number of weeks in advance. A simple forecasting sketch follows the query below.

    SELECT distinct
        datepart(wk,backup_start_date) AS weekNumber,
        sum(compressed_backup_size/1073741824) as [Backup_total(GB)],
        COUNT(database_name) as Files_Total
        FROM msdb.dbo.backupset 
        where	
    backup_start_date > GETDATE ()-365 and backup_start_date < '2014-01-01 00:00:00.000'
    group by  datepart(wk,backup_start_date)
    order by  datepart(wk,backup_start_date)
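
  • If you also want a rough forecast, one simple approach (a sketch, not a statistical model) is to take the average weekly total and the average week-over-week growth from the query above and project them a few weeks ahead:

    -- Rough forecasting sketch: average weekly backup total plus average week-over-week growth,
    -- projected 4 weeks ahead. Purely illustrative; adjust the date filters (as in the query above)
    -- to avoid week-number collisions across years.
    ;with weekly as
    (
        select datepart(wk, backup_start_date) as weekNumber,
               sum(compressed_backup_size/1073741824) as total_GB
        from msdb.dbo.backupset
        where backup_start_date > GETDATE()-365 and backup_start_date < '2014-01-01 00:00:00.000'
        group by datepart(wk, backup_start_date)
    ),
    growth as
    (
        select w.weekNumber, w.total_GB, w.total_GB - prev.total_GB as weekly_growth_GB
        from weekly w
        left join weekly prev on prev.weekNumber = w.weekNumber - 1
    )
    select avg(total_GB) as avg_weekly_total_GB,
           avg(weekly_growth_GB) as avg_weekly_growth_GB,
           avg(total_GB) + 4 * avg(weekly_growth_GB) as projected_weekly_total_in_4_weeks_GB
    from growth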
    
  • And this script will show you all backups (full ones are selected here, but you can change the letter of the type column in the WHERE clause), ordered by date descending, including information about database file names and their sizes. This will allow you to estimate the growth of the database files, not only of their backups. A small sketch after the query also shows the current live file sizes, for comparison.

    select distinct b2.database_name, b2.backup_start_date, b2.type,
    		b2.backup_size/1048576 as backupsize_MB, m2.physical_device_name, b.file_size/1048576 as File_size_MB, b.logical_name 
    		as Logical_FileName
    from				msdb.dbo.backupfile b, 
    				msdb.dbo.backupset b2, 				
    				msdb.dbo.backupmediafamily m2
    
    where	b.backup_set_id=b2.backup_set_id and 		
    		b2.media_set_id = m2.media_set_id and
    		--b2.database_name = 'master' AND
    		b2.type = 'D' 
    order by 2 desc
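
  • To compare those historical figures with the current situation, the live file sizes can be read directly from sys.master_files (size is reported in 8 KB pages, hence the conversion below):

    -- Sketch: current data/log file sizes per database, for comparison with the backup history.
    select DB_NAME(database_id) as database_name,
           name as logical_name,
           type_desc,
           size * 8 / 1024 as current_size_MB,
           physical_name
    from sys.master_files
    order by database_name, type_desc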
    

    How to update system tables SQL 2008 & 2012 (FIX: “Ad hoc updates to system catalogs are not allowed”)

    I came across an old capability that was disabled in all SQL 2005+ versions, namely: updating system tables. In older versions there was the possibility to update system tables, thus allowing you to perform some administrative tasks/fixes without having to shut down the server.
    I agree that updating system tables is not meant to be done by a regular user, and therefore hacking SQL should be done only by Database Administrators and by those who have enough knowledge not to ruin everything and put production on hold 🙂 yes… that D smile is on my face now.
    So, let’s assume I have restored the master database on a new machine and I don’t have enough space on the C drive, or I don’t want to use it, and want to put the databases on a SAN. Drive C was used by all user databases on the old server. In order to attach the databases rather than perform a restore of each one of them from backups (reasons: time consuming, backups are not up to date, backups don’t exist), I have to change the physical path of all user databases inside the master database to point to an existing location with enough space, so that at the next SQL startup the databases will all come online as well. This way, you can even change the database storage layout by putting the data files on a separate drive from the log files.

    Before I go on, let me strongly remind you that this method is not documented and should be used very carefully, and only by those who feel confident in their knowledge. A backup of the master database is a must before every hack on it.

    That said… back in SQL 2000, the update would have been done on 2 system tables contained inside the master database:

    • sysaltfiles – contains the physical location of each data/log file for each database, and
    • sysdatabases – contains the physical location of the primary data file for each database.

    Today, in the SQL 2005 – 2012 versions, these 2 tables have been kept for backward compatibility as system views. Microsoft recommends that you use the current SQL Server system views instead. To find the equivalent system view or views, see Mapping SQL Server 2000 System Tables to SQL Server 2005 System Views.

    To spare you from research, sysaltfiles has been replaced by sys.master_files and sysdatabases by sys.databases.
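
    For instance, the current logical names and physical locations can be read from the new view with a simple query such as:

    -- Supported replacement for sysaltfiles: one row per data/log file, with its physical path.
    select DB_NAME(database_id) as database_name,
           name as logical_name,
           physical_name
    from sys.master_files
    order by database_name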

    A simple select would show that all of these are views.

    use master
    go
    select name, xtype from sysobjects
    where name in ('sysaltfiles','sysdatabases','master_files','databases')
    order by name 
    
    

     

    I want to update the base system table that sits underneath the above system views. How do I know which one?
    In order to gain access to all system tables/views you need to open a DAC (Dedicated Administrator Connection) from SSMS. So go ahead and open a DAC connection for the next queries. Don’t use the “Connect” option in SSMS to make the DAC connection, but “New Query”.

    I will limit myself to running the select only on sysaltfiles and sysdatabases and look at their execution plans.
    I assume that master_files and databases reference the same underlying tables.

     
    select name, filename from sysdatabases 
    
    

    For this query SQL will actually query sys.sysdbreg and sys.sysbrickfiles. In sys.sysdbreg SQL stores the database names and some additional information, but only the sys.sysbrickfiles table is used as the source for the file locations.

     
    select name, filename from sysaltfiles
    
    

    For this query SQL will actually query sys.sysbrickfiles.
     

    The following query will confirm that both are system tables. xtype = S stands for a system table.

    select name,xtype from sysobjects where name in ('sysbrickfiles','sysdbreg')
    
    

     
    In conclusion, we need to update only a single table: sys.sysbrickfiles.

    Let’s go and do that.

    1. Close all open connections and stop SQL Server.

    2. Start SQL Server in single-user mode.
       
      From an elevated cmd (Run as administrator), go to the folder where the sqlservr.exe executable is located. Normally you will find it at “C:\Program Files\Microsoft SQL Server\MSSQL[InstanceName]\MSSQL\Binn”. Run the commands below:

      D:\>CD /D "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn"
      C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn>sqlservr.exe -m
      

      You will see a lot of information in the shell; allow SQL Server to start and, when it’s done, leave the shell open.
       

    3. Start another elevated cmd or open a DAC connection from SSMS to perform the update on your system table.

      • Connect to SQL from cmd:
      • sqlcmd -SMACHINENAME\SERVERINSTANCE -E
      • Make sure you are connected to the right server and then enable updates to system tables.

        select @@servername
        GO
        use master
        GO
        sp_configure 'allow updates', 1
        GO
        reconfigure with override
        GO
        

        A small note on this: if you had done this with SQL started in normal (multi-user) mode, the above configuration change would have completed without any warning, but the update step on the system table would still have failed with the error message:
        “Ad hoc updates to system catalogs are not allowed”
         

      • Identify the logical file names and their physical locations for the database in question – my database name is “data1”:

        select lname, pname from sys.sysbrickfiles s1 
        inner join sys.sysdbreg s2 on s1.[dbid] = s2.[id] 
        where s2.name = 'data1' 
        
        
        
      • Perform the update.

        update sys.sysbrickfiles set pname = 'D:\MSSQL\data1.mdf' where lname ='data1';
        GO
        update sys.sysbrickfiles set pname = 'D:\MSSQL\data1_log.ldf' where lname ='data1_log';
        GO
        select lname, pname from sys.sysbrickfiles s1 
        inner join sys.sysdbreg s2 on s1.[dbid] = s2.[id] 
        where s2.name = 'data1' 
        
        
    4. Close the first cmd shell – it will stop SQL Server.

    5. Copy the database data file and log file to the new “D:\MSSQL” location.

    6. Allow the SQL Service Account full rights on the “D:\MSSQL\” location – right click the folder –> Properties –> Security –> Edit –> Add –> enter the SQL Service Account (in my case it was Network Service) –> press OK –> check Full Control and press OK.

    7. Restart SQL Server (normal mode).
      The database should come online and its files will point to the new location.
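
      A quick check after the restart (“data1” being the example database used above): the database should report ONLINE and its files should point to the new path.

      select DATABASEPROPERTYEX('data1', 'Status') as db_status
      select name as logical_name, physical_name
      from sys.master_files
      where database_id = DB_ID('data1')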

    Key SQL performance counters – identify bottlenecks

    I had been planning to write this post for a long time, but time is never enough and the timing is never right 🙂
    This time I managed to gather all the info and put together a brief, easy-to-follow and explanatory list of the most important performance counters used in SQL monitoring. I came up with it by studying a lot of similar documents from well-known SQL professionals and adding personal insight on each counter.

    Let’s begin by saying that these counters are most helpful when they are collected throughout the whole day with a refresh interval of 1 minute. This can be done easily if you go to: Control Panel\Administrative Tools\Performance Monitor\Data Collector Sets\User Defined. Right click and select: New –> Data Collector Set –> check Create Manually –> check Performance counters –> start adding performance counters.

    Analyze Disk
     
    1. Logical Disk

    • Disk Transfers/sec
    • - slow, needs attention : < (less than) 80 I/O’s per second when 
      "Physical Disk\Avg. Disk sec/Transfer" is > (greater than) 25 ms. 
      This may indicate too many virtual LUNs using the same physical disks on a SAN. 

    2. Physical Disk

    • Avg. Disk sec/Read – average time, in seconds, of a read of data to the disk
    • – very good	 	         < 10 ms 
      – okay               		 10-20 ms			
      – slow, needs attention   	 20-50 ms			 
      – Serious I/O bottleneck  	 > 50 ms 
      
    • Avg. Disk sec/Write – average time, in seconds, of a write of data to the disk
    • – very good	 	         < 10 ms 
      – okay               		 10-20 ms			
      – slow, needs attention   	 20-50 ms			 
      – Serious I/O bottleneck  	 > 50 ms 
      
    • Avg. Disk sec/Transfer – Disk latency
    • To determine the impact of excessive paging on disk activity, 
      multiply the values of the "Physical Disk\Avg. Disk sec/Transfer"
      and "Memory\Pages/sec counters.  If the product of these counters 	
      exceeds 0.1, paging is taking more than 10 percent of disk access time, 
      which indicates that you need more RAM.  If this occurs over a long 
      period, you probably need more memory. 
      However, "Page Life Expectancy" counter must be checked as well. 
      
    • Disk Bytes/sec – total number of bytes sent to the disk (write) and retrieved from the disk (read) over a period of one second.
      If the Perfmon capture interval is set for anything greater than one second, the average of the values captured is presented.
    • Disk Read Bytes/sec
    • Disk Write Bytes/sec
    • Disk Reads/sec
    • Disk Writes/sec

    The above five counters are useful for analyzing ups and downs over a certain period. They help you know the number of reads/writes under normal behavior, and if a problem occurs you can correlate several values and see whether the difference is notable.

    Analyze Memory
     
    1. Memory

    • Available MBytes – amount of physical memory available to processes running on the computer, in Megabytes
      - low– < than 10% available
      - very low – less than 5% available
      
    • Page file usage %
    • - good - < 70 % 
    • Pages input/sec - the rate at which pages are read from disk to resolve hard page faults. Hard page faults occur when a process refers to a page in virtual memory that is not in its working set or elsewhere in physical memory, and must be retrieved from disk.
    • - good - < 10
      - bad  - > 15 

    • Pages/sec - rate at which pages are read/written to disk
    • - slow on a slow disk subsystem  - 100 - 600 
      - slow on a powerful disk system - 600 - 2500 
      - very slow                      - 2500 - 5000
      - extremely slow                 - > 5000 
      

    2. SQL Server: Memory Manager

    • Memory Grants Pending - number of processes waiting for a workspace memory grant
    • should stay close to 0 
    • SQL Cache Memory (KB) - how much physical RAM is devoted to SQL Server’s data cache
    • It should be = (Total amount of RAM in the server) - (RAM used by Windows Server) 
      - (RAM used by SQL Server) - (and any utilities you have running on the server). 
      If much lower, then it may be possible that you aren't allowing SQL Server to dynamically
      allocate RAM, and instead have accidentally specified that SQL Server uses less RAM 
      than it should have for optimal performance. This is not a critical counter for memory
      leak but it's worth looking at it. 

    3. SQL Server : Buffer Manager

    • Buffer cache hit ratio - how often SQL Server can get data from the buffer cache rather than from disk
    • - good - > 90 % OLAP systems 
      - good - > 95% OLTP systems
    • Free list stalls/sec - the frequency with which requests for database buffer pages are suspended because no buffers are available
    • - good - < 2 
    • Free pages - total number of 8 KB data pages on all free lists
    • - good -  > 640 
    • Lazy writes / sec - number of times per second that the lazy writer moves dirty pages from the buffer to disk in order to free buffer space
    • - good - < 20 
    • Page lookups/sec - number of requests to find a page in the buffer. This counter should be analyzed in relation to the "SQL Statistics\Batch Requests/sec" counter.

    • (Page Lookups/sec) / (Batch Requests/sec) - reveals if query plan is efficient or inefficient
    • -good - < 100 
    • Page reads/sec - reveals index, memory problems
    • - good - < 90 
    • Page writes/sec
    • - good - < 90
    • Page Life Expectancy - time (in seconds) that a data page is expected to stay in the buffer pool.
    • - good - > 300 

      Hint: Less than that on a regular basis is a clear indication that your SQL Server needs more RAM allocated to function properly.

    Analyze CPU
     
    1. System

    • Processor Queue Length
    • - good         - < 4 per cpu 
      - also good    - 4-6  
      - fair         - 7 - 12 
      - bad          - > 12 
    • Context Switches/sec - happens when a higher priority thread preempts a lower priority thread that is currently running or when a high priority thread blocks
      - high        - > 5000 
      - very high   - > 15000 
      

    2. Processor

    • Processor time % (total & per process)
    • - Fair - <80% 
    • Privileged time % (total & per process)
    • - Fair - < 75% (higher indicates a bottleneck)
    • Interrupt Time % - indicates the percentage of time the processor spends receiving and servicing hardware interrupts
    • - bad         - > 30 % (hardware or driver problem) 
      - very bad    - > 50%
      

    Analyze Query Performance
     
    1. SQL Server : Access Methods IO Intensive Operations

    • Forwarded Records/sec - occurs when a row's variable-length (varchar) data grows beyond the space initially allocated, so the row is moved and a forwarding pointer is left in its place. This happens only for tables without clustered indexes (heaps).
    • - good - < 10 per 100 batch requests/sec

      Hint: Evaluate creating clustered indexes for heap tables.

    • (Index searches /sec) / (Full scans/sec) - monitors the ratio of Index Searches (good) to Full Scans (bad) on base tables or indexes
    • - bad - < 1000 and CPU high 

      Hint: Queries with missing indexes or too many rows requested will have a large number of logical reads and an increased CPU time.
      For OLTP applications, optimize for more index searches and less scans (preferably, 1 full scan for every 1000 index searches). Index and table scans are expensive I/O operations.

    • Page Splits/sec - occurs when an 8 KB page fills up and must be split into a new 8 KB page
    • - fair - < 20 per 100 batch requests/sec 
      

      Hint: If it is higher, then you have a problem that can be addressed with a clustered index and a good fill factor.
      An increased fill factor helps to reduce page splits because there is more room in data pages before it fills up and a page split has to occur.

    2. SQL Statistics

    • Batch requests/sec - shows you how busy your server is.
    • - busy server  - > 1000 
      - very busy    - > 3000 
      

      Hint: A typical 100 Mbps network card is only able to handle about 3000 batch requests per second. If you have a server that is this busy, you may need
      to add two or more network cards, or move to a 1 Gbps network card.

    • SQL Compilations/Sec - The number of times per second that SQL Server compilations have occurred.
    • - good - as low as possible.
      - bad  -  > 100 
      
    • SQL ReCompilations/Sec - The number of times per second that SQL Server recompilations have occurred.
    • - good - as low as possible.
      - bad  -  > 100 
      

      Hint: Evaluate the number of (re)compilations relative to the number of Batch Requests/sec and check whether the resulting ratio is below 30% or above. If it is higher, then you should start checking which stored procedures or ad-hoc queries are getting compiled/recompiled over and over again and causing CPU usage.
      I wrote about this in detail at How to find out which Queries have non-reusable Query Plans? and Top 5 Steps To Determine Bad Performance of Queries

    So basically these counters should give you an overall view of whether your system is low on CPU, memory or disk resources, or whether you should re-design some of your queries, tables and indexes. The perfmon counters are a very good indication of how well tuned your system is. However, they should be looked at together with the SQL statistics that can be gathered from the multitude of dynamic management views available starting with SQL 2005. In a next post I will write about those important SQL counters, as I prefer to call them. If you start collecting Windows counters and SQL counters, you should have no problem identifying a bottleneck that happened in a timeframe when you were not at the office, or when you missed the critical period and couldn't analyze everything in time.
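
    As a small preview of that follow-up post: some of the counters discussed above are also exposed inside SQL Server itself, so a quick sketch like the one below (SQL 2005+) already gets a few of them without Perfmon. Note that cumulative counters such as Batch Requests/sec need two samples to derive a per-second rate, and ratio counters must be divided by their corresponding "base" counter.

    -- Quick sketch: read a few of the counters discussed above straight from SQL Server.
    select RTRIM([object_name])  as object_name,
           RTRIM([counter_name]) as counter_name,
           cntr_value
    from sys.dm_os_performance_counters
    where counter_name in ('Page life expectancy', 'Memory Grants Pending', 'Batch Requests/sec', 'Page Splits/sec')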

    Have a great time until then!

    Trace DDL & DCL operations in SQL Server Log – fn_trace_gettable

    Did you know that, starting with SQL 2005, all versions of SQL Server include a default trace of all DDL and DCL modifications done on DB objects?

    I was inspired by Jonathan Kehayias’ blog to dig deeper and check other useful output from the default trace.

    Let me remind you what Data Definition Language (DDL) and Data Control Language (DCL) statements are:
     
    DDL statements: are used to define the database structure or schema. Some examples:

    • CREATE – to create objects in the database
    • ALTER – alters the structure of the database
    • DROP – delete objects from the database
    • TRUNCATE – remove all records from a table, including all space allocated for the records
    • COMMENT – add comments to the data dictionary
    • RENAME – rename an object

     
    DCL statements : are used to manage users permissions. Some examples:

    • GRANT – gives users access privileges to the database
    • REVOKE – withdraw access privileges given with the GRANT command

    For example, I am pretty sure that at least once you wanted to find out whether somebody with access to the Production databases altered a certain procedure (by create and drop), deleted an object, or created one without letting you know, or even by mistake. Likewise, granting rights to logins just because some people are able to do so is a very bad habit that you may have had to deal with at some point. Windows administrators or DEV team leaders can take advantage of their high privileges and go over the DBAs’ heads in some urgent matter.

    Asking around didn’t bring you any answer, right? Perhaps the person who did it doesn’t even know, or maybe it was part of a faulty deployment?

    The modification may have caused some errors or lost permissions… In the end, it is very useful to know right away that this kind of modification took place when you or your team didn’t do it.

    To see which events are logged, query the fn_trace_geteventinfo (trace_id) function, joining it with the sys.trace_categories and sys.trace_events tables:

    SELECT DISTINCT tge.eventid, tc.name AS CategoryName,te.name AS EventName 
    FROM fn_trace_geteventinfo(1) AS tge INNER JOIN sys.trace_events AS te ON tge.eventid = te.trace_event_id 
    INNER JOIN sys.trace_categories AS tc ON te.category_id = tc.category_id
    ORDER BY CategoryName, EventName

    If you are interested in collecting all the information from the last week, displaying all created/deleted/altered objects as well as the database or server roles assigned to logins (not by you), then you can issue the following SQL code – tested only on SQL 2008:

    SET ANSI_NULLS ON 
    SET ANSI_WARNINGS ON
    SET NOCOUNT ON
    
    GO
    
    ---
    IF (object_id('tempdb.dbo.#ddl_stmt') IS NOT Null)
       DROP TABLE tempdb.dbo.#ddl_stmt
     
    CREATE TABLE   tempdb.dbo.#ddl_stmt
     (	
    	   HostName nvarchar(250) null, 
           ApplicationName  nvarchar(250) null, 
           NTUserName  nvarchar(250) null, 
           NTDomainName  nvarchar(250) null, 
           LoginName  nvarchar(250) null, 
           SPID int null, 
           EventClass int null, 
           Name  nvarchar(250) null,
           EventSubClass int null,      
           TEXTData  nvarchar(250) null, 
           StartTime datetime, 
           ObjectName  nvarchar(250) null, 
           DatabaseName  nvarchar(250) null, 
           TargetLoginName nvarchar(250) null,
           TargetUserName nvarchar(250) null )
    	
    DECLARE @filename VARCHAR(255) 
    SELECT @FileName = SUBSTRING(path, 0, LEN(path)-CHARINDEX('\', REVERSE(path))+1) + '\Log.trc'  
    FROM sys.traces   
    WHERE is_default = 1;  
    
    DECLARE @starttime datetime = getdate () -7
    
    INSERT INTO tempdb.dbo.#ddl_stmt ( [HostName]
          ,[ApplicationName]
          ,[NTUserName]
          ,[NTDomainName]
          ,[LoginName]
          ,[SPID]
          ,[EventClass]
          ,[Name]
          ,[EventSubClass]
          ,[TEXTData]
          ,[StartTime]
          ,[ObjectName]
          ,[DatabaseName]
          ,[TargetLoginName]
          ,[TargetUserName])
    SELECT gt.HostName, 
           gt.ApplicationName, 
           gt.NTUserName, 
           gt.NTDomainName, 
           gt.LoginName, 
           gt.SPID, 
           gt.EventClass, 
           te.Name,
           gt.EventSubClass,      
           gt.TEXTData, 
           gt.StartTime, 
           gt.ObjectName, 
           gt.DatabaseName, 
           gt.TargetLoginName,
           gt.TargetUserName
    FROM [fn_trace_gettable](@filename, DEFAULT) gt 
    JOIN master.sys.trace_events te ON gt.EventClass = te.trace_event_id 
    WHERE EventClass in (164, 46,47,108, 110, 152) and gt.StartTime > = @starttime
    and gt.LoginName not in ('NT AUTHORITY\NETWORK SERVICE') 
    ORDER BY StartTime DESC; 
    
    select * from #ddl_stmt