Category Archives: Errors and Fixes

7 Common Design Errors – and How to Avoid Them

Being a DBA usually means fixing a lot of design issues when confronted with performance problems. If the design is planned well ahead, the DBA doesn’t have much to do at the design stage. In the absence of a Database Architect, developers are preoccupied mostly with functional scripting rather than functional and optimal scripting. It is not the developers’ fault: a database design should be performed together with a database specialist who can figure out the possible issues at the design phase. A lot of firms don’t invest in this position and rely instead on the developers’ professional experience. I came up with a list of frequent developer errors that should be avoided in order to keep SQL Server from doing extra work for a simple operation.

  1. When filtering results in the WHERE clause with equality or inequality operators (=, LIKE, <>), pay attention when comparing non-Unicode columns (varchar, char, text) against values of a Unicode type (nvarchar, nchar, ntext). Doing so makes SQL Server add a CONVERT_IMPLICIT operation to the execution plan, which is pointless work. Example:
     DECLARE @table TABLE (varchar_column varchar(50) PRIMARY KEY)
     INSERT INTO @table VALUES ('Maria')

     -- Matching type: no implicit conversion
     DECLARE @varchar_param AS varchar(50) = 'Maria'
     SELECT *
     FROM @table
     WHERE varchar_column = @varchar_param

     -- Unicode parameter against a varchar column: CONVERT_IMPLICIT appears in the plan
     DECLARE @Nvarchar_param AS Nvarchar(50) = N'Maria'
     SELECT *
     FROM @table
     WHERE varchar_column = @Nvarchar_param

  2. For the same reason as above (CONVERT_IMPLICIT), always JOIN on columns of the same data type, not on different types that SQL Server can auto-convert without an explicit CAST or CONVERT in your code. All the implicit and explicit conversions performed by SQL Server are listed here: Data Type Conversion
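     As a small sketch of the point above (table and column names are hypothetical), a join between mismatched types forces a per-row implicit conversion:

     ```sql
     -- Orders.OrderID is INT; Legacy.OrderID is VARCHAR(10): SQL Server must
     -- CONVERT_IMPLICIT one side for every row, which can prevent index seeks
     SELECT o.OrderID
     FROM dbo.Orders o
     INNER JOIN dbo.Legacy l
         ON o.OrderID = l.OrderID   -- implicit conversion happens here
     -- Better: align the column types, or at least make the conversion explicit:
     --     ON o.OrderID = CAST(l.OrderID AS int)
     ```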
  3. Avoid using functions in WHERE clauses and JOIN conditions. If a function is applied to a column that has an index on it, the index might not be used as it should be. If you do use one, analyze the execution plan to confirm that optimal index usage is made.

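     A typical example of bad function usage inside a WHERE clause (hypothetical table and column names, assuming an index on OrderDate):

     ```sql
     -- Bad: the function on the column makes the predicate non-sargable,
     -- so an index on OrderDate cannot be used for a seek
     SELECT * FROM dbo.Orders WHERE YEAR(OrderDate) = 2012;

     -- Better: rewrite as a range predicate on the bare column
     SELECT * FROM dbo.Orders
     WHERE OrderDate >= '20120101' AND OrderDate < '20130101';
     ```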
  4. The NOLOCK hint is overused. Please be aware that NOLOCK causes dirty reads: at the beginning of the transaction you can retrieve one set of data which, by the end of the transaction, may already have been modified, and you go along with it anyway. In the end, the transaction’s conditions may be applied to data that shouldn’t have been touched at all. An alternative to the NOLOCK hint is to enable Snapshot Isolation in SQL Server.
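     A minimal sketch of the snapshot alternative (assuming a database named MyDb that you are free to reconfigure, and a hypothetical dbo.Orders table):

     ```sql
     -- One-time setup: enable snapshot isolation at the database level
     ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

     -- Then, per session, read a transactionally consistent snapshot
     -- instead of dirty-reading with NOLOCK
     SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
     BEGIN TRAN;
     SELECT COUNT(*) FROM dbo.Orders;
     COMMIT;
     ```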
  5. Some stored procedures are created with parameters whose datatype differs from that of the assigned value. The same is valid for variable declarations. Keep the datatype identical on both sides of every equality operation or assignment.
  6. Choose some datatypes over others based on storage and the data to be inserted. Sometimes there is no point in using INT if you will only have values up to 255, just as it is a mistake to use INT when you expect a lot of data – more than 2 billion values. Altering the column later is not a valid solution. Use the TINYINT datatype in the first case and BIGINT in the second.
    Also, if you will not handle strings with special characters or Chinese/Japanese text, then use one of the non-Unicode datatypes (char, varchar, text) rather than the Unicode ones, because of the doubled storage size. The same string stored as NVARCHAR will be twice as big as the VARCHAR version, because SQL Server uses an extra byte to encode each character. That causes more disk writes, and therefore more disk reads – in the end, less performance.
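    The doubled storage is easy to verify with DATALENGTH; this small check can be run anywhere:

    ```sql
    DECLARE @v varchar(50)  = 'Maria',
            @n nvarchar(50) = N'Maria';
    -- varchar stores 1 byte per character, nvarchar stores 2
    SELECT DATALENGTH(@v) AS varchar_bytes,   -- 5
           DATALENGTH(@n) AS nvarchar_bytes;  -- 10
    ```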

  7. Usage of Common Table Expressions: ;WITH (CTE). They are very simple to write and easy to understand but, there is something to think about before using them everywhere.
    First of all, the CTE’s table definition must be evaluated for each operation that references it, each time. The data is not evaluated once for re-use later on, and it exists only inside the scope of the query. SQL Server treats the whole expression more like a view held in memory; it is not treated like a temporary table. The advantage of a #temp table is that the data gets evaluated once and the results are then re-used in the join.
    Second, you cannot create indexes on a CTE’s result set for use in subsequent joins or filter operations. In conclusion, a CTE is advisable when you have a small data set to use in joins; when you want to join millions of rows, a CTE is not a good option and you should choose a temporary table instead, which will usually run faster.
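    The difference can be sketched like this (dbo.BigTable and its columns are hypothetical):

    ```sql
    -- CTE: the definition may be re-evaluated for each reference, and cannot be indexed
    ;WITH big AS (SELECT id, payload FROM dbo.BigTable)
    SELECT b1.id FROM big b1 INNER JOIN big b2 ON b1.id = b2.id;

    -- Temp table: populated once, and an index can be created on it
    SELECT id, payload INTO #big FROM dbo.BigTable;
    CREATE CLUSTERED INDEX IX_big_id ON #big (id);
    SELECT b1.id FROM #big b1 INNER JOIN #big b2 ON b1.id = b2.id;
    DROP TABLE #big;
    ```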

How to update system tables in SQL 2008 & 2012 (FIX: “Ad hoc updates to system catalogs are not allowed”)

I came across an old capability that was disabled in all SQL 2005+ versions: updating system tables. In older versions there was the possibility to update system tables, thus allowing you to perform some administrative tasks/fixes without having to shut down the server.
I agree that updating system tables is not meant to be done by a regular user, and therefore hacking SQL like this should be done only by Database Administrators and by those who have enough knowledge not to ruin everything and put production on hold 🙂 yes… that big smile is on my face now.
So, let’s assume I have restored the master database on a new machine and I don’t have enough space on drive C, or I don’t want to use it, and want to put the databases on a SAN. Drive C was used by all user databases on the old server. In order to attach the databases rather than restore each one of them from backups (reasons: it is time consuming, the backups are not up to date, or the backups don’t exist), I have to change the physical path recorded for all user databases inside the master database to point to an existing location with enough space, so that at the next SQL startup all the databases come online as well. This way you can even reorganize the storage by putting the data files on a separate drive from the log files.

Before I go on, I must strongly stress that this method is not documented and should be used very carefully, and only by those who feel confident in their knowledge. A backup of the master database is a must before every hack on it.

That said… Up to and including SQL 2000, the update had to be done on 2 system tables contained inside the master database:

  • sysaltfiles – contains the physical location of each data/log file of every database, and
  • sysdatabases – contains the physical location of the primary data file of every database.

Today, in SQL 2005 – 2012, these 2 tables are kept for backward compatibility as system views. Microsoft recommends that you use the current SQL Server system views instead. To find the equivalent system view or views, see Mapping SQL Server 2000 System Tables to SQL Server 2005 System Views.

To spare you from research, sysaltfiles has been replaced by sys.master_files and sysdatabases by sys.databases.
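For reference, the supported view exposes the same information on SQL 2005+ (read-only, but perfectly fine for checking the current file locations):

```sql
-- Current physical location of every data/log file, per database
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name
FROM sys.master_files
ORDER BY database_name;
```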

A simple select would show that all of these are views.

use master
select name, xtype from sysobjects
where name in ('sysaltfiles','sysdatabases','master_files','databases')
order by name 


I want to update the system base table that sits underneath the above system views. How do I know which one?
In order to gain access to all system tables/views you need to open a DAC (Dedicated Administrator Connection). So go on and open a DAC connection for the queries that follow. Don’t use the “Connect” option in SSMS to make the DAC connection; instead use “New Query” and connect with the ADMIN: prefix before the server name.

I will limit myself to running the select only on sysaltfiles and sysdatabases and looking at their execution plans.
I assume that master_files and databases reference the same base tables.

select name, filename from sysdatabases 

For this query SQL Server actually reads sys.sysdbreg and sys.sysbrickfiles. In sys.sysdbreg it stores the database names and some additional information, but only the sys.sysbrickfiles table is used as the source for the file locations.

select name, filename from sysaltfiles

For this query SQL will actually query sys.sysbrickfiles.

The following query will confirm that both are system tables. xtype = S stands for a system table.

select name,xtype from sysobjects where name in ('sysbrickfiles','sysdbreg')

In conclusion, we need to update only a single table: sys.sysbrickfiles.

Let’s go and do that.

  1. Close all open connections and stop SQL Server.

  2. Start SQL Server in single-user mode.
    From an elevated cmd (Run as administrator), go to the folder where the sqlservr.exe executable is located. Normally you will find it at “C:\Program Files\Microsoft SQL Server\MSSQL[InstanceName]\MSSQL\Binn”. Run the commands below:

    D:\>CD /D "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn"
    C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn>sqlservr.exe -m

    You will see a lot of information in the shell; allow SQL Server to start and, when it’s done, leave the shell open.

  3. Start another elevated cmd or open a DAC connection from SSMS to perform the update on the system table.

    • Connect to SQL from cmd (for example with sqlcmd: `sqlcmd -S YourServerName -E`).
    • Make sure you are connected to the right server and then enable updates on system tables.

      select @@servername
      use master
      sp_configure 'allow updates', 1
      reconfigure with override

      A small note on this: if you had done this with SQL Server started in normal (multi-user) mode, the configuration change above would have succeeded without any warning, but the update on the system table would have failed with the error message:
      “Ad hoc updates to system catalogs are not allowed”

    • Identify the logical file names and their physical locations for the database in question – my database name is “data1”:

      select lname, pname from sys.sysbrickfiles s1 
      inner join sys.sysdbreg s2 on s1.[dbid] = s2.[id] 
      where s2.[name] = 'data1' 
    • Perform the update.

      update sys.sysbrickfiles set pname = 'D:\MSSQL\data1.mdf' where lname ='data1';
      update sys.sysbrickfiles set pname = 'D:\MSSQL\data1_log.ldf' where lname ='data1_log';
      select lname, pname from sys.sysbrickfiles s1 
      inner join sys.sysdbreg s2 on s1.[dbid] = s2.[id] 
      where s2.[name] = 'data1' 
  4. Close the first cmd shell – this stops SQL Server.

  5. Copy the database data file and log file to the new “D:\MSSQL” location.

  6. Grant the SQL Service account full rights on the “D:\MSSQL\” location – right-click the folder –> Properties –> Security –> Edit –> Add –> enter the SQL Service account (in my case it was Network Service) –> press OK –> check Full Control and press OK.

  7. Restart SQL Server (normal mode).
    The database should come online and its files will point to the new location.

msdb.dbo.sp_readrequest;1 – long running process

While monitoring a newly installed SQL 2008 R2 SP1, I noticed that Database Mail leaves a hung process for a few minutes. This doesn’t cause any blocking or other trouble for your SQL installation, but it is annoying to see this long-running process over and over again. To get rid of it, just check the system parameter value for “DatabaseMailExeMinimumLifeTime” by running the following query:

Use msdb
exec sysmail_help_configure_sp 'DatabaseMailExeMinimumLifeTime'

The default setting is 600 (seconds). You need to change it to a shorter period:

Use msdb
exec sysmail_configure_sp 'DatabaseMailExeMinimumLifeTime', 30

That’s all!


Read memory dump file after a BSOD event

I decided to post about this problem since I saw many questions about this issue. There are several tools that can be used to read a memory dump like Windbg.exe or Dumpchk.exe. In this article I will explain the usage of the Windbg debugger.

First of all, what is a memory dump and in what circumstances do we have to deal with it?

The BSOD or “blue screen of death” is a stop-error screen of the Windows operating system, caused by a fatal system error of a non-recoverable nature that makes the system “crash.” When the recovery option is set to write debugging information, a program called SAVEDUMP.EXE is invoked during a fatal system error, which writes the entire contents of memory to the system paging file. When the system is rebooted, Windows copies the paging file to a file called MEMORY.DMP. This file can be found at this location: C:\Windows

I will use a memory minidump file for the analysis because it is smaller and easier to read. Minidumps can be found in: C:\Windows\Minidump

STEP – by – STEP guide:

  1. Set Windows to create mini dump files: Control Panel -> System -> Advanced -> Startup and Recovery -> Settings -> Write debugging information -> Small memory dump.
     (Screenshot: the Windows memory dump setting)

2.   Download and install the Debugging Tools for Windows for 64-bit or 32-bit systems. Windbg is contained in this package.

3.   If you don’t have any dump files, you can use a Windows feature to create one and test how this all works. Open Task Manager -> select a process -> right-click and press Create Dump File. It will be located in C:\Users\<your user>\AppData\Local\Temp\*.DMP

4.   Open Start -> Programs -> Debugging Tools for Windows -> Windbg. Select File -> Open Crash Dump and specify the location of your dump.

You will receive an error that no symbols have been loaded. Symbols are needed to effectively debug.

*** ERROR: Symbol file could not be found.  Defaulted to export symbols for ntdll.dll –
*** WARNING: symbols timestamp is wrong 0x4a5bdf57 0x4a5be125 for wow64cpu.dll
*** WARNING: symbols timestamp is wrong 0x4a5bdf57 0x4a5bda1b for IPHLPAPI.DLL
*** ERROR: Symbol file could not be found.  Defaulted to export symbols for IPHLPAPI.DLL –

5.    Open File -> Symbol File Path and insert the symbol path (the standard Microsoft public symbol server path is srv*c:\symbols*http://msdl.microsoft.com/download/symbols):


6.    Press Reload and restart the analysis (CTRL + SHIFT + F5).

7.    You will receive a summary of the Bugcheck Analysis; to view details, click the !analyze -v link from the message below:

Use !analyze -v to get detailed debugging information.

SQL ERROR Fix: sp_MS_marksystemobject: Invalid object name ‘[cdc].[cdc_TABLE1]’

Hi, one of the new features in Microsoft SQL Server 2008 is the ability to track changes on a table, using the Change Data Capture (CDC) feature. It is extremely useful for BI (Business Intelligence) processes because it allows you to import only the changes made to a table instead of importing the whole table each day for processing. Today a developer and I faced a curious problem while trying to enable an additional table for cdc in a database already enabled for the Change Data Capture feature. Just FYI, you need to be sysadmin or have the db_owner role to manage the cdc feature.

Normally when you run:

EXEC sys.sp_cdc_enable_table_change_data_capture
@source_schema = 'dbo',
@source_name = 'TABLE1',
@role_name = 'cdc_TABLE1'


Job 'cdc.Test_capture' started successfully.
Job 'cdc.Test_cleanup' started successfully.

SQL Server creates 2 new SQL Agent jobs, plus a new table named as specified in the @role_name parameter, which will reflect the changes on the tracked table.
The table is marked as a system table and appears under the “System Tables” folder of the Database/Tables tree in SQL Server Management Studio.

In my case the cdc table was created inside the master database with a few warnings and NOT as a system table; it was placed under the User Tables leaf. At first look it was working as a cdc table, but I didn’t want to take the risk.

These were the warnings:

sp_MS_marksystemobject: Invalid object name '[cdc].[cdc_TABLE1]'
sp_MS_marksystemobject: Invalid object name '[cdc].[sp_insdel_605961235]'
sp_MS_marksystemobject: Invalid object name '[cdc].[sp_upd_605961235]'
sp_MS_marksystemobject: Invalid object name '[cdc].[sp_batchinsert_605961235]'

After digging a little on the Internet, I found out that when the stored procedure sys.sp_cdc_enable_table_change_data_capture is executed, the undocumented procedure sp_MS_marksystemobject tries to mark the table as a system table. In our case that step failed, and the table was left marked as a user table, which was not what we wanted.
So the first thing was to manually run the command which I thought could do the trick and mark the table as system:

sp_MS_marksystemobject '[cdc].[cdc_TABLE1]'

Same warning. Not what I expected…

After another couple of hours of research, I found the issue, and the resolution came from a totally unsuspected area. It turned out that the cdc schema, under which all cdc tables are created, had a schema owner different from the cdc SQL user. Somehow, in the past, it had been changed to another SQL user. The reason it failed with the “Invalid object name” warning became clear to me: the user that owned the schema didn’t have the rights to see/change the cdc objects. I had to change the cdc schema owner back to cdc, as it is supposed to be.

This command solved the issue and let me enable the cdc feature on the table without warnings:


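The standard T-SQL for changing a schema’s owner (which, given the description above, is presumably what was run) is ALTER AUTHORIZATION:

```sql
-- Give ownership of the cdc schema back to the cdc user
ALTER AUTHORIZATION ON SCHEMA::cdc TO cdc;
```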
SQL ERROR Fix : Error : Msg 9514, Level 16, State 1, Line 1 Xml data type is not supported in distributed queries. Remote object ‘LINKED_SERVER.Database.dbo.Table’ has xml column(s).

I ran into a rather senseless restriction imposed by MS when using INSERT INTO…SELECT FROM to transfer a few rows through a Linked Server from a table that has an XML data type column, even when that column is not listed in the “insert into” statement. See the detailed reproduction of this error below.

The insert statement from this example uses a variable for filtering options.


DECLARE @maxid as nvarchar (max)
SET @maxid=(SELECT TOP 1 col1 from Table_Local ORDER BY col1 DESC)
Insert into Database.dbo.Table_Local (col1, col2, col3, XMLcol4)
select col1, col2, col3, XMLcol4
from  [Linked_Server].Database.dbo.Table_Remote t2  where col1 > @maxid


Msg 9514, Level 16, State 1, Line 4
Xml data type is not supported in distributed queries.
Remote object 'LINKED_SERVER.Database.dbo.Table' has xml column(s).


There are two ways to work around the restriction:
1. Change the statement to use the OPENQUERY function instead of four-part Linked Server naming.
2. Cast the XML column to varchar(max) on the remote side and back to XML locally.

Basically, the data is queried on the remote server, which converts the XML data to varchar; the data is then sent to the requesting server and re-converted back to XML.

Here is the right statement:

DECLARE @TSQL varchar(8000)
DECLARE @maxid as nvarchar (max)
SET @maxid=(SELECT TOP 1 col1 from Table_Local ORDER BY col1 DESC)
SELECT  @TSQL = 'insert into Database.dbo.Table_Local
                          (col1, col2, col3, XMLcol4)
select col1, col2, col3, Cast(t2.XMLcol4 as XML) as XMLcol4
from OPENQUERY([LINKED_SERVER_NAME],''SELECT col1, col2, col3,
Cast(XMLcol4 as Varchar(max)) as XMLcol4
FROM DATABASE.dbo.Table_Remote where col1 >  ''''' + @maxid + ''''''') t2'
EXEC (@TSQL)