Monthly Archives: November 2010

SQL 2005/2008 Database Recovery procedure – Data File deleted (Part 2)

Sometimes we have to deal with unfortunate and unplanned events like hardware failures or human mistakes that leave a database in the SUSPECT state. Let’s face it, this is the moment when management actually values the fact that they hired a DBA. These unforgettable moments either make you a hero or…. send you looking for another job. Let’s be heroes instead 🙂 and make sure we did all that we could to recover from the catastrophic event. There are more situations than I can cover, but I will describe some of them, which can come in handy when it happens. Think of it as a disaster test: it is always easier to handle a real disaster recovery when you have rehearsed similar situations before.

Possible factors for a database to be marked SUSPECT:

  • Denial of access to a database resource by the operating system
  • Anti-virus/3rd party software blocking access
  • Hardware or network failure leading to corruption of database file(s)

Any of these factors can affect the database files: the Data file becomes corrupted, the Log file becomes corrupted, or both.
If the database can’t be recovered after a failure it is marked SUSPECT, and you can’t bring it ONLINE for users until you fix the SUSPECT state.
You’ll get an Error when accessing the Suspect database:

Database 'recovery_test_1' cannot be opened due to inaccessible
files or insufficient memory or disk space. See the SQL Server
errorlog for details. (Microsoft SQL Server, Error: 945)
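A quick way to confirm the state of all databases is to query sys.databases (available from SQL 2005 onward):

-- List every database that is not ONLINE (SUSPECT, RECOVERY_PENDING etc.)
SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE';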

Next, take a look in the SQL Server error log and you will see the reason the database could not be opened.
 
When the Data File is corrupted or missing the error will look like this:

Message
FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\
MSSQL10.TEST\MSSQL\DATA\recovery_test_simple.mdf for file number 1.
OS error: 2(failed to retrieve text for this error. Reason: 15105).

 
In this case you absolutely need a recent Full backup plus a couple of Log backups (for the Full or Bulk-Logged recovery models). The Data file contains the actual data: all tables, indexes, procedures etc.

1. Simple Recovery model

Things are quite simple:

  • Drop the suspect database
  • Restore the most recent Full backup WITH RECOVERY and that’s it.

2. Full Recovery model

You will want to recover the Suspect database to the point of failure, including the last transactions that were not yet captured in a log backup.
For this to work you need a Full backup and at least one valid Log backup.

Here is how I will simulate the crash and recovery of a database: I will create a new database, take a Full backup and a Log backup, make some changes, take the database Offline, simulate the crash by deleting the data file and then try to bring it Online.

2.1. Prerequisites For A Database Failure:
 

-- Create database
CREATE DATABASE [recovery_test_1];
GO
-- Make sure the database uses the Full recovery model (inherited from model by default)
ALTER DATABASE [recovery_test_1] SET RECOVERY FULL;
GO
USE [recovery_test_1];
GO
-- Create a test table
CREATE TABLE Test1 (a1 INT IDENTITY, b2 NVARCHAR (100));
GO
-- Take a full backup
BACKUP DATABASE recovery_test_1 TO DISK =
'C:\TESTS\Full_recovery_test_1.bck' WITH INIT;
GO 

-- Take a Log backup
BACKUP LOG recovery_test_1 TO DISK =
'C:\TESTS\Tran_recovery_test_1.trn' WITH INIT;
GO 

-- Make some transactions

INSERT INTO Test1 VALUES ('record 1');
INSERT INTO Test1 VALUES ('record 2');
INSERT INTO Test1 VALUES ('record 3');
INSERT INTO Test1 VALUES ('record 4');

2.2. Provoke Database Failure:
 

-- Simulate failure
ALTER DATABASE recovery_test_1 SET OFFLINE 

-- Physically Rename or Delete DATA File of recovery_test_1 database  

-- Try to re-open the database
ALTER DATABASE recovery_test_1 SET ONLINE;

Error:

Msg 5120, Level 16, State 101, Line 2
Unable to open the physical file "C:\Program Files\Microsoft SQL Server\
MSSQL10.TEST\MSSQL\DATA\recovery_test_simple.mdf".

ALTER DATABASE statement failed.

-- Check database status
SELECT DATABASEPROPERTYEX ('recovery_test_1', 'STATUS') AS 'Status';

Status
--------
SUSPECT
 

2.3. Recover The Database:
 
The first thing to do is to immediately try to back up the tail-of-log.

BACKUP LOG recovery_test_1 TO DISK = 'C:\TESTS\Tran_recovery_test_1_2.trn'
WITH INIT, NO_TRUNCATE;
GO 

Results:
Processed 1 pages for database 'recovery_test_1', file
'recovery_test_simple_log' on file 1. BACKUP LOG successfully
processed 1 pages in 0.037 seconds (0.211 MB/sec).

OK, so now I have all that I need for a successful recovery: the Full backup + 2 Log backups (the one taken before the crash and the tail-of-log one).

  1. Drop the database:

     DROP DATABASE recovery_test_1

  2. Manually remove the original Log file, because the DROP will not delete it.

  3. Find out the logical file names of the crashed database from the Full backup. I can’t do a restore if I don’t know them:

     RESTORE FILELISTONLY FROM DISK = 'C:\TESTS\Full_recovery_test_1.bck'

  4. Restore the Full backup with the NORECOVERY option:

     RESTORE DATABASE recovery_test_1 FROM DISK =
     N'C:\TESTS\Full_recovery_test_1.bck' WITH FILE = 1,
     MOVE N'recovery_test_simple' TO N'C:\Program Files\MSSQL\DATA\recovery_test_simple.mdf',
     MOVE N'recovery_test_simple_log' TO N'C:\Program Files\MSSQL\DATA\recovery_test_simple_log.ldf',
     NORECOVERY, NOUNLOAD, STATS = 10
     GO
     
     Results:
     Processed 176 pages for database 'recovery_test_1',
     file 'recovery_test_simple' on file 1.
     Processed 1 pages for database 'recovery_test_1',
     file 'recovery_test_simple_log' on file 1.
     RESTORE DATABASE successfully processed 177 pages
     in 0.279 seconds (4.956 MB/sec).

  5. Restore the first Log backup with NORECOVERY and the final tail-of-log backup with RECOVERY:

     RESTORE LOG recovery_test_1 FROM DISK =
     N'C:\TESTS\Tran_recovery_test_1.trn'
     WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10
     GO
     
     RESTORE LOG recovery_test_1 FROM DISK =
     N'C:\TESTS\Tran_recovery_test_1_2.trn'
     WITH FILE = 1, RECOVERY, NOUNLOAD, STATS = 10
     GO


Hooray… the database is recovered, and a SELECT on the Test1 table reveals the last rows inserted before the crash:

a1   b2
--------------
1    record 1
2    record 2
3    record 3
4    record 4



Previous <<- Part 1

Next ->> Part 3

SQL 2005/2008 Database Recovery Procedure – Recovery Models (part 1)

When you create a new database and prepare the environment for an application to be deployed, you must take the database recovery model configuration into consideration. Each new database is created with the same recovery model as the model system database: model works as a template for every newly created database. The default setting is “Full”.
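For example, if most of your new databases will be DEV ones, you can change the template once (a minimal sketch; every database created afterwards inherits the setting):

-- Change the recovery model template for all future databases
ALTER DATABASE [model] SET RECOVERY SIMPLE;
GO
-- Verify the recovery model of every database
SELECT name, recovery_model_desc FROM sys.databases;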

The first thing to do is to decide whether you need the Full, Simple or Bulk-Logged recovery model on your databases. I will not go into much detail about what each model means, as there are a lot of very good articles describing the pros and cons of each (links to them are posted at the bottom of this post).

I will mainly focus on recovery procedures in specific types of recovery models. They are different and there are some factors that need to be considered before you decide which model to set on your databases.

1. Full Recovery Model

In the Full recovery model all transactions are logged and you have full control over your business – human or technical mistakes are no longer a disaster. You can recover the database to the last transaction log backup taken – be it 30 minutes or 10 minutes ago – depending on your schedules.

In order to be able to do so, Full backups, Differential backups (if you want to shorten the restore chain) and Log backups must be scheduled in a way that best fits your transaction flows. The transaction log needs to be backed up regularly, not only to prevent loss of work but also to keep the transaction log size under control.
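For illustration, the basic backup commands look like this (the database name and paths are just placeholders for your own schedule):

-- Full backup (e.g. nightly)
BACKUP DATABASE recovery_test_1 TO DISK = 'C:\TESTS\Full_recovery_test_1.bck' WITH INIT;
-- Log backup (e.g. every 10-30 minutes)
BACKUP LOG recovery_test_1 TO DISK = 'C:\TESTS\Tran_recovery_test_1.trn' WITH INIT;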

The recovery process of a database can be done by:

  1. Restoring the last Full backup + each of the Log backups performed since that Full backup.
  2. Restoring the last Full backup + the last Differential made after it + each of the Log backups performed after that Differential (see the sketch after this list).
  3. Alternatively, if the database is damaged or corrupted and has entered the SUSPECT state, you can recover it to the last transaction in the current, not yet backed-up transaction log. This is done by backing up the tail-of-the-log if possible and applying the 1st or 2nd method above + the freshly backed-up tail-of-the-log (see Database Recovery procedure – Data File deleted (Part 2)).
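Here is a sketch of the second method (file names are placeholders; every step except the last one uses NORECOVERY so the restore chain stays open):

-- 1. Last Full backup, keep the database in the restoring state
RESTORE DATABASE recovery_test_1 FROM DISK = 'C:\TESTS\Full_recovery_test_1.bck' WITH NORECOVERY;
-- 2. Last Differential taken after that Full
RESTORE DATABASE recovery_test_1 FROM DISK = 'C:\TESTS\Diff_recovery_test_1.bck' WITH NORECOVERY;
-- 3. Each Log backup taken after the Differential; RECOVERY only on the last one
RESTORE LOG recovery_test_1 FROM DISK = 'C:\TESTS\Tran_recovery_test_1.trn' WITH RECOVERY;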

Conclusion: Stick to the Full recovery option if your business is critical and transactions can’t be re-created or restored easily from other tools.

2. Simple Recovery Model

I will keep it simple. Choose this model if your databases are for DEV/Test purposes or…. are Production but not critical ones, such as: databases with static data, with data that can be re-created or recovered from other tools, and mostly databases containing read-only data (data warehouse databases).

You must be sure that your system can live with the loss of all transactions processed since the last Full or Differential backup was made. Although Log backups are not possible, I suggest you schedule frequent Differentials anyway, as these can really limit how much work can be lost.
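A differential backup is a single command (same placeholders as before; it contains everything changed since the last Full backup):

-- Differential backup: all changes since the last Full backup
BACKUP DATABASE recovery_test_1 TO DISK = 'C:\TESTS\Diff_recovery_test_1.bck' WITH DIFFERENTIAL, INIT;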

The recovery process of a database can be done by:

  1. Restoring the last Full backup + the last Differential taken since that Full backup.

3. Bulk-Logged Recovery Model

It is quite similar to the Full recovery model, with the single difference that in the Bulk-Logged recovery model NOT all transactions are fully logged, so you CAN’T have full control over your business. Minimally logged operations include SELECT INTO, bulk-load operations, CREATE INDEX, and text and image operations; these are not individually recoverable. You still need to take full, differential and transaction log backups.

If a disaster forces you to recover a database set to the Bulk-Logged recovery model, you will be able to do so by:

  1. Restoring the last Full backup + the last Differential made after it + each of the Log backups performed since the last Full backup that DO NOT contain any bulk-logged transaction. In other words, you will be able to recover the database completely to the last log backup made only if there has been no bulk activity since the last Full backup. If there was, you will be able to recover the database using log backups only up to the beginning of the log backup that contains the bulk activity.
  2. Contrary to the Full recovery model, there are exceptions to when you can back up the tail-of-log in order to completely recover the database to the last transaction made:
  • If a bulk-logged operation has been performed since the last log backup, a tail-of-the-log backup is not possible at all, and you will have lost all transaction log generated since that last log backup.
  • If you are lucky and no bulk-logged operations have been performed since the last log backup, a tail-of-the-log backup is possible; but if there are bulk-logged operations in some of the previous log backups since the last Full, the tail-of-log can’t be used in a sequential restore.
  • You can back up the tail-of-log and use it only if there are no bulk-logged operations caught in any of the needed previous log backups and none in the current, not-yet-backed-up transaction log.

Conclusion: Why should you use the Bulk-Logged recovery model? If you have a big volume of bulk insert operations scheduled at specific hours (nightly), the Bulk-Logged recovery model may be the best choice. Take a Full backup right after the bulk insert and you will have a good log backup chain to recover from until the next bulk operation.

Any option you choose be sure you choose it wisely and plan your recovery options accordingly. In part 2 and part 3, I will simulate several recovery methods considering different scenarios.

Log File Usage – in Full and Simple Recovery Model

I have heard many people complain that their database log file grows huge even though the database is in the Simple recovery model. It is absolutely normal, because the SQL transaction mechanism (ACID) does not change when the recovery model is changed. To keep a transaction atomic, the transaction log is used no matter which recovery model you use. Atomicity refers to the idea that a transaction needs to be processed completely or not at all: if a transaction fails at any point in the process, the entire transaction must be rolled back, and the log is the place where all modifications are recorded.

Every DML statement (INSERT / DELETE / UPDATE) uses the transaction log. How? For each data modification, SQL Server reads the affected data pages into the buffer cache, updates them with the new values (still in the buffer cache) and records the modification in the Log file. This is why the log is used and you can see big log growth during a transaction.

In the Simple recovery model – when a checkpoint occurs (usually an internally managed SQL operation) – the dirty data pages (those modified in the buffer cache) are written (flushed) to disk. Also at checkpoint, the inactive part of the log (not used by any open transaction) is truncated, making the transaction log re-usable for other transactions. After the log has been truncated, the log file can then be shrunk to reduce its physical size if needed.
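A minimal sketch of that sequence (the logical log file name recovery_test_1_log is an assumption – check sys.database_files for yours):

-- How full is the transaction log of each database?
DBCC SQLPERF (LOGSPACE);
-- Force a checkpoint in the current database; in Simple recovery this truncates the inactive log part
CHECKPOINT;
-- Shrink the physical log file to 100 MB, only if really needed
DBCC SHRINKFILE (recovery_test_1_log, 100);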

In the Full recovery model, the log is truncated only after a log backup; a checkpoint does not truncate the inactive part of the log file. This is the main difference between the Full and Simple recovery models. The following articles are a good start for understanding all of this better:

  1. MSDN article – Transaction Log Truncation
  2. Louis Nguyen article – SQL Server Database Engine Basics
  3. SQL Server Performance blog – Database Recovery Models in SQL Server


Next ->> Part 2

Read memory dump file after a BSOD event

I decided to post about this problem since I have seen many questions about it. There are several tools that can be used to read a memory dump, like Windbg.exe or Dumpchk.exe. In this article I will explain the usage of the Windbg debugger.

First of all, what is a memory dump and in what circumstances do we have to deal with it?

The BSOD or “blue screen of death” is a stop error screen of the Windows operating system, caused by a fatal system error of a non-recoverable nature that makes the system “crash.” When the recovery option is set to write debugging information, a program called SAVEDUMP.EXE is invoked during the fatal system error and writes the entire contents of memory to the system paging file. When the system is rebooted, Windows copies the paging file to a file called MEMORY.DMP, which can be found in C:\Windows.

I will use a minidump file for the analysis because it is smaller and easier to read; minidumps are located in C:\Windows\Minidump.

STEP – by – STEP guide:

1.   Set Windows to create mini dump files: Control Panel -> System -> Advanced -> Startup and Recovery -> Settings -> Write debugging information -> Small memory dump.

(screenshot: Windows memory dump setting)

2.   Download and install the Debugging Tools for Windows for 64-bit or 32-bit systems. Windbg is included in this package.

3.   If you don’t have any dump files, you can use a Windows feature to create one and test how this all works. Open Task Manager -> select a process -> right click and press Create Dump File. It will be located in C:\Users\<your user>\AppData\Local\Temp\***.DMP

4.   Open Start -> Programs -> Debugging Tools for Windows -> Windbg. Select File -> Open Crash Dump and specify the location of your dump.

You will receive an error that no symbols have been loaded; symbols are needed to debug effectively.

*** ERROR: Symbol file could not be found.  Defaulted to export symbols for ntdll.dll –
*** WARNING: symbols timestamp is wrong 0x4a5bdf57 0x4a5be125 for wow64cpu.dll
*** WARNING: symbols timestamp is wrong 0x4a5bdf57 0x4a5bda1b for IPHLPAPI.DLL
*** ERROR: Symbol file could not be found.  Defaulted to export symbols for IPHLPAPI.DLL –

5.    Open File -> Symbol File Path and insert this path:

SRV*c:symbols*http://msdl.microsoft.com/download/symbols

6.    Press Reload and restart the analysis (CTRL + SHIFT + F5).

7.    You will receive a summary of the Bugcheck Analysis; to view the details, click the !analyze -v link in the message below:

Use !analyze -v to get detailed debugging information.



New SQL 2005/2008 installation

When accepting such a role in a company, you should be aware that you will not always be seen as the nicest person that everyone wants to chat with over a coffee. The DBA is usually the man/woman who solves problems as we speak, but at the same time he is the first one held responsible, without much analysis of other factors, when something goes wrong with the database or server administration. The DBA is the one who must discover what went wrong, what to do to solve the problem immediately, and how to prevent such situations from coming back. It’s frustrating, but mostly it’s nice and fun, and quite often you get a chance to demonstrate your intelligence and technical skills and receive appropriate feedback when you are really good at what you’re doing.

Now, this part with “really good” is hard to get. You don’t learn how to be a good DBA from books, the Internet or universities. There are not many people who can teach you the best practices, and generally there aren’t many DBAs in a company, given the big responsibility and trust involved. Basically you just need to have a global vision of the entire system that you manage and treat any problem very seriously. That’s what it is: a DBA deals with problems, anticipates them and solves them. Funny? I say yes …

Although there are many articles out there, a DBA really learns by practicing on a daily basis, bravely facing any problem that arises or is presented by the technical or functional teams. Nobody is really made for it, but some feel that it suits them. I opened this blog to write about the problems I face, or have faced and lost time finding the solution for. I have practiced this job for almost five years, I have run into many difficult situations, and I was helped a lot by other bloggers with similar problems, to whom I am thankful. Although a problem that seems just like another will never have exactly the same solution, there is always one for yours.

For starters I want to describe what you should do on a new installation of SQL 2005/2008. It is always important who did the installation and how the server is configured. Some bad configurations, unfortunately, can only be securely undone with a re-install.

If you’re on a new installation of SQL 2005 or SQL 2008: the installation process is very different between 2005 and 2008, but it is important to consider the following for both:

1. Installation options

  1. Be careful about which edition of SQL you plan to install and on what type of hardware (32-bit or 64-bit). As a rule, Enterprise is installed for production environments, Developer for development environments, and Standard for production but with a lot of limitations compared to Enterprise features. It’s good to check which edition is required before the application is deployed, because the costs differ substantially.
  2. Check with the DEV team whether you need all the components proposed in the kit. SQL Server is resource-hungry software – especially where memory is concerned. For example: if you do not need Analysis Services (for data warehousing), do not install it. If needed, you can install it later.
  3. If you want to install multiple instances on the same machine, you should give each instance a different name instead of using the DEFAULT one. The default instance ID will be MSSQLSERVER, and a Named Instance will be whatever you want it to be – e.g. MYINSTANCE.

2. Startup Accounts

  1. The 2005 install will ask you to insert the details of the service account used to run SQL Server. You can use the Local System account; a Domain account, if the server is connected to a domain and you have previously created a service account specifically for this purpose in Active Directory; or a local user created for this purpose with the Administrator role on the server. After installation you can change the service accounts of the installed components from SQL Server Configuration Manager or from Services in Administrative Tools.
  2. In the 2008 install you can select, in the same window, the service accounts for all installed components, their startup type (Manual/Automatic/Disabled), as well as the collation used at the server level (a very important step). To find out what collation you need, check the documentation of the application to be deployed or consult the DEV team. If no collation is specified, use the default one.

3. Security Models

The next step is to select an authentication mode supported by SQL Server: Mixed mode (SQL & Windows) or Windows. The type of authentication is very important in determining how your users access the system. Microsoft’s recommendation is to use Windows authentication mode as much as possible, because it is the safest mode. With this type of authentication, a user’s access is controlled by a Windows user or group, which in turn is authenticated on the client machine or by Active Directory. SQL authentication, by contrast, opens a door for malicious users who know the username and password and can log into the system. Also, the SQL username & password cross the network unencrypted unless network encryption is enabled (SSL certificate) and can be intercepted by attackers.
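You can verify the configured mode afterwards with SERVERPROPERTY:

-- 1 = Windows Authentication only, 0 = Mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;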

4. Databases Files Location.

  1. In SQL 2005 it was possible to select a single location for user databases (data files and log files), a single location for the system databases (data files and log files) and a single location for the SQL installation files.
  2. In SQL 2008 you can lay out the storage better, so that you perform fewer post-installation changes. You can choose separate locations for the system databases, tempdb (which should be placed on its own, well-sized disk), user database data files and user database log files. Performance is enhanced if data files are separated from log files on different disks, because I/O on log files is sequential while I/O on data files is random (see the tempdb sketch below).
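For example, tempdb can be moved to its own disk after installation (a sketch: the logical names tempdev/templog are the SQL Server defaults, the target paths are placeholders, and the change takes effect after a service restart):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf');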

After you install SQL Server it is always very important to set up a few configuration options and server properties:

  1. Memory minimum and maximum values. I wrote a special blog post about setting memory needs and it can be read here.
  2. The model database recovery option (Simple if the SQL Server is for a DEV environment) and its growth options (the default of 1 MB and 10% growth for Data/Log files is a big mistake that most leave behind and later face the problems caused by it; see the sketch below).
  3. Set Firewall rules to allow access to the SQL port.
  4. Check “Allow remote connections to this server”: right click the SQL Server –> Properties –> Connections page.
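A sketch for point 2 (modeldev and modellog are the default logical file names of the model database; the growth values are just sane examples):

-- Replace the risky 1 MB / 10% autogrowth defaults on model
ALTER DATABASE [model] MODIFY FILE (NAME = modeldev, FILEGROWTH = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = modellog, FILEGROWTH = 128MB);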

SQL 2005/2008 – Configure Memory (part 1)

I promised to come back with the checklist of things to be done after the installation of SQL 2005/2008 is completed. This post will mainly focus on the memory settings. Even though memory is not the only worry, it is the principal one in my mind.

For starters, check that you installed the correct version of SQL on the right platform, with the right Service Pack and edition. The following commands show all the basic information linked to the server properties. This information can be stored, if you want, in a dedicated DBA table on a dedicated DBA server. If you have a lot of servers under your administration, collecting this kind of information is mandatory.

SELECT SERVERPROPERTY ('productversion');
SELECT SERVERPROPERTY ('productlevel');
SELECT SERVERPROPERTY ('edition');
SELECT @@VERSION;
EXEC master.dbo.xp_msver;

1. Memory min and max values:

As with other applications, SQL Server needs memory to run. However, unlike most applications, it lets you decide how much memory it can use. That’s good, because SQL Server likes lots of memory and will take as much as is available on the server. If this option is left at its default value, you will soon face out-of-memory problems for other applications or even for the OS. The funny part is that the memory acquired by SQL Server is not released back to the system. Say the system has 16 GB of physical memory: if at some point SQL needs 15.5 GB of RAM to function and that amount of memory is available, SQL will grow its memory usage to 15.5 GB and will not give it back to the system when the transaction is over. If SQL Server uses all of the memory in the server and Windows doesn’t have enough memory to function, SQL Server will run as if it is short on memory: query response times will go up, CPU usage will go up and disk I/O will go up as Windows begins paging more and more RAM to the hard drive. You don’t want to reach this situation, because the complications can be disastrous for your databases.
You can set the minimum and maximum memory values by issuing sp_configure from SQL or from the “Properties” window of the SQL Server instance in the GUI. Both the min and max values are extremely important for SQL to function properly.

Set minimum memory for SQL Server:

Minimum memory needs are specified on the MSDN website or in the installation files of each SQL edition, in the hardware requirements section.
Here are the Memory needs for SQL 2005/2008 editions:

SQL 2005:
Minimum: 512 MB
Recommended: 1 GB or more
 
SQL 2008:
Minimum: 1 GB
Recommended: 4 GB or more
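The minimum itself is set with sp_configure, just like the maximum below (both are advanced options, so enable 'show advanced options' first as shown below; the value is in MB and 1024 is only an example):

-- To set the minimum server memory value:
sp_configure 'min server memory', 1024
RECONFIGURE
GO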

Set the maximum memory for SQL Server:

-- to allow to see the advanced options of SQL:
sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- To set the maximum server memory value:
sp_configure 'max server memory', 6144
RECONFIGURE
GO
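To double-check what is actually in effect, sys.configurations shows both the configured and the running values:

-- value = configured, value_in_use = currently active
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');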

 
After you set the maximum and minimum values, there are a few things to be checked or considered.
 
2. Memory considerations if you run SQL Server on 32-bit platform:
 
Standard 32-bit addresses can map a maximum of 4 GB of memory. The standard address spaces of 32-bit processes are therefore limited to 4 GB. By default, on 32-bit Microsoft Windows operating systems, 2 GB are reserved for the operating system, and 2 GB are made available to the application – in our case to SQL Server.
 
2.1. /3GB switch.
 
If you have a total of 4 GB Physical memory and you need to address more than 2 GB to SQL, you have to specify the /3GB parameter in the Boot.ini file of Windows 2000/2003 Server and reboot the machine. With the /3GB parameter, the operating system reserves only 1 GB to its processes and the SQL Server can access up to 3 GB instead of the default setting of 2 GB.
 
2.2. PAE (Physical Address Extension) and AWE (Address Windowing Extensions)
 
Physical Address Extension (PAE) is a processor feature that enables x86 processors to access more than 4 GB of physical memory on capable versions of Windows. Certain 32-bit versions of Windows Server running on x86-based systems can use PAE to access up to 64 GB or 128 GB of physical memory, depending on the physical address size of the processor. However, PAE does not change the amount of virtual address space available to a process. Each process running in 32-bit Windows is still limited to a 4 GB virtual address space.  At this point we need to consider the AWE facility.

AWE is a set of extensions to the memory management functions of Windows that allow applications to address more memory than the 2-3 GB that is available through standard 32-bit addressing. AWE lets applications acquire physical memory, and then dynamically map views of the nonpaged memory to the 32-bit address space. Although the 32-bit address space is limited to 4 GB, the nonpaged memory can be much larger. This enables memory-intensive applications, such as large database systems, to address more memory than can be supported in a 32-bit address space.

Steps to configure the operating system for AWE:

  1. To support more than 4 GB of physical memory on 32-bit operating systems, you must first add the /PAE parameter to the Boot.ini file and reboot the computer.
  2. Remove the /3GB parameter from the Boot.ini file IF there is more than 16 GB of physical memory available on the computer. The operating system requires 2 GB of virtual address space for system purposes and can therefore support only a 2 GB user-mode virtual address space.
  3. Enable AWE. Go to SQL Server Management Studio > Properties dialog > Memory > select ‘Use AWE to allocate memory’, or alternatively issue the following command against the target server:

     sp_configure 'awe enabled', 1
     RECONFIGURE

  4. Ensure that you add the account that the SQL Server service runs under to the ‘Lock Pages in Memory’ Local Security Policy (see Microsoft KB 811891 for the exact details on how to do this). If you don’t update the local security policy, SQL Server will not actually use AWE and will continue to use only 2 GB of memory; furthermore, you’re likely to see the following in the SQL Server log:

     "Cannot use Address Windowing Extensions because lock memory privilege
     was not granted."

  5. Restart the server machine.

 
3. Memory considerations if you run SQL Server on 64-bit platform:
 
In today’s 64-bit platforms, some great improvements have been made in the memory arena. While 32-bit platforms require you to use AWE and PAE to access more than 2 GB of RAM, the 64-bit platforms don’t have this limitation. In 64-bit platforms, all memory is available to the applications, as long as they are compiled as 64-bit applications; 32-bit applications running in Windows on Windows (WOW) have the same memory limits that they have running on a 32-bit platform.

While SQL Server only provides a few simple memory settings, setting them correctly is extremely important. Correct memory settings will have SQL Server running smoothly for a long time to come. Memory settings should be reviewed regularly to ensure that the original settings are still appropriate. After all, the amount of memory installed last year may no longer be enough memory or allocated correctly anymore.

You can find out and monitor how much memory SQL needs and whether the Operating System is comfortable with the memory left. Check out my next post related to SQL Memory needs.



SQL ERROR Fix: sp_MS_marksystemobject: Invalid object name ‘[cdc].[cdc_TABLE1]’

Hi, one of the new features in Microsoft SQL Server 2008 is the ability to track changes on a table, using the Change Data Capture (CDC) feature. It is extremely useful for BI (Business Intelligence) processes because it allows tracking only the changes made to a table instead of importing the whole table each day for processing. Today a DEV colleague and I faced a curious problem while trying to enable an additional table for cdc on a database already enabled for the Change Data Capture feature. Just FYI, you need to be sysadmin or have the db_owner role to manage the cdc feature.

Normally when you run:

USE [DB1]
GO
EXEC sys.sp_cdc_enable_table_change_data_capture
@source_schema = 'dbo',
@source_name = 'TABLE1',
@role_name = 'cdc_TABLE1'
GO

Results:

Job 'cdc.Test_capture' started successfully.
Job 'cdc.Test_cleanup' started successfully.

SQL creates 2 new SQL Agent jobs and a new table, named as specified in the @role_name parameter, which will reflect the changes on the tracked table.
The table is marked as a system table and can be found under the “System Tables” folder of the Database/Tables tree in SQL Server Management Studio.

In my case the cdc table was created with a few warnings and NOT as a system table; it was placed under the User Tables leaf. At first look it was working as a cdc table, but I didn’t want to take the risk.

These were the warnings:

sp_MS_marksystemobject: Invalid object name '[cdc].[cdc_TABLE1]'
sp_MS_marksystemobject: Invalid object name '[cdc].[sp_insdel_605961235]'
sp_MS_marksystemobject: Invalid object name '[cdc].[sp_upd_605961235]'
sp_MS_marksystemobject: Invalid object name '[cdc].[sp_batchinsert_605961235]'

After digging a little on the Internet I found out that when the stored procedure sys.sp_cdc_enable_table_change_data_capture is executed, the undocumented procedure sp_MS_marksystemobject tries to mark the table as a system table. In our case that step failed and the table was marked as a user table, which was not what we wanted.
So the first thing was to manually run the command which I thought could do the trick and mark the table as system:

sp_MS_marksystemobject '[cdc].[cdc_TABLE1]'

Same warning. Not what I expected…

After another couple of hours of research, I found the issue, and the resolution came from a totally unsuspected area. It turned out that the cdc schema, under which all cdc tables are created, had a different owner than the cdc SQL user; somewhere in the past it had been changed to another SQL user. The reason it failed with the “Invalid object name” warning became clear to me: the user owning the schema didn’t have the rights to see/change cdc objects. I had to change the cdc schema owner back to cdc, as it is supposed to be.
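Checking the schema owner takes one query (sys.schemas joined to sys.database_principals):

-- Who owns the cdc schema? It should be the cdc user.
SELECT s.name AS schema_name, p.name AS owner_name
FROM sys.schemas s
JOIN sys.database_principals p ON p.principal_id = s.principal_id
WHERE s.name = 'cdc';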

This command solved the issue and let me enable the cdc feature on the table without warnings:

USE [DATABASE]
GO
ALTER AUTHORIZATION ON SCHEMA::[cdc] TO [cdc]
GO

SQL ERROR Fix : Error : Msg 9514, Level 16, State 1, Line 1 Xml data type is not supported in distributed queries. Remote object ‘LINKED_SERVER.Database.dbo.Table’ has xml column(s).

I ran into a rather senseless restriction from MS when using INSERT INTO…SELECT FROM to transfer a few rows through a Linked Server from a table that has an XML data type column, even if that column is not listed in the INSERT INTO statement. Below is a detailed reproduction of this error.

The insert statement from this example uses a variable for filtering options.

Statement:

DECLARE @maxid as nvarchar (max)
SET @maxid = (SELECT TOP 1 col1 FROM Table_Local ORDER BY col1 DESC)
INSERT INTO Database.dbo.Table_Local (col1, col2, col3, XMLcol4)
SELECT col1, col2, col3, XMLcol4
FROM [Linked_Server].Database.dbo.Table_Remote t2 WHERE col1 > @maxid

Error:

Msg 9514, Level 16, State 1, Line 4
Xml data type is not supported in distributed queries.
Remote object 'LINKED_SERVER.Database.dbo.Table' has xml column(s).

Resolution:

1. Change the statement to use the OPENQUERY function instead of the four-part Linked Server name.
2. Cast the XML column to varchar(max) on the remote side.

Basically, the data is queried on the remote server, the XML data is converted to varchar there, sent to the requesting server and then converted back to XML.

Here is the right statement:

DECLARE @TSQL varchar(8000)
DECLARE @maxid as nvarchar (max)
SET @maxid=(SELECT TOP 1 col1 from Table_Local ORDER BY col1 DESC)
SELECT  @TSQL = 'insert into Database.dbo.Table_Local
                          (col1, col2, col3, XMLcol4)
select col1, col2, col3, Cast(t2.XMLcol4 as XML) as XMLcol4
from OPENQUERY([LINKED_SERVER_NAME],''SELECT col1, col2, col3,
Cast(XMLcol4 as Varchar(max)) as XMLcol4
FROM
DATABASE.dbo.Table_Remote where col1 >  ''''' + @maxid + ''''''') t2'
EXEC (@TSQL)

DiskPart – How to find out which physical disk a logical volume is allocated to

There is a fast way to check which physical disk a logical disk is allocated to. This way you can better distribute the available storage between data files, log files and tempdb files; you should know that these are best separated onto different disks.

Consider applying this method only to servers whose physical storage is not on SAN disks. The distribution of logical disks on a SAN is very different and must be provided by the storage/network admins.

The following commands don’t change anything, but play carefully with diskpart ’cause it’s a partitioning tool.

Diskpart.exe command-line utility basic commands:

Open a command prompt and type diskpart. Depending on the Windows version, diskpart will open in the same window or a new one. Inside the utility you can use the following commands.

list disk – lists the disks known to Windows. At this point you will know how many disks there are to play with.

(screenshot: Diskpart – list disk output)

select disk (Number) – sets the focus on the specified disk. You need to set the focus on one disk in order to check the partitions or logical volumes created on it. All the commands below need the focus set on a single disk.

detail disk – after setting the focus, you will see all the volumes created on the specified disk.

list partition – you can guess what that is – lists the partitions of the disk in focus.

(screenshot: Diskpart – detail disk output with the volumes of the selected disk)
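A typical session looks like this (the disk numbers, labels and sizes below are only illustrative):

DISKPART> list disk

  Disk ###  Status      Size     Free
  --------  ----------  -------  -------
  Disk 0    Online       136 GB      0 B
  Disk 1    Online       931 GB      0 B

DISKPART> select disk 1

Disk 1 is now the selected disk.

DISKPART> detail disk

  Volume ###  Ltr  Label  Fs    Type       Size
  ----------  ---  -----  ----  ---------  ------
  Volume 2    E    Data   NTFS  Partition  683 GB
  Volume 3    F    Logs   NTFS  Partition  243 GB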

It is now obvious that Disk 1, with a capacity of 931 GB, hosts two logical volumes – 683 GB and 243 GB.

A little about this blog…

I started this blog wanting to expand and consolidate the knowledge I have acquired over the past 5 years of working with SQL Server as a Database Administrator. Also, if some of the information presented here serves as help, or as the starting point of a discussion, I will be happy to get in touch with my readers through comments and questions.

This blog is designed for DBAs and people technically interested in DBA work. Here you will find a small portion of the problems/daily work/solutions which I hope you will find useful and interesting.