Category Archives: Server Configuration

Configure MSDTC for Distributed Transactions

A few days ago, on a software project, I had to enable and configure MSDTC (Distributed Transaction Coordinator) between 2 machines: a web server using NServiceBus to process messages in a transaction and a SQL Server.
I encountered some issues, so I would like to detail the order of the configuration steps, as well as the testing methods that should be done/checked, before MSDTC is fully functional.

 
Step 1: Network connectivity

Make sure both servers can ping each other by machine name, because MSDTC uses NetBIOS to establish a connection.

Start a command prompt window and execute on both servers:

  • ping [SERVER_NAME1]
  • ping [SERVER_NAME2]

Step 2: Enable MSDTC on both servers.

There are pictures for Windows 2008 Server and Windows 2003 Server because the MSDTC location changes from one Windows edition to another. The options to check are the same, though.

  1. Open Component Services.
     Component Services

  2. Access the DTC properties. The location differs between Windows editions:
     Windows 2003 - MSDTC location
     Windows 2008 - MSDTC location

  3. Enable MSDTC according to the options selected below. Check only the red check boxes and click Apply.
     Windows 2008 - MSDTC properties

     A warning message will pop up to inform you that the Distributed Transaction Coordinator Windows service will be started (or restarted).
     MSDTC restart

  4. Set the startup type of the Distributed Transaction Coordinator Windows service to Automatic. (See the command-line sketch below.)
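If you prefer to script or verify the last two steps, the startup type can be set from a command prompt, and the Security-tab check boxes are persisted as registry values under HKLM\SOFTWARE\Microsoft\MSDTC\Security. A minimal sketch (the value names can vary slightly between Windows versions, so treat the reg queries as a read-only verification aid):

rem Set the MSDTC service to start automatically
sc config msdtc start= auto
rem Read back the network-access flags that the Security tab writes
reg query HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccess
reg query HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccessInbound
reg query HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccessOutbound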

If you don’t have firewalls that prohibit external access to the machines’ ports, then you can stop here with the MSDTC configuration.
MSDTC will function fine with only the basic configuration (Step 1 & Step 2) when there is no firewall involved in external network communication. What do you do when there is a firewall? In most network configurations it is mandatory for securing external access. If you follow all the steps detailed below, you should be able to make MSDTC work without problems. Take a look at the next steps.

Step 3: Restrict MSRPC dynamic port allocation.
 
The MSDTC service depends on the RPC protocol, which stands in front of every Windows process that uses RPC. When you deal with MSDTC, you also have to consider the RPC access points. When MSRPC is left with its default settings, every dependent service, including MSDTC, is free to use a dynamically allocated port in the range 1024-65535. Basically, it means that each restart of MSDTC can result in a different port number. Fortunately, you can restrict this port range, which means that instead of creating a firewall rule that opens all the ports from 1024 to 65535, you only need to open the restricted range configured in the RPC settings.

There is one thing to be considered though:
 
There can be up to 100 services that depend on RPC and will be affected by this change, so make the range not too small… and not too big. From a little reading on the internet, 50-100 ports seems to be a reasonable minimum-maximum for the RPC-dependent services to function, but again, it depends on each machine and how many processes depend on RPC. If you want to find out which ones these are, look at the RPC service’s Dependencies tab and count the active ones.

RPC_Service

Perform these steps on both machines in order to configure the restricted port range. The port range does not have to be the same on both machines.

  1. Open the Component Services properties window.
     Component Services Properties

  2. Access the Default Protocols tab and insert a new port range (see the command-line sketch after this list).
     Change Port Range
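If you would rather script this, the Default Protocols tab stores its setting under the HKLM\SOFTWARE\Microsoft\Rpc\Internet registry key. A minimal sketch, assuming an example range of 5000-5200 (pick your own range; a reboot is still required, as described in Step 5):

rem Restrict RPC dynamic port allocation to an example range of 5000-5200
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d "5000-5200" /f
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y /f
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y /f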

Next, in order to be able to start a distributed transaction through the MSDTC service, both participating servers must be configured to trust each other by allowing access to each other’s ports used by the MSDTC service.

Step 4: Add Firewall Inbound rules

On SERVER 1: allow inbound requests to the configured port range.
On SERVER 2: allow inbound requests to the configured port range.
This will enable the communication between SERVER 1 and SERVER 2 through the MSDTC service.

SERVER 1 will be able to access the MSDTC allocated port on SERVER 2, and SERVER 2 will be able to access the MSDTC allocated port on SERVER 1. A command-line sketch follows below.
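On Windows 2008 the rules can also be added from a command prompt with netsh (a sketch, assuming the example 5000-5200 range from Step 3; on Windows 2003 the older netsh firewall syntax applies instead). Port 135, the RPC endpoint mapper, must be reachable as well:

rem Allow the RPC endpoint mapper and the restricted dynamic range
netsh advfirewall firewall add rule name="RPC endpoint mapper" dir=in action=allow protocol=TCP localport=135
netsh advfirewall firewall add rule name="MSDTC RPC dynamic range" dir=in action=allow protocol=TCP localport=5000-5200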

Step 5: Restart both machines

You need to restart both machines where the port range was changed in order for these modifications to take effect.

 
Step 6: Testing MSDTC connectivity
 
After you have done all of the above, you might want to test whether a distributed transaction can be initialized and committed. There are 2 important Microsoft troubleshooting tools that I used and that worked for me: DTCPing and DTCTester. I will not detail the testing steps because all of them are covered in this post at step 2 and step 4. A sample DTCTester invocation is sketched below.
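For reference, DTCTester runs a distributed transaction against a SQL Server through an ODBC DSN; a typical invocation looks like this (the DSN name and credentials are placeholders you must create first):

dtctester [DSN_NAME] [USER_NAME] [PASSWORD]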

SQL 2005/2008 – Configure Memory (part 2)

How can we be sure that SQL Server is low on memory? And what value should max server memory be set to?

Here is a short guide on how to analyze memory needs for an Instance of SQL Server:
 
Use Perfmon Counters to understand SQL Memory needs
 
Assuming you have not yet set the maximum SQL Server memory limit, start Perfmon and include the 3 counters below while the system is most heavily used. A stress test is good enough to catch the highest/lowest values.

  1. Memory\Available MBytes – reports the available physical memory. This gives a good indication of whether you are running low on physical memory. If the machine is running low on physical memory, paging will either be happening or will soon begin to happen. This is useful data for diagnosing OOM issues. If this counter is greater than 10% of the actual RAM in your machine, then you probably have more than enough RAM and don’t need to worry.
  2. Process(sqlservr)\Private Bytes – indicates the amount of memory being used that can’t be shared with other processes. You should monitor this if you want to see how much memory your process uses. This counter also gives a good indication of how your application impacts the entire system: using a lot of private bytes has a big impact on the machine, since the memory cannot be shared with other processes.
  3. SQLServer:Buffer Manager\Page life expectancy – normal value: >= 300 sec. According to Microsoft, 300 seconds is the minimum target for page life expectancy. If the buffer pool flushes your pages in less than 300 seconds, you probably have a memory problem. (This value can also be read from inside SQL Server; see the query after this list.)
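If you prefer to read Page Life Expectancy from SQL Server itself, it is also exposed through the sys.dm_os_performance_counters DMV (available in SQL 2005/2008); a minimal query:

-- Current Page Life Expectancy (in seconds), as reported by the buffer manager
SELECT object_name, counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%'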

a) SQL is short on memory
 
The first indication that SQL/the machine is low on memory is that the SQL process will take as much memory as it finds free and available on the system – until the system runs out of memory. So check the first counter from the list. It should be over 10% of total RAM. If it’s not, look at the third counter to see whether SQL is really hungry for memory or just very greedy. If Page Life Expectancy is above 300 sec but SQL still asks for more memory, then it is time to limit SQL memory so that at least 1 GB is left for other OS processes. If, on the contrary, Available MBytes is very low and the Page Life Expectancy counter is below 300 sec, then you are actually forcing the system to cope with less memory than it needs. Add a few GB of RAM and check again.
 
b) SQL has sufficient memory
 
The second case is the easiest one, meaning that SQL didn’t grab all of the available memory and the system still has enough memory for its own processes. I usually take the value from Private Bytes and add a few GB (1-3) to it to set the maximum memory limit in the SQL engine settings.

In short, Page Life Expectancy is a very good indication that SQL has less memory than it would be comfortable with, and I suggest collecting this value periodically along with the other counters mentioned.

Basically, when PLE goes under 300 sec, it means the activity on the disk will grow (more paging will occur) in order to fetch the pages that should still be in the buffer pool. At this point CPU usage will also grow in order to manage the high disk activity. So don’t jump to conclusions if you see high disk activity or increasing CPU: memory, disk and CPU are very tightly related.

In addition you could add these counters to check CPU and Disk activity:

  • Processor(_Total)\% Processor Time – determines the percentage of time the processor is busy by measuring the percentage of time the thread of the Idle process is running and then subtracting that from 100 percent. If you begin to see values of 70 percent or more for this counter, investigate your processor’s activity further, as follows:
    1. Examine System\Processor Queue Length.
    2. Identify the processes that are running when Processor\% Processor Time and System\Processor Queue Length values are highest.
  • PhysicalDisk(_Total)\Current Disk Queue Length – used to determine the number of I/O requests queued for service. Notice that this might overstate the true length of the queue, because the counter includes both queued and in-service requests. If the value of Current Disk Queue Length exceeds twice the number of spindles, then you are likely developing a bottleneck. With a volume set, a queue that is never shorter than the number of active physical disks indicates that you are developing a bottleneck. Normally, for a disk with two spindles the value should be at most 4.

SQL 2005/2008 Database Recovery Procedure – Recovery Models (part 1)

When you create a new database and prepare the environment for a certain application to be deployed, you must take into consideration the database recovery model configuration. Each new database is created with the same recovery model as the model system database; model works as a template for each newly created database. The default setting is “Full”.

The first thing to do is to decide whether you need the Full, Simple or Bulk-Logged recovery model on your databases. I will not go into much detail about what each model means, as there are a lot of very good articles describing the pros and cons of each (links to them are posted at the bottom of this post).

I will mainly focus on recovery procedures in specific types of recovery models. They are different and there are some factors that need to be considered before you decide which model to set on your databases.

1. Full Recovery Model

In the Full recovery model all transactions are logged and you have full control over your business – human or technical mistakes are no longer a disaster. You can recover the database to the last transaction log backup taken – be it 30 minutes or 10 minutes ago – depending on your schedules.

In order to be able to do so, Full backups, Differential backups (if you want to reduce the recovery process to a smaller backup chain) and Log backups must be enabled according to specific schedules that best fit your transaction flows. The transaction log needs to be backed up regularly, not only to prevent the loss of work but also to keep the transaction log size under control.

The recovery process of a database can be done by:

  1. Restoring the last Full backup + each one of the Log backups performed since the last Full backup.
  2. Restoring the last Full backup + the last Differential made after the last Full + each one of the Log backups performed after the last Differential backup.
  3. Alternatively, if the database is damaged or corrupted and has entered a suspect state, you can recover the database to the last transaction in the current (not yet backed-up) transaction log. This is done by backing up the tail-of-the-log, if possible, and applying the 1st or 2nd of the above methods + the tail-of-the-log just backed up (see Database Recovery procedure – Data File deleted (Part 2)). A minimal restore sequence is sketched below.
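For illustration, here is a minimal T-SQL sketch of method 1 (the database name and backup paths are placeholders; the exact list of log files depends on your schedules):

-- 1. Restore the last Full backup, leaving the database able to accept more restores
RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\MyDB_Full.bak' WITH NORECOVERY
-- 2. Restore each Log backup taken since the Full backup, in order
RESTORE LOG MyDB FROM DISK = 'D:\Backup\MyDB_Log1.trn' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'D:\Backup\MyDB_Log2.trn' WITH NORECOVERY
-- 3. Bring the database online
RESTORE DATABASE MyDB WITH RECOVERY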

Conclusion: Stick to the Full recovery option if your business is critical and transactions can’t be re-created or restored easily from other tools.

2. Simple Recovery Model

I will keep it simple. Choose this model if your databases are for DEV/Test purposes or… are Production but not critical ones, such as: databases with static data, with data that can be re-created or recovered from other tools, and mostly databases containing read-only data (data warehouse databases).

You must be sure that your system can live with the loss of all transactions processed since the last Full or Differential backup was made. Although Log backups are not possible, I suggest you enable more frequent Differentials anyway, as they reduce how much work can be lost.

The recovery process of a database can be done by:

  1. Restoring the last Full backup + the last Differential made since the last Full backup.

3. Bulk-Logged Recovery Model

It is quite similar to the Full recovery model, with the single difference that in the Bulk-Logged recovery model NOT all transactions are fully logged, so you CAN’T have full control over your business. The minimally logged operations are: SELECT INTO, bulk-load operations, CREATE INDEX, as well as text and image operations; these are not individually recoverable. You still need to do Full, Differential and transaction log backups.

If a disaster forces you to recover a database set to the Bulk-Logged recovery model, you will be able to do so by:

  1. Restoring the last Full backup + the last Differential made after the last Full + each one of the Log backups performed since the last Full backup that DO NOT contain any bulk-logged transaction. In other words, you will be able to recover the database completely to the last log backup made only if no log backup since the last Full contains bulk activity. If one does, you will be able to recover the database using log backups only up to the beginning of the log backup that contains the bulk activity.
  2. Contrary to the Full recovery model, there are exceptions to when you can back up the tail-of-the-log in order to recover the database completely to the last transaction made (a sketch of the tail-of-the-log backup follows this list):
  • If there has been a bulk-logged operation since the last log backup, a tail-of-the-log backup is not possible at all, and you will have lost all transaction log generated since the last log backup.
  • If you are lucky and there are no bulk-logged operations since the last log backup, a tail-of-the-log backup is possible; but if there are bulk-logged operations in some of the previous log backups since the last Full, the tail-of-the-log can’t be used in a sequential restore.
  • You can back up the tail-of-the-log and use it only if there are no bulk-logged operations caught in any of the needed previous log backups and none in the current not-backed-up transaction log.
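A minimal sketch of the tail-of-the-log backup itself (placeholder name/path; WITH NO_TRUNCATE is the variant used when the database is damaged):

-- Back up the tail of the log so the restore chain can reach the last transaction
BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_Tail.trn' WITH NO_TRUNCATE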

Conclusion: Why should you use the Bulk-Logged recovery model? If you have a big amount of bulk insert operations scheduled at specific hours (nightly), the Bulk-Logged recovery model may be the best choice. Do a Full backup after the bulk insert and you will have a good backup log chain to recover from until the next bulk operation.

Whichever option you choose, be sure you choose it wisely and plan your recovery options accordingly. In part 2 and part 3, I will simulate several recovery methods considering different scenarios.

Log File Usage – in Full and Simple Recovery Model

I have heard many people complaining that their database log file grows huge even though the database is in the Simple recovery model. This is absolutely normal, because the SQL transaction mechanism (ACID) does not change when the recovery model changes. In order to keep a transaction atomic, the transaction log is used no matter which recovery model you choose. Atomicity refers to the idea that a transaction needs to be processed completely or not at all. In other words, if a transaction fails at any point in the process, the entire transaction must be rolled back, and the log is the place where all modifications are recorded.

The transaction log is used by every DML statement (INSERT / DELETE / UPDATE). How? At each data modification statement, SQL Server reads the data pages affected by the modification into the buffer cache, updates them with the new values (also in the buffer cache) and records the modification in the log file. This is why the log is used, and why you can see big log growth during a transaction.

In the Simple recovery model, when a checkpoint occurs (usually an operation managed internally by SQL), the dirty data pages (those modified in the buffer cache) are written (flushed) to disk. Also at checkpoint, the inactive part of the log (not used by any transaction) is truncated, making the transaction log reusable for other transactions. After the log has been truncated, the log file can then be shrunk to reduce its physical size if needed. (If truncation does not seem to happen, the query below shows what is holding the log.)
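A quick way to check what is preventing log truncation is the log_reuse_wait_desc column of sys.databases (available in SQL 2005/2008); a minimal query:

-- Values such as NOTHING, LOG_BACKUP or ACTIVE_TRANSACTION explain why the log cannot be truncated
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases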

In the Full recovery model, the log is truncated only after a log backup; the checkpoint does not truncate the inactive part of the log file. This is the main difference between the Full and Simple recovery models. The following articles are a good start for understanding all of this better:

  1. MSDN article – Transaction Log Truncation
  2. Louis Nguyen article – SQL Server Database Engine Basics
  3. SQL Server Performance blog – Database Recovery Models in SQL Server


Next ->> Part 2

New SQL 2005/2008 installation

When accepting such a role in a company, you should be aware that you will not always be seen as the nicest person everyone wants to chat with over a coffee. The DBA is usually the man/woman who solves problems as we speak, but at the same time he is the first one held responsible, before other factors are even analyzed, when something goes wrong with the database or server administration. The DBA is the one who must discover what went wrong, what to do to solve the problem immediately, and how to prevent such situations from coming back. It’s frustrating, but mostly it’s nice and fun, and quite often you get a chance to demonstrate your intelligence and technical skills and receive appropriate feedback when you are really good at what you’re doing.

Now, this part with “really good” is hard to get. You don’t learn how to be a good DBA from books, the internet or universities. There are not many people who can teach you the best practices, and generally there aren’t many DBAs in a company, given the big responsibility and trust involved. Basically, you need to have a global vision of the entire system that you manage and treat any problem very seriously. That’s what it is: a DBA deals with problems, anticipates them and solves them. Funny? I say yes…

Although there are many articles, a DBA really learns by practicing on a daily basis, bravely facing any problem that arises or is presented by the technical or functional team. Nobody is really made for it, but some feel that it suits them. I opened this blog to write about the problems I have faced and lost time finding solutions for. I have practiced this job for almost five years, I have run into many difficult situations, and I was helped a lot by other bloggers with similar problems, to whom I am thankful. Although no problem that looks just like another will have the same solution, there is always one for yours.

For starters, I want to describe what you should do on a new installation of SQL 2005/2008. It is always important who did the installation and how the server is configured. Some bad configurations, unfortunately, can only be safely undone with a re-install.

If you’re on a new installation of SQL 2005 or SQL 2008: the installation process is very different between 2005 and 2008, but it is important to consider the following for both:

1. Installation options

  1. Be careful about which edition of SQL you plan to install and on what type of hardware (32-bit or 64-bit). As a rule, Enterprise is installed for production environments, Developer for development environments, and Standard for production but with a lot of limitations compared to the Enterprise features. It’s good to check which edition is required before the application is deployed, because the costs differ substantially.
  2. Check with the DEV team whether you need all the components proposed in the kit. SQL Server is resource-hungry software – especially where memory is concerned. For example: if you do not need Analysis Services (for data warehousing), do not install it. If needed, you will be able to install it later.
  3. If you want to install multiple instances on the same machine, you should give your instance a different name instead of using the DEFAULT one. The default instance ID will be MSSQLSERVER, and a named instance will be whatever you want it to be – e.g. MYINSTANCE.

2. Startup Accounts

  1. The 2005 installer will ask you to enter the details of the service account used to run SQL Server. You can use the Local System account; a domain account, if the server is joined to a domain and you previously created a service account specifically for this purpose in Active Directory; or a local user created for this purpose with the Administrator role on the server. After installation you can change the service accounts of the installed components from SQL Server Configuration Manager or from Services in Administrative Tools.
  2. In the 2008 installer you can select, in the same window: the service accounts for all installed components, their startup type (Manual/Automatic/Disabled), as well as the collation used at the server level (this step is very important). To find out which collation you need, check the documentation of the application being deployed or consult the DEV team. If no collation is specified, use the default one.

3. Security Models

The next step is to select an authentication mode supported by SQL Server: Mixed mode (SQL + Windows) or Windows. The type of authentication is very important in determining the way your users access the system. Microsoft’s recommendation is to use Windows authentication mode as much as possible, because it is the safest mode. With this type of authentication, user access is controlled by a Windows user or group, which in turn is authenticated on the client machine or by Active Directory. SQL authentication, in contrast, opens a door for malicious users who know the username and password and can log into the system. Also, SQL usernames & passwords cross the network unencrypted – unless network encryption is enabled (SSL certificate) – and can be intercepted by attackers.

4. Database Files Location.

  1. In SQL 2005 it was possible to select a single location for user databases (data files and log files), a single location for the system databases (data files and log files) and a single location for the SQL installation files.
  2. In SQL 2008 you can lay out the storage space better, so that you perform fewer post-installation changes. You can choose separate locations for: the system databases, tempdb (which should be placed on separate, well-sized disk space), user database data files and user database log files. Performance will be enhanced if data files are separated from log files on different disks, because I/O on log files is sequential while I/O on data files is random.

After you install SQL Server, it is always very important to set up a few configuration options and server properties, such as:

  1. Memory minimum and maximum values. I wrote a dedicated post about sizing memory needs; it can be read here.
  2. The model database recovery option (Simple if the SQL Server is for a DEV environment) and its growth options (the default of 1 MB and 10% growth for data/log files is a big mistake that most people leave behind and later face the problems caused by it). A sketch follows below.
  3. Set firewall rules to allow access to the SQL port.
  4. Check “Allow remote connections to this server”: right-click the SQL Server –> Properties –> Connections page.
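For item 2, a minimal T-SQL sketch (modeldev/modellog are the default logical file names of the model database; the growth sizes are only examples to adapt to your environment):

-- Make every new database default to Simple recovery (DEV environments)
ALTER DATABASE model SET RECOVERY SIMPLE
-- Replace the 1 MB / 10% growth defaults with fixed-size increments
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILEGROWTH = 256MB)
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILEGROWTH = 128MB)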

SQL 2005/2008 – Configure Memory (part 1)

I promised to come back with the checklist of things to be done after the installation of SQL 2005/2008 is completed. This post will mainly focus on the memory settings. Even though memory is not the only worry, it is the principal one in my mind.

For starters, check that you installed the correct version of SQL on the right platform, with the right Service Pack and edition. The following commands will show you all the basic information linked to the server properties. This information can be stored, if you want, in a dedicated DBA table on a dedicated DBA server. If you have a lot of servers under your administration, collecting this kind of information is mandatory.

select serverproperty('productversion')
select serverproperty('productlevel')
select serverproperty('edition')
select @@version
exec master.dbo.xp_msver

1. Memory min and max values:

As with other applications, SQL Server needs memory to run. However, unlike most applications, it allows you to decide how much memory it can use. That’s good, because SQL Server likes lots of memory and will take as much as is available on the server. If this option is left at the default value, you will soon face out-of-memory problems for other applications or even for the OS. The funny part is that the memory acquired by SQL is not released back to the system. Say the system has 16 GB of physical memory. If at some point SQL needs 15.5 GB of RAM to function and that amount is available, SQL will grow its memory usage to 15.5 GB and will not give it back to the system when the transaction is over. If SQL Server uses all of the memory in the server and Windows doesn’t have enough memory to function, SQL Server will run as if it is short on memory: query response times will go up, CPU usage will go up and disk I/O will go up as Windows begins paging more and more RAM to the hard drive. You don’t want to reach this situation, because the complications can be disastrous for your databases.
You can set the minimum and maximum memory values by issuing sp_configure from SQL or from the “Properties” window of the SQL Server instance in the GUI. Both the min and max values are extremely important for SQL to function properly.

Set minimum memory for SQL Server:

The minimum memory needs are specified on the MSDN website and in the installation files of each SQL edition, in the hardware requirements section.
Here are the memory needs for the SQL 2005/2008 editions:

SQL 2005:
Minimum: 512 MB
Recommended: 1 GB or more
 
SQL 2008:
Minimum: 1 GB
Recommended: 4 GB or more

Set the maximum memory for SQL Server:

-- to allow to see the advanced options of SQL:
sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- To set the maximum server memory value (in MB; 6144 MB = 6 GB):
sp_configure 'max server memory', 6144
RECONFIGURE
GO
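The minimum value is set the same way; a sketch (the 1024 MB floor is just an example, not a recommendation):

-- To set the minimum server memory value (in MB):
sp_configure 'min server memory', 1024
RECONFIGURE
GO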

 
After you set the maximum and minimum values, there are a few things to be checked or considered.
 
2. Memory considerations if you run SQL Server on 32-bit platform:
 
Standard 32-bit addresses can map a maximum of 4 GB of memory. The standard address spaces of 32-bit processes are therefore limited to 4 GB. By default, on 32-bit Microsoft Windows operating systems, 2 GB are reserved for the operating system, and 2 GB are made available to the application – in our case to SQL Server.
 
2.1. /3GB switch.
 
If you have a total of 4 GB of physical memory and you need to give SQL more than 2 GB, you have to specify the /3GB parameter in the Boot.ini file of Windows 2000/2003 Server and reboot the machine. With the /3GB parameter, the operating system reserves only 1 GB for its processes, and SQL Server can access up to 3 GB instead of the default 2 GB.
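For reference, a Boot.ini entry with the switch applied might look like the line below (the ARC path is machine-specific; this is only an illustration):

multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB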
 
2.2. PAE (Physical Address Extension) and AWE (Address Windowing Extensions)
 
Physical Address Extension (PAE) is a processor feature that enables x86 processors to access more than 4 GB of physical memory on capable versions of Windows. Certain 32-bit versions of Windows Server running on x86-based systems can use PAE to access up to 64 GB or 128 GB of physical memory, depending on the physical address size of the processor. However, PAE does not change the amount of virtual address space available to a process. Each process running in 32-bit Windows is still limited to a 4 GB virtual address space.  At this point we need to consider the AWE facility.

AWE is a set of extensions to the memory management functions of Windows that allow applications to address more memory than the 2-3 GB that is available through standard 32-bit addressing. AWE lets applications acquire physical memory, and then dynamically map views of the nonpaged memory to the 32-bit address space. Although the 32-bit address space is limited to 4 GB, the nonpaged memory can be much larger. This enables memory-intensive applications, such as large database systems, to address more memory than can be supported in a 32-bit address space.

Steps to configure the operating system for AWE:

  1. To support more than 4 GB of physical memory on 32-bit operating systems, you must first add the /PAE parameter to the Boot.ini file and reboot the computer.
  2. Remove the /3GB parameter from the Boot.ini file IF there is more than 16 GB of physical memory available on the computer. In that case the operating system requires 2 GB of virtual address space for system purposes and therefore can support only a 2 GB user-mode virtual address space.
  3. Enable AWE: go to SQL Server Management Studio > Properties dialog > Memory > select ‘Use AWE to allocate memory’, or alternatively issue the following command against the target server:
     sp_configure 'awe enabled', 1
     RECONFIGURE
  4. Ensure that you add the account that the SQL Server service runs under to the ‘Lock Pages in Memory’ Local Security Policy (see Microsoft KB 811891 for the exact details on how to do this). If you don’t update the local security policy, SQL Server will not actually use AWE and will continue to use only 2 GB of memory; furthermore, you’re likely to see the following in the SQL Server log:
     "Cannot use Address Windowing Extensions because lock memory privilege was not granted."
  5. Restart the server machine.

 
3. Memory considerations if you run SQL Server on 64-bit platform:
 
In today’s 64-bit platforms, some great improvements have been made in the memory arena. While 32-bit platforms require you to use AWE and PAE to access more than 2 GB of RAM, the 64-bit platforms don’t have this limitation. In 64-bit platforms, all memory is available to the applications, as long as they are compiled as 64-bit applications; 32-bit applications running in Windows on Windows (WOW) have the same memory limits that they have running on a 32-bit platform.

While SQL Server only provides a few simple memory settings, setting them correctly is extremely important. Correct memory settings will have SQL Server running smoothly for a long time to come. Memory settings should be reviewed regularly to ensure that the original settings are still appropriate. After all, the amount of memory installed last year may no longer be enough, or may no longer be allocated correctly.

You can find out and monitor how much memory SQL needs and whether the operating system is comfortable with the memory that is left. Check out my next post related to SQL memory needs.