How many times have you thought about using the mirror database for read activity or for reporting purposes? SQL Server currently doesn’t support reading data directly from the mirror database (SQL Server Denali will support this feature). However, even with the current version of SQL Server, you can still read the data from the mirror copy using Database Snapshots.
WHAT IS A DATABASE SNAPSHOT?
A database snapshot is a static, read-only, transaction-consistent snapshot of a user database as it existed at the moment the snapshot was created. You can create multiple snapshots of the same database, but they must all reside on the same server instance. Database snapshots are primarily used for reporting purposes; however, you can also use them to revert a database (after user errors, accidentally deleted data or objects, etc.) to the state it was in when the snapshot was created.
USING DATABASE SNAPSHOTS WITH MIRRORED DATABASES
In a database mirroring environment, the principal database handles all user activity, while the mirror database only receives transaction log records from the principal; the mirror database in a DBM session stays in a “RECOVERING” state and cannot be queried directly.
SQL Server 2005 introduced a new feature called Database Snapshots, and you can use it to create a database snapshot from the mirror database for reporting purposes. You can direct all client connection requests to the most recent database snapshot created from the mirror database. If you need fresher data in your reporting snapshot, you will have to create new snapshots of that database periodically to pick up the latest data from the mirror database.
KEEP IN MIND: You can create a database snapshot on the mirror database only when the database is fully synchronized. Also, having an excessive number of database snapshots on the mirror database may decrease the performance of the principal database. Therefore, it is recommended that you don’t keep multiple database snapshots of the same mirror copy: delete the old copies and keep only the current one for reporting purposes.
WHAT HAPPENS DURING A ROLE SWITCH?
If role switching occurs, the database and its snapshots are restarted, temporarily disconnecting users. Afterwards, the database snapshots remain on the server instance where they were created, which now hosts the new principal database. Reporting users can continue to use the snapshots after the failover. However, this places an additional load on the new principal server; if performance is a concern in your environment, it is recommended that you create a snapshot on the new mirror database when it becomes available, redirect your clients to the new snapshot, and drop the database snapshots from the former mirror database.
HOW TO CREATE A DATABASE SNAPSHOT OF THE MIRROR DATABASE
Let’s create a snapshot of the mirror database called MSSOLVE. Make sure you are connected to the mirror server instance when you create it.
CREATING A SNAPSHOT DATABASE OF MSSOLVE MIRROR DB
USE master;
GO
-- NAME must match the logical name of the source database's data file;
-- the .ss file stores the snapshot's copy-on-write (sparse) data.
CREATE DATABASE MSSOLVE_SNAPSHOT_0629 ON
( NAME = 'MSSOLVE_Data',
  FILENAME = 'E:\MSSQL\DATA\MSSOLVE_SNAPSHOT_0629.ss' )
AS SNAPSHOT OF MSSOLVE;
GO
Once you have successfully created the database snapshot, you are ready to use it for querying and reporting against the mirror database.
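For example, reporting queries can now run directly against the snapshot (dbo.Customers is a hypothetical table assumed to exist in MSSOLVE):
USE MSSOLVE_SNAPSHOT_0629;
GO
-- Reads see the data exactly as it existed when the snapshot was created
SELECT TOP (10) * FROM dbo.Customers;
GO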
WHERE CAN I VIEW THE NEWLY CREATED DATABASE SNAPSHOT?
You may wonder why the newly created database snapshot doesn’t appear in the main database list in Management Studio. That’s because database snapshots are listed under the Database Snapshots folder, right below the System Databases folder. In Object Explorer, connect to the instance of Microsoft SQL Server, expand “Databases”, and then expand “Database Snapshots”.
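You can also list the snapshots with a query; in sys.databases, source_database_id is non-NULL only for database snapshots:
-- List all database snapshots on this instance
SELECT name, source_database_id, create_date
FROM sys.databases
WHERE source_database_id IS NOT NULL;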
HOW TO DROP A SNAPSHOT DATABASE?
You can drop a database snapshot the same way you would drop any other user database, using the DROP DATABASE command.
DROPPING A DATABASE SNAPSHOT
DROP DATABASE MSSOLVE_SNAPSHOT_0629
Monday, July 25, 2011
RECOVERING DATA USING SQL SERVER EMERGENCY MODE
Remember those days when a database would go into suspect mode and you had to perform various steps to recover it by putting the database into emergency mode and then extracting the data out of that emergency database?
These are the high-level steps you had to perform in previous versions (SQL Server 2000/7.0/6.x):
1. Enable modifications to the system catalogs.
2. Change the status of the database in the SYSDATABASES system table to 32768.
3. Restart the SQL Server service (once restarted, the database would appear in emergency mode).
4. Transfer the data from that database into another database.
This was not an easy process: it involved manually updating system tables, and the necessary information was often not publicly available.
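For historical context, the legacy procedure looked roughly like this (a sketch only, using a hypothetical database named SALES; it is unsupported and will not work on SQL Server 2005 or later):
-- SQL Server 2000-era approach, shown for illustration only
EXEC sp_configure 'allow updates', 1;
RECONFIGURE WITH OVERRIDE;
UPDATE master..sysdatabases SET status = 32768 WHERE name = 'SALES';
-- Restart the SQL Server service; SALES then comes up in emergency mode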
This has changed from SQL Server 2005 onwards: putting a user database into EMERGENCY mode is now a supported and documented feature (unlike SQL Server 2000/7.0/6.x, where you had to change the status in SYSDATABASES). At the same time, SQL Server no longer allows changes to the system tables, even by SAs; even the slightest change to system objects is restricted. However, there may be situations when you need to put a database into EMERGENCY mode and export the data out of the corrupt database into another database. For that, SQL Server provides an option as part of the ALTER DATABASE statement that enables system administrators to put a database into EMERGENCY mode.
In the example below, we will see how this is done using the ALTER DATABASE statement. Note: this is simply an example of how to put a database into emergency mode and how to bring it back to its normal state. In a real-life scenario, once a database is in suspect mode and you put it into EMERGENCY mode, you may not be able to bring it back to the normal state due to corruption. In that situation, you must export the data to another database.
IMPORTANT: It is strongly recommended that you perform regular backups of your database to avoid any data loss.
PUTTING SALES DATABASE IN EMERGENCY MODE
ALTER DATABASE SALES SET EMERGENCY
GO
Once the database is in emergency mode, you can export the data from the SALES database into another database.
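A minimal sketch of exporting one table, assuming a recovery database named SALESRECOVERY and a hypothetical table dbo.Orders:
-- SELECT ... INTO creates the target table and copies the rows in one statement
SELECT *
INTO SALESRECOVERY.dbo.Orders
FROM SALES.dbo.Orders;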
PUTTING THE DATABASE BACK TO NORMAL STATE
ALTER DATABASE SALES SET ONLINE
GO
NOTE: One of the good features of SQL Server EMERGENCY mode is that when you run DBCC CHECKDB on a user database that doesn’t have a log file (for example, the disk on which the log file(s) resided crashed and can’t be recovered), CHECKDB will automatically rebuild the log file for that user database, provided it is run while the database is in EMERGENCY mode.
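As an illustration, this is the kind of sequence you would run when the log is gone (a sketch, assuming a database named SALES; REPAIR_ALLOW_DATA_LOSS can lose data and breaks the log backup chain, so treat it as a last resort):
ALTER DATABASE SALES SET EMERGENCY;
ALTER DATABASE SALES SET SINGLE_USER;
-- With the database in EMERGENCY + SINGLE_USER mode, CHECKDB can rebuild the missing log
DBCC CHECKDB ('SALES', REPAIR_ALLOW_DATA_LOSS);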
THINGS TO KEEP IN MIND:
When a database is put into EMERGENCY mode, it is marked READ_ONLY and logging is disabled. Only members of the SYSADMIN role can set this state, and only they are allowed to access the database while it is in emergency mode.
You can verify whether a database is in emergency mode by examining the STATE and STATE_DESC columns of the sys.databases catalog view, or the “Status” property of the DATABASEPROPERTYEX function.
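For example, either of the following shows the current state (assuming the SALES database from above):
SELECT name, state, state_desc FROM sys.databases WHERE name = 'SALES';
SELECT DATABASEPROPERTYEX('SALES', 'Status');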
Credits to Saleem Hakani, my SQL hero.
Wednesday, July 20, 2011
Do You Know?
WHAT IS A DATA PROVIDER?
It’s a layer of software that handles communication between data extensions and customized software specific to each type of external data source. Depending on the specific data source, multiple data providers are available from Microsoft and from third-party vendors.
SIMPLE RECOVERY MODEL
The SIMPLE recovery model does NOT mean that your transactions are not logged. There is still logging, and your log can grow quite large if you are running large transactions or a large number of concurrent transactions; the log is simply truncated automatically at each checkpoint.
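To see which recovery model each of your databases uses, you can query sys.databases (SQL Server 2005 and later):
SELECT name, recovery_model_desc FROM sys.databases;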
WHAT IS A CERTIFICATE?
A digital document that is commonly used for authentication and to help secure information on a network. A certificate binds a public key to an entity that holds the corresponding private key. Certificates are digitally signed by the certification authority that issues them, and they can be issued for a user, a computer, or a service.
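As a quick illustration, this is roughly how you would create a self-signed certificate inside a database (a sketch; SampleCert is a hypothetical name, and the database needs a master key if the private key is not protected by a password):
-- Create the database master key first if one doesn't exist:
-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE SampleCert
WITH SUBJECT = 'Sample certificate for demonstration',
EXPIRY_DATE = '20121231';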
Friday, July 15, 2011
SQL SERVER INDEX OPERATIONS ON STEROIDS
Ever noticed that even though you have a beefy server, SQL Server index operations sometimes take a long time?
SQL Server can intelligently detect the load on the server and auto-adjust the amount of resources allocated to a process. In the case of index operations, if SQL Server is busy, it automatically adjusts the effective max degree of parallelism to accommodate the load. This means it may grant or limit the number of CPUs an index operation can use. In most cases this is the best behavior and you should not change the default value.
Let’s welcome MAXDOP
Many things happen behind the scenes that can contribute to slow index operations. Some of the reasons include the load on SQL Server, out-of-date statistics, the server configuration setting for max degree of parallelism, and the amount of available memory and other resources.
If you are using the Enterprise Edition of SQL Server 2008 (or even SQL2K5), you can control the number of processors/CPUs used for index operations with the MAXDOP option. This option enables parallelism for index operations, meaning multiple processors can be used to fulfill a single statement, which can potentially improve the performance of operations such as creating an index, altering or rebuilding an index, and dropping a clustered index.
Once you apply the MAXDOP option to an index operation, it overrides the max degree of parallelism server configuration option for that operation. (All other index operations with no MAXDOP specified continue to honor the max degree of parallelism setting.)
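For reference, the server-wide setting itself is changed with sp_configure (the value 4 below is just an example):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 'max degree of parallelism' is an advanced option
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;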
MAXDOP supports three kinds of values:
MAXDOP=0 (the default setting for SQL Server; this instructs SQL Server to grant CPUs based on the available resources and server load)
MAXDOP=1 (this disables parallelism and makes the operation execute serially)
MAXDOP=2 through 64 (this instructs SQL Server to override the max degree of parallelism setting and use the number of CPUs specified)
Keep in mind that if your SQL Server has only 8 CPUs and you specify more (e.g., MAXDOP=32), SQL Server will fall back to the number of CPUs available on the system.
Let's take an example of an index creation statement that uses 6 out of 8 CPUs when creating the index:
-- Limit this index build to 6 CPUs
CREATE NONCLUSTERED INDEX EmailAddress_IDX ON
Employee.Contact (EmailAddress)
WITH (MAXDOP = 6);
The above statement overrides the current "max degree of parallelism" server configuration because we are explicitly using the MAXDOP option.
Thank you,
Wednesday, July 13, 2011
Download SQL Server "Denali" CTP3
CTP3 is here. Use the link below to download Denali CTP3.
Monday, July 11, 2011
BEST PRACTICES FOR USING SQL SERVER SERVICE ACCOUNTS
Securing SQL Server is one of the top priorities of any SQL Server DBA. It is extremely important for SAs to make sure that SQL Server is TIGHTLY SECURED and not exposed to users that don't need access to it. Only valid users MUST be granted the "required" permissions.
Apart from configuring the right security and permissions for SQL Server users and logins, it is also very important to select the right account to run the SQL Server services. In today's tip, we will explore which service account to use for running SQL Server services in each situation.
WHEN TO USE A DOMAIN USER ACCOUNT?
If your SQL Server interacts with other servers, services, or resources on the network (e.g., file shares), or if your SQL Server services use linked servers to connect to other SQL Servers on the network, then you may use a low-privileged domain user account for running the SQL Server services. A domain user account is the most recommended account for SQL Server services that interact with other servers on the network. One of the plus points of a domain user account is that it is controlled by Windows Active Directory, so domain-level account policies apply to the SQL Server service account as well.
WHEN TO USE THE NETWORK SERVICE ACCOUNT?
NEVER use the Network Service account for running SQL Server services. The Network Service account is shared with other services running on the local computer. It is a built-in account that has more access to server resources and objects than members of the local Users group. Any SQL Server service that runs under the Network Service account can access network resources by using the credentials of the computer account. This account shows up as "NT AUTHORITY\NETWORK SERVICE" when configuring SQL Server services.
WHEN TO USE A LOCAL USER ACCOUNT?
If your SQL Server DOES NOT interact with other servers, services, or resources on the network (e.g., files, shares, linked servers), then you may use a low-privileged local user account for running the SQL Server services. NOTE: you do not need a local user account with administrative privileges to run SQL Server services.
WHEN TO USE THE LOCAL SYSTEM ACCOUNT?
NEVER use the Local System account for running SQL Server services. The Local System account has more permissions than you would think. It is a very high-privileged built-in account created by the Windows OS, with extensive privileges on the entire local system, and it acts as the computer on your company's network. This account shows up as "NT AUTHORITY\SYSTEM" when configuring SQL Server services.
Credits to Saleem Hakani, my SQL hero.
Wednesday, July 6, 2011
Troubleshoot Slow Running Queries
Below are some common reasons for slow-running queries and the approach you should take to identify and fix them. The best and quickest way to address any slow-running query is to first prepare a troubleshooting checklist or troubleshooting guide and then use it whenever a slow-running query situation arises; this helps any developer or DBA quickly narrow down the issue. (The troubleshooting guide or checklist should be posted in a central location where all the DBAs and developers on your team can access and update it when required.)
Please note: if you come across something new to check that is not in the list below, update your TSG or checklist accordingly.
Queries or updates that take longer than expected to execute can have a variety of causes. Slow-running queries can be caused by performance problems related to your network or the computer where SQL Server is running, or by problems with your physical database design.
Some of the most common reasons for slow-running queries and updates are:
1) Slow network communication.
2) Inadequate memory in the server computer, or not enough memory available to SQL Server.
3) Lack of useful statistics on indexed columns.
4) Out-of-date statistics on indexed columns.
5) Lack of useful indexes.
6) Lack of useful indexed views.
7) Lack of useful data striping.
8) Lack of useful partitioning.
Troubleshooting Checklist (covering the most common scenarios)
1) Is the performance problem related to a component other than queries? For example, is the problem slow network performance? Are there any other components that might be causing or contributing to performance degradation?
The Windows System Monitor can be used to monitor the performance of SQL Server and non-SQL Server related components.
2) If the performance issue is related to queries, identify which query or set of queries is involved.
First use SQL Server Profiler to help identify the slow query or queries.
After you have identified the slow-running query or queries, you can further analyze query performance by producing a Showplan, which can be a text, XML, or graphical representation of the query execution plan that the query optimizer generates. You can produce a Showplan using Transact-SQL SET options, SQL Server Management Studio, or SQL Server Profiler.
The information gathered by these tools allows you to determine how a query is executed by the SQL Server query optimizer and which indexes are being used. Using this information, you can determine if performance improvements can be made by rewriting the query, changing the indexes on the tables, or perhaps modifying the database design.
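For example, you can capture an estimated XML plan with the SET options (the SELECT below is a placeholder; substitute your own slow query):
SET SHOWPLAN_XML ON;
GO
-- The query is compiled but not executed; its plan is returned instead
SELECT * FROM dbo.Orders WHERE CustomerID = 42;
GO
SET SHOWPLAN_XML OFF;
GO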
3) Was the query optimized with useful statistics?
Statistics on the distribution of values in a column are automatically created on indexed columns by SQL Server. They can also be created on nonindexed columns either manually, using SQL Server Management Studio or the CREATE STATISTICS statement, or automatically, if the AUTO_CREATE_STATISTICS database option is set to TRUE. These statistics can be used by the query processor to determine the optimal strategy for evaluating a query. Maintaining additional statistics on nonindexed columns involved in join operations can improve query performance.
Monitor the query using SQL Server Profiler or the graphical execution plan in SQL Server Management Studio to determine if the query has enough statistics.
4) Are the query statistics up to date? Are the statistics automatically updated?
SQL Server automatically creates and updates query statistics on indexed columns (as long as automatic query statistic updating is not disabled). Additionally, statistics can be updated on nonindexed columns either manually, using SQL Server Management Studio or the UPDATE STATISTICS statement, or automatically, if the AUTO_UPDATE_STATISTICS database option is set to TRUE. Up-to-date statistics are not dependent upon date or time data. If no UPDATE operations have taken place, the query statistics are still up-to-date.
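A few illustrative statements (the table, column, and database names are hypothetical):
-- Create statistics manually on a nonindexed column
CREATE STATISTICS Stats_Orders_ShipCity ON dbo.Orders (ShipCity);
-- Refresh all statistics on one table
UPDATE STATISTICS dbo.Orders;
-- Or let SQL Server create and update statistics automatically
ALTER DATABASE SALES SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE SALES SET AUTO_UPDATE_STATISTICS ON;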
5) Are suitable indexes available? Would adding one or more indexes improve query performance? For more information, see General Index Design Guidelines; the Database Engine Tuning Advisor can also recommend the creation of necessary indexes and statistics.
6) Are there any data or index hot spots? Consider using disk striping. Disk striping can be implemented with RAID (redundant array of independent disks) level 0, where data is distributed across multiple disk drives.
7) Is the query optimizer provided with the best opportunity to optimize a complex query?
8) If you have a large volume of data, do you need to partition it? Data manageability is the main benefit of partitioning, but if your tables and the indexes on them are partitioned similarly, partitioning can also improve query performance, as sketched below.
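A minimal sketch of range partitioning by year (all names are hypothetical):
-- Rows are mapped to partitions by OrderDate; each boundary value starts a new partition (RANGE RIGHT)
CREATE PARTITION FUNCTION pfOrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('20090101', '20100101', '20110101');
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);
CREATE TABLE dbo.OrdersPartitioned
(
OrderID int NOT NULL,
OrderDate datetime NOT NULL
) ON psOrderYear (OrderDate);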
Monday, July 4, 2011
Difference between Truncate and Delete
Truncate | Delete
TRUNCATE is a DDL command. | DELETE is a DML command.
TRUNCATE TABLE locks the table and its pages, but not each row. | DELETE is executed using row locks; each row in the table is locked for deletion.
Cannot use a WHERE condition. | Filters can be specified in a WHERE clause.
Removes all the data. | Deletes only the specified data when a WHERE condition is given.
TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. | DELETE activates triggers because the row deletions are logged individually.
Faster, because it logs only page deallocations rather than individual rows. | Slower than truncate, because it writes a log record for every deleted row.
Can be rolled back in SQL Server when run inside an explicit transaction (see the demo below). | Can be rolled back inside a transaction.
Leaves the object truly empty: zero pages are left in the table. | Keeps all allocated space; after a DELETE statement the table can still contain empty pages.
TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log. | The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row.
If the table contains an identity column, the counter for that column is reset to the seed value defined for the column. | DELETE retains the identity counter value.
Cannot be used on tables that: 1. are referenced by a FOREIGN KEY constraint; 2. participate in an indexed view; 3. are published by transactional or merge replication. | Works at the row level, so row-level constraints apply.
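Note that in SQL Server a TRUNCATE can in fact be rolled back as long as it runs inside an explicit transaction, which is easy to verify with a throwaway table (names are hypothetical):
CREATE TABLE dbo.DemoRows (ID int IDENTITY(1,1), Val char(1));
INSERT INTO dbo.DemoRows (Val) VALUES ('A'), ('B'), ('C');
BEGIN TRAN;
TRUNCATE TABLE dbo.DemoRows;
ROLLBACK TRAN;
-- The truncate was rolled back, so this returns 3
SELECT COUNT(*) FROM dbo.DemoRows;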
Friday, July 1, 2011
Avoid losing Permissions When Subscription is REINITIALIZED
You have probably experienced this scenario several times: whenever a subscription is reinitialized, it loses all the granted permissions.
So what can be done to avoid this? Well, there are two ways you can handle this scenario:
Please note: by default, all the objects in the subscription database are dropped and recreated whenever the subscription is reinitialized.
1) You can re-apply all the permissions after the reinitialization. I recommend that when you set up the permissions manually, you script out all the object/statement-level permissions and store the script separately, so that you can run it right after you reinitialize the subscription.
2) You can configure your subscription so that it does not drop any objects when you reinitialize it.
You can use the SP_CHANGEARTICLE system stored procedure, passing 'PRE_CREATION_CMD' for the @PROPERTY parameter and a value of 'NONE', 'DELETE', or 'TRUNCATE' for the @VALUE parameter.
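A sketch of the call, run at the publisher in the publication database (the publication and article names are hypothetical; changing this property invalidates the existing snapshot):
EXEC sp_changearticle
@publication = 'MyPublication',
@article = 'Customers',
@property = 'pre_creation_cmd',
@value = 'none',
@force_invalidate_snapshot = 1;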
Alternatively, in the Article Properties dialog box, in the destination object section, choose one of the following values:
"KEEP EXISTING OBJECT UNCHANGED"
"DELETE DATA. IF ARTICLE HAS A ROW FILTER, DELETE ONLY DATA THAT MATCHES THE FILTER"
"TRUNCATE ALL DATA IN THE EXISTING OBJECT"
You must test this in your own test environment and go through the latest BOL for up-to-date information.
Credits to H.S, our SQL hero.