Wednesday, July 20, 2011

Do You Know?

 

WHAT IS A
DATA PROVIDER?
It’s a layer of software that handles communication between data extensions and external data sources; the software is customized for each type of external data source. Depending on the specific data source, multiple data providers are available from Microsoft and from third-party vendors.

 
Simple Recovery Mode
SIMPLE recovery mode does NOT mean that your transactions are not logged. Logging still occurs, and your log can grow quite large if you are running large transactions or a large number of concurrent transactions; the difference is that in SIMPLE mode the log is truncated automatically at checkpoints rather than by log backups.
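
As an illustration, here is a minimal sketch (the database name is hypothetical) of switching to SIMPLE recovery and then checking log usage:

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
-- Active transactions are still fully logged until they commit or roll back.
DBCC SQLPERF (LOGSPACE); -- shows log size and percent used for every database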
 




WHAT IS A CERTIFICATE?
A digital document that is commonly used for authentication and to help secure information on a network. A certificate binds a public key to an entity that holds the corresponding private key. Certificates are digitally signed by the certification authority that issues them, and they can be issued for a user, a computer, or a service.
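
For illustration, a minimal sketch of creating a certificate in SQL Server (the certificate name, password, and subject are hypothetical):

CREATE CERTIFICATE DemoCert
ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
WITH SUBJECT = 'Demo certificate for authentication',
EXPIRY_DATE = '20251231';
-- If you omit ENCRYPTION BY PASSWORD, the private key is protected by the
-- database master key, which must already exist.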


Friday, July 15, 2011

SQL SERVER INDEX OPERATIONS ON STEROIDS

Ever noticed that even though you have a beefy server, SQL Server index operations sometimes take time?

There are many things that happen behind the scenes that can contribute to the slowness of index operations. Some of the reasons include: the load on SQL Server, out-of-date statistics, the server configuration setting for degree of parallelism, the amount of available memory/resources, etc.

SQL Server can intelligently detect the load on the server and auto-adjust the amount of resources allocated to a process. In the case of index operations, if SQL Server is busy, it automatically adjusts the degree of parallelism actually used to accommodate the process or the load. This means it could either grant or limit the number of CPUs index operations can use. In many cases this is the best practice, and you should not change the default value.


Let’s welcome MAXDOP

If you are using the Enterprise Edition of SQL Server 2008 (or even SQL2K5), you can control the number of processors/CPUs used for index operations with the MAXDOP option. This option enables parallelism for index operations, which means multiple processors can be used to fulfill a single query statement. This can potentially improve the performance of index operations such as creating an index, altering or rebuilding an index, and dropping a clustered index.



Once you apply the MAXDOP option to an index operation, it overrides the max degree of parallelism server configuration option for that operation. (All other index operations without MAXDOP continue to honor the max degree of parallelism setting.)
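
To view or change the server-wide setting that MAXDOP overrides, you can use sp_configure; a sketch ('max degree of parallelism' is an advanced option, so it must be made visible first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';    -- view the current value
EXEC sp_configure 'max degree of parallelism', 0; -- 0 = let SQL Server decide
RECONFIGURE;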



MAXDOP supports three kinds of values:

MAXDOP=0 (This is the default setting for SQL Server. It instructs SQL Server to grant CPUs based on the available resources and server load.)

MAXDOP=1 (This disables parallelism and causes the operation to execute serially.)

MAXDOP=2 through 64 (This instructs SQL Server to override the max degree of parallelism setting and use the number of CPUs specified here.)

Keep in mind that if your SQL Server has only 8 CPUs and you specify more (for example, MAXDOP=32), SQL Server will default to the number of CPUs available on the system.

Let's take an example of how to write an index creation statement that uses 6 CPUs out of 8 when creating an index:



CREATE NONCLUSTERED INDEX EmailAddress_IDX
ON Employee.Contact (EmailAddress)
WITH (MAXDOP = 6);



The above statement overrides the current "max degree of parallelism" server configuration because we are now using the MAXDOP option.
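
The same option works when rebuilding an existing index; a sketch reusing the hypothetical index above:

ALTER INDEX EmailAddress_IDX ON Employee.Contact
REBUILD WITH (MAXDOP = 6);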



Thank you,


Monday, July 11, 2011

BEST PRACTICES FOR USING SQL SERVER SERVICE ACCOUNTS


Securing SQL Server is one of the top priorities of any SQL Server DBA. It is extremely important for system administrators to make sure that SQL Server is TIGHTLY SECURED and is not exposed to users who don't need access to it. Only valid users should be granted the permissions they actually require.
Apart from configuring the right security/permissions for SQL Server users and logins, it is also very important to select the right account to run the SQL Server services. In today's tip, we will explore when to use which service accounts for running SQL Server services.

WHEN TO USE DOMAIN USER ACCOUNT?

If your SQL Server interacts with other servers, services, or resources on the network (e.g., file shares), or if your SQL Server services use linked servers to connect to other SQL Servers on the network, then you may use a low-privileged domain user account for running SQL Server services. A domain user account is the recommended account for SQL Server services that interact with other servers on the network. One advantage of a domain user account is that it is controlled by Windows Active Directory, so domain-level account policies apply to the SQL Server service account as well.


WHEN TO USE NETWORK SERVICE ACCOUNT?


You should NEVER use the Network Service account for running SQL Server services. The Network Service account is shared with other services running on the local computer. It is a built-in account that has more access to server resources and objects than accounts in local user groups.


Any SQL Server service that runs under the Network Service account can access network resources by using the credentials of the computer account. This account shows up as "NT AUTHORITY\NETWORK SERVICE" when configuring SQL Server services.


WHEN TO USE LOCAL USER ACCOUNT?


If your SQL Server DOES NOT interact with other servers, services, or resources on the network (e.g., files, shares, linked servers, etc.), then you may use a low-privileged local user account for running SQL Server services.


NOTE: The local user account does not need administrative privileges to run SQL Server services.


WHEN TO USE LOCAL SYSTEM ACCOUNT?


You should NEVER use the Local System account for running SQL Server services. The Local System account has more permissions than you might think; it is a very high-privileged built-in account created by the Windows operating system.

The Local System account has extensive privileges on the entire local system and acts as the computer on your company's network. This account shows up as "NT AUTHORITY\SYSTEM" when configuring SQL Server services.
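
As a quick check, you can list which account each SQL Server service runs under directly from T-SQL; a sketch using the sys.dm_server_services DMV (available starting with SQL Server 2008 R2 SP1; on older builds, check SQL Server Configuration Manager instead):

SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;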


Credits to Saleem Hakani, my SQL Hero.

Wednesday, July 6, 2011

Troubleshoot Slow Running Queries


Below are some common reasons for slow-running queries and the approach you should take to identify and fix them. The best and quickest way to address any slow-running query is to first prepare a Troubleshooting Checklist or Troubleshooting Guide and then use it during slow-running query situations. This helps any developer or DBA easily and quickly narrow down the issue. (The troubleshooting guide or checklist should be posted in a central location where all the DBAs and developers on your team can access it and update it when required.)


Please note: If you come across something new to check that is not covered in the list below, update your TSG or TSC accordingly.


Queries or updates that take longer than expected to execute can have a variety of causes. Slow-running queries can be caused by performance problems related to your network or the computer where SQL Server is running. They can also be caused by problems with your physical database design.


Some of the most common reasons for slow-running queries and updates are:


1) Slow network communication.
2) Inadequate memory in the server computer, or not enough memory available for SQL Server.
3) Lack of useful statistics on indexed columns.
4) Out-of-date statistics on indexed columns.
5) Lack of useful indexes.
6) Lack of useful indexed views.
7) Lack of useful data striping.
8) Lack of useful partitioning.


Troubleshooting Checklist (Trying to cover all the common scenarios)
1) Is the performance problem related to a component other than queries? For example, is the problem slow network performance? Are there any other components that might be causing or contributing to performance degradation?


The Windows System Monitor can be used to monitor the performance of SQL Server and non-SQL Server related components.




2) If the performance issue is related to queries, which query or set of queries is involved?


First use SQL Server Profiler to help identify the slow query or queries.


After you have identified the slow-running query or queries, you can further analyze query performance by producing a Showplan, which can be a text, XML, or graphical representation of the query execution plan that the query optimizer generates. You can produce a Showplan using Transact-SQL SET options, SQL Server Management Studio, or SQL Server Profiler.
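
For example, a minimal sketch of producing an XML Showplan with SET options (the query itself is hypothetical; while SHOWPLAN_XML is ON, statements are compiled but not executed):

SET SHOWPLAN_XML ON;
GO
SELECT EmailAddress FROM Employee.Contact WHERE EmailAddress LIKE 'a%';
GO
SET SHOWPLAN_XML OFF;
GO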




The information gathered by these tools allows you to determine how a query is executed by the SQL Server query optimizer and which indexes are being used. Using this information, you can determine if performance improvements can be made by rewriting the query, changing the indexes on the tables, or perhaps modifying the database design.


3) Was the query optimized with useful statistics?


Statistics on the distribution of values in a column are automatically created on indexed columns by SQL Server. They can also be created on nonindexed columns either manually, using SQL Server Management Studio or the CREATE STATISTICS statement, or automatically, if the AUTO_CREATE_STATISTICS database option is set to TRUE. These statistics can be used by the query processor to determine the optimal strategy for evaluating a query. Maintaining additional statistics on nonindexed columns involved in join operations can improve query performance.
Monitor the query using SQL Server Profiler or the graphical execution plan in SQL Server Management Studio to determine if the query has enough statistics.
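
A sketch of both approaches (the database name is hypothetical; the table reuses the earlier example, and LastName stands in for a nonindexed column used in joins):

-- Let SQL Server create single-column statistics automatically:
ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON;
-- Or create statistics manually on a nonindexed column:
CREATE STATISTICS ST_Contact_LastName
ON Employee.Contact (LastName)
WITH FULLSCAN;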


4) Are the query statistics up to date? Are the statistics automatically updated?


SQL Server automatically creates and updates query statistics on indexed columns (as long as automatic query statistic updating is not disabled). Additionally, statistics can be updated on nonindexed columns either manually, using SQL Server Management Studio or the UPDATE STATISTICS statement, or automatically, if the AUTO_UPDATE_STATISTICS database option is set to TRUE. Up-to-date statistics are not dependent upon date or time data. If no UPDATE operations have taken place, the query statistics are still up-to-date.
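
A sketch of keeping statistics current (names are hypothetical):

-- Keep statistics updated automatically:
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
-- Or refresh all statistics on a table manually:
UPDATE STATISTICS Employee.Contact WITH FULLSCAN;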




5) Are suitable indexes available? Would adding one or more indexes improve query performance? For more information, see General Index Design Guidelines. The Database Engine Tuning Advisor can also recommend the creation of necessary statistics.


6) Are there any data or index hot spots? Consider using disk striping. Disk striping can be implemented by using RAID (redundant array of independent disks) level 0, where data is distributed across multiple disk drives.


7) Is the query optimizer provided with the best opportunity to optimize a complex query?


8) If you have a large volume of data, do you need to partition it? Data manageability is the main benefit of partitioning, but if your tables and the indexes on them are partitioned similarly, partitioning can also improve query performance.
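
As an illustration, a minimal sketch of partitioning a table by date (the function, scheme, table, and boundary values are all hypothetical; every partition is mapped to PRIMARY for brevity):

CREATE PARTITION FUNCTION pf_OrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('20100101', '20110101');

CREATE PARTITION SCHEME ps_OrderYear
AS PARTITION pf_OrderYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON ps_OrderYear (OrderDate);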

Monday, July 4, 2011

Difference between Truncate and Delete


TRUNCATE vs. DELETE

1. TRUNCATE is a DDL command; DELETE is a DML command.
2. TRUNCATE TABLE locks the table and its pages, but not each row; a DELETE statement is executed using row locks, so each row in the table is locked for deletion.
3. TRUNCATE cannot use a WHERE condition; DELETE can specify filters in a WHERE clause.
4. TRUNCATE removes all the data; DELETE removes only the specified data when a WHERE condition exists.
5. TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions; DELETE activates triggers because the row deletions are logged individually.
6. TRUNCATE is faster because it does not log individual row deletions; DELETE is slower because it logs every deleted row.
7. TRUNCATE drops the object's statistics and deallocates all data pages and extents, leaving the object truly empty (zero pages are left in the table); DELETE keeps the object's statistics and all allocated space, so after a DELETE statement the table can still contain empty pages.
8. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log; the DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row.
9. If the table contains an identity column, TRUNCATE resets the counter for that column to the seed value defined for the column; DELETE retains the current identity value.
10. Contrary to a common myth, both TRUNCATE and DELETE can be rolled back if they are executed inside an explicit transaction; once committed, neither can be rolled back.

Restrictions on using the TRUNCATE statement: you cannot use TRUNCATE TABLE on tables that:
1. Are referenced by a FOREIGN KEY constraint.
2. Participate in an indexed view.
3. Are published by using transactional replication or merge replication.

DELETE works at the row level, so row-level constraints still apply.
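
A minimal sketch contrasting the two (the table name is hypothetical):

-- DELETE: row by row, fully logged, supports a WHERE filter, fires triggers:
DELETE FROM dbo.Orders WHERE OrderDate < '20100101';

-- TRUNCATE: deallocates pages, resets any identity seed, no WHERE allowed:
TRUNCATE TABLE dbo.Orders;

-- Both can be rolled back inside an explicit transaction:
BEGIN TRANSACTION;
TRUNCATE TABLE dbo.Orders;
ROLLBACK TRANSACTION; -- the truncated data is back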

Friday, July 1, 2011

Avoid losing Permissions When Subscription is REINITIALIZED



You may have experienced this scenario several times: whenever a subscription is reinitialized from a snapshot, it loses all of its granted permissions.
So what can be done to avoid this? There are two ways you can handle this scenario:

Please note: By default, all the objects in the subscription database are dropped and recreated whenever the subscription is reinitialized.

1) You can re-apply all the permissions after the reinitialization. I recommend that when you set up permissions manually, you script out all the object- and statement-level permissions and store the script separately so that you can run it right after you reinitialize the subscription.


2) You can configure your subscription so that it does not drop any objects whenever you reinitialize it.
You can use the sp_changearticle system stored procedure to configure this: pass 'pre_creation_cmd' for the @property parameter and 'none', 'delete', or 'truncate' for the @value parameter.
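
A sketch (the publication and article names are hypothetical; changing this property typically requires invalidating the current snapshot):

EXEC sp_changearticle
    @publication = 'MyPublication',
    @article = 'MyArticle',
    @property = 'pre_creation_cmd',
    @value = 'none',
    @force_invalidate_snapshot = 1;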


Also, in the Article Properties dialog box, in the destination object section, you can select one of the following actions for when the object already exists: "Keep existing object unchanged", "Delete data. If article has a row filter, delete only data that matches the filter", or "Truncate all data in the existing object".


You must test this in your test environment and go through the latest BOL for up-to-date information.






Credits to H.S, our SQL Hero.