Tuesday, August 30, 2011

SQL SERVER Fundamentals II (Stored Procedures vs. Functions)

In many instances you can accomplish the same task using either a stored procedure or a function. Both can be custom defined and form part of any application, but they are invoked differently: a User Defined Function (UDF) is designed to send its output to a query or T-SQL statement, so it can be called from a SQL SELECT statement or an action query, while a Stored Procedure (SPROC) is run using EXECUTE or EXEC. They are also created differently: a function is defined with CREATE FUNCTION, while a stored procedure is defined with CREATE PROCEDURE.
To decide between the two, keep in mind the fundamental difference between them: a stored procedure is designed to return its output to the application. A UDF can return a table variable, while a SPROC can't return a table variable, although it can create a table. Another significant difference between them is that a UDF can't change the server environment or your operating system environment, while a SPROC can. Operationally, when T-SQL encounters an error the function stops, while T-SQL will ignore an error in a SPROC and proceed to the next statement in your code (provided you've included error handling support). You'll also find that although a SPROC can be used in a FOR XML clause, a UDF cannot be.
If you have an operation, such as a query with a FROM clause, that requires a rowset drawn from a table or set of tables, then a function is the appropriate choice. However, when you want to use that same rowset in your application, the better choice is a stored procedure.
There's quite a bit of debate about the performance benefits of UDFs vs. SPROCs. You might be tempted to believe that stored procedures add more overhead to your server than a UDF. Depending upon how you write your code and the type of data you're processing, this might not be the case. It's always a good idea to test your important or time-consuming operations by trying both methods on them.
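To make the FROM-clause distinction concrete, here is a minimal sketch (the table, function, and procedure names are hypothetical):

--DEMO TABLE
CREATE TABLE DEMO_ORDERS (ORDER_ID INT IDENTITY(1,1), ORDER_STATUS VARCHAR(10));
GO

--AN INLINE TABLE-VALUED UDF: ITS ROWSET CAN BE USED DIRECTLY IN A FROM CLAUSE
CREATE FUNCTION DBO.FN_ORDERS_BY_STATUS (@STATUS VARCHAR(10))
RETURNS TABLE
AS
RETURN (SELECT ORDER_ID, ORDER_STATUS FROM DBO.DEMO_ORDERS WHERE ORDER_STATUS = @STATUS);
GO

--THE EQUIVALENT SPROC: RETURNS THE ROWSET TO THE CALLING APPLICATION,
--BUT CANNOT BE REFERENCED INSIDE A SELECT
CREATE PROCEDURE DBO.USP_ORDERS_BY_STATUS (@STATUS VARCHAR(10))
AS
SELECT ORDER_ID, ORDER_STATUS FROM DBO.DEMO_ORDERS WHERE ORDER_STATUS = @STATUS;
GO

--THE UDF PARTICIPATES IN A QUERY...
SELECT * FROM DBO.FN_ORDERS_BY_STATUS('OPEN');
--...WHILE THE SPROC IS INVOKED WITH EXEC
EXEC DBO.USP_ORDERS_BY_STATUS @STATUS = 'OPEN';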

Monday, August 29, 2011

BEST PRACTICES FOR BACKING UP LARGE MISSION CRITICAL DATABASES

In an ideal world, hard drives and other hardware never fail, software is never defective, users do not make mistakes, and hackers are never successful. However, we live in a less than perfect world and we should plan and prepare to handle adverse events.

In today’s topic, we will focus on best practices for backing up large mission-critical databases. Performing and maintaining good backups is one of the top priorities for any DBA/Developer/Engineer working with SQL Server.

KEEP IN MIND: BACKUP AND RESTORE IS NOT A HIGH AVAILABILITY FEATURE. YOU MUST PERFORM REGULAR BACKUPS OF YOUR DATABASES.

RESTORING a database from backup is simply a repair feature and not an availability feature. If you are running a mission-critical system and if your database requires high availability, then please look into various H/A features available with SQL Server.

If you are running a large, mission-critical database system, then you need your database to be available continuously, or for extended periods of time, with minimal downtime for maintenance tasks. Therefore, the duration of situations that require databases to be restored must be kept as short as possible.

Additionally, if your databases are large, requiring longer periods of time to perform backup and restore, then you MUST look into some of the cool new features that SQL Server offers to increase the speed of backup and restore operations and minimize the effect on users during both.

USE MULTIPLE BACKUP DEVICES SIMULTANEOUSLY
If you are performing backups or restores on a large database, then use multiple backup devices simultaneously. Using multiple backup devices in SQL Server allows database backups to be written to all devices in parallel. One of the potential bottlenecks in backup throughput is backup device speed; using multiple backup devices can increase throughput in proportion to the number of devices used. Similarly, the backup can be restored from multiple devices in parallel.
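For example (database name and file paths are hypothetical), striping one backup across three drives and restoring from all three in parallel:

--BACK UP ACROSS THREE DEVICES AT ONCE
BACKUP DATABASE MYLARGEDB
TO DISK = 'D:\BACKUPS\MYLARGEDB_1.BAK',
   DISK = 'E:\BACKUPS\MYLARGEDB_2.BAK',
   DISK = 'F:\BACKUPS\MYLARGEDB_3.BAK'
WITH INIT, STATS = 10;

--RESTORE READS FROM ALL THREE DEVICES IN PARALLEL
RESTORE DATABASE MYLARGEDB
FROM DISK = 'D:\BACKUPS\MYLARGEDB_1.BAK',
     DISK = 'E:\BACKUPS\MYLARGEDB_2.BAK',
     DISK = 'F:\BACKUPS\MYLARGEDB_3.BAK'
WITH REPLACE;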

USE MIRRORED MEDIA SET
Use a mirrored media set. A total of four mirrors is possible per media set. With the mirrored media set, the backup operation writes to multiple groups of backup devices. Each group of backup devices makes up a single mirror in the mirrored media set. Each single mirror set must use the same quantity and type of physical backup devices, and must all have the same properties.
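For example (names and paths hypothetical; mirrored backups require Enterprise Edition), a two-device media set mirrored to a second pair of devices:

--EACH MIRROR GETS AN IDENTICAL COPY OF THE STRIPED BACKUP
BACKUP DATABASE MYLARGEDB
TO DISK = 'D:\BACKUPS\MYLARGEDB_A1.BAK', DISK = 'E:\BACKUPS\MYLARGEDB_A2.BAK'
MIRROR TO DISK = 'G:\BACKUPS\MYLARGEDB_B1.BAK', DISK = 'H:\BACKUPS\MYLARGEDB_B2.BAK'
WITH FORMAT, STATS = 10; --FORMAT IS REQUIRED TO START A NEW MIRRORED MEDIA SET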

USE SNAPSHOT BACKUPS (FASTEST BACKUP)
This is the fastest way to perform backups of databases. A snapshot backup is a specialized backup that is created almost instantaneously by using a split-mirror solution obtained from an independent hardware or software vendor. Snapshot backups minimize or eliminate the use of SQL Server resources to accomplish the backup. This is especially useful for moderate to very large databases in which availability is very important. Snapshot backups and restores can sometimes be performed in seconds, with very little or zero effect on the server.

USE LOW PRIORITY BACKUP COMPRESSION
Backing up databases using the newly introduced backup compression feature can increase CPU usage, and any additional CPU consumed by the compression process can adversely impact concurrent operations. Therefore, when possible, create a low-priority compressed backup whose CPU usage is limited by Resource Governor to prevent CPU contention, as sketched below.
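Here is a condensed sketch of the documented Resource Governor approach (the pool name, workload group name, login BACKUP_LOGIN, and the 20% CPU cap are all hypothetical choices): dedicate a login to backups, cap its resource pool's CPU, and route its sessions there with a classifier function.

--RUN IN MASTER: CREATE A CPU-CAPPED POOL AND WORKLOAD GROUP FOR BACKUP SESSIONS
USE MASTER;
GO
CREATE RESOURCE POOL POOL_LOWPRI_BACKUP WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP GROUP_LOWPRI_BACKUP USING POOL_LOWPRI_BACKUP;
GO
--CLASSIFIER: SESSIONS CONNECTING AS BACKUP_LOGIN GO TO THE CAPPED GROUP
CREATE FUNCTION DBO.FN_BACKUP_CLASSIFIER() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @GRP sysname = 'default';
    IF SUSER_NAME() = 'BACKUP_LOGIN'
        SET @GRP = 'GROUP_LOWPRI_BACKUP';
    RETURN @GRP;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = DBO.FN_BACKUP_CLASSIFIER);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
--A BACKUP TAKEN FROM A BACKUP_LOGIN SESSION NOW RUNS COMPRESSED AND CPU-CAPPED
BACKUP DATABASE MYLARGEDB TO DISK = 'D:\BACKUPS\MYLARGEDB.BAK' WITH COMPRESSION;

Because sessions that connect as BACKUP_LOGIN are routed to the capped pool, the extra CPU consumed by compression cannot starve concurrent user queries.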

USE FULL, DIFFERENTIAL AND LOG BACKUPS
If the database recovery model is set to FULL, then use a combination of backup types (FULL, DIFFERENTIAL, LOG). This will help you minimize the number of backups that need to be applied to bring the database to the point of failure.
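For example, a typical schedule might combine the three like this (the database name, paths, and frequencies are hypothetical):

--WEEKLY FULL BACKUP
BACKUP DATABASE MYLARGEDB TO DISK = 'D:\BACKUPS\MYLARGEDB_FULL.BAK' WITH INIT;
--NIGHTLY DIFFERENTIAL: CAPTURES ONLY WHAT CHANGED SINCE THE LAST FULL
BACKUP DATABASE MYLARGEDB TO DISK = 'D:\BACKUPS\MYLARGEDB_DIFF.BAK' WITH DIFFERENTIAL, INIT;
--FREQUENT TRANSACTION LOG BACKUPS (E.G., EVERY 15 MINUTES)
BACKUP LOG MYLARGEDB TO DISK = 'D:\BACKUPS\MYLARGEDB_LOG.TRN' WITH INIT;

To recover, you restore the last full, then the latest differential, then only the log backups taken after that differential, which is far fewer steps than replaying every log backup since the full.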

USE FILE/FILEGROUP BACKUPS
Use file and filegroup backups together with T-log backups. These allow only those files that contain the relevant data, instead of the whole database, to be backed up or restored.
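A sketch, assuming a database with a filegroup named SALES_FG (the names and paths are hypothetical, and the restore sequence is simplified):

--BACK UP ONLY THE FILEGROUP THAT HOLDS THE RELEVANT DATA
BACKUP DATABASE MYLARGEDB FILEGROUP = 'SALES_FG'
TO DISK = 'D:\BACKUPS\MYLARGEDB_SALES_FG.BAK' WITH INIT;

--RESTORE JUST THAT FILEGROUP, THEN ROLL FORWARD WITH LOG BACKUPS
RESTORE DATABASE MYLARGEDB FILEGROUP = 'SALES_FG'
FROM DISK = 'D:\BACKUPS\MYLARGEDB_SALES_FG.BAK' WITH NORECOVERY;
RESTORE LOG MYLARGEDB FROM DISK = 'D:\BACKUPS\MYLARGEDB_LOG.TRN' WITH RECOVERY;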

USE A DIFFERENT DISK FOR BACKUPS
Do not use the same physical disk that holds the database files or log files for backups. Using the same physical disk not only affects performance, but may also reduce the recoverability of the plan: if that disk fails, you lose both the database and its backups.


Thanks,

Friday, August 19, 2011

SQL SERVER Fundamentals I

Today I was interviewing a fresher and was worried about the sorry state of fundamentals. So, in order to provide some insight that would help aspirants understand the fundamentals better, going ahead I will be posting at least one post per week with fundamentals/definitions. To start with, here we go.

WHAT IS AN UNENFORCED RELATIONSHIP?
A link between tables that references the primary key in one table to a foreign key in another table, and which does not check the referential integrity during INSERT and UPDATE transactions.
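A minimal sketch (table and constraint names are hypothetical): WITH NOCHECK skips validating existing rows when the key is added, and NOCHECK CONSTRAINT then tells SQL Server not to check the key on future INSERTs and UPDATEs, leaving the relationship documented but unenforced.

--ADD THE FOREIGN KEY WITHOUT VALIDATING EXISTING ROWS
ALTER TABLE ORDERS WITH NOCHECK
ADD CONSTRAINT FK_ORDERS_CUSTOMERS FOREIGN KEY (CUSTOMER_ID)
REFERENCES CUSTOMERS (CUSTOMER_ID);

--STOP ENFORCING THE KEY ON FUTURE INSERT/UPDATE TRANSACTIONS
ALTER TABLE ORDERS NOCHECK CONSTRAINT FK_ORDERS_CUSTOMERS;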


WHAT IS MANY TO MANY RELATIONSHIP?
A relationship between two tables in which rows in each table have multiple matching rows in the related table. For example, each sales invoice can contain multiple products, but each product can appear on multiple sales invoices.
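Modeling the invoice example above as a sketch (table names are hypothetical), the many-to-many link is implemented with a third "junction" table whose composite primary key pairs one invoice with one product:

CREATE TABLE PRODUCTS (PRODUCT_ID INT PRIMARY KEY, PRODUCT_NAME VARCHAR(50));
CREATE TABLE INVOICES (INVOICE_ID INT PRIMARY KEY, INVOICE_DATE DATETIME);

--JUNCTION TABLE: EACH ROW PAIRS ONE INVOICE WITH ONE PRODUCT
CREATE TABLE INVOICE_DETAILS (
    INVOICE_ID INT REFERENCES INVOICES (INVOICE_ID),
    PRODUCT_ID INT REFERENCES PRODUCTS (PRODUCT_ID),
    QUANTITY INT,
    PRIMARY KEY (INVOICE_ID, PRODUCT_ID)
);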

WHAT IS A LINKED SERVER?
A definition of an OLE DB data source used by SQL Server distributed queries. The linked server definition specifies the OLE DB provider required to access the data, and includes enough addressing information for the OLE DB provider to connect to the data. Any rowsets exposed by the OLE DB data source can then be referenced as tables, called linked tables, in SQL Server distributed queries.
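For example (the server, instance, database, and table names are hypothetical), defining a linked server with sp_addlinkedserver and then querying it with a four-part name:

EXEC sp_addlinkedserver
    @server = 'REMOTESRV',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'REMOTEHOST\SQLINSTANCE';
GO

--FOUR-PART NAME: LINKEDSERVER.DATABASE.SCHEMA.OBJECT
SELECT * FROM REMOTESRV.SALESDB.DBO.CUSTOMERS;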

Wednesday, August 10, 2011

EFFICIENTLY MANAGE LARGE DATA MODIFICATIONS

Did you know that you can now use the TOP operator for DELETE, INSERT, and UPDATE operations on SQL Server tables?

Using the TOP operator for DML operations can help you execute very large data operations by breaking the process into smaller pieces. This can potentially improve performance and also helps improve database concurrency for large, highly accessed tables. It is considered one of the best techniques for managing data modifications on large data loads for reporting or data warehouse applications.

When you perform an update on a large number of records as a single set-based update, it can cause the transaction log to grow considerably. However, when you process the same operation in chunks, each chunk is committed after completion, allowing SQL Server to potentially reuse the T-Log space. Another classic issue many of us have experienced: when you are performing a very large data update and cancel the query for some reason, you have to wait a long time while the transaction completely rolls back.

With this technique you can perform data modifications in smaller chunks and get back to your updates more quickly. Also, chunking allows more concurrency against the modified table, letting user queries jump in instead of waiting several minutes for a large modification to finish.

Let’s take an example of deleting records in chunks of 1,000 rows. Assume a table called LARGETABLE that has millions of records, and you want to delete them 1,000 at a time:
DELETING LARGE TABLE IN CHUNKS

--CREATE A DEMO TABLE CALLED LARGETABLE
CREATE TABLE LARGETABLE (ID_COL INT IDENTITY(1,1), COL_A VARCHAR(10),COL_B VARCHAR(10))
GO

--INSERT THE DATA INTO LARGETABLE. NOTICE THE USE OF 'GO 10000' TO REPEAT THE BATCH 10,000 TIMES
INSERT INTO LARGETABLE VALUES ('A','B')
GO 10000

--QUERY THE TABLE
SELECT COUNT(*) FROM LARGETABLE;

--PERFORM DELETION OF 1000 ROWS FROM LARGETABLE
WHILE (SELECT COUNT(*) FROM LARGETABLE) > 0
BEGIN
DELETE TOP (1000) FROM LARGETABLE
SELECT LTRIM(STR(COUNT(*)))+' RECORDS TO BE DELETED' FROM LARGETABLE --SHOW HOW MANY ROWS REMAIN AFTER EACH CHUNK
END


The above technique can also be used with INSERT and UPDATE commands.
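For instance, here is a sketch of the same chunking pattern applied to an UPDATE against the demo table (the new value 'Z' is an arbitrary choice). The WHERE clause filters out rows already updated, so each pass touches fresh rows and the loop ends when nothing is left to change:

--UPDATE LARGETABLE IN CHUNKS OF 1000 ROWS
WHILE 1 = 1
BEGIN
UPDATE TOP (1000) LARGETABLE
SET COL_A = 'Z'
WHERE COL_A <> 'Z';

IF @@ROWCOUNT = 0 BREAK; --NOTHING LEFT TO UPDATE
END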