Wednesday, November 30, 2011

Some best practices for shrinking T-Log files.

Hi all,

There are fixed boundaries to which a transaction log file can be shrunk. The size of the virtual log files within the log determines the possible reductions in size: the log file can never be shrunk to a size smaller than a virtual log file, and it shrinks only in increments equal to the virtual log file size. For example, a transaction log file of 1 GB can consist of five virtual log files of 200 MB each.
Shrinking the transaction log file deletes unused virtual log files but leaves at least two. Because each virtual log file in this example is 200 MB, the transaction log can be reduced only to a minimum of 400 MB, and only in increments of 200 MB.
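
If you want to see how many virtual log files your log currently contains, you can run the undocumented (but widely used) DBCC LOGINFO command. A quick sketch, assuming a hypothetical database named MyDatabase:

USE MyDatabase
-- Returns one row per virtual log file; Status = 2 means the VLF is still
-- active and cannot be removed by a shrink, Status = 0 means it is reusable
DBCC LOGINFO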

To be able to shrink a transaction log file to a smaller size, create a small transaction log and let it grow automatically, instead of creating a large transaction log file all at once.


In SQL Server 2005, a DBCC SHRINKDATABASE or DBCC SHRINKFILE operation immediately tries to reduce a transaction log file to the requested size (subject to rounding). You should back up the log before shrinking the file; this reduces the size of the logical log and marks as inactive the virtual logs that do not hold any part of the logical log. For more information, see "Shrinking the Transaction Log" in Books Online.
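
As a sketch (the database, file, and path names here are hypothetical), a typical sequence is to back up the log first and then shrink the log file to a target size in megabytes:

-- Back up the transaction log so inactive virtual log files can be marked reusable
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn'

USE MyDatabase
-- Shrink the log file (logical name MyDatabase_log) down to a 400 MB target
DBCC SHRINKFILE (MyDatabase_log, 400)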



Best Practices

Consider the following information when you plan to shrink a database or file:


1) A shrink operation is most effective after an operation that creates lots of unused space, such as a TRUNCATE TABLE or DROP TABLE operation.
 

2) Most databases require some free space for regular day-to-day operations. If you shrink a database repeatedly and notice that the database size grows again, this indicates that the space that was shrunk is required for regular operations. In these cases, repeatedly shrinking the database is a wasted operation.


3) A shrink operation does not preserve the fragmentation state of indexes in the database and generally increases fragmentation to some degree. For this reason, you should not shrink a database or data file after rebuilding indexes. This is another reason not to shrink the database repeatedly.



4) Unless you have a specific requirement, do not set the AUTO_SHRINK database option to ON.
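
For example, you can check which databases currently have AUTO_SHRINK enabled and turn it off explicitly (MyDatabase below is a hypothetical name):

-- List databases with AUTO_SHRINK turned on
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1

-- Turn it off for a specific database
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF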

Thanks,

Sunday, November 27, 2011

BEST PRACTICES FOR USING SQL SERVER NETWORK PROTOCOL

WHICH IS THE BEST PROTOCOL TO CONNECT TO SQL SERVER?

I often get this question from the community, and my answer depends on your network environment: whether you are on a local computer with no network, on a Local Area Network (LAN), or on a Wide Area Network (WAN). I've tried to provide some guidance below that works well for many SQL Server installations.


In order to connect to SQL Server, your client machine must have a network protocol enabled that can communicate with the SQL Server Database Engine. By default, SQL Server can listen on four different protocols that come installed with it. However, not all of the installed protocols are enabled; you need to enable them on the server using SQL Server Configuration Manager before your client machine can use them.


THE FOUR NETWORK PROTOCOLS IN SQL SERVER ARE:



Shared Memory: This is the simplest of all the protocols available in SQL Server. The limitation is that the client application must reside on the same machine where SQL Server is installed.

Named Pipes: This protocol can be used when your application and SQL Server reside on a local area network. A portion of memory is used by one process to pass information to another process, so that the output of one is the input of the other. The second process can be local (on the same computer as the first) or remote (on a networked computer).

TCP/IP: This is the most popular and widely used protocol in the industry today. It communicates across interconnected networks, is the standard for routing network traffic, and offers advanced security features.

VIA (deprecated; do not use): The Virtual Interface Adapter (VIA) protocol works with VIA hardware. It has been deprecated and will be removed in a future version of SQL Server.
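
Once connected, you can verify which protocol a session is actually using by querying the sys.dm_exec_connections DMV:

-- Shows the transport (Shared memory, Named pipe, TCP) used by the current session
SELECT session_id, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID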



BEST PRACTICES FOR USING SQL SERVER NETWORK PROTOCOLS



WHAT PROTOCOL TO USE ON A LOCAL SERVER?

If your application runs on the same local computer that runs SQL Server 2008, then the Shared Memory or Named Pipes protocol is your best choice. Local Named Pipes run in kernel mode of the operating system and are much faster on the local machine.

Note: This is only true when both the application and SQL Server are running on the same machine; if not, the connection uses network Named Pipes, which can be used over Local Area Networks.




WHAT PROTOCOL TO USE ON A LOCAL AREA NETWORK (LAN)?

As long as your Local Area Network is fast, both TCP/IP and Named Pipes perform at comparable speed. However, when many requests are waiting to hit the server, Named Pipes may encounter a pipe-busy error when trying to connect to SQL Server, whereas TCP/IP sockets do not experience this problem because they support a backlog queue for pending requests.




WHAT PROTOCOL TO USE ON A WIDE AREA NETWORK (WAN)?

Data transmission using TCP/IP sockets is much more streamlined and has relatively less overhead than the other network protocols. TCP/IP can also take advantage of many performance features supported by the operating system, such as windowing and delayed acknowledgement, which can be very helpful on slow networks.



Thanks,

Wednesday, November 9, 2011

TRIGGER-LESS TRACKING OF DML OPERATIONS

There are times when you need to track all the changes happening to a table. In the past, developers have used custom code to track changes by calling stored procedures, triggers, and more recently Change Data Capture (also known as CDC).


SQL Server now supports a simplified method of auditing changes made by DML statements without having to use triggers or CDC. The OUTPUT clause, available as part of a DML statement, can help you track the changes made during any DML operation.


The OUTPUT clause can save the result set to a table or table variable. The functionality is similar to the INSERTED and DELETED tables available in triggers, which provide access to the rows modified during the DML operation.



Example:

In this example we will change the ITEMCOMMENT value in the ITEM table to the reverse of its original value, and store the original and changed values in a table variable.



TRACKING CHANGES USING THE OUTPUT CLAUSE
--CREATE ITEM TABLE
CREATE TABLE ITEM (ITEMNAME VARCHAR(40), ITEMCOMMENT VARCHAR(255))

--INSERT DATA IN TO ITEM TABLE
-- PLEASE NOTICE THE MULTIPLE INSERTS WITH ONE INSERT STATEMENT
INSERT INTO ITEM VALUES
('MANGO','ELPPA NAHT RETTEB SI OGNAM'),
('PEACH','ETATS HCAEP A SI AIGROEG'),
('GRAPES','TAERG ERA SEPARG')

-- QUERY THE ITEM TABLE
SELECT * FROM ITEM

  
--DECLARE A TABLE VARIABLE TO STORE THE CHANGED VALUE
DECLARE @RECORDCHANGES TABLE(CHANGE VARCHAR(255))

--UPDATE THE ITEM TABLE TO FIX ITEM COMMENTS
--NOTICE THE USE OF THE OUTPUT CLAUSE BELOW.
--THE CHANGES ARE STORED IN THE TABLE VARIABLE

UPDATE ITEM
SET ITEMCOMMENT=REVERSE(ITEMCOMMENT)
OUTPUT 'ORIGINAL VALUE: [' + DELETED.ITEMCOMMENT+'] HAS BEEN CHANGED TO: ['+ INSERTED.ITEMCOMMENT+']' INTO @RECORDCHANGES

--QUERY THE CHANGES FROM THE TABLE VARIABLE
SELECT * FROM @RECORDCHANGES

-- EXPECTED CHANGE LOG (ONE ROW PER UPDATED ROW), FOR EXAMPLE:
-- ORIGINAL VALUE: [ELPPA NAHT RETTEB SI OGNAM] HAS BEEN CHANGED TO: [MANGO IS BETTER THAN APPLE]

--QUERY THE ITEM TABLE
SELECT * FROM ITEM

--YOU WILL SEE THE BELOW RESULTSET:
-- MANGO    MANGO IS BETTER THAN APPLE
-- PEACH    GEORGIA IS A PEACH STATE
-- GRAPES   GRAPES ARE GREAT

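The same pattern works for other DML statements. Here is a minimal sketch, continuing with the ITEM table above, that captures deleted rows into a table variable:

--DECLARE A TABLE VARIABLE TO HOLD THE DELETED ROWS
DECLARE @DELETEDITEMS TABLE (ITEMNAME VARCHAR(40), ITEMCOMMENT VARCHAR(255))

--DELETE A ROW AND CAPTURE IT THROUGH THE OUTPUT CLAUSE
DELETE FROM ITEM
OUTPUT DELETED.ITEMNAME, DELETED.ITEMCOMMENT INTO @DELETEDITEMS
WHERE ITEMNAME = 'GRAPES'

--REVIEW WHAT WAS DELETED
SELECT * FROM @DELETEDITEMS
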

NOTE: THE OUTPUT CLAUSE IS NOT SUPPORTED FOR THE FOLLOWING STATEMENTS:

1. DML statements that reference local partitioned views, distributed partitioned views, or remote tables.

2. INSERT statements that contain an EXECUTE statement.

3. Full-text predicates are not allowed in the OUTPUT clause when the database compatibility level is set to 100.

4. The OUTPUT INTO clause cannot be used to insert into a view or rowset function.

5. A user-defined function cannot be created if it contains an OUTPUT INTO clause that has a table as its target.



Please refer to SQL Server BOL (Books Online) for detailed information on the OUTPUT clause.

Wednesday, November 2, 2011

UNDERSTANDING DATA HASHING TECHNIQUES IN SQL SERVER

A common scenario in data warehousing applications is determining which source system records to update, which data needs to be loaded, and which rows can be skipped because nothing has changed since they were last loaded. Another possible scenario is the need to search data that is encrypted using cell-level encryption, or to store application passwords inside the database.



Data hashing can be used to solve these problems in SQL Server.



A hash is a number generated by reading the contents of a document or message. Different messages should generate different hash values, but the same message always causes the algorithm to generate the same hash value.





The HashBytes function in SQL Server
SQL Server has a built-in function called HashBytes to support data hashing.

HashBytes ( '<algorithm>', { @input | 'input' } )

<algorithm> ::= MD2 | MD4 | MD5 | SHA | SHA1 | SHA2_256 | SHA2_512

Here are some samples, with the return value shown as a comment on the next line:
Select HashBytes('MD2', 'Hello world!')
--0x63503D3117AD33F941D20F57144ECE64

Select HashBytes('MD4', 'Hello world!')
--0x0D7A9DB5A3BED4AE5738EE6D1909649C

Select HashBytes('MD5', 'Hello world!')
--0x86FB269D190D2C85F6E0468CECA42A20

Select HashBytes('SHA', 'Hello world!')
--0xD3486AE9136E7856BC42212385EA797094475802

Select HashBytes('SHA1', 'Hello world!')
--0xD3486AE9136E7856BC42212385EA797094475802

Select HashBytes('SHA2_256', 'Hello world!')
--0xC0535E4BE2B79FFD93291305436BF889314E4A3FAEC05ECFFCBB7DF31AD9E51A

Select HashBytes('SHA2_512', 'Hello world!')
--0xF6CDE2A0F819314CDDE55FC227D8D7DAE3D28CC556222A0A8AD66D91CCAD4AAD6094F517A2182360C9AACF6A3DC323162CB6FD8CDFFEDB0FE038F55E85FFB5B6






Properties of good hash functions
A good hashing algorithm has these properties:

- It is especially sensitive to small changes in the input. Minor changes to the document will generate a very different hash result.
- It is computationally infeasible to reverse. There is no way to determine what changed in the input, or to learn anything about the content of an input, by examining hash values alone. For this reason, hashing is often called one-way hashing.
- It is very efficient.
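
A quick way to see the first property in action; the two inputs below differ by only one character, yet the returned hashes will be completely different:

Select HashBytes('SHA2_256', 'Hello world!')
Select HashBytes('SHA2_256', 'Hello world?')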






Should you encrypt or hash?
In the last two weeks, we described how to use cell-level encryption and TDE in SQL Server. During application development, it is useful to understand when to encrypt your data and when to hash it.
The difference is that encrypted data can be decrypted, while hashed data cannot. Another key difference is that encryption normally produces a different result each time for the same text, whereas hashing always produces the same result for the same text. The deciding factor when choosing between encrypting and hashing is whether you will need to decrypt the data for later processing.
A typical example of data that needs to be decrypted is the credit card number in a payment processing system; the credit card number should therefore be encrypted. In the case of the card's security code, however, hashing is sufficient if only equality checks are done and the system does not need to know its real value.
Encryption is a two-way process; hashing is unidirectional.



How to use HashBytes for indexing encrypted data
Encryption introduces randomization, so there is no way to predict the output of the encryption built-ins. Does that mean creating an index on top of encrypted data is not possible?

However, data hashing can come to your rescue. Refer to this blog post to learn how.
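
Here is a minimal sketch of the idea, with hypothetical table, column, and salt names: store a salted hash of the sensitive value alongside the encrypted value, index the hash column, and seek on it for equality lookups.

-- Hypothetical schema: the encrypted value plus an indexable salted hash of it
CREATE TABLE Customer
(
    CustomerId    INT IDENTITY PRIMARY KEY,
    SSN_Encrypted VARBINARY(128),  -- populated via EncryptByKey (key setup not shown)
    SSN_Hash      BINARY(32)       -- HashBytes('SHA2_256', salt + SSN)
)
CREATE NONCLUSTERED INDEX IX_Customer_SSN_Hash ON Customer (SSN_Hash)

-- Equality lookup: compute the same salted hash for the search value and seek on it
DECLARE @salt VARCHAR(32) = 'application-secret-salt'  -- assumption: a salt the application keeps secret
SELECT CustomerId
FROM Customer
WHERE SSN_Hash = HashBytes('SHA2_256', @salt + '123-45-6789')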




Which hash function should I choose?
Although most hashing functions are fast, the performance of a hashing function depends on the data being hashed and the algorithm used.

There is no magic bullet. For security purposes, it is advisable to use the strongest hash function available (SHA2_512). However, you can choose other hashing algorithms depending on your workload and the data to hash.





Hash functions or CHECKSUM()?
SQL Server has the CHECKSUM() and BINARY_CHECKSUM() functions for generating a checksum value computed over a row of a table or over a list of expressions.

One problem with the CHECKSUM() and BINARY_CHECKSUM() functions is that the probability of a collision may not be sufficiently low for all applications (that is, it is possible to come across two different inputs that hash to the same output value). Collisions are possible with any function whose domain is larger than its range, but because CHECKSUM implements a simple XOR, the probability of such a collision is relatively high.

Try it out using the following example:
DECLARE @guid1 UNIQUEIDENTIFIER, @guid2 UNIQUEIDENTIFIER
SELECT @guid1 = '3DB7D309-A8F4-47C4-BA90-0CB458B44CB0', @guid2 = 'EFE7F2C5-19F9-42B9-9C16-21BED41E882B'
SELECT chksum_guid1 = CHECKSUM(@guid1), chksum_guid2 = CHECKSUM(@guid2)
Hash functions provide stronger guarantees against collisions. The HashBytes function in SQL Server Denali provides data hashing and supports the following algorithms: MD2 | MD4 | MD5 | SHA | SHA1 | SHA2_256 | SHA2_512.