In many instances you can accomplish the same task using either a stored procedure or a function. Both can be custom defined and be part of any application. Functions, however, are designed to send their output to a query or T-SQL statement: a User Defined Function (UDF) can be invoked from a SELECT or an action query, while a Stored Procedure (SPROC) is run with EXECUTE or EXEC. Functions are created with CREATE FUNCTION, and stored procedures with CREATE PROCEDURE.
To decide between the two, keep in mind the fundamental difference between them: stored procedures are designed to return their output to the application. A UDF can return a table variable, while a SPROC can't return a table variable, although it can create a table. Another significant difference is that a UDF can't change the server environment or your operating system environment, while a SPROC can. Operationally, when T-SQL encounters an error, a function stops executing, while a SPROC will proceed to the next statement in your code (provided you've included error handling support). You'll also find that a UDF can be used within a SELECT that has a FOR XML clause, while a SPROC cannot.
If you have an operation, such as a query with a FROM clause, that requires a rowset drawn from a table or set of tables, then a function is the appropriate choice. However, when you want to return that same rowset to your application, the better choice is a stored procedure.
There's quite a bit of debate about the performance benefits of UDFs vs. SPROCs. You might be tempted to believe that stored procedures add more overhead to your server than a UDF. Depending upon how you write your code and the type of data you're processing, this might not be the case. It's always a good idea to test important or time-consuming operations by trying both methods on them.
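As a quick sketch of the difference (the ORDERS and CUSTOMERS tables and their column names below are hypothetical, used only for illustration):
--AN INLINE TABLE-VALUED UDF CAN BE COMPOSED INTO A QUERY
CREATE FUNCTION DBO.FN_ORDERSBYCUSTOMER (@CUSTID INT)
RETURNS TABLE
AS
RETURN (SELECT ORDER_ID, ORDER_DATE FROM DBO.ORDERS WHERE CUST_ID = @CUSTID);
GO
--THE EQUIVALENT SPROC RETURNS ITS RESULT SET TO THE APPLICATION
CREATE PROCEDURE DBO.USP_ORDERSBYCUSTOMER @CUSTID INT
AS
SELECT ORDER_ID, ORDER_DATE FROM DBO.ORDERS WHERE CUST_ID = @CUSTID;
GO
--THE UDF PARTICIPATES DIRECTLY IN A SELECT...
SELECT C.CUST_NAME, O.ORDER_ID
FROM DBO.CUSTOMERS AS C
CROSS APPLY DBO.FN_ORDERSBYCUSTOMER(C.CUST_ID) AS O;
--...WHILE THE SPROC MUST BE EXECUTED ON ITS OWN
EXEC DBO.USP_ORDERSBYCUSTOMER @CUSTID = 42;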
Tuesday, August 30, 2011
Monday, August 29, 2011
BEST PRACTICES FOR BACKING UP LARGE MISSION CRITICAL DATABASES
In an ideal world, hard drives and other hardware never fail, software is never defective, users do not make mistakes, and hackers are never successful. However, we live in a less than perfect world and we should plan and prepare to handle adverse events.
In today’s topic, we will focus on best practices for backing up large mission critical databases. Performing and maintaining good backups is one of the top priorities for any DBA/Developer/Engineer working with SQL Server.
KEEP IN MIND: BACKUP AND RESTORE IS NOT A HIGH AVAILABILITY FEATURE. YOU MUST PERFORM REGULAR BACKUPS OF YOUR DATABASES.
RESTORING a database from backup is simply a repair feature and not an availability feature. If you are running a mission-critical system and your database requires high availability, then please look into the various H/A features available with SQL Server.
If you are running a large, mission-critical database system, then you need your database to be available continuously or for extended periods of time with minimal downtime for maintenance tasks. Therefore, the duration of situations that require databases to be restored must be kept as short as possible.
Additionally, if your databases are large, requiring longer periods of time to perform backup and restore, then you should look into some of the cool features that SQL Server offers to increase the speed of backup and restore operations and minimize the effect on users during both operations.
USE MULTIPLE BACKUP DEVICES SIMULTANEOUSLY
If you are performing backups or restores on a large database, then use multiple backup devices simultaneously so backups can be written to all the devices at the same time. Using multiple backup devices allows SQL Server to write database backups to all devices in parallel. One of the potential bottlenecks in backup throughput is the backup device speed, and using multiple backup devices can increase throughput in proportion to the number of devices used. Similarly, the backup can be restored from multiple devices in parallel.
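As a sketch (the SALES database name and the backup paths below are illustrative), a striped backup writes to all the devices in parallel, and all stripes are required again at restore time:
BACKUP DATABASE SALES
TO DISK = 'E:\BACKUP\SALES_1.BAK',
   DISK = 'F:\BACKUP\SALES_2.BAK',
   DISK = 'G:\BACKUP\SALES_3.BAK';
GO
--ALL THREE STRIPES MUST BE PRESENT TO RESTORE
RESTORE DATABASE SALES
FROM DISK = 'E:\BACKUP\SALES_1.BAK',
     DISK = 'F:\BACKUP\SALES_2.BAK',
     DISK = 'G:\BACKUP\SALES_3.BAK'
WITH REPLACE; --REPLACE OVERWRITES THE EXISTING DATABASE
GO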
USE MIRRORED MEDIA SET
Use a mirrored media set. A total of four mirrors is possible per media set. With a mirrored media set, the backup operation writes to multiple groups of backup devices, and each group of devices makes up a single mirror in the media set. Every mirror must use the same number and type of physical backup devices, and they must all have the same properties.
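A minimal sketch of a mirrored backup (database name and paths are placeholders); each mirror receives an identical copy of the backup:
BACKUP DATABASE SALES
TO DISK = 'E:\BACKUP\SALES_A.BAK'
MIRROR TO DISK = 'F:\BACKUP\SALES_B.BAK'
WITH FORMAT; --FORMAT IS REQUIRED WHEN CREATING A NEW MIRRORED MEDIA SET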
USE SNAPSHOT BACKUPS (FASTEST BACKUP)
This is the fastest way to perform backups of databases. A snapshot backup is a specialized backup that is created almost instantaneously by using a split-mirror solution obtained from an independent hardware or software vendor. Snapshot backups minimize or eliminate the use of SQL Server resources to accomplish the backup. This is especially useful for moderate to very large databases in which availability is very important. Snapshot backups and restores can sometimes be performed in seconds, with very little or zero effect on the server.
USE LOW PRIORITY BACKUP COMPRESSION
Backing up databases using the backup compression feature introduced in SQL Server 2008 can increase CPU usage, and any additional CPU consumed by the compression process can adversely impact concurrent operations. Therefore, when possible, create a low-priority compressed backup whose CPU usage is limited by Resource Governor to prevent CPU contention.
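As a sketch (the database name, path, pool/group names, and the 20% cap are all illustrative), a compressed backup plus a Resource Governor pool that caps its CPU could look like this:
--COMPRESSED BACKUP (BACKUP COMPRESSION REQUIRES SQL SERVER 2008 OR LATER)
BACKUP DATABASE SALES TO DISK = 'E:\BACKUP\SALES.BAK' WITH COMPRESSION;
GO
--CAP THE CPU AVAILABLE TO SESSIONS THAT RUN BACKUPS
CREATE RESOURCE POOL BACKUP_POOL WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP BACKUP_GROUP USING BACKUP_POOL;
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
--A CLASSIFIER FUNCTION WOULD THEN ROUTE THE LOGIN THAT RUNS BACKUPS
--INTO BACKUP_GROUP (SEE THE RESOURCE GOVERNOR TOPIC IN BOOKS ONLINE)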
USE FULL, DIFFERENTIAL AND LOG BACKUPS
If the database recovery model is set to FULL, then use a combination of backup types (FULL, DIFFERENTIAL, LOG). This will help you minimize the number of backups that need to be applied to bring the database to the point of failure.
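A typical schedule might be sketched like this (database name, paths, and frequencies are illustrative):
BACKUP DATABASE SALES TO DISK = 'E:\BACKUP\SALES_FULL.BAK';   --E.G. WEEKLY
BACKUP DATABASE SALES TO DISK = 'E:\BACKUP\SALES_DIFF.BAK'
    WITH DIFFERENTIAL;                                        --E.G. NIGHTLY
BACKUP LOG SALES TO DISK = 'E:\BACKUP\SALES_LOG.TRN';         --E.G. EVERY 15 MIN
--TO RECOVER: RESTORE THE FULL, THEN THE LATEST DIFFERENTIAL, THEN ALL
--SUBSEQUENT LOG BACKUPS, USING NORECOVERY ON ALL BUT THE FINAL RESTORE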
USE FILE/FILEGROUP BACKUPS
Use file and filegroup backups together with T-log backups. These allow only the files that contain the relevant data, instead of the whole database, to be backed up or restored.
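For example, to back up a single filegroup (the database and filegroup names here are placeholders):
BACKUP DATABASE SALES
FILEGROUP = 'SALES_ARCHIVE_FG'
TO DISK = 'E:\BACKUP\SALES_ARCHIVE_FG.BAK';
--COMBINE WITH REGULAR LOG BACKUPS SO THE FILEGROUP CAN BE ROLLED FORWARD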
USE A DIFFERENT DISK FOR BACKUPS
Do not use the same physical disk that holds the database files or log files for backup purposes. Doing so not only affects performance, but also jeopardizes your recovery plan: if that disk fails, you lose both the database and its backups.
Thanks,
Friday, August 19, 2011
SQL SERVER - Fundamentals - 1
Today I was taking an interview of a fresher and was worried about the sorry state of fundamentals. So, in order to provide some insight that would help aspirants understand the fundamentals better, going ahead I'll be posting at least one post per week with fundamentals and definitions. To start with, here we go.
WHAT IS AN UNENFORCED RELATIONSHIP?
A link between tables that references the primary key in one table from a foreign key in another table, and which does not check referential integrity during INSERT and UPDATE transactions.
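In T-SQL, an unenforced relationship corresponds to a foreign key created or left with NOCHECK (the ORDERS and CUSTOMERS tables below are hypothetical):
--WITH NOCHECK: EXISTING ROWS ARE NOT VALIDATED WHEN THE CONSTRAINT IS ADDED
ALTER TABLE DBO.ORDERS WITH NOCHECK
ADD CONSTRAINT FK_ORDERS_CUSTOMERS
FOREIGN KEY (CUST_ID) REFERENCES DBO.CUSTOMERS (CUST_ID);
--NOCHECK CONSTRAINT: FUTURE INSERTS/UPDATES ARE NOT VALIDATED EITHER
ALTER TABLE DBO.ORDERS NOCHECK CONSTRAINT FK_ORDERS_CUSTOMERS;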
WHAT IS A MANY-TO-MANY RELATIONSHIP?
A relationship between two tables in which rows in each table have multiple matching rows in the related table. For example, each sales invoice can contain multiple products, but each product can appear on multiple sales invoices.
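A many-to-many relationship is typically implemented with a junction table holding a foreign key to each side; as a sketch (table and column names are illustrative):
CREATE TABLE DBO.INVOICE_PRODUCT
(
    INVOICE_ID INT NOT NULL REFERENCES DBO.INVOICES (INVOICE_ID),
    PRODUCT_ID INT NOT NULL REFERENCES DBO.PRODUCTS (PRODUCT_ID),
    QUANTITY   INT NOT NULL,
    PRIMARY KEY (INVOICE_ID, PRODUCT_ID) --EACH PRODUCT AT MOST ONCE PER INVOICE
);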
WHAT IS A LINKED SERVER?
A definition of an OLE DB data source used by SQL Server distributed queries. The linked server definition specifies the OLE DB provider required to access the data, and includes enough addressing information for the OLE DB provider to connect to the data. Any rowsets exposed by the OLE DB data source can then be referenced as tables, called linked tables, in SQL Server distributed queries.
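A minimal sketch of defining and querying a linked server (the server name, data source, and remote catalog below are placeholders):
EXEC sp_addlinkedserver
    @server = 'REMOTESRV',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'remotehost';
GO
--REFERENCE REMOTE TABLES WITH A FOUR-PART NAME IN A DISTRIBUTED QUERY
SELECT * FROM REMOTESRV.SALESDB.DBO.CUSTOMERS;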
Wednesday, August 10, 2011
EFFICIENTLY MANAGE LARGE DATA MODIFICATIONS
Did you know that you can now use the TOP operator for Deleting, Inserting and Updating data in SQL Server tables?
Using the TOP operator for DML operations can help you execute very large data operations by breaking the process into smaller pieces. This can potentially increase performance and also improves database concurrency for large, highly accessed tables. This is considered one of the best techniques for managing data modifications on large data loads for reporting or data warehouse applications.
When you update a large number of records in a single set-based statement, it can cause the transaction log to grow considerably. However, when processing the same operation in chunks, each chunk is committed after completion, allowing SQL Server to potentially re-use the T-Log space. Another classic issue many of us have experienced: when you cancel a very large data update for some reason, you have to wait a long time while the transaction completely rolls back.
With this technique you can perform data modifications in smaller chunks and resume your updates more quickly. Also, chunking allows more concurrency against the modified table, allowing user queries to jump in instead of waiting several minutes for a large modification to finish.
Let’s take an example of deleting records in chunks of 1,000 rows. Assume a table called LARGETABLE that has millions of records, and you want to delete 1,000 records per chunk:
DELETING LARGE TABLE IN CHUNKS
--CREATE A DEMO TABLE CALLED LARGETABLE
CREATE TABLE LARGETABLE (ID_COL INT IDENTITY(1,1), COL_A VARCHAR(10),COL_B VARCHAR(10))
GO
--INSERT THE DATA IN LARGETABLE.. NOTICE THE USE OF ‘GO 10000’
INSERT INTO LARGETABLE VALUES ('A','B')
GO 10000
--QUERY THE TABLE
SELECT COUNT(*) FROM LARGETABLE;
--PERFORM DELETION OF 1000 ROWS FROM LARGETABLE
WHILE (SELECT COUNT(*) FROM LARGETABLE) > 0
BEGIN
DELETE TOP (1000) FROM LARGETABLE;
SELECT LTRIM(STR(COUNT(*))) + ' RECORDS REMAINING TO BE DELETED' FROM LARGETABLE; --PROGRESS MESSAGE ONLY; SAFE TO REMOVE
END
The above technique can also be used with INSERT and UPDATE commands.
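The same chunking pattern works for updates. Here is a sketch against the demo table (the column value and filter are illustrative), using @@ROWCOUNT to detect the last chunk instead of re-counting the whole table on each pass:
WHILE 1 = 1
BEGIN
    UPDATE TOP (1000) LARGETABLE
    SET COL_B = 'C'
    WHERE COL_B <> 'C'; --FILTER PREVENTS RE-UPDATING ALREADY-PROCESSED ROWS
    IF @@ROWCOUNT = 0 BREAK; --NOTHING LEFT TO UPDATE
END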
Sunday, July 31, 2011
USING THE MIRROR DATABASE FOR REPORTING/QUERYING PURPOSE
How many times have you thought about using the mirror database for some read activity or for reporting purposes? SQL Server currently doesn’t support reading data directly from the mirror database (SQL Server Denali will support this feature). However, even with the current version of SQL Server, you can still read the data from the mirror copy using Database Snapshots.
WHAT IS A DATABASE SNAPSHOT?
A database snapshot is a static, read-only, transaction-consistent snapshot of a user database as it existed at the moment of the snapshot’s creation. You can create multiple snapshots of the same database, but they must all reside on the same server instance. Database snapshots are primarily used for reporting purposes; however, you can also use them to revert changes (due to user errors, accidentally deleting data or objects, etc.) to the state the database was in when the snapshot was created.
USING DATABASE SNAPSHOTS WITH MIRRORED DATABASES
In a database mirroring environment, the principal database interacts with all the users, while the mirror database only receives transaction log records from the principal, since the mirror database in a DBM session is in a “RECOVERING” state.
SQL Server 2005 introduced a feature called Database Snapshots, and you can use it to create a database snapshot from the mirror database for reporting purposes. You can direct all client connection requests to the most recent database snapshot created from the mirror database. If you need updated data in your snapshot database, then you will have to create new snapshots of that database periodically to pick up the latest data from the mirror database.
KEEP IN MIND: You can create a database snapshot on the mirror database only when the database is fully synchronized. Also, having an excessive number of database snapshots on the mirror database may decrease the performance of the principal database. Therefore, it is recommended that you don’t create multiple database snapshots of the same mirror copy. You should delete the old copies and keep the current one for reporting purposes.
WHAT HAPPENS DURING A ROLE SWITCH?
If role switching occurs, the database and its snapshots are restarted, temporarily disconnecting users. Afterwards, the database snapshots remain on the server instance where they were created, which now hosts the new principal database. Reporting users can continue to use the snapshots after the failover. However, this places an additional load on the new principal server, so if performance is a concern in your environment, it is recommended that you create a snapshot on the new mirror database when it becomes available, redirect your clients to that new snapshot, and drop the database snapshots from the former mirror database.
HOW TO CREATE A DATABASE SNAPSHOT OF THE MIRROR DATABASE
Let’s create a snapshot on the mirror database called MSSOLVE. Make sure you are connected to the mirror database instance when you create this.
CREATING A SNAPSHOT DATABASE OF MSSOLVE MIRROR DB
USE MASTER
GO
CREATE DATABASE MSSOLVE_SNAPSHOT_0629 ON
( NAME = 'MSSOLVE_Data',
FILENAME = 'E:\MSSQL\DATA\MSSOLVE_SNAPSHOT_0629.ss' )
AS SNAPSHOT OF MSSOLVE;
GO
Once you successfully create the database snapshot, you are ready to use the new snapshot of the mirror database for querying/reporting purposes.
WHERE CAN I VIEW THE NEWLY CREATED DATABASE SNAPSHOT?
You may wonder why the newly created database snapshot doesn’t appear in the database list in Management Studio. That’s because database snapshots are listed under the Database Snapshots folder, right below the System Databases folder. In Object Explorer, connect to the instance of Microsoft SQL Server, expand “Databases”, and then expand “Database Snapshots”.
HOW TO DROP A SNAPSHOT DATABASE?
You can drop the database snapshot the exact same way as you would any other user database, using the DROP DATABASE command.
DROPPING A DATABASE SNAPSHOT
DROP DATABASE MSSOLVE_SNAPSHOT_0629
Monday, July 25, 2011
RECOVERING DATA USING SQL SERVER EMERGENCY MODE
Remember those days when the database would go in to suspect mode and you had to perform various steps to recover the database by putting the database in the emergency mode and then extracting the data out of that emergency database?
These are the high-level steps you had to perform in previous versions (SQL Server 2000/7.0/6.x) to recover such a database:
1. Enable modifications to system catalogs.
2. Change the status of the database in the SYSDATABASES system table to 32768.
3. Restart SQL Server services (once restarted, the database would appear in emergency mode).
4. Transfer the data from your database into another database.
This was not an easy process: it involved manually updating system tables, and often this information was not publicly available. That has changed from SQL Server 2005 onwards; putting a user database in EMERGENCY mode is now a supported and documented feature in the current release of SQL Server.
With the release of SQL Server 2005, SQL Server no longer allows changes to the system tables, even by SAs; making even the slightest change to system objects is restricted. However, there may be situations when you need to put the database into EMERGENCY mode and export/extract the data out of the corrupt database into another database. To do that, SQL Server now provides an option as part of the ALTER DATABASE statement that enables system administrators to put the database into EMERGENCY mode.
In the example below, we will see how this can be done using the ALTER DATABASE statement. Note: this is simply an example of how to put the database in emergency mode and how to bring it back to its normal state. In a real-life scenario, once the database is in suspect mode and you put it in EMERGENCY mode, you may not be able to bring it back to the normal state due to corruption. In that situation, you must export the data to another database.
IMPORTANT: It is strongly recommended that you perform regular backups of your database to avoid any data loss.
PUTTING SALES DATABASE IN EMERGENCY MODE
ALTER DATABASE SALES SET EMERGENCY
GO
Once the database is in emergency mode, you should now export the data from the SALES database in to some other database.
PUTTING THE DATABASE BACK TO NORMAL STATE
ALTER DATABASE SALES SET ONLINE
GO
NOTE: One of the good features of SQL Server EMERGENCY mode is that when you run DBCC CHECKDB (with the REPAIR_ALLOW_DATA_LOSS option) on a user database that doesn’t have a log file (e.g., the disk on which the log file(s) resided crashed and can’t be recovered), CHECKDB will automatically rebuild the log file for that user database when it is run while the database is in EMERGENCY mode.
THINGS TO KEEP IN MIND:
When the database is put in EMERGENCY mode, it is marked READ_ONLY and logging is disabled. Only members of the SYSADMIN role can set emergency mode, and only they are allowed to access the database while it is in that state.
You can verify whether the database is in emergency mode by examining the STATE and STATE_DESC columns of the sys.databases catalog view, or the Status property of the DATABASEPROPERTYEX function.
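For example (the SALES database name is illustrative), either of these shows the current state:
SELECT NAME, STATE, STATE_DESC FROM SYS.DATABASES WHERE NAME = 'SALES';
SELECT DATABASEPROPERTYEX('SALES', 'STATUS') AS DB_STATUS; --RETURNS 'EMERGENCY' WHILE IN THAT STATE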
Credits to Saleem Hakani, my SQL hero.
Wednesday, July 20, 2011
Do You Know ?
WHAT IS A DATA PROVIDER?
It’s a layer of software that handles communication between data extensions and customized software specific to each type of external data source. Depending on the specific data source, multiple data providers are available from Microsoft and from third-party vendors.
Simple Recovery Mode
SIMPLE recovery mode does NOT mean that your transactions are not logged. There will still be logging, and your log can grow quite large if you are running large transactions or a large number of concurrent transactions.
WHAT IS A CERTIFICATE?
A digital document that is commonly used for authentication and to help secure information on a network. A certificate binds a public key to an entity that holds the corresponding private key. Certificates are digitally signed by the certification authority that issues them, and they can be issued for a user, a computer, or a service.
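In SQL Server, for example, you can create a self-signed certificate with T-SQL (the certificate name, password, subject, and expiry date below are placeholders):
USE MASTER;
GO
CREATE CERTIFICATE DEMO_CERT
    ENCRYPTION BY PASSWORD = 'Str0ng!P@ssw0rd' --PROTECTS THE PRIVATE KEY
    WITH SUBJECT = 'DEMO CERTIFICATE',
    EXPIRY_DATE = '20301231';
GO
SELECT NAME, SUBJECT, EXPIRY_DATE FROM SYS.CERTIFICATES;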