Tuesday, 31 January 2012

Global Vs Local Temporary Table

 

Temporary tables are a very important tool for the SQL developer. In this article we focus on local and global temporary tables.

A global temporary table is visible to any session while in existence. It lasts until all users that are referencing the table disconnect.

A global temporary table is created by putting a double hash (##) prefix before the table name.

CREATE TABLE ##sample_Table
      (emp_code  DECIMAL(10)  NOT NULL,
       emp_name  VARCHAR(50)  NOT NULL)

A local temporary table, like #sample_Table below, is visible only to the local session and to child batches created by dynamic SQL (sp_executesql). It is deleted after the user disconnects.

A local temporary table is created by putting a single hash (#) prefix before the table name.

CREATE TABLE #sample_Table
      (emp_code  DECIMAL(10)  NOT NULL,
       emp_name  VARCHAR(50)  NOT NULL)

 

If we create the local temporary table inside a BEGIN TRANSACTION and COMMIT TRANSACTION/ROLLBACK TRANSACTION block, its scope is tied to that transaction: if the transaction is rolled back, the table is gone as well, so effectively the table lives within the transaction and not outside of it. A small sketch of this follows.
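A minimal sketch of that behaviour, assuming an illustrative table name #tran_demo:

BEGIN TRANSACTION
    CREATE TABLE #tran_demo (id INT)
    INSERT INTO #tran_demo VALUES (1)
ROLLBACK TRANSACTION

-- The CREATE TABLE above was rolled back with the transaction,
-- so this SELECT now fails with "Invalid object name '#tran_demo'".
SELECT * FROM #tran_demo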

Hope the article is quite informative, and thank you for giving your valuable time.

Posted by: MR. JOYDEEP DAS

 

 

Monday, 30 January 2012

DECIMAL (Precision, Scale)


 

There are some common mistakes related to configuring PRECISION and SCALE for the DECIMAL data type. Sometimes I see people making mistakes with it: they just pick whatever numbers come to mind, without thinking about how the data is actually stored. If we have any confusion, we should resolve it, however tiny it is.

 

In my article I am going to illustrate this part. Hope you like it.

 

Numeric data types have fixed precision and scale.

 

DECIMAL (P, S)

 

 PRECISION:

 

The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.

When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1.

 

SCALE:

 

The maximum number of decimal digits that can be stored to the right of the decimal point.

The default scale is 0.

 

The maximum storage size varies with the precision, as described below:

 

Precision     Storage bytes
1 - 9         5
10 - 19       9
20 - 28       13
29 - 38       17
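A small example of how precision and scale behave (the variable name is only illustrative):

DECLARE @price DECIMAL(5,2)
-- DECIMAL(5,2): at most 5 digits in total, 2 of them to the right of the decimal point.
-- The value is rounded to fit the scale, so this returns 123.46.
SET @price = 123.456
SELECT @price

-- More digits than the precision allows causes an arithmetic overflow error:
-- SET @price = 1234.56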

 

Hope this is quite informative, and thank you for giving your valuable time.

 

Posted by: MR. JOYDEEP DAS

PRIMARY key Vs UNIQUE key


This article is created for beginners only. I find that there are a lot of misconceptions about the PRIMARY key and the UNIQUE key. In this article I want to clarify all those points.

1. PRIMARY key and UNIQUE key both maintain the uniqueness of the column(s) they are defined on.
2. A PRIMARY key does not allow any NULL values, but a UNIQUE key allows exactly one NULL value.
3. So PRIMARY KEY = UNIQUE + NOT NULL.
4. And UNIQUE KEY = UNIQUE + one NULL value allowed.
5. By default a PRIMARY key creates a CLUSTERED index.
6. By default a UNIQUE key creates a NON-CLUSTERED index (see the example below).
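A minimal sketch showing both constraints (the table and column names are only illustrative):

CREATE TABLE emp_master
      (emp_code   DECIMAL(10)  NOT NULL
          CONSTRAINT PK_emp_master PRIMARY KEY,         -- clustered index by default
       emp_email  VARCHAR(50)  NULL
          CONSTRAINT UQ_emp_master_email UNIQUE)        -- non-clustered index by default; one NULL allowed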

I think the article is quite informative, and thank you for spending your valuable time on it.

 

Posted by: MR. JOYDEEP DAS

Authentication mode of SQL server


I have collected some interesting facts related to the authentication mode of SQL Server.

When we install SQL Server, we are asked about the authentication mode. We have to choose between two options.

1.    Windows Authentication mode.

2.    Windows and SQL Authentication mode.

In my article, I am going to provide some short description related to it. 


Windows Authentication mode


This is the default option during setup because it's the most secure. Using this method allows you to control all of your database security through Windows and Active Directory. This method allows you to place users into groups if you so desire, and it means you do not have to manage passwords for your users. It also keeps your environment secure by either eliminating generic accounts, or at least allowing you to keep tighter control over their passwords.

This scenario is simple. You've got an application that needs to connect to SQL Server 2008, and you don't want the username and password sitting out there in a file somewhere. So the easiest way to protect against this is to use a Windows account with a password you control and run the application under that user. This way you can also rotate the password as you see fit and there's nothing out on the server that reveals it. Having passwords sitting out in ini files or in web.config files, etc. is one of the worst management nightmares in security because any of the developers or anyone else can just go out and get the password and connect under the generic account. And this makes chasing down problems much harder.

Windows and SQL Authentication mode

This one seems redundant, but it's really not.  In the old days, you had 3 choices for security; you had Windows, SQL, and Both.  To a degree that made sense, but in retrospect it really doesn't.  The logic here is simple.  Since Windows is more secure, there's no reason to disallow it.  All you're really doing here is allowing SQL authentication along with the most secure method.  So you're always going to be able to connect with Windows authentication, the only thing we're arguing about is whether or not you're going to allow SQL as well.

If your Windows logins run into Kerberos trouble, though, SQL authentication does not rely on Kerberos, so you can still connect to the database to troubleshoot the server, or merely continue working until you get the Kerberos problem worked out. So having a SQL backdoor can be useful in a very practical sense. And the really cool thing about using SQL authentication in SQL Server 2005 and above is that you can take advantage of your Windows security model and force your SQL passwords to adhere to your Windows password policies. So you can force SQL passwords to expire, and to honor your Windows password complexity requirements.

You can also have external customers whom you don't want to have Windows accounts.  In this case, SQL authentication is a good choice too.  And it doesn't have to be external customers.  You could just have non-Windows domains and Windows authentication simply isn't an option.


While Windows authentication is the most secure, it's simply not practical as your sole authentication method.  Don't get me wrong though.  You'll still want to insist on using Windows every chance you get.  You should work really hard to insist on running applications and websites through specific Windows accounts.  Your developers will usually fight you on this because they have a generic SQL account and they want to use it.  They love being able to bypass their personal accounts and connect to the production database to do things.  So while enabling SQL authentication is always a good idea, it's best to keep it restricted to admin usage and only hand out SQL accounts if there's absolutely no other way around a connection problem.  It's also important to note that your decision during install isn't final.  You can easily change the authentication mode at any time.  You have to restart the SQL Server service once you do though.

 

 

Posted by: MR. JOYDEEP DAS

Wednesday, 25 January 2012

ORDER BY clause and PERFORMANCE of Query

 

Some of my friends told me that using the ORDER BY clause decreases the performance of SQL statements. Do I believe it? Yes, I do: under certain conditions the ORDER BY clause really does decrease the performance of a SQL query. But you have to understand the situations in which you should or should not use the ORDER BY clause.

In my article, I am trying to illustrate those points.

In Microsoft SQL Server 2005/2008, you create a table. The table contains a column of a non-Unicode data type. Additionally, the collation of the column is an SQL tertiary collation. Then, you create an index that is not unique on the column. You run a query against the table by specifying the ORDER BY clause on the column.

In this scenario, the performance of the query is slow. Additionally, if the query is executed from a cursor, the cursor is converted implicitly to a keyset cursor.   

Now I will try to illustrate the cause of the problem.

This problem occurs because the query optimizer does not use the index. In the execution plan of the query, the SORT operator appears first. Then, the Compute Scalar operator that contains the TERTIARY_WEIGHTS function appears.

Some other things that we must consider:

In a tertiary collation definition, certain characters are considered equal for comparison. However, the weights of these characters are considered different when you order these characters. For example, a string comparison is not case-sensitive. However, when you use the ORDER BY clause to order these strings, uppercase characters appear before lowercase characters.

The following data types are non-Unicode data types:

  • char
  • text
  • varchar

The following data types are Unicode data types:

  • nchar
  • ntext
  • nvarchar

To perform the SORT operation on non-Unicode string expressions that are defined in an SQL tertiary collation, SQL Server 2005 requires a binary string of weights for each character in the string expression.
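A rough sketch of the scenario described above; the "Pref" SQL collations are the tertiary ones, and the table, column and collation choices here are only illustrative assumptions:

CREATE TABLE dbo.tbl_names
      (emp_name VARCHAR(50)
          COLLATE SQL_Latin1_General_Pref_CP437_CI_AS NOT NULL)  -- non-Unicode column, SQL tertiary collation

CREATE NONCLUSTERED INDEX IX_tbl_names_emp_name
      ON dbo.tbl_names (emp_name)                                -- non-unique index on the column

-- With this collation the ORDER BY may not use the index; the plan shows
-- a SORT operator and a Compute Scalar containing TERTIARY_WEIGHTS.
SELECT emp_name
FROM   dbo.tbl_names
ORDER  BY emp_name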

Posted by: MR. JOYDEEP DAS

 

 

Tuesday, 24 January 2012

SQL Performance Tips


Specific Query Performance Tips
  1. Use EXPLAIN to profile the query execution plan
  2. Use Slow Query Log 
  3. Don't use DISTINCT when you have or could use GROUP BY
  4. Insert performance
    1. Batch INSERT and REPLACE
    2. Use LOAD DATA instead of INSERT
  5. LIMIT m,n may not be as fast as it sounds. Learn how to improve it and read more about Efficient Pagination Using MySQL
  6. Don't use ORDER BY RAND() if you have > ~2K records
  7. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
  8. Avoid wildcards at the start of LIKE queries (see the example after this list)
  9. Avoid correlated subqueries in the SELECT and WHERE clauses (try to avoid IN)
  10. No calculated comparisons -- isolate indexed columns
  11. ORDER BY and LIMIT work best with equalities and covered indexes
  12. Separate text/blobs from metadata, don't put text/blobs in results if you don't need them
  13. Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the first part finds the IDs and then uses them to fetch the rest.)
  14. ALTER TABLE...ORDER BY can take data sorted chronologically and re-order it by a different field -- this can make queries on that field run faster (maybe this goes in indexing?)
  15. Know when to split a complex query and join smaller ones
  16. Delete small amounts at a time if you can
  17. Make similar queries consistent so cache is used
  18. Have good SQL query standards
  19. Don't use deprecated features
  20. Turning OR on multiple index fields (<5.0) into UNION may speed things up (with LIMIT), after 5.0 the index_merge should pick stuff up.
  21. Don't use COUNT * on Innodb tables for every search, do it a few times and/or summary tables, or if you need it for the total # of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS()
  22. Use INSERT ... ON DUPLICATE KEY update (INSERT IGNORE) to avoid having to SELECT
  23. use groupwise maximum instead of subqueries
  24. Avoid using IN(...) when selecting on indexed fields; it will kill the performance of the SELECT query.
  25. Prefer using UNION ALL if you don't need to remove duplicates from the merged result
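As a quick illustration of tip 8 above, a leading wildcard in LIKE prevents index use on the filtered column; the table and column names here are only illustrative, and the pattern applies to MySQL and SQL Server alike:

-- Sargable: an index on lname can be used because the prefix is fixed
SELECT fname, lname FROM tbl_student WHERE lname LIKE 'Das%';

-- Not sargable: the leading % forces a scan of the whole index or table
SELECT fname, lname FROM tbl_student WHERE lname LIKE '%Das';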

  Posted By : Mr. Joydeep Das

 


How to Remove duplicate entry from a Table


It's a quite simple piece of SQL for deleting duplicate entries. I have noted it down here for quick reference.

 

-- Step-1 [ Create a table ]

CREATE TABLE AA
      (x DECIMAL,
       y VARCHAR)

-- Step-2 [ Insert some values; they must contain duplicate values ]

INSERT INTO AA
VALUES(1,'A'),(2,'B'),(3,'C'),(1,'A'),(1,'A')

-- Step-3 [ Collect the (x, y) combinations that occur more than once ]

SELECT x, y, col3=count(*)
INTO midtable
FROM AA
GROUP BY x, y
HAVING count(*) > 1

-- Step-4 [ Keep one distinct copy of each duplicated row ]

SELECT DISTINCT AA.*
INTO midtable1
FROM AA, midtable
WHERE AA.x = midtable.x
AND AA.y = midtable.y

-- Step-5 [ Check: each duplicated combination now appears only once ]

SELECT x, y, count(*)
FROM midtable1
GROUP BY x, y

-- Step-6 [ Delete all rows that had duplicates ]

DELETE AA
FROM AA, midtable
WHERE AA.x = midtable.x
AND AA.y = midtable.y

-- Step-7 [ Put the single copies back ]

INSERT AA SELECT * FROM midtable1

-- Step-8 [ Final output ]

SELECT * FROM AA

  Posted By : MR. JOYDEEP DAS

Transaction log and performance


The transaction log of SQL Server is a factor in performance.
Here I provide a collection of points that affect performance when it comes to transaction log optimization.
1) Try to dedicate the transaction log portion of your database to its own physical disk. In a high volume OLTP system, isolating the transaction log can allow the disk head to be ready for the next write by not having other files contend for the physical disk resource. If your database already exists, the best way to "move" your transaction log is to detach your database and then reattach it.
a. sp_attach_db
sp_attach_db [ @dbname = ] 'dbname' 
             , [ @filename1 = ] 'filename_n' [ ,...16 ]

b. sp_detach_db
sp_detach_db [ @dbname= ] 'database_name'
              [ , [ @skipchecks= ] 'skipchecks' ]
              [ , [ @keepfulltextindexfile = ] 'KeepFulltextIndexFile' ]

2) Defrag the disk(s) on which your transaction logs reside. This will get rid of external fragmentation of the transaction log - better known as disk file fragmentation. This will require that your server be taken offline but if your files have had a lot of auto growth and/or they reside on a disk with a lot of other files that have been modified, then all of your files (incl. the transaction log file) are likely to be interleaved and fragmented.
3) Create only ONE transaction log file. Even though you can create multiple transaction log files, you only need one... SQL Server does NOT "stripe" across multiple transaction log files. Instead, SQL Server uses the transaction log files sequentially. While this might sound bad - it's not. If you want to get better performance out of the transaction log, place it on faster disks and/or a faster disk (RAID) configuration.
4) Not only should you try to isolate the transaction log to its own physical disk but you should make sure that the logical/physical disk configuration is as efficient as possible. Try to use an isolated RAID 1 mirroring set if you don't need significant capacity. If you need a greater capacity OR you want better performance, consider a combination of RAID 0 and RAID 1 (either RAID 0 + 1 or RAID 1 + 0). While RAID 0 + 1 can often offer better performance, RAID 1 + 0 offers better reliability. If you're new to RAID and are interested in learning more.
5) Don't be caught up in nothing but transaction log speed, you'll also want to make sure that your transaction log is always available as this can help you in times of disaster. Even if the data is damaged, if the transaction log is available and you have a series of backups up to and including the last transaction log then you can make a final backup of the transaction log that will represent all of the changes since your last transaction log backup. If this backup is possible (it's called backing up the "tail" of the log), then you can achieve up-to-the-minute recovery. This is only possible when the log is available. By placing the log on mirrored disks you can increase your chances of recovering data and minimize data loss.
6) Create transaction log files with a reasonable initial size. When you create a database it's ideal to PRE-ALLOCATE both your data files and your transaction log file. A little bit of capacity planning goes a long way... Now, if you've got absolutely no idea how large a transaction log you're going to need, here are the things that have the greatest impact on its size:
  • Type of activity - transaction processing or decision support
  • Frequency of that activity - the more frequent the changes, the faster the transaction log will grow
  • Recovery Model - the recovery model of the database
  • Frequency of transaction log backups
  • Whether or not replication is used (since the log reader relies on the transaction log)
I wish I could give you a rough idea on sizing but if you look only at database size and none of these other factors, you could end up with a transaction log that's seriously oversized or seriously undersized. I've seen recommendations of 10-25% of the size of the data and you can use that but I would also add a bit of common sense. A larger database with very frequent transaction log backups may not need a transaction log that's even 1% of the size of the data... The best way is to setup your development environment similar to that of your production environment (including backup jobs) and then see how the transaction log grows. If you have a lot of auto growth (because your guess was wrong), you can later clean up the fragmentation that has occurred and get back to a reasonable, intact, and optimal transaction log.
7) Don't let auto growth get out of control. As important as capacity planning, you're not likely to be spot-on in your estimates. I don't recommend completely turning off auto growth but in general I also don't like SQL Server 2000's default growth rate (or max size). In general, I would recommend setting the entire transaction log files attributes: initial size, growth rate AND maximum size. For the growth rate, I recommend something that can be allocated somewhat quickly and something of a fixed size. In general, I recommend a value which is less than or equal to 1GB (based on total size) but something that doesn't mean that you're going to auto grow again soon. So, for databases whose transaction logs are under 1GB then you might set auto grow to somewhere between 20 and 100 MB. For those of you who have transaction logs measured in GB, then I'd set the auto growth to 500MB or 1GB. In general, I don't like the percentage because it needs to be calculated (which I realize isn't really all that big of a deal) but the larger the file the larger the auto growth and the longer it takes.
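A minimal sketch of setting those attributes explicitly; the database and logical file names here are only illustrative:

ALTER DATABASE SalesDB
MODIFY FILE
    ( NAME = SalesDB_Log        -- logical name of the transaction log file
    , SIZE = 2GB                -- pre-allocated size from your capacity planning
    , FILEGROWTH = 500MB        -- fixed growth increment rather than a percentage
    , MAXSIZE = 20GB )          -- upper limit so the drive cannot fill up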
Follow these easy steps to shrink off the fragmented chunk and add a new, clean chunk to your transaction log:
1. Wait for an inactive time of day (ideally, it would be best to put the database into single user mode first) and then clear all transaction log activity through a regular transaction log backup. If you're using the simple recovery model then you don't need to do a log backup... Instead, just clear the transaction log by running a checkpoint.
 BACKUP LOG databasename TO devicename
2. Shrink the log to as small a size as possible (truncateonly)
DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY)
3. Alter the database to modify the transaction log file to the appropriate size - in one step
ALTER DATABASE databasename
MODIFY FILE
    ( NAME = transactionloglogicalfilename
    , SIZE = newtotalsize )

Posted By : MR. JOYDEEP DAS

SPACE() Function in SQL


Some of my friends have a lot of confusion about the SPACE() function and its use.

The SPACE function takes an integer as its parameter and returns a string made up of spaces.

The number of spaces it returns is exactly the number you provide as the parameter.

The syntax is

SPACE ( integer_expression ) RETURNS varchar

 

EXAMPLE

SELECT fname +','+SPACE(2)+lname 'Student Name'

FROM   tbl_student

RETURN VALUE

Student Name

RAJA,  KUMAR

DIA,  MIRZA

 

The two spaces come in between the comma and the last name.

Posted By: MR. JOYDEEP DAS

Database Growth


I have collected some facts related to database growth and how it occurs. I think it will be interesting to know these facts.

SQL Server version 7.0 and later support growing database and log files automatically. Although this option is convenient and reduces the workload of the database administrators, growing database files can impose considerable overhead on the database server and thereby degrade performance. SQL Server 2000 and 2005 configure auto-grow options differently. With SQL Server 2000, each file you create is configured to grow without the upper size limit by 10% of its current size each time it needs to grow. For small to medium size databases, this setting works fine. However, what happens when the database file grows to be 150GB? Next time it needs to grow SQL Server will have to allocate 15GB of storage. This will be a very time and resource intensive operation and could bring your server to its knees. Also note that since the file is configured to grow automatically without the upper limit, it can fill up the entire drive (or drive array) where it resides. This might be disastrous if the database is completely full and no transactions can be committed. You'll encounter unplanned downtime until you create additional data files. 

With SQL Server 2005 each newly created file is automatically configured to grow by 1MB each time it needs to grow without an upper limit. This configuration is safer for large scale applications; however, if you're truly serious about performance, never let your database files grow automatically. Instead, estimate the growth potential for your data files before deploying the application in the production environment. Create the initial database files with plenty of free space. Then, monitor the available space regularly and either add new files or grow existing files manually during minimal user activity. If you must configure database files to grow automatically, be sure to allow plenty of room for growth for each file. If you expect your database to grow to 100GB, start with an 80GB file and then configure the file to grow by 500MB at a time. It is also important to configure the upper limit for each file so your drives don't fill up. 
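A minimal sketch of pre-allocating files along those lines; the database name, logical file names and paths are only illustrative:

CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_Data
    , FILENAME = 'D:\SQLData\SalesDB_Data.mdf'
    , SIZE = 80GB               -- pre-allocate plenty of free space
    , FILEGROWTH = 500MB        -- grow by a fixed amount, not a percentage
    , MAXSIZE = 120GB )         -- upper limit so the drive cannot fill up
LOG ON
    ( NAME = SalesDB_Log
    , FILENAME = 'L:\SQLLog\SalesDB_Log.ldf'
    , SIZE = 8GB
    , FILEGROWTH = 500MB
    , MAXSIZE = 20GB )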

Note that if you have multiple files in the file-group, the files grow automatically only after all files are full. For example, let's say you have files A, B, C, and D in your SECONDARY file-group. Each file is configured to grow by 100MB automatically and initially each file is 500MB. As data is added to this file group the files will be populated in round-robin fashion. Even if file A fills up it will not grow until files B, C and D are also full.


Posted By: MR. JOYDEEP DAS

SQL Server AGENT


I have collected some general facts related to SQL Server Agent and how you configure it. Hope it will help you.
SQL Server Agent is the service SQL Server uses to perform administrative tasks called jobs. Jobs are made up of one or more steps to accomplish the administrative task you need performed. Jobs can be executed on a recurring schedule and they have multiple methods to notify and record the results of the job.
Configuring SQL Server Agent
The SQL Server Agent service is installed once for each named instance of SQL Server installed on the server. For the default instance the service is called SQLSERVERAGENT. Named instances are called SQLAgent$instancename.
Service Accounts
SQL Server Agent runs as a service on the SQL Server and that service needs a user account to run under. The Agent service can run under a built-in account such as Network Service or System. It can also run under a local user account or domain account.
It is a best practice to use a low privileged domain account for the SQL Server Agent service to ensure it has access to only those resources it needs. This will also allow the Agent to connect to remote servers and access network resources you've granted permission to. Additionally, if your SQL Server is running in a cluster the SQL Server Agent service account MUST use a domain account, it cannot use a built-in or local account.
You'll be prompted to configure the SQL Agent user account during SQL Server installation. If you need to change the configuration after installation you can do this via SQL Server Configuration Manager.
Books Online has a complete description of all the issues you should consider before choosing an account for SQL Server Agent. The primary note about these considerations is that all the built-in accounts have security vulnerabilities.
This is of particular importance because if your users are allowed to create their own jobs (executed under the SQL Agent account) your users may access secure resources via SQL Agent they would not normally have available to them. This is critically important to consider for your organization's data security, particularly if you must comply with various regulations such as PCI, HIPAA, etc.
General Configuration Options
There are a few SQL Agent settings you can customize to meet your needs; some of the most common are listed below. Unless otherwise noted, these options can be configured via SSMS or using the msdb.dbo.sp_set_sqlagent_properties system stored procedure.

Auto-Restart Services
The SQL Server and SQL Server Agent services monitor each other and restart each other if the other service fails. You can enable or disable the auto restart of either service. Note that these options should be disabled on clusters.
SQL Agent Log Files
After installation the SQL Agent log files will be configured to use the path %SQL Install Path%\MSSQL\LOG. If the log file location doesn't meet your needs you'll find the path is not configurable during installation and is not something you can modify in the GUI.
You can get instructions for moving the log file and a T-SQL script to reconfigure your log file path in my earlier post, Moving Default Directories in SQL Server. You will need to restart SQL Agent service for these changes to take effect.
You can also configure limits to the size of Agent log data and storage duration of Agent logs. You can limit the rows stored in the log both by total number of history rows and total number of rows per job. You can also restrict how much Agent history is kept by time period (e.g. last 30 days…)
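As a rough illustration of those history limits, the call below mirrors what SSMS typically issues; note that sp_set_sqlagent_properties is undocumented, so treat the parameter names as an assumption:

-- Keep at most 1000 history rows in total and 100 rows per job
-- (the values SSMS uses by default).
EXEC msdb.dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows = 1000,
     @jobhistory_max_rows_per_job = 100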
Posted By: MR. JOYDEEP DAS

Different types of referential actions in foreign keys


Foreign keys are an important part of referential integrity. But we often get confused about foreign key referential actions and their implementation. Here is a collection of notes that will help you clear up your doubts about them.

Foreign keys are defined in the ANSI SQL Standard, through a FOREIGN KEY constraint. The syntax to add such a constraint to an existing table is defined  below. Omitting the column list in the REFERENCES clause implies that the foreign key shall reference the primary key of the referenced table.

ALTER TABLE <TABLE identifier>

      ADD [ CONSTRAINT <CONSTRAINT identifier> ]

      FOREIGN KEY ( <COLUMN expression> {, <COLUMN expression>}... )

      REFERENCES <TABLE identifier> [ ( <COLUMN expression> {, <COLUMN expression>}... ) ]

      [ ON UPDATE <referential action> ]

      [ ON DELETE <referential action> ]

Likewise, foreign keys can be defined as part of the CREATE TABLE SQL statement.

CREATE TABLE TABLE_NAME (

   id    INTEGER  PRIMARY KEY,

   col2  CHARACTER VARYING(20),

   col3  INTEGER,

   ...

   FOREIGN KEY(col3)

      REFERENCES other_table(key_col) ON DELETE CASCADE,

   ... )

 

If the foreign key is a single column only, the column can be marked as such using the following syntax:

CREATE TABLE TABLE_NAME (

   id    INTEGER  PRIMARY KEY,

   col2  CHARACTER VARYING(20),

   col3  INTEGER REFERENCES other_table(column_name),

   ... )

 

Foreign keys can also be defined with the sp_foreignkey stored procedure.

sp_foreignkey tabname, pktabname, col1 [, col2] ...  [, col8]

tabname : is the name of the table or view that contains the foreign key to be defined.

 pktabname : is the name of the table or view that has the primary key to which the foreign key applies. The primary key must already be defined.

col1 : is the name of the first column that makes up the foreign key. The foreign key must have at least one column and can have a maximum of eight columns.

Because the Database Management System enforces referential constraints, it must ensure data integrity if rows in a referenced table are deleted (or updated). If dependent rows in referencing tables still exist, those references have to be considered. The SQL standard specifies 5 different referential actions that shall take place in such occurrences (SQL Server implements CASCADE, NO ACTION, SET NULL and SET DEFAULT; RESTRICT is not available as a keyword there):

  CASCADE

  RESTRICT

  NO ACTION

  SET NULL

  SET DEFAULT

 

CASCADE

Whenever rows in the master (referenced) table are deleted (resp. updated), the respective rows of the child (referencing) table with a matching foreign key column will get deleted (resp. updated) as well. This is called a cascade delete.

Example Tables: Customer(customer_id, cname, caddress) and Order(customer_id, products, payment)

Customer is the master table and Order is the child table, where 'customer_id' is the foreign key in Order and represents the customer who placed the order. When a row of Customer is deleted, any Order row matching the deleted Customer's customer_id will also be deleted.
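A minimal sketch of that Customer/Order pair with a cascading delete (the column data types are only illustrative, and Order is bracketed because it is a reserved word):

CREATE TABLE Customer
      (customer_id INT PRIMARY KEY,
       cname       VARCHAR(50),
       caddress    VARCHAR(100))

CREATE TABLE [Order]
      (customer_id INT,
       products    VARCHAR(100),
       payment     DECIMAL(10,2),
       FOREIGN KEY (customer_id)
          REFERENCES Customer(customer_id) ON DELETE CASCADE)

-- Deleting a customer now deletes that customer's orders as well.
DELETE FROM Customer WHERE customer_id = 1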

NOTE: In Microsoft SQL, a cascading delete to a self-referencing table is not allowed. You must either use a trigger, create a stored procedure, or handle the cascading delete from the calling application. An example of this is where a single table has an ID as identity and a ParentID with a relationship to ID in the same table.

RESTRICT

A value cannot be updated or deleted when a row exists in a foreign key table that references the value in the referenced table.

Similarly, a row cannot be deleted as long as there is a reference to it from a foreign key table.

NO ACTION

NO ACTION and RESTRICT are very much alike. The main difference between NO ACTION and RESTRICT is that with NO ACTION the referential integrity check is done after trying to alter the table. RESTRICT does the check before trying to execute the UPDATE or DELETE statement. Both referential actions act the same if the referential integrity check fails: the UPDATE or DELETE statement will result in an error.

In other words, when an UPDATE or DELETE statement is executed on the referenced table using the referential action NO ACTION, the DBMS verifies at the end of the statement execution that none of the referential relationships are violated. This is different from RESTRICT, which assumes at the outset that the operation will violate the constraint. Using NO ACTION, the triggers or the semantics of the statement itself may yield an end state in which no foreign key relationships are violated by the time the constraint is finally checked, thus allowing the statement to complete successfully.

SET NULL

The foreign key values in the referencing row are set to NULL when the referenced row is updated or deleted. This is only possible if the respective columns in the referencing table are nullable. Due to the semantics of NULL, a referencing row with NULLs in the foreign key columns does not require a referenced row.

SET DEFAULT

Similar to SET NULL, the foreign key values in the referencing row are set to the column default when the referenced row is updated or deleted.

 

Posted By: MR. JOYDEEP DAS

Stored Procedure Performance


Performance is one of the key factors that I am always afraid of when I am writing any complex query. Slow performance comes from various factors, and we keep trying to solve these issues on a day-to-day basis.

On performance related issues, I always welcome everyone to write something and make their valuable suggestions to help others.

Here I am collecting some facts that relate directly to stored procedures. By following these techniques we can save some execution time.

  • Use stored procedures instead of heavy-duty queries.
    This can reduce network traffic, because your client will send to server only stored procedure name (perhaps with some parameters) instead of large heavy-duty queries text. Stored procedures can be used to enhance security and conceal underlying data objects also. For example, you can give the users permission to execute the stored procedure to work with the restricted set of the columns and data.

 

  • Include the SET NOCOUNT ON statement into your stored procedures to stop the message indicating the number of rows affected by a Transact-SQL statement.
    This can reduce network traffic, because your client will not receive the message indicating the number of rows affected by a Transact-SQL statement.

 

  • Call stored procedure using its fully qualified name.
    The complete name of an object consists of four identifiers: the server name, database name, owner name, and object name. An object name that specifies all four parts is known as a fully qualified name. Using fully qualified names eliminates any confusion about which stored procedure you want to run and can boost performance because SQL Server has a better chance to reuse the stored procedures execution plans if they were executed using fully qualified names.

 

  • Consider returning an integer value with the RETURN statement instead of as part of a recordset.
    The RETURN statement exits unconditionally from a stored procedure, so the statements following RETURN are not executed. Though the RETURN statement is generally used for error checking, you can use this statement to return an integer value for any other reason. Using the RETURN statement can boost performance because SQL Server will not create a recordset. A small sketch follows.
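A minimal sketch combining this with the SET NOCOUNT ON tip above (the procedure, table and column names are only illustrative):

CREATE PROCEDURE dbo.usp_get_emp_count
AS
BEGIN
    SET NOCOUNT ON             -- suppress the "rows affected" messages

    DECLARE @cnt INT
    SELECT @cnt = COUNT(*) FROM dbo.emp_master

    RETURN @cnt                -- the integer comes back without building a recordset
END
GO

DECLARE @result INT
EXEC @result = dbo.usp_get_emp_count   -- read the value through the return status
SELECT @result AS emp_count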

 

  • Don't use the prefix "sp_" in the stored procedure name if you need to create a stored procedure to run in a database other than the master database.
    The prefix "sp_" is used in the system stored procedures names. Microsoft does not recommend to use the prefix "sp_" in the user-created stored procedure name, because SQL Server always looks for a stored procedure beginning with "sp_" in the following order: the master database, the stored procedure based on the fully qualified name provided, the stored procedure using dbo as the owner, if one is not specified. So, when you have the stored procedure with the prefix "sp_" in the database other than master, the master database is always checked first, and if the user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.

 

  • Use the sp_executesql stored procedure instead of the EXECUTE statement.
    The sp_executesql stored procedure supports parameters, so using sp_executesql instead of the EXECUTE statement improves the readability of your code when many parameters are used. When you use sp_executesql to execute a Transact-SQL statement that will be reused many times, the SQL Server query optimizer will reuse the execution plan it generates for the first execution when the only variation is the change in parameter values. A small sketch follows.
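A minimal sketch of a parameterised sp_executesql call (the table and column names are only illustrative):

DECLARE @sql    NVARCHAR(200)
DECLARE @params NVARCHAR(100)

SET @sql    = N'SELECT emp_name FROM dbo.emp_master WHERE emp_code = @code'
SET @params = N'@code DECIMAL(10)'

-- The same execution plan can be reused for different @code values.
EXEC sp_executesql @sql, @params, @code = 101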

 

  • Use sp_executesql stored procedure instead of temporary stored procedures.
    Microsoft recommends using temporary stored procedures only when connecting to earlier versions of SQL Server that do not support the reuse of execution plans. Applications connecting to SQL Server 7.0 or SQL Server 2000 should use the sp_executesql system stored procedure instead of temporary stored procedures to have a better chance of reusing the execution plans.


  • If you have a very large stored procedure, try to break down this stored procedure into several sub-procedures, and call them from a controlling stored procedure.
    A stored procedure will be recompiled when any structural changes are made to a table or view referenced by the stored procedure (for example, an ALTER TABLE statement), or when a large number of INSERTs, UPDATEs or DELETEs are made to a table referenced by the stored procedure. So, if you break down a very large stored procedure into several sub-procedures, there is a chance that only a single sub-procedure will be recompiled, while the other sub-procedures will not.

 

  • Try to avoid using temporary tables inside your stored procedure.
    Using temporary tables inside stored procedure reduces the chance to reuse the execution plan.

 

  • Try to avoid using DDL (Data Definition Language) statements inside your stored procedure.
    Using DDL statements inside stored procedure reduces the chance to reuse the execution plan.

 

  • Add the WITH RECOMPILE option to the CREATE PROCEDURE statement if you know that your query will vary each time it is run from the stored procedure.
    The WITH RECOMPILE option prevents reusing the stored procedure execution plan, so SQL Server does not cache a plan for this procedure and the procedure is recompiled at run time. Using the WITH RECOMPILE option can boost performance if your query will vary each time it is run from the stored procedure because in this case the wrong execution plan will not be used.

 

  • Use SQL Server Profiler to determine which stored procedures have been recompiled too often.
    To check whether a stored procedure has been recompiled, run SQL Server Profiler and choose to trace the event in the "Stored Procedures" category called "SP:Recompile". You can also trace the event "SP:StmtStarting" to see at what point in the procedure it is being recompiled. When you identify these stored procedures, you can take some corrective actions to reduce or eliminate the excessive recompilations.

Posted By: MR. JOYDEEP DAS