Friday, 24 February 2012

INCLUDE clause WITH NON CLUSTERED INDEX

 

 

This article is dedicated to one of my colleagues, who wanted to know about the INCLUDE clause of a NONCLUSTERED index. I try to cover all the important points related to it, in short.

In SQL Server 2005, the functionality of non-clustered indexes was extended by allowing non-key columns to be added to the leaf level of the non-clustered index.

By including non-key columns, we create a non-clustered index that covers more queries and increases performance.

The features of adding non-key columns are mentioned below:

1.    An index has a limit of 16 key columns and a maximum index key size of 900 bytes. Included non-key columns, however, are not counted in the index key size, and up to 1,023 included non-key columns are allowed.
So a non-clustered index can contain:

16 key columns + 1,023 included non-key columns.

2.    Columns cannot be repeated in the INCLUDE list, and a non-key column cannot be dropped from the table unless the index that includes it is dropped first.

To DROP a non-key column, DROP the index that includes it first.


3.    The Database Engine does not consider non-key columns when calculating the number of index key columns or the index key size.

The size of the index key is not affected by non-key columns.

4.    The data types of non-key columns can be almost anything, even data types not supported as index key columns (all data types except text, ntext, and image are allowed).

5.    You should keep in the INCLUDE clause those columns that generally appear in the SELECT list and are not used much in the WHERE, GROUP BY, or ON clause of a JOIN.

 

The syntax for creating a NONCLUSTERED INDEX with the INCLUDE clause is mentioned below.

 

 

CREATE NONCLUSTERED INDEX [Ind-1] ON [dbo].[tab_example]
(
      [sroll] ASC
)
INCLUDE ([sSec], [sName])
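For instance, a query that filters on [sroll] but returns [sSec] and [sName] can be answered entirely from the leaf level of this index, with no lookup into the base table. A minimal sketch, assuming the hypothetical [dbo].[tab_example] table above:

-- Covering query: [sroll] is the index key, and [sSec] and [sName]
-- are served from the INCLUDE columns, so no key lookup is needed.
SELECT sSec, sName
FROM   dbo.tab_example
WHERE  sroll = 101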

 

 

Architecture of INCLUDE on NON CLUSTERED Index:

 

The architecture of "Included Column" concerns, whatever columns you have defined under "Include" clause under the index, those will be stored on the Leaf pages, and it won't get stored on the Root page or Intermediate page of the index.
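You can verify which columns of an index are key columns and which are included ones by querying the catalog views. A sketch, assuming the hypothetical index and table created above:

-- is_included_column = 1 marks the INCLUDE columns of the index
SELECT i.name AS IndexName,
       c.name AS ColumnName,
       ic.is_included_column
FROM   sys.indexes i
       JOIN sys.index_columns ic ON ic.object_id = i.object_id
                                AND ic.index_id  = i.index_id
       JOIN sys.columns c ON c.object_id = ic.object_id
                         AND c.column_id = ic.column_id
WHERE  i.object_id = OBJECT_ID('dbo.tab_example')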

Hope the article is quite interesting, and thank you for providing your valuable time on it.

 

 

 

Posted by: MR. JOYDEEP DAS

 



 

 

Thursday, 23 February 2012

Query Solving (Database Restore Related)

 

I have a restore-related question from one of my DBA friends, mentioned below.

Query:

Hi, I tried restoring DB with below query 

RESTORE DATABASE sirwar

FROM DISK = 'F:\15.02\15.02.12 backup'

WITH REPLACE,

MOVE 'LogicalDatafileName' TO 'f:\sirwar.mdf',

MOVE 'LogicalLogfilename' TO 'f:\sirwar.ldf'

but this came with an error like this.......... 

Msg 3234, Level 16, State 2, Line 1
Logical file 'LogicalDatafileName' is not part of database 'sirwar'. Use RESTORE FILELISTONLY to list the logical file names.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

 

Solutions mentioned below:

1) Run sp_helpdb on the source database. The logical file names are in the first
    column of the second result set; just copy and paste them into the single quotes.

2) Run RESTORE FILELISTONLY on the backup file. Again, the logical names are in
    the first column.
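A minimal sketch of the whole sequence, assuming the backup file path from the question (the logical names shown are placeholders; use whatever RESTORE FILELISTONLY actually returns):

-- 1. Discover the logical file names inside the backup
RESTORE FILELISTONLY
FROM DISK = 'F:\15.02\15.02.12 backup'

-- 2. Restore using the logical names returned above
--    ('sirwar_Data' and 'sirwar_Log' are placeholders)
RESTORE DATABASE sirwar
FROM DISK = 'F:\15.02\15.02.12 backup'
WITH REPLACE,
MOVE 'sirwar_Data' TO 'f:\sirwar.mdf',
MOVE 'sirwar_Log'  TO 'f:\sirwar.ldf'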

 

 

Posted by: MR. JOYDEEP DAS

 

Performance increased with Multiport SQL Server

 

In this article I am trying to demonstrate how the default instance of SQL Server can listen on multiple TCP ports.

By default, when we install the default instance of SQL Server, TCP port 1433 is allocated. If requests come from many terminals, that single default port can become overloaded. So if we configure the default instance of SQL Server to listen on multiple TCP ports, the load can be balanced across them, which will improve the performance of the SQL Server.

If the default port of SQL Server is overloaded, client connections may be reset or forced to reconnect using the new port settings of the SQL Server.

How we configure multiple TCP ports on the default instance of SQL Server:

1.    In the SQL Server Network Utility, select the TCP/IP properties and add the new port, separated by a comma, like this: 1433,5000.

2.  Stop and restart the SQL Server and check the error log; you should see entries like:

SQL server listening on TCP, Shared Memory, Named Pipes.

SQL server listening on 157.54.178.42:1433, 157.54.178.42:5000, 127.0.0.1:1433, 127.0.0.1:5000.

3.    In the SQL Server Client Network Utility, modify your clients to spread the load across the TCP ports.

 

For a general example, suppose we have two web servers named "webServ1" and "webServ2" that both use port 1433 of the default SQL Server instance, so the load on that port is always high. If the load is now distributed between port 1433 and port 5000, it is balanced and performance increases.
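On the client side, the port can be chosen directly in the connection string by appending it to the server name after a comma. A sketch; the server and database names are just placeholders:

Server=MyServer,5000;Database=MyDb;Trusted_Connection=True;

The same form works from sqlcmd: sqlcmd -S tcp:MyServer,5000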

  

Hope that the article is quite interesting, and thank you for providing valuable time on it.

 

 

 

Posted by: MR. JOYDEEP DAS

Wednesday, 22 February 2012

GROUP BY Clause And Performance Factors

 

 

In this article I am trying to describe the frequently used GROUP BY clause and the performance factors related to it.

A short description of the GROUP BY clause

The GROUP BY clause can be used in a SELECT statement to collect data across multiple records and group the results by one or more columns. The aggregate-function columns are not included in the GROUP BY clause; I mean to say, the columns wrapped in aggregate functions are not listed in the GROUP BY clause.

The HAVING clause is used to filter the groups produced by the GROUP BY clause.

Example:

SELECT   columns1, columns2, SUM(columns3)

FROM     Mytable

GROUP BY columns1, columns2

HAVING   columns1>0

 

Is the GROUP BY clause a performance killer?

No, GROUP BY itself is not a performance killer; but when it is misused, the GROUP BY clause can dramatically decrease the performance of a query.

 

Here are some points to take care of:

 

1.     To make the performance better, use a COMPOUND INDEX on the GROUP BY fields (see the sketch after the examples below).

2.     Don't use unnecessary fields or unnatural grouping options.

3.     I personally prefer the same sequence of columns in the SELECT list as in the GROUP BY clause. For example:

Like this:

SELECT   columns1, columns2, SUM(columns3)

FROM      Mytable

GROUP BY  columns1, columns2

 

 

 Not Like this:

SELECT    columns1, columns2, SUM(columns3)

FROM      Mytable

GROUP BY  columns2, columns1
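For point 1, a compound (multi-column) index whose leading columns match the GROUP BY columns lets the engine read the rows already grouped in index order. A minimal sketch, assuming the hypothetical Mytable used above:

-- Hypothetical compound index on the grouped columns;
-- columns3 is included so the SUM can be computed from the index alone.
CREATE NONCLUSTERED INDEX IX_Mytable_Group
ON Mytable (columns1, columns2)
INCLUDE (columns3)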

 

This is a small article, but I hope it is quite informative, and thank you for providing your valuable time on it.

 

Posted by: MR. JOYDEEP DAS

 

Question asked by one of my DBA friends.

 

 

One of my DBA friends asked me some SQL-related questions. Here I am trying to explain them in this article.

 

What steps are necessary to make query execution faster? Suppose a query takes 3 minutes when it executes, and I want to execute this query within 30 seconds. How?

 

Answer: 

 

There are lots of factors involved in making query execution faster. My strong suggestion is to understand the execution plan, in order to execute your query in better ways.

 

Some common factors that we all know, related to SQL query execution, are mentioned below.

 

a.    Don't use "SELECT *" in a SQL query. That means use the proper column names that you need; do not overload the query by using *, which adds extra expense to data retrieval.

b.    Don't join extra tables that you don't need in your SQL statement.

c.    Don't use COUNT(*) in a subquery to test for existence; instead use the EXISTS or NOT EXISTS clause:


      -- Do not use
                    SELECT column_list
                    FROM   table1 WHERE 0 < (SELECT COUNT(*) FROM table2 WHERE ...)

                    -- Use this
                    SELECT column_list
                    FROM   table1 WHERE EXISTS (SELECT * FROM table2 WHERE ...)

 

d.    Avoid joining between two different column types, like "INT" and "FLOAT", and
      don't try to use CONVERT or CAST to force such joins.

e.   Try to avoid dynamic SQL.

f.    Try to avoid temporary tables.

g.   Don't use the IN or NOT IN clause in a SQL statement; instead use EXISTS or
      NOT EXISTS.

h.   Avoid LIKE; instead you can use full-text search.

i.    Try to use UNION or UNION ALL to implement OR operators (see the sketch
      after this list).

j.    Don't call any functions in the SELECT list; I mean, try to avoid it.

k.   Try to avoid correlated subqueries.

l.    Try to use stored procedures to execute all your T-SQL statements.

m.  Use VIEWs, but always use the WITH SCHEMABINDING option.

n.   Last but most important: a solid idea about indexes. If necessary, use
      index hints in your SQL join operations.
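For point i, an OR across two different columns can prevent efficient index use; splitting it into two index-friendly probes is a common rewrite. A sketch over a hypothetical Orders table:

-- Instead of:
--   SELECT OrderId FROM Orders WHERE CustomerId = 7 OR SalesRepId = 7
-- probe each indexed column separately:
SELECT OrderId FROM Orders WHERE CustomerId = 7
UNION
SELECT OrderId FROM Orders WHERE SalesRepId = 7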

 

What is the difference between 'SET' and 'SELECT' in SQL?

 

Answer:

 

DECLARE @i INT

 

--Type1

SELECT @i=Roll FROM MyTab WHERE Name='RAJA'

--Type2

SET @i=(SELECT Roll FROM MyTab WHERE Name='RAJA')

 

Consider the above two SQL statements: the variable @i gets the same value in both. So, what is the difference between the two statements?

 

1.     From SQL Server 7.0 onwards, Microsoft recommends using the SET statement for this kind of operation.

2.     SET is the ANSI-standard way to assign a value to a variable.

3.     With SELECT we can assign multiple values to multiple variables within a single statement, like this:

DECLARE @i INT,
        @k VARCHAR(MAX)

SELECT @i=Roll, @k=SName FROM MyTab WHERE SName='RAJA'

 

4.     SELECT has a pitfall: if the above statement returns more than one row, it does not raise any error; the variable silently keeps the value from one of the rows. But if you use SET in the same situation, it gives you an error like "Subquery returned more than 1 value. This is not permitted when the subquery is used as an expression." This difference is sketched below.
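A quick sketch of that difference, assuming MyTab happens to contain two rows whose Name is 'RAJA':

-- SELECT silently assigns a value from one of the matching rows
SELECT @i = Roll FROM MyTab WHERE Name = 'RAJA'   -- succeeds, no warning

-- SET raises "Subquery returned more than 1 value..."
SET @i = (SELECT Roll FROM MyTab WHERE Name = 'RAJA')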

 

So, in conclusion, I recommend you use SET, and not SELECT, when assigning variables.

  
What are the basic differences between SSMS 2005 & SSMS 2008?

 

Answer:

 

As per me, there are no differences in SSMS itself; the difference is in the database engine limitations of SQL Server 2005 and SQL Server 2008. SQL Server 2008 gives you better functionality and performance than SQL Server 2005.

 

 

 

Posted by: MR. JOYDEEP DAS

 

Tuesday, 21 February 2012

Case Sensitive SQL Query Search and Joining

 

 

This article gives you a total overall idea of collation. It not only contains the definition but also shows SQL comparisons and joins between tables with different collations in different databases.

So let's start.

Collation is defined by a set of rules that determine how data is stored and compared. There are four types of collation sensitivity:

1.    Case sensitivity: if "A" and "a" are treated the same way, the collation is case-insensitive; if "A" and "a" are treated differently, it is case-sensitive. The ASCII codes of the two letters differ: capital "A" is 65 and small "a" is 97.

2.    Accent sensitivity: if "a" and "á" are treated the same way, it is accent-insensitive; if they are treated differently, it is accent-sensitive. The ASCII code of "a" is 97 and that of "á" is 225.

3.    Kana sensitivity: when the Japanese kana characters Hiragana and Katakana are treated differently, the collation is called Kana-sensitive.

4.    Width sensitivity: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) are treated differently, the collation is width-sensitive.
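A tiny sketch of case sensitivity in action, forcing each behaviour with the COLLATE keyword:

-- Case-insensitive comparison: returns 'Same'
SELECT CASE WHEN 'A' = 'a' COLLATE Latin1_General_CI_AS
            THEN 'Same' ELSE 'Different' END

-- Case-sensitive comparison: returns 'Different'
SELECT CASE WHEN 'A' = 'a' COLLATE Latin1_General_CS_AS
            THEN 'Same' ELSE 'Different' END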

An example of database creation with different collations is mentioned below:

USE MASTER

GO

CREATE DATABASE BIN COLLATE LATIN1_GENERAL_BIN

GO

CREATE DATABASE CI_AI_KS COLLATE LATIN1_GENERAL_CI_AI_KS

GO

CREATE DATABASE CS_AS_KS_WS COLLATE LATIN1_GENERAL_CS_AS_KS_WS

GO

A table creation example with a different collation on each column:

CREATE TABLE Mytable

   ([colu]        CHAR(10) COLLATE Albanian_CI_AI_KS_WS NULL,

    [Maydate]     CHAR(8)  COLLATE Korean_Wansung_Unicode_CS_AS_KS NOT NULL ,

    [Risk_Rating] CHAR(2)  COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL)

As a SQL Server developer you have to face situations where you must compare columns of two tables having different collations, for example when joining two tables that use different collations. If you are not careful, this gives you an error message.

To illustrate my point, here I give you a simple example of case-sensitive collation.

 

CREATE TABLE Mytable

            (Sname CHAR(100) NOT NULL)

           

INSERT INTO Mytable (Sname)

       VALUES ('JOYDEEP'),('TUHIN'),('PALASH'),('SANGRAM')

      

-- Get the result (under a case-insensitive collation this matches 'JOYDEEP')
SELECT * FROM Mytable WHERE Sname='joydeep'

 

To make the query case-sensitive, use this:

-- Case sensitive: returns no rows, since 'joydeep' does not match 'JOYDEEP'
SELECT * FROM Mytable WHERE Sname COLLATE Latin1_General_CS_AS='joydeep'

-- Now this matches the stored value exactly
SELECT * FROM Mytable WHERE Sname COLLATE Latin1_General_CS_AS='JOYDEEP'

To find the collation of any table's columns, use this:

EXEC sp_help Mytable

 

Now I am giving an example of joining two tables from different databases with different collations.

If we perform this join directly, it gives an error:

SELECT ProductCode, ProductResalePrice

FROM   [database1].[dbo].[PRODUCTS]

       INNER JOIN [database2].[dbo].[items] ON ProductCode = [items].itemCode

Error:

Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation.

We first have to know what collations both databases use before proceeding with the join:

SELECT DATABASEPROPERTYEX('database1', 'Collation') SQLCollation;

SELECT DATABASEPROPERTYEX('database2', 'Collation') SQLCollation;

Now we can make the join work by forcing one side to a common collation:

SELECT ProductCode, ProductResalePrice

FROM   [database1].[dbo].[PRODUCTS]

       INNER JOIN [database2].[dbo].[items]

       ON ProductCode COLLATE SQL_Latin1_General_CP1_CI_AS = [items].itemCode

 

I think the article is quite informative, and thank you for providing your valuable time on it.

 

Posted by: JOYDEEP DAS

 

Recovering Suspect Database

 

 

Everyone knows how much pain it is when a LIVE database is marked as suspect, especially when there is no recent backup of the database.

What happens when a database is marked as suspect:

If your LIVE database is in suspect mode, no transactions will take place until you repair your database. That is a showstopper for your up-and-running application. Here, you will find a way to get out of this.
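You can confirm the state of the database first (a quick check; 'YourDBName' is a placeholder):

-- A database marked suspect shows state_desc = 'SUSPECT'
SELECT name, state_desc
FROM   sys.databases
WHERE  name = 'YourDBName'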

So, what is the solution?

In this article, I am going to illustrate some T-SQL statements that can help when a database is marked suspect.

EXEC sp_resetstatus 'YourDBName'
GO
ALTER DATABASE YourDBName SET EMERGENCY
DBCC CHECKDB('YourDBName')
ALTER DATABASE YourDBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB('YourDBName', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE YourDBName SET MULTI_USER
GO

/*
  Rebuild the indexes if necessary
*/

ALTER INDEX ALL ON [TableName] REBUILD

Hope the information in this article is quite informative, and thank you for providing your valuable time on it.

 

Posted by: MR. JOYDEEP DAS

 

 

 

 

Friday, 17 February 2012

Finding the user Activity in a specified Table Objects (Auditing)

 

 

In this article, I am creating a trigger that tracks the user and the user activity (INSERT and DELETE) on a specified table object. You can easily turn it into a good auditing script.

Please follow the steps to execute and understand the auditing process.

/*

   STEP-1

   Creating Test Table

*/

    CREATE TABLE TEST_TABLE

                 (ROLL  INT,

                  SNAME VARCHAR(50))

/*
    STEP-2
    Create the table to track the log
*/

      CREATE TABLE [dbo].[TBL_TRAC_LOG]

                  (

                              ObjectName       sysname       NULL,

                              ObjectId         int           NULL,

                              SPID             smallint      NULL,

                              Status           nvarchar(30)  NULL,

                              Login            nvarchar(128) NULL,

                              Host             nvarchar(128) NULL,

                              BlkBy            smallint      NULL,

                              DBName           nvarchar(128) NULL,

                              CommandType      nvarchar(16)  NULL,

                              SQLStatement     nvarchar(max) NULL,

                              ElapsedMS        int           NULL,

                              CPUTime          int           NULL,

                              IOReads          bigint        NULL,

                              IOWrites         bigint        NULL,

                              LastWaitType     nvarchar(60)  NULL,

                              StartTime        datetime      NULL,

                              Protocol         nvarchar(40)  NULL,

                              ConnectionWrites int           NULL,

                              ConnectionReads  int           NULL,

                              ClientAddress    varchar(48)   NULL,

                              Authentication   nvarchar(40)  NULL

               )

 

/*
    STEP-3
    Specify the table name where you attach the trigger
*/

 

                                                 

IF EXISTS (SELECT *

           FROM sysobjects WHERE type = 'TR' AND name = 'TRG_MONITORING_INFO')

      BEGIN

            DROP  Trigger TRG_MONITORING_INFO

      END

GO

 

CREATE Trigger TRG_MONITORING_INFO ON TEST_TABLE

WITH ENCRYPTION

FOR INSERT, DELETE

AS

DECLARE @v_Param INT

BEGIN

       IF EXISTS(SELECT * FROM INSERTED)

          BEGIN

               SET @v_Param=1

          END

         

       IF EXISTS(SELECT * FROM DELETED)

          BEGIN

               SET @v_Param=0

          END  

 

       INSERT INTO TBL_TRAC_LOG

       SELECT    ObjectName       = sobj.name

                        ,ObjectId         = lock.rsc_objid

                        ,SPID             = er.session_id

                        ,[Status]         = ses.status

                        ,[Login]          = ses.login_name

                        ,Host             = ses.host_name

                        ,BlkBy            = er.blocking_session_id

                        ,DBName           = DB_Name(er.database_id)

                        ,CommandType      = CASE WHEN @v_Param=1 THEN 'INSERT' ELSE 'DELETE' END

                        ,SQLStatement     = st.text

                        ,ElapsedMS        = er.total_elapsed_time

                        ,CPUTime          = er.cpu_time

                        ,IOReads          = er.logical_reads + er.reads

                        ,IOWrites         = er.writes

                        ,LastWaitType     = er.last_wait_type

                        ,StartTime        = er.start_time

                        ,Protocol         = con.net_transport

                        ,ConnectionWrites = con.num_writes

                        ,ConnectionReads  = con.num_reads

                        ,ClientAddress    = con.client_net_address

                        ,[Authentication] = con.auth_scheme

            FROM    sys.dm_exec_requests er

                        OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) st

                        LEFT JOIN sys.dm_exec_sessions ses ON ses.session_id = er.session_id

                        LEFT JOIN sys.dm_exec_connections con ON con.session_id = ses.session_id

                        LEFT JOIN (SELECT rsc_dbid, rsc_objid , req_spid FROM sys.syslockinfo)lock ON er.session_id=lock.req_spid

                        LEFT JOIN (SELECT name,id FROM sys.sysobjects)sobj ON lock.rsc_objid=sobj.id

            WHERE   sobj.name = 'TEST_TABLE'

            GROUP BY sobj.name

                        ,lock.rsc_objid

                        ,er.session_id

                        ,ses.status

                        ,ses.login_name

                        ,ses.host_name

                        ,er.blocking_session_id

                        ,er.command

                        ,st.text

                        ,er.total_elapsed_time

                        ,er.cpu_time

                        ,er.logical_reads + er.reads

                        ,er.writes

                        ,er.last_wait_type

                        ,er.start_time

                        ,con.net_transport

                        ,con.num_writes

                        ,con.num_reads

                        ,con.client_net_address

                        ,con.auth_scheme

                        ,er.database_id

                        ,st.objectid

 

 

                 

END

GO

 

/*

  STEP-4

  Testing Zone

*/

TRUNCATE TABLE TEST_TABLE

TRUNCATE TABLE TBL_TRAC_LOG

 

--Insert to Activate auditing

INSERT INTO TEST_TABLE

VALUES(1, 'JOYDEEP')

 

-- Final output: the inserted row and the audit trail captured by the trigger
SELECT * FROM TEST_TABLE
SELECT * FROM TBL_TRAC_LOG
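To exercise the DELETE branch of the trigger as well, remove the row and check the log again (a small addition to the test, using the same tables):

-- Delete to activate auditing for the DELETE branch
DELETE FROM TEST_TABLE WHERE ROLL = 1

-- The log should now also contain a 'DELETE' entry
SELECT * FROM TBL_TRAC_LOG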

 

 

 

Hope the article is quite informative, and thank you for providing your valuable time on it.

 

 

Posted by: MR. JOYDEEP DAS