Differences between a Stored Procedure and a Trigger
A stored procedure is a group of SQL statements that is compiled once and can then be executed many times. A trigger is a special kind of stored procedure: it is attached to a table and fires automatically when an insert, update, or delete occurs on that table.
- A function must return a value, but for a stored procedure this is optional (a procedure can return zero or n values).
- Functions can have only input parameters, whereas procedures can have input/output parameters.
- A function, like a stored procedure, can take zero to n input parameters.
- Functions can be called from procedures, whereas procedures cannot be called from functions.
- A procedure allows SELECT as well as DML (INSERT/UPDATE/DELETE) statements, whereas a function allows only SELECT statements.
- Procedures cannot be utilized in a SELECT statement, whereas a function can be embedded in a SELECT statement.
- Stored procedures cannot be used anywhere in the WHERE/HAVING/SELECT sections of a SQL statement, whereas functions can be.
- Functions that return tables can be treated as another rowset. This can be used in JOINs with other tables.
- Inline functions can be thought of as views that take parameters and can be used in JOINs and other rowset operations.
- Exceptions can be handled by a TRY-CATCH block in a procedure, whereas a TRY-CATCH block cannot be used in a function.
- We can use transaction management in a procedure, whereas we can't in a function.
- We can execute a stored procedure whenever we want with the help of the exec command, but a trigger can only be executed whenever an event (insert, delete, and update) is fired on the table on which the trigger is defined.
- We can call a stored procedure from inside another stored procedure but we can’t directly call another trigger within a trigger. We can only achieve nesting of triggers in which the action (insert, delete, and update) defined within a trigger can initiate execution of another trigger defined on the same table or a different table.
- Stored procedures can be scheduled through a job to execute on a predefined time, but we can’t schedule a trigger.
- Stored procedure can take input parameters, but we can’t pass parameters as input to a trigger.
- Stored procedures can return values but a trigger cannot return a value.
- We can use PRINT commands inside a stored procedure for debugging purposes; inside a trigger, PRINT output is rarely useful because the trigger executes implicitly as part of the firing statement.
- We can freely use transaction statements like BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK inside a stored procedure; inside a trigger, a ROLLBACK aborts the entire statement and transaction that fired the trigger, so transaction statements must be used with great care there.
- We can call a stored procedure from the front end (.asp files, .aspx files, .ascx files, etc.) but we can’t call a trigger from these files.
- Stored procedures are normally used for performing user-specified tasks. They can have parameters and return multiple result sets.
- Triggers are normally used for auditing work: they can be used to trace the activities of table events.
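As a minimal sketch of the contrast (all table, procedure, and trigger names here are hypothetical):

```sql
-- A stored procedure: called explicitly, accepts parameters,
-- returns a result set.
CREATE PROCEDURE dbo.usp_GetEmployee
    @EmployeeID int
AS
BEGIN
    SELECT EmployeeID, EmployeeName
    FROM dbo.Employee
    WHERE EmployeeID = @EmployeeID;
END
GO

-- A trigger: attached to the table, fires automatically on the
-- table event, takes no parameters (typical auditing work).
CREATE TRIGGER dbo.trg_Employee_Audit
ON dbo.Employee
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT INTO dbo.EmployeeAudit (ChangedAt)
    SELECT GETDATE();
END
GO

-- The procedure is invoked on demand; the trigger cannot be.
EXEC dbo.usp_GetEmployee @EmployeeID = 15;
```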
What is an Index?
Scanning a SQL table is not good enough for retrieving the desired data quickly or sorting the data in a specific order. With a huge amount of data in a table, we need some sort of cross-reference to get to the data quickly. This is where an index within SQL Server comes in. An index is a database object created and maintained by the DBMS. It is essentially a list of the contents of a column or group of columns.
Why Use an Index?
Use of SQL Server indexes provides many benefits, such as:
- Rapid access of information
- Efficient access of information
- Enforcement of uniqueness constraints
Correct use of indexes can make the difference between a top performing database with high customer satisfaction and a non-performing database with low customer satisfaction.
Types of Indexes
SQL Server has two major types of indexes:
The index type refers to the way the index is stored internally by SQL Server, so a table or view can contain both types of indexes.
A clustered index defines the physical order in which the data in a table is stored. Only one clustered index can be defined per table. In more detail:
- Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order.
- The only time the data rows in a table are stored in sorted order is when the table contains a clustered index. When a table has a clustered index, the table is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap.
- With a clustered index the rows are stored physically on the disk in the same order as the index. There can therefore be only one clustered index.
- With a non clustered index there is a second list that has pointers to the physical rows. You can have many non clustered indexes, although each new index will increase the time it takes to write new records.
- It is generally faster to read from a clustered index if you want to get back all the columns. You do not have to go first to the index and then to the table.
- Writing to a table with a clustered index can be slower, if there is a need to rearrange the data.
A comparison of a non-clustered index with a clustered index with an example
As an example of a non-clustered index, let’s say that we have a non-clustered index on the EmployeeID column. A non-clustered index will store both the value of the EmployeeID AND a pointer to the row in the Employee table where that value is actually stored. But a clustered index, on the other hand, will actually store the row data for a particular EmployeeID – so if you are running a query that looks for an EmployeeID of 15, the data from other columns in the table like EmployeeName, EmployeeAddress, etc. will all actually be stored in the leaf node of the clustered index itself.
Why can a table have only one clustered index?
Because a clustered index determines the order in which the rows will be stored on disk, having more than one clustered index on one table is impossible. Imagine if we have two clustered indexes on a single table – which index would determine the order in which the rows will be stored? Since the rows of a table can only be sorted to follow just one index, having more than one clustered index is not allowed.
Summary of the differences between clustered and non-clustered indexes
Here’s a summary of the differences:
- A clustered index determines the order in which the rows of the table will be stored on disk – and it actually stores row-level data in the leaf nodes of the index itself. A non-clustered index has no effect on the order in which the rows are stored.
- Using a clustered index is an advantage when groups of data that can be clustered are frequently accessed by some queries. This speeds up retrieval because the data lives close to each other on disk. Also, if data is accessed in the same order as the clustered index, the retrieval will be much faster because the physical data stored on disk is sorted in the same order as the index.
- A clustered index can be a disadvantage because any time a change is made to a value of an indexed column, the subsequent possibility of re-sorting rows to maintain order is a definite performance hit.
- A table can have multiple non-clustered indexes. But, a table can have only one clustered index.
- Non-clustered indexes store both a value and a pointer to the actual row that holds that value. Clustered indexes don't need to store a pointer to the actual row, because the rows in the table are stored on disk in the same exact order as the clustered index – and the clustered index actually stores the row-level data in its leaf nodes.
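A minimal sketch of creating the two index types (the table and index names are hypothetical):

```sql
CREATE TABLE dbo.Employee (
    EmployeeID   int          NOT NULL,
    EmployeeName nvarchar(50) NOT NULL
);

-- One clustered index: physically orders the rows by EmployeeID.
CREATE CLUSTERED INDEX IX_Employee_ID
    ON dbo.Employee (EmployeeID);

-- Any number of non-clustered indexes: separate structures holding
-- the key value plus a pointer back to the row.
CREATE NONCLUSTERED INDEX IX_Employee_Name
    ON dbo.Employee (EmployeeName);
```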
Scenario 1: Primary Key will default to Clustered Index
In this case we create only a primary key, and when we check the kind of index created on the table we notice that SQL Server has automatically created a clustered index over it.
Scenario 2: Primary Key is defined as a Non-clustered Index
In this case we explicitly define the primary key as a non-clustered index, and it is created as one. This proves that a primary key can be a non-clustered index.
Scenario 3: Primary Key defaults to Non-Clustered Index with another column defined as a Clustered Index
In this case we create a clustered index on another column; SQL Server then automatically creates the primary key as a non-clustered index, because the clustered index is already specified on another column.
Scenario 4: Primary Key defaults to Clustered Index with other index defaults to Non-clustered index
In this case we create two indexes on the table but do not specify the index type on either column. When we check the results we notice that the primary key has defaulted to a clustered index and the other column to a non-clustered index.
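The first three scenarios above can be sketched as follows (table and constraint names are hypothetical):

```sql
-- Scenario 1: primary key defaults to a clustered index.
CREATE TABLE dbo.T1 (ID int PRIMARY KEY);

-- Scenario 2: primary key explicitly defined as non-clustered.
CREATE TABLE dbo.T2 (ID int PRIMARY KEY NONCLUSTERED);

-- Scenario 3: the clustered index is placed on another column,
-- so the primary key becomes non-clustered automatically.
CREATE TABLE dbo.T3 (
    ID   int PRIMARY KEY,
    Col1 int,
    CONSTRAINT UQ_T3_Col1 UNIQUE CLUSTERED (Col1)
);

-- Inspect which index type each constraint produced.
SELECT OBJECT_NAME(object_id) AS TableName, name, type_desc
FROM sys.indexes
WHERE object_id IN (OBJECT_ID('dbo.T1'), OBJECT_ID('dbo.T2'),
                    OBJECT_ID('dbo.T3'));
```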
What's the difference between a primary key and a unique key?
Both a primary key and a unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a non-clustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows a single NULL.
- TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
- TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
- TRUNCATE removes all rows from a table, but the table structure, its columns, constraints, indexes and so on, remains. The counter used by an identity for new rows is reset to the seed for the column.
- You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint. Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger.
- TRUNCATE cannot be rolled back once the transaction containing it has been committed.
- TRUNCATE is a DDL command.
- TRUNCATE resets the identity of the table.
- DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
- If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
- DELETE can be used with or without a WHERE clause.
- DELETE activates triggers.
- DELETE can be rolled back.
- DELETE is a DML command.
- DELETE does not reset the identity of the table.
Note: both DELETE and TRUNCATE can be rolled back when surrounded by a TRANSACTION, as long as the transaction is still open in the current session. Once the transaction is committed, neither DELETE nor TRUNCATE can be rolled back.
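A quick sketch of that note (dbo.Demo is a hypothetical table with some rows in it):

```sql
-- TRUNCATE, like DELETE, can be rolled back inside an open transaction.
BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.Demo;
    SELECT COUNT(*) FROM dbo.Demo;   -- 0: all rows are gone
ROLLBACK TRANSACTION;

SELECT COUNT(*) FROM dbo.Demo;       -- original row count is restored
```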
The purpose of SQL Server Agent is to ease the implementation of tasks for the DBA. With its full-function scheduling engine, it allows you to schedule your own jobs and scripts.
Can a stored procedure call itself (i.e., a recursive stored procedure)?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem.
Is it possible to run a stored procedure whenever SQL Server starts?
The answer is yes. SQL Server provides the sp_procoption system procedure, which lets you run a stored procedure automatically each time the SQL Server service is started. One common use of this option is to keep track of SQL Server uptime.
Syntax: exec sp_procoption 'stored procedure name', 'startup', 'ON|OFF'
The first input parameter is the stored procedure name, the second is STARTUP, and setting the third parameter to ON configures the procedure to execute automatically when SQL Server starts.
- You must have sysadmin privileges to use sp_procoption.
- To configure sp_procoption, you must be in the master database.
- The stored procedure being configured must not require any input or output parameters.
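A hedged example of marking a startup procedure (the procedure name is hypothetical; per the requirements above, it must live in master and take no parameters):

```sql
USE master;
GO
-- Mark the procedure to run automatically at each service startup.
EXEC sp_procoption @ProcName    = N'dbo.usp_RecordStartupTime',
                   @OptionName  = 'startup',
                   @OptionValue = 'on';
```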
The RANK() function ranks each row of a result set. It can also partition the data for ranking. Its OVER clause takes two parts, a PARTITION BY clause and an ORDER BY clause. As the name suggests, the PARTITION BY clause is used to partition the result set into chunks, and ORDER BY defines the order of the rows.
SELECT TOP (10)
    OrganizationLevel, JobTitle, LoginID, HireDate,
    RANK() OVER (ORDER BY HireDate ASC) AS 'Rank'
FROM HumanResources.Employee; -- AdventureWorks sample table, the assumed source of these columns
The RANK()function in SQL Server returns the position of a value within the partition of a result set, with gaps in the ranking where there are ties.
The DENSE_RANK() function in SQL Server returns the position of a value within the partition of a result set, leaving no gaps in the ranking where there are ties.
The RANK() function does not always assign consecutive numbers to rows: after rank 6, the next rank can be 9 if three rows tie at 6. If you use DENSE_RANK(), it will give 7 after 6.
The ROW_NUMBER function simply assigns sequential numbering to the records of a result-set or to the records within groups of a result-set. You can create these groupings (partition the records) using the PARTITION BY clause. The syntax for ROW_NUMBER function is:
ROW_NUMBER() OVER ( [PARTITION BY partition_value_expression, ... [ n ]] ORDER BY order_value_expression, ... [ n ])
NTILE() is the fourth ranking function; it distributes the rows into a specified number of roughly equal groups.
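A self-contained sketch of how the four ranking functions differ on tied values:

```sql
WITH Scores AS (
    SELECT Score
    FROM (VALUES (10), (20), (20), (30)) AS v(Score)
)
SELECT Score,
       RANK()       OVER (ORDER BY Score) AS rnk,        -- 1, 2, 2, 4 (gap after tie)
       DENSE_RANK() OVER (ORDER BY Score) AS dense_rnk,  -- 1, 2, 2, 3 (no gap)
       ROW_NUMBER() OVER (ORDER BY Score) AS row_num,    -- 1, 2, 3, 4 (always unique)
       NTILE(2)     OVER (ORDER BY Score) AS bucket      -- 1, 1, 2, 2 (two groups)
FROM Scores;
```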
COALESCE vs. ISNULL
The COALESCE and ISNULL T-SQL functions are used to return the first non-null expression among the input arguments; we use them to handle NULL values.
I’ve been told that COALESCE is less costly than ISNULL, but research doesn’t indicate that. ISNULL takes only two parameters, the field being evaluated for NULL, and the result you want if it is evaluated as NULL. COALESCE will take any number of parameters, and return the first value encountered that isn’t NULL.
COALESCE “returns the first nonnull expression among its arguments,” and ISNULL replaces NULL with the specified replacement value.
DECLARE
    @x AS INT = NULL,
    @y AS INT = 1759,
    @z AS INT = 42;
SELECT COALESCE(@x, @y, @z);
SELECT ISNULL(@x, @y);
When you run this code, both functions return the integer 1759.
One apparent advantage that COALESCE has over ISNULL is that it supports more than two inputs, whereas ISNULL supports only two. Another advantage of COALESCE is that it’s a standard function (namely, defined by the ISO/ANSI SQL standards), whereas ISNULL is T-SQL–specific.
Data Type of Expression
COALESCE and ISNULL differ in how they determine the data type of the resulting expression.
The data type of a COALESCE expression is the data type of the input argument with the highest data type precedence. If all inputs are the untyped NULL literal, you get an error.
The data type of an ISNULL expression is the data type of the first input. If the first input is an untyped NULL literal, the data type of the result is the type of the second input. If both inputs are the untyped literal, the type of the output is INT.
As an example, consider the following code and its output, which Table 1 shows.
DECLARE
    @x AS VARCHAR(3) = NULL,
    @y AS VARCHAR(10) = '1234567890';
SELECT
    COALESCE(@x, @y) AS COALESCExy, COALESCE(@y, @x) AS COALESCEyx,
    ISNULL(@x, @y) AS ISNULLxy, ISNULL(@y, @x) AS ISNULLyx;
Table 1: Data Type of Output
COALESCExy COALESCEyx ISNULLxy ISNULLyx
———- ———- ——– ———-
1234567890 1234567890 123 1234567890
Notice that with COALESCE, regardless of which input is specified first, the type of the output is VARCHAR(10)—the one with the higher precedence. However, with ISNULL, the type of the output is determined by the first input. So when the first input is of a VARCHAR(3) data type (the expression aliased as ISNULLxy), the output is VARCHAR(3). As a result, the returned value that originated in the input @y is truncated after three characters.
In the last example, all inputs are character strings, just of different lengths. What if the inputs are of different data type families? The same rules still apply. For example, the following COALESCE expression accepts a character string as the first input and an integer as a second input:
SELECT COALESCE('abc', 1);
An integer has a higher data type precedence than a character string; therefore, SQL Server tries to convert the value ‘abc’ to an integer and naturally fails. This code generates the following conversion error:
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value
‘abc’ to data type int.
Try a similar expression, using ISNULL instead:
SELECT ISNULL('abc', 1);
Now the returned type is based on the first input, and therefore SQL Server doesn’t try to convert the character string. So you get the output ‘abc’.
As for untyped NULL literal inputs, as I mentioned, if all inputs to COALESCE are the untyped NULL literal, you get an error. To demonstrate this, run the following code:
SELECT COALESCE(NULL, NULL);
You get the following error:
Msg 4127, Level 16, State 1, Line 1
At least one of the arguments to COALESCE must be an
expression that is not the NULL constant.
As long as there’s a typed NULL in the input, the code succeeds:
SELECT COALESCE(CAST(NULL AS INT), NULL);
You get a NULL typed as an integer as the output.
The ISNULL function does allow both inputs to be untyped NULLs, in which case it returns a NULL typed as an integer as the output. To demonstrate this, run the following code:
SELECT ISNULL(NULL, NULL);
You get a NULL typed as an integer as the output. To prove this, run the following code:
SELECT ISNULL(NULL, NULL) AS col1 INTO dbo.T1;
SELECT TYPE_NAME(system_type_id) AS col_type
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.T1')
AND name = N'col1';
You get ‘int’ as output.
What is the major difference between CAST and CONVERT in SQL, given that both perform data type conversion?
While both functions serve the same objective, conversion of data, there are a few differences between the two. The CONVERT function is better when converting date and time values, fractional numbers, and monetary signifiers, because it accepts a style argument that controls the output format. The CAST function, however, is ANSI standard and therefore more portable: it can be used with other database systems without the user having to change much. CAST is also handier when converting decimal and numeric values, as the function can keep the decimals from the original expressions. Many experts suggest using both together: CAST for plain conversions, and CONVERT for the formatting tasks it handles better.
CAST is ANSI SQL-92; CONVERT is specific to SQL Server.
CAST ( [Expression] AS Datatype ), e.g. SELECT CAST(@A AS int)
CONVERT ( data_type(length), expression, style )
Style – style values control the character format produced when converting datetime or smalldatetime values to character data. Add 100 to a style value to get a four-place year that includes the century (yyyy).
In this example we take style value 108, which produces the format hh:mi:ss.
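A short sketch using style 108:

```sql
-- Style 108 formats a datetime as hh:mi:ss when converting to character data.
SELECT CONVERT(varchar(8), GETDATE(), 108) AS TimeOnly;  -- e.g. '14:35:07'
```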
FUNCTIONS: A function can take zero or many arguments and must return a value.
To execute a UDF we need to specify dbo.FunctionName(). If you omit the dbo. schema prefix, SQL Server treats the name as a system-defined function and raises an error when you execute it.
For normal scalar functions (those that return int, varchar, float, and so on) we need BEGIN...END statements in the function body. For inline table-valued functions (those that return a TABLE) we don't need BEGIN...END.
In an inline table-valued function you don't specify the structure of the returned table, but in a multi-statement table-valued function you do, and you also use BEGIN and END.
The code below defines and then uses a very simple scalar UDF that simply triples the input:
CREATE FUNCTION dbo.Triple(@Input int)
RETURNS int
AS
BEGIN
    DECLARE @Result int;
    SET @Result = @Input * 3;
    RETURN @Result;
END
GO
SELECT DataVal, dbo.Triple(DataVal) AS Triple FROM dbo.LargeTable;
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-Valued and Multi-statement Table-valued.
How do I create and use an Inline Table-Value User-Defined Function?
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.
CREATE FUNCTION CustomersByContinent (@Continent varchar(30))
RETURNS TABLE
AS
RETURN
    -- logic here: a single SELECT statement
How do I create and use a Multi-statement Table-Value User-Defined Function?
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view as the function can support multiple T-SQL statements to build the final result where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL select command or a group of them gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the create function command you must define the table structure that is being returned. After creating this type of user-defined function, I can use it in the FROM clause of a T-SQL command unlike the behavior found when using a stored procedure which can also return record sets.
CREATE FUNCTION dbo.customersbycountry ( @Country varchar(15) )
RETURNS @CustomersbyCountryTab table (
    [CustomerID] [nchar] (5),
    [CompanyName] [nvarchar] (40)
)
AS
BEGIN
    -- multiple T-SQL statements that populate @CustomersbyCountryTab
    RETURN
END
What are the benefits of User-Defined Functions?
The benefits to SQL Server User-Defined functions are numerous. First, we can use these functions in so many different places when compared to the SQL Server stored procedure. The ability for a function to act like a table (for Inline table and Multi-statement table functions) gives developers the ability to break out complex logic into shorter and shorter code blocks. This will generally give the additional benefit of making the code less complex and easier to write and maintain. In the case of a Scalar User-Defined Function, the ability to use this function anywhere you can use a scalar of the same data type is also a very powerful thing. Combining these advantages with the ability to pass parameters into these database objects makes the SQL Server User-Defined function a very powerful tool.
We can pass parameters to UDFs, but we cannot pass them to views.
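As a sketch of why table-valued functions are so useful, the function defined above can sit in a FROM clause and participate in JOINs (dbo.Orders is an assumed Northwind-style table):

```sql
-- A table-valued function can appear in the FROM clause and in JOINs,
-- unlike a stored procedure.
SELECT c.CustomerID, c.CompanyName, o.OrderID
FROM dbo.customersbycountry('USA') AS c
JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;
```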
The basic difference between Decimal and Numeric:
They are exactly the same – the same thing under two different names.
The basic difference between Decimal/Numeric and Float :
Float is an approximate-number data type, which means that not all values in the data type range can be represented exactly.
Decimal/Numeric is a fixed-precision data type, which means that all the values in the data type range can be represented exactly, with precision and scale.
Converting from Decimal or Numeric to float can cause some loss of precision. For the Decimal or Numeric data types, SQL Server considers each specific combination of precision and scale as a different data type: DECIMAL(4,2) and DECIMAL(6,4) are different data types. This means that 11.22 and 11.2222 are values of different types, though this is not the case for float: for FLOAT(6), 11.22 and 11.2222 have the same data type.
Microsoft says that in SQL Server 2000 'float' and 'real' are approximations. They aren't going to be exact, and when converting these values to other data types you can get unpredictable results. Avoid these data types, especially for financial applications.
Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons.
when should you choose one instead of the other?
In general, use DECIMAL instead of FLOAT whenever you can. The main problem with floating point numbers is that they are not precise. DECIMAL values are precise. In other words, a FLOAT value will be an approximate value whereas a DECIMAL value will be an exact value.
Money can hold up to 4 decimal places. If you need more than 4 decimal places for a value, go for decimal(p,s).
Money and Decimal are fixed numeric datatypes while Float is an approximate numeric datatype. Results of mathematical operations on floating point numbers can seem unpredictable, especially when rounding is involved.
Also, Money doesn't provide any advantages over Decimal: just use Decimal with the appropriate precision and scale for your currency or database schema.
difference between numeric(18,0) and decimal(18,0)
They are the same. Numeric is functionally equivalent to decimal. DECIMAL and NUMERIC data types are numeric data types with fixed precision and scale.
more useful information extracted from the cited book
- numeric(precision, scale) = decimal(precision, scale) – Exact Numeric Data Types
- real = float(24) – Approximate Numeric Data Types
- All exact numeric types always produce the same result, regardless of which kind of processor architecture is being used or the magnitude of the numbers
- The parameter supplied to the float data type defines the number of bits that are used to store the mantissa of the floating point number.
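A small demonstration of the approximate-vs-exact distinction described above:

```sql
-- FLOAT is approximate: ten additions of 0.1 may not sum to exactly 1.
-- DECIMAL is exact: the same arithmetic stays precise.
DECLARE @f float = 0.0, @d decimal(10,1) = 0.0, @i int = 0;
WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;
    SET @d = @d + 0.1;
    SET @i = @i + 1;
END
SELECT @f AS FloatSum,    -- may differ from 1 in the last bits
       @d AS DecimalSum;  -- exactly 1.0
```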
A UNION statement effectively does a SELECT DISTINCT on the result set. If you know that all the records returned by your union are unique, use UNION ALL instead; it gives faster results.
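A self-contained sketch of the difference:

```sql
-- UNION removes duplicates (an extra distinct step);
-- UNION ALL simply concatenates the two result sets.
SELECT n FROM (VALUES (1), (2)) AS a(n)
UNION
SELECT n FROM (VALUES (2), (3)) AS b(n);   -- 1, 2, 3 (duplicate 2 removed)

SELECT n FROM (VALUES (1), (2)) AS a(n)
UNION ALL
SELECT n FROM (VALUES (2), (3)) AS b(n);   -- 1, 2, 2, 3 (duplicate kept)
```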
Temporary tables can be created at runtime and can perform all the kinds of operations that a normal table can. But, based on the table type, their scope is limited. These tables are created inside the tempdb database. There are two types: local (#) and global (##).
When to Use Temporary Tables?
Below are the scenarios where we can use temporary tables:
- When we are doing large number of row manipulation in stored procedures.
- This is useful to replace the cursor. We can store the result set data into a temp table, then we can manipulate the data from there.
- When we are having a complex join operation.
Alternative Approach: Table Variable
An alternative to the temporary table is the table variable, which can perform all the kinds of operations that we can perform in a temp table.
When to Use Table Variable Over Temp Table
A table variable is always useful for smaller amounts of data. If the result set returns a large number of records, we need to go for a temp table.
Table variables are often described as the fastest because they behave like any other variables, but note that, like temp tables, they are ultimately backed by tempdb rather than living purely in RAM.
If you will be repeatedly adding and deleting large numbers of rows from the table, then use a #temporary table.
If you need an index that cannot be created implicitly through a UNIQUE or PRIMARY KEY constraint then you need a #temporary table as it is not possible to create these on table variables.
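A side-by-side sketch of the two options (names are hypothetical; on older SQL Server versions, only constraint-based indexes can be declared on table variables):

```sql
-- Local temporary table: visible to this session, stored in tempdb.
CREATE TABLE #Orders (OrderID int PRIMARY KEY, Amount decimal(10,2));

-- Table variable: scoped to the batch or procedure that declares it.
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, Amount decimal(10,2));

INSERT INTO #Orders VALUES (1, 10.00);
INSERT INTO @Orders VALUES (1, 10.00);

-- An explicit non-clustered index is possible only on the temp table.
CREATE NONCLUSTERED INDEX IX_Orders_Amount ON #Orders (Amount);
```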
We can have only one clustered index on a table, but it can be defined on more than one column.
A clustered index alters the way that the rows are stored. When you create a clustered index on a column (or a number of columns), SQL server sorts the table’s rows by that column(s). It is like a dictionary, where all words are sorted in alphabetical order in the entire book.
A non-clustered index, on the other hand, does not alter the way the rows are stored in the table. It creates a completely different object within the table that contains the column(s) selected for indexing and a pointer back to the table’s rows containing the data.
There are times when a non-clustered index can improve the performance of certain queries more than a clustered index. Because of that, there are some situations where the performance of some queries can be improved by creating a non-clustered index which is identical to the clustered index.
To be clear, having a non-clustered index along with the clustered index on the same columns will degrade performance of updates, inserts, and deletes, and it will take additional space on the disk. These drawbacks of adding additional indexes need to be considered before adding any new nonclustered index.
No, SQL Server does not automatically create non-clustered indexes. The only indexes that get created automatically:
- the clustered index on your primary key (unless you specify otherwise – if you define your primary key to be nonclustered, then a nonclustered index will be created)
- a unique nonclustered index when you apply a UNIQUE CONSTRAINT to a column (or set of columns)
Difference between @@IDENTITY, SCOPE_IDENTITY() and IDENT_CURRENT()
Most of the time we use SCOPE_IDENTITY()
SELECT @@IDENTITY returns the last IDENTITY value produced on a connection, regardless of the table that produced the value and regardless of the scope of the statement that produced it. @@IDENTITY returns the last identity value entered into a table in your current session, but it is not limited to the current scope: it includes identity values created by any trigger or user-defined function.
SELECT SCOPE_IDENTITY() returns the last IDENTITY value produced on a connection and within the same scope, regardless of the table that produced the value. Like @@IDENTITY, SCOPE_IDENTITY() returns the last identity value created in the current session, but it also limits it to your current scope. In other words, it returns the last identity value that you explicitly created, excluding any identity created by a trigger or user-defined function.
SELECT IDENT_CURRENT('tablename') returns the last IDENTITY value produced in a table. It does not depend on any session or scope; instead it is limited to the specified table. IDENT_CURRENT returns the identity value generated for a specific table in any session and any scope.
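A hedged sketch of the difference (the tables and trigger are hypothetical: dbo.Parent has an identity column and an insert trigger that writes into dbo.AuditLog, which also has an identity column):

```sql
INSERT INTO dbo.Parent (Name) VALUES (N'row');

SELECT SCOPE_IDENTITY()            AS LastInCurrentScope,     -- Parent's new ID
       @@IDENTITY                  AS LastInSession,          -- AuditLog's ID (the trigger fired)
       IDENT_CURRENT('dbo.Parent') AS LastForTableAnySession; -- Parent, any session/scope
```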
CURSORS: You can use a single UPDATE statement to update many rows of data, but there are times when you want to loop through a series of rows and perform processing for each row. In this case you can use a cursor.
Please note that cursors are the SLOWEST way to access data inside SQL Server. They should only be used when you truly need to access one row at a time. The only reason I can think of for that is to call a stored procedure on each row.
The example below declares the cursor with DECLARE db_cursor CURSOR READ_ONLY. The READ_ONLY clause is important: it dramatically improves the performance of the cursor.
DECLARE @name VARCHAR(50), @path VARCHAR(256), @fileName VARCHAR(256);
SET @path = 'C:\Backup\';
DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name FROM master.sys.databases
WHERE name NOT IN ('master','model','msdb','tempdb');
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '_' + '.BAK';
    BACKUP DATABASE @name TO DISK = @fileName;
    FETCH NEXT FROM db_cursor INTO @name;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
Based on the example above, cursors include these components:
- DECLARE statements – Declare variables used in the code block
- SET\SELECT statements – Initialize the variables to a specific value
- DECLARE CURSOR statement – Populate the cursor with values that will be evaluated
- NOTE – There are an equal number of variables in the DECLARE <cursor_name> CURSOR FOR statement as there are in the SELECT statement. This could be 1 or many variables and associated columns.
- OPEN statement – Open the cursor to begin data processing
- FETCH NEXT statements – Assign the specific values from the cursor to the variables
- NOTE – This logic is used for the initial population before the WHILE statement and then again during each loop in the process as a portion of the WHILE statement
- WHILE statement – Condition to begin and continue data processing
- BEGIN…END statement – Start and end of the code block
- NOTE – Based on the data processing multiple BEGIN…END statements can be used
- Data processing – In this example, this logic is to backup a database to a specific path and file name, but this could be just about any DML or administrative logic
- CLOSE statement – Releases the current data and associated locks, but permits the cursor to be re-opened
- DEALLOCATE statement – Destroys the cursor and releases the resources it used.
Explain the disadvantages/limitation of the cursor.
A cursor requires a network roundtrip each time it fetches a record, consuming network resources.
While processing data, it holds locks on part of the table, or on the whole table. Because a cursor works row by row, it takes time.
How can you avoid cursors?
The main purpose that cursors fulfill is traversing of the row set. Cursors can be best avoided by:
- Using the SQL WHILE loop: using a WHILE loop we can insert the result set into a temporary table.
- User defined functions: cursors are sometimes used to perform calculations on the resulting row set. This can also be achieved by creating a user defined function to suit the needs.
- Also by JOINs, table variables, and temp tables.
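As an illustration of the set-based alternative, a row-by-row cursor update can usually be replaced with a single UPDATE joined to another table. The `Orders` and `Customers` tables and their columns below are hypothetical, used only to sketch the pattern:

```sql
-- Cursor-style thinking: loop over orders, look up each customer's
-- discount, and update the order total one row at a time.
-- Set-based alternative: one UPDATE with a JOIN does it in a single pass.
UPDATE o
SET o.Total = o.Total * (1 - c.DiscountRate)
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID
WHERE c.DiscountRate > 0;
```

The engine can optimize this single statement far better than a loop of single-row updates.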
Users can group two or more Transact-SQL statements into a single transaction using the following statements:
- Begin Transaction
- Rollback Transaction
- Commit Transaction
BEGIN TRAN
-- ... statements ...
IF (@intErrorCode <> 0)
BEGIN
PRINT 'Unexpected error occurred!'
ROLLBACK TRAN
END
ELSE
COMMIT TRAN
SQL Server allows you to nest transactions. Basically, this feature means that a new transaction can start even though the previous one is not complete. Transact-SQL allows you to nest transaction operations by issuing nested BEGIN TRAN commands. The @@TRANCOUNT automatic variable can be queried to determine the level of nesting – 0 indicates no nesting, 1 indicates nesting one level deep, and so forth.
A COMMIT issued against any transaction except the outermost one doesn’t commit any changes to disk – it merely decrements the @@TRANCOUNT automatic variable. A ROLLBACK, on the other hand, works regardless of the level at which it is issued, but rolls back all transactions, regardless of the nesting level. Though this is counterintuitive, there’s a very good reason for it. If a nested COMMIT actually wrote changes permanently to disk, an outer ROLLBACK wouldn’t be able to reverse those changes since they would already be recorded permanently.
When you explicitly begin a transaction, the @@TRANCOUNT automatic variable count increases from 0 to 1; when you COMMIT, the count decreases by one; when you ROLLBACK, the count is reduced to 0. As you see, the behavior of COMMIT and ROLLBACK is not symmetric. If you nest transactions, COMMIT always decreases the nesting level by 1, as you can see illustrated in Figure 1. The ROLLBACK command, on the other hand, rolls back the entire transaction, illustrated in Figure 2. This asymmetry between COMMIT and ROLLBACK is the key to handling errors in nested transactions.
|Figure 1: A COMMIT always balances a BEGIN TRANSACTION by reducing the transaction count by one.|
|Figure 2: A single ROLLBACK always rolls back the entire transaction.|
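The asymmetry can be demonstrated with a short sketch that prints @@TRANCOUNT at each level:

```sql
BEGIN TRAN             -- @@TRANCOUNT goes from 0 to 1
PRINT @@TRANCOUNT      -- prints 1
    BEGIN TRAN         -- nested: @@TRANCOUNT goes from 1 to 2
    PRINT @@TRANCOUNT  -- prints 2
    COMMIT TRAN        -- inner COMMIT only decrements: 2 to 1
PRINT @@TRANCOUNT      -- prints 1; nothing is on disk yet
ROLLBACK TRAN          -- rolls back the ENTIRE transaction
PRINT @@TRANCOUNT      -- prints 0
```

The inner COMMIT did not make anything permanent, which is why the outer ROLLBACK can still undo all the work.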
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
This isolation level allows dirty reads. One transaction may see uncommitted changes made by some other transaction.
To maintain the highest level of isolation, a DBMS usually acquires locks on data, which may result in a loss of concurrency and a high locking overhead. This isolation level relaxes this property.
The advantage is that it can be faster in some situations. The disadvantage is the result can be wrong (data which hasn’t been committed yet could be returned) and there is no guarantee that the result is repeatable. If you care about accuracy, don’t use READ UNCOMMITTED.
Example: open two query windows in SQL Server Management Studio. In the first window, write BEGIN TRAN followed by an UPDATE/DELETE/INSERT query without a ROLLBACK/COMMIT; after executing it, a SELECT in the same window returns data immediately. In a second window, the same SELECT never returns – it shows Executing… forever, because the first window has not issued a COMMIT/ROLLBACK. To stop the SELECT from blocking, run
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED before the SELECT query.
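The two-window scenario above can be sketched as follows. The `Employee_Test` table appears later in this document; its `Emp_ID` and `Emp_Name` columns here are assumptions for illustration:

```sql
-- Session 1: start a transaction and leave it open (no COMMIT/ROLLBACK)
BEGIN TRAN
UPDATE dbo.Employee_Test SET Emp_Name = 'Temp' WHERE Emp_ID = 1;

-- Session 2: a plain SELECT now blocks, waiting on session 1's lock.
-- With READ UNCOMMITTED it returns immediately, but may return the
-- dirty, uncommitted value 'Temp':
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Emp_ID, Emp_Name FROM dbo.Employee_Test WHERE Emp_ID = 1;
```

If session 1 then rolls back, session 2 has read data that never logically existed.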
The @@ERROR automatic variable is used to implement error handling code. It contains the error ID produced by the last SQL statement executed during a client’s connection. When a statement executes successfully, @@ERROR contains 0. To determine if a statement executes successfully, an IF statement is used to check the value of @@ERROR immediately after the target statement executes. It is imperative that @@ERROR be checked immediately after the target statement, because its value is reset to 0 when the next statement executes successfully. If a trappable error occurs, @@ERROR will have a value greater than 0. SQL Server resets the @@ERROR value after every successful command, so you must immediately capture the @@ERROR value. Most of the time, you’ll want to test for changes in @@ERROR right after any INSERT, UPDATE, or DELETE statement.
IF (@@ERROR <> 0)
BEGIN
PRINT 'Unexpected error occurred!'
ROLLBACK TRAN
RETURN 1
END
What is a Trigger
A trigger is a special kind of stored procedure that executes in response to certain actions on a table, such as the insertion, deletion, or update of data. It is a database object bound to a table and executed automatically. You can’t explicitly invoke triggers; the only way to fire them is to perform the required action on the table they are attached to.
Types Of Triggers
There are three action query types in SQL: INSERT, UPDATE, and DELETE. Accordingly, there are trigger types (and hybrids) that come from mixing and matching the events and timings that fire them. Basically, triggers are classified into two main types:
- After Triggers (For Triggers)
- Instead Of Triggers
(i) After Triggers
These triggers run after an insert, update or delete on a table. They are not supported for views.
AFTER TRIGGERS can be classified further into three types as:
- AFTER INSERT Trigger.
- AFTER UPDATE Trigger.
- AFTER DELETE Trigger.
AFTER INSERT TRIGGER:
CREATE TRIGGER trgAfterInsert ON [dbo].[Employee_Test]
FOR INSERT -- or FOR DELETE / FOR UPDATE (AFTER INSERT/DELETE/UPDATE), same as above
AS
-- queries
GO
The ON clause specifies the table to which the trigger is to be attached. FOR INSERT specifies that this is an AFTER INSERT trigger. In place of FOR INSERT, AFTER INSERT can be used; both mean the same.
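As a fuller sketch of an AFTER INSERT trigger, the audit table and the `Emp_ID` column below are assumptions used only for illustration:

```sql
-- Hypothetical audit table to record inserts on Employee_Test
CREATE TABLE dbo.Employee_Audit (
    Emp_ID INT,
    AuditAction VARCHAR(100),
    AuditTimestamp DATETIME DEFAULT GETDATE()
);
GO

CREATE TRIGGER trgAfterInsert ON [dbo].[Employee_Test]
AFTER INSERT
AS
BEGIN
    -- The inserted pseudo-table holds the newly inserted rows
    INSERT INTO dbo.Employee_Audit (Emp_ID, AuditAction)
    SELECT i.Emp_ID, 'Inserted record'
    FROM inserted i;
END
GO
```

Every INSERT on Employee_Test now automatically writes one audit row per inserted row.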
(ii) Instead Of Triggers
These can be used as an interceptor for anything that anyone tries to do on our table or view. If you define an Instead Of trigger on a table for the Delete operation, then when someone tries to delete rows, the rows will not actually get deleted (unless you issue another delete instruction from within the trigger).
INSTEAD OF TRIGGERS can be classified further into three types as:
- INSTEAD OF INSERT/DELETE/UPDATE Trigger.
CREATE TRIGGER trgInsteadOfDelete ON [dbo].[Employee_Test]
INSTEAD OF DELETE
AS
-- queries
GO
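A common use of an INSTEAD OF DELETE trigger is the soft-delete pattern. The `IsDeleted` and `Emp_ID` columns below are assumptions used only for illustration:

```sql
-- Intercept deletes and flag rows instead of removing them
CREATE TRIGGER trgInsteadOfDelete ON [dbo].[Employee_Test]
INSTEAD OF DELETE
AS
BEGIN
    UPDATE e
    SET e.IsDeleted = 1
    FROM dbo.Employee_Test e
    INNER JOIN deleted d ON d.Emp_ID = e.Emp_ID;
    -- The original rows are NOT deleted unless we issue a
    -- real DELETE from within this trigger.
END
GO
```

A `DELETE FROM dbo.Employee_Test WHERE Emp_ID = 1` would then leave the row in place with its flag set.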
What is Clustered & Non-Clustered Index?
Clustered Index: a clustered index physically rearranges the data that users insert into the table. It works like a dictionary, where the actual data is stored in sorted order.
Non-Clustered Index: a non-clustered index contains pointers to the data that is stored in the data pages. It is like the index at the back of a book, where you see only references to the data.
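A minimal sketch of both index types; the `Products` table and its columns are assumptions for illustration:

```sql
-- The PRIMARY KEY creates a clustered index by default:
-- the rows themselves are physically ordered by ProductID.
CREATE TABLE dbo.Products (
    ProductID INT PRIMARY KEY,
    ProductName VARCHAR(50)
);

-- A non-clustered index stores sorted ProductName values
-- with pointers back to the data rows.
CREATE NONCLUSTERED INDEX IX_Products_Name
ON dbo.Products (ProductName);
```

A table can have only one clustered index (the data can be physically sorted only one way) but many non-clustered indexes.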
What are constraints?
SQL Server uses constraints to enforce limitations on the data that can be entered into a particular column in a table. There are the following types of constraints:
Unique, Default, Check, Primary Key, Foreign Key, Not Null.
What’s the difference between DELETE TABLE and TRUNCATE TABLE commands?
DELETE is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow. TRUNCATE TABLE also deletes all the rows in a table, but it won’t log the deletion of each row; instead it logs the deallocation of the data pages of the table, which makes it faster.
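Side by side, the two commands look like this; the `Orders` table is an assumption for illustration:

```sql
-- DELETE: logs each deleted row, and a WHERE clause is allowed
DELETE FROM dbo.Orders WHERE OrderDate < '2000-01-01';

-- TRUNCATE: logs only page deallocations, always removes ALL rows,
-- no WHERE clause allowed, and resets any IDENTITY seed
TRUNCATE TABLE dbo.Orders;
```

Note also that TRUNCATE cannot be used on a table referenced by a foreign key constraint, whereas DELETE can (subject to the constraint being satisfied).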
Unique Key Constraint:
The column values should retain uniqueness.
It allows null values in the column.
It will create non-clustered index by default.
Any number of unique constraints can be added to a table.
Primary Key Constraint:
A primary key enforces uniqueness on the column data in the table.
It won’t allow NULL values.
By default Primary key will create clustered index.
Only one Primary key can be created for a table.
Multiple columns can be consolidated to form a single primary key.
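The differences above can be summarized in one sketch; the table and column names are assumptions for illustration, and the hypothetical `dbo.Departments` table stands in for any referenced parent table:

```sql
CREATE TABLE dbo.Employees (
    EmpID INT PRIMARY KEY,          -- clustered index by default, no NULLs,
                                    -- only one per table
    Email VARCHAR(100) UNIQUE,      -- non-clustered index by default,
                                    -- NULL allowed, many per table
    Name VARCHAR(50) NOT NULL,                        -- NOT NULL constraint
    Salary DECIMAL(10,2) DEFAULT 0 CHECK (Salary >= 0), -- default + check
    DeptID INT REFERENCES dbo.Departments(DeptID)     -- foreign key
);
```

One design note: in SQL Server a UNIQUE constraint permits only a single NULL in the column, since two NULLs would be treated as duplicates for the constraint.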
#temp Table (Temporary Table)
A #temp table is a temporary table generally created to store session-specific data. It is like a normal table, but it is created and populated on disk in the system database tempdb, with a session-specific identifier packed onto the name to differentiate it from similarly-named #temp tables created in other sessions.
The data in this #temp table (in fact, the table itself) is visible only to the current scope. Generally, the table gets cleared up automatically when the current procedure goes out of scope, however, we should manually clean up the data when we are done with it.
A table variable is similar to a temporary table but with less overhead. It is often described as memory-resident, though it is also backed by tempdb. We should choose it when we need to store fewer than 100 records.
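A minimal sketch of both forms; the table and column names are assumptions for illustration:

```sql
-- #temp table: created in tempdb, visible for the current session,
-- supports indexes and statistics like a normal table
CREATE TABLE #TempResults (Id INT, Name VARCHAR(50));
INSERT INTO #TempResults VALUES (1, 'A');
SELECT * FROM #TempResults;
DROP TABLE #TempResults;   -- clean up manually when done

-- Table variable: scoped to the current batch or procedure,
-- cleaned up automatically, no explicit DROP needed
DECLARE @Results TABLE (Id INT, Name VARCHAR(50));
INSERT INTO @Results VALUES (1, 'A');
SELECT * FROM @Results;
```

For large row counts the #temp table usually wins, because the optimizer has statistics to work with.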
How to return XML in SQL Server?
We can use the FOR XML clause at the end of a query to return XML data from SQL Server: SELECT * FROM mytable FOR XML AUTO
1. Use WHERE clause filters as much as possible; the WHERE clause is the most important part of optimization.
2. Select only the fields you really require.
3. Joins are expensive in terms of time. Make sure you use all the keys that relate the two tables, don’t join to unused tables, and always try to join on indexed fields. The join type (INNER, OUTER) is important as well.
@@TRANCOUNT returns the number of active transactions for the current connection.
To RENAME DB: sp_renamedb 'oldname', 'newname'
What is a deadlock?
Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other’s piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated.
SQL Server detects deadlocks and terminates one user’s process.
With NOLOCK hint, the transaction isolation level for the SELECT statement is READ UNCOMMITTED. This means that the query may see dirty and inconsistent data. When displaying rough statistical information, NOLOCK can be very useful.
This is not a good idea to apply as a rule. Even if this dirty read behavior is OK for your mission critical web based application, a NOLOCK scan can cause error 601, which terminates the query due to data movement, as a result of the lack of locking protection.
SELECT * FROM Person.Contact WITH (NOLOCK) WHERE ContactID < 20
If we run our SELECT without NOLOCK we can see the locks that are taken if we use sp_lock.
The NOLOCK hint is the same as the READUNCOMMITTED hint and can be used as follows with the same results.
SELECT * FROM Person.Contact WITH (READUNCOMMITTED)
A check constraint specifies a condition that is enforced for each row of the table on which the constraint is defined. Once the constraint is defined, every insert or update to the data within the table is checked against it.
Can we create a Foreign Key with out Primary Key?
Yes. If the table has a unique key, then it is possible to create a foreign key constraint referencing it.
varchar columns store variable-length strings up to a specified maximum length. If a string is shorter than the maximum length, it is stored verbatim without any extra characters.
char columns store fixed-length strings of a set length. If a string is shorter than the set length, it is padded with extra characters so that its length equals the set length.
Use varchar when your strings do not have a fixed length (e.g. names, cities, etc.).
Use char when your strings are always going to be the same length (e.g. phone numbers, zip codes, etc.).
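The storage difference is easy to see with DATALENGTH, which returns the number of bytes actually stored:

```sql
DECLARE @v VARCHAR(10) = 'abc';  -- stored as 3 characters
DECLARE @c CHAR(10)    = 'abc';  -- padded with spaces to length 10

SELECT DATALENGTH(@v) AS VarcharBytes,  -- 3
       DATALENGTH(@c) AS CharBytes;     -- 10
```

The padding also matters for comparisons and concatenation: the char value carries trailing spaces that the varchar value does not.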
A subquery is a query that is nested inside a SELECT, INSERT, UPDATE, or DELETE statement, or inside another subquery. Subquery is an inner query or inner select, while the statement containing a subquery is also called an outer query or outer select.
A subquery can be used wherever an expression is allowed. There are certain limitations on using subqueries:
A subquery is subject to the following restrictions:
- The select list of a subquery introduced with a comparison operator can include only one expression or column name (except that EXISTS and IN operate on SELECT * or a list, respectively).
- If the WHERE clause of an outer query includes a column name, it must be join-compatible with the column in the subquery select list.
- The ntext, text, and image data types cannot be used in the select list of subqueries.
- Because they must return a single value, subqueries introduced by an unmodified comparison operator (one not followed by the keyword ANY or ALL) cannot include GROUP BY and HAVING clauses.
- The DISTINCT keyword cannot be used with subqueries that include GROUP BY.
- The COMPUTE and INTO clauses cannot be specified.
- ORDER BY can only be specified when TOP is also specified.
- A view created by using a subquery cannot be updated.
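The two most common subquery shapes are sketched below; the `Employees` and `Departments` tables and their columns are assumptions for illustration:

```sql
-- Subquery with a comparison operator: must return a single value
SELECT Name
FROM dbo.Employees
WHERE Salary > (SELECT AVG(Salary) FROM dbo.Employees);

-- Subquery with IN: may return a list of values
SELECT Name
FROM dbo.Employees
WHERE DeptID IN (SELECT DeptID FROM dbo.Departments WHERE Location = 'NY');
```

The first restriction in the list above is why the comparison-operator form must have exactly one column and one row in its result.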
ROW_NUMBER() returns a column, as an expression, that contains the row number within the result set. This is only a number used in the context of the result set; if the result changes, the ROW_NUMBER() will change.
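A minimal sketch; the `Employees` table and its columns are assumptions for illustration:

```sql
-- Number the rows by descending salary; the OVER clause
-- defines the ordering used to assign the numbers
SELECT ROW_NUMBER() OVER (ORDER BY Salary DESC) AS RowNum,
       Name,
       Salary
FROM dbo.Employees;
```

Filtering on Salary, or adding rows, changes which row gets which number, since RowNum exists only relative to this result set.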
A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement.
Global temporary tables (created with a double “##”) are visible to all sessions. You should always check for existence of the global temporary table before creating it… if it already exists, then you will get a duplicate object error.
Global temporary tables are dropped when the session that created it ends, and all other sessions have stopped referencing it.
- How to get @@ERROR and @@ROWCOUNT at the same time?
If @@ROWCOUNT is checked after the error-checking statement, it will have 0 as its value, as it will have been reset. And if @@ROWCOUNT is checked before the error-checking statement, then @@ERROR gets reset. To get @@ERROR and @@ROWCOUNT at the same time, capture both in the same statement and store them in local variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
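A complete sketch of the pattern; the `Employees` table and the UPDATE are assumptions for illustration:

```sql
DECLARE @RC INT, @ER INT;

UPDATE dbo.Employees SET Salary = Salary * 1.1 WHERE DeptID = 10;

-- Capture both values in a single statement, immediately after
-- the target statement, so neither gets reset by a later statement
SELECT @RC = @@ROWCOUNT, @ER = @@ERROR;

PRINT 'Rows affected: ' + CAST(@RC AS VARCHAR(10));
PRINT 'Error code: ' + CAST(@ER AS VARCHAR(10));
```

Any statement placed between the UPDATE and the SELECT, even a PRINT, would clobber one or both of the automatic variables.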
The OFFSET-FETCH clause provides you with an option to fetch only a window or page of results from the result set. OFFSET-FETCH can be used only with the ORDER BY clause.
- ORDER BY is mandatory to use the OFFSET and FETCH clauses.
- The OFFSET clause is mandatory with FETCH; you can never write ORDER BY … FETCH without OFFSET.
- TOP cannot be combined with OFFSET and FETCH in the same query expression.
Example 1 Skip first 10 rows from the sorted result set and return the remaining rows.
SELECT FirstName + ' ' + LastName FROM Employees ORDER BY FirstName OFFSET 10 ROWS;
Example 2- Skip first 10 rows from the sorted resultset and return next 5 rows.
SELECT FirstName + ' ' + LastName FROM Employees ORDER BY FirstName OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
There are workarounds for this problem in older versions of SQL Server (before 2012), in the form of temp tables, ROW_NUMBER(), and TOP.
It’s certainly easier to write a query using OFFSET and FETCH as there is less involved. There is one less column too because “RowNumber” is not needed for the OFFSET and FETCH version.
SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE
MERGE is a new feature that provides an efficient way to perform multiple DML operations. In previous versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; now, using the MERGE statement, we can include the logic of such data modifications in one statement, which updates rows when the data is matched and inserts rows when it is unmatched.
One of the most important advantages of the MERGE statement is that all the data is read and processed only once. In previous versions, three different statements had to be written to process three different activities (INSERT, UPDATE, or DELETE). SYNTAX:
MERGE <target_table> [AS TARGET]
USING <table_source> [AS SOURCE]
ON <search_condition>
[WHEN MATCHED THEN <merge_matched>]
[WHEN NOT MATCHED [BY TARGET] THEN <merge_not_matched>]
[WHEN NOT MATCHED BY SOURCE THEN <merge_matched>];
CREATE TABLE StudentDetails(
StudentID INTEGER PRIMARY KEY,
StudentName VARCHAR(15) )
CREATE TABLE StudentTotalMarks (
StudentID INTEGER REFERENCES StudentDetails,
StudentMarks INTEGER )
Insert some sample values into both tables.
In our example we will consider three main conditions while we merge these two tables.
- Delete the records whose marks are more than 250.
- Update marks and add 25 to each as internals if records exist.
- Insert the records if a record does not exist.
Now we will write the MERGE process for the tables created earlier, and make sure the three conditions discussed above are satisfied.
MERGE StudentTotalMarks AS stm
USING (SELECT StudentID,StudentName FROM StudentDetails) AS sd
ON stm.StudentID = sd.StudentID
WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks +25
WHEN NOT MATCHED THEN
INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25);
There are two very important points to remember while using MERGE statement.
- Semicolon is mandatory after the merge statement.
- When a MATCHED clause is used along with some condition, it has to be specified first among all the WHEN MATCHED clauses.
T-SQL has supported the OUTPUT clause since SQL Server 2005. We can use the OUTPUT clause with DML statements (INSERT, DELETE, UPDATE) to return information from modified rows.
We primarily use the OUTPUT clause for auditing and archiving modified rows.
Inserted and Deleted Tables in an OUTPUT Clause
Inserted and Deleted tables are two memory-resident tables that reside within SQL server and are used with the OUTPUT clause.
Whenever any DML (INSERT, DELETE, UPDATE) statement is executed, these tables are populated.
The results of an INSERT statement are stored in the Inserted table, and the results of a DELETE statement are stored in the Deleted table. With an UPDATE statement, the old rows are stored in the Deleted table and the new rows in the Inserted table, since an UPDATE is nothing but a delete and an insert combined together.
Note: You cannot directly query Inserted and Deleted tables to see what data they are currently holding, but you can use them with the OUTPUT clause as well as with Triggers.
The OUTPUT Clause with an Insert statement
When we do an Insert operation on a table, we get a message which reads, “(n row(s) affected),” but if we want to see what data rows were inserted into a table, we can use an OUTPUT clause and a memory resident inserted table to return the results back to the screen in the same way that a select statement does.
Let us insert a record, and use an OUTPUT clause to print the results on the screen.
INSERT INTO dbo.Songs (Id, Name, Singer)
OUTPUT INSERTED.Id, INSERTED.Name, INSERTED.Singer
VALUES (5, 'AINT no grave', 'Johnny Cash');
GO
Check the dbo.Songs table. A new row is inserted with id=5.
SELECT * FROM dbo.Songs;
GO
We can use an OUTPUT clause and a deleted table to see which rows were actually deleted from the table.
DELETE FROM dbo.Songs
OUTPUT DELETED.Id, DELETED.Name, DELETED.Singer
WHERE ID = 5;
The OUTPUT Clause with an Update Statement
An Update statement does nothing but delete the old data and insert the new data, so with an Update statement both memory-resident tables are affected: the Deleted table as well as the Inserted table.
UPDATE dbo.Songs SET Singer = ‘Rahman’ OUTPUT DELETED.Singer, INSERTED.Singer
WHERE ID = 2;
Store Results of an OUTPUT Clause into a Table
INSERT INTO dbo.Songs ( Id, Name, Singer)
OUTPUT Inserted.* INTO dbo.Songs_Inserted
VALUES (5, 'Duniya', 'Piyush Mishra');
You first need to create the target table, which must have the same number of columns and matching data types as the source table. The same goes for a temporary table.
Store Results of an OUTPUT Clause into a Table Variable
DECLARE @Songs_Deleted TABLE (Id INT, Name VARCHAR(50), Singer VARCHAR(50));
DELETE FROM dbo.Songs
OUTPUT deleted.* INTO @Songs_Deleted
WHERE ID IN (1,2);
-- @Songs_Deleted is a table variable; its columns must match dbo.Songs