Saturday, April 25, 2009

Import CSV File Into SQL Server Using Bulk Insert - Load Comma Delimited File Into SQL Server

This has been a very common request recently: How do you import a CSV file into SQL Server? How do you load a CSV file into a SQL Server database table? How do you load a comma delimited file into SQL Server? Let us see the solution in quick steps.



CSV stands for Comma Separated Values, sometimes also called Comma Delimited Values.


Create TestTable

USE TestData
GO
CREATE TABLE CSVTest
(ID INT,
FirstName VARCHAR(40),
LastName VARCHAR(40),
BirthDate SMALLDATETIME)
GO

Create a CSV file on drive C: named csvtest.txt with the following content. The location of the file is C:\csvtest.txt.

1,James,Smith,19750101
2,Meggie,Smith,19790122
3,Robert,Smith,20071101
4,Alex,Smith,20040202

Now run the following script to load all the data from the CSV file into the database table. If a row contains an error it will not be inserted, but the other rows will be (by default, BULK INSERT cancels the load only after 10 rejected rows).

BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
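BULK INSERT accepts additional options that are often useful with real-world files, for instance FIRSTROW to skip a header line and MAXERRORS to control how many bad rows are tolerated. A hedged sketch (the option values are illustrative, for a file that has a header row):

```sql
-- Skip a header row, tolerate up to 50 bad rows, and commit in batches.
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2,        -- start at line 2 when line 1 is a header
    MAXERRORS = 50,      -- cancel the load after 50 rejected rows
    BATCHSIZE = 10000    -- commit every 10,000 rows
)
GO
```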

Check the content of the table.

SELECT *
FROM CSVTest
GO

Drop the table to clean up the database.

DROP TABLE CSVTest
GO

CASE Statement/Expression Examples and Explanation

CASE expressions can be used in SQL anywhere an expression can be used. Examples of where CASE expressions can be used include the SELECT list, WHERE clauses, HAVING clauses, IN lists, DELETE and UPDATE statements, and inside built-in functions.
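For instance, a CASE expression can drive the SET clause of an UPDATE. The table and column names below are illustrative, not from the examples in this post:

```sql
-- Set a discount based on the customer's category in a single UPDATE.
UPDATE Customers
SET Discount = CASE Category
                   WHEN 'Gold'   THEN 0.20
                   WHEN 'Silver' THEN 0.10
                   ELSE 0.00
               END
WHERE Region = 'West'
```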


There are two basic formulations of the CASE expression:


1) Simple CASE expressions

A simple CASE expression checks one expression against multiple values. Within a SELECT statement, a simple CASE expression allows only an equality check; no other comparisons are made. A simple CASE expression operates by comparing the first expression to the expression in each WHEN clause for equivalency. If these expressions are equivalent, the expression in the THEN clause will be returned.


Syntax:

CASE input_expression
WHEN when_expression1 THEN result_expression1
[[WHEN when_expression2 THEN result_expression2] [...]]
[ELSE else_result_expression]
END


Example:

DECLARE @TestVal INT
SET @TestVal = 3

SELECT
CASE @TestVal
WHEN 1 THEN 'First'
WHEN 2 THEN 'Second'
WHEN 3 THEN 'Third'
ELSE 'Other'
END




2) Searched CASE expressions


A searched CASE expression allows comparison operators, and the use of AND and/or OR between Boolean expressions. The simple CASE expression checks only for equivalent values and cannot contain Boolean expressions. The basic syntax for a searched CASE expression is shown below:


Syntax:

CASE
WHEN Boolean_expression1 THEN result_expression1
[[WHEN Boolean_expression2 THEN result_expression2] [...]]
[ELSE else_result_expression]
END


Example:

DECLARE @TestVal INT
SET @TestVal = 5

SELECT
CASE
WHEN @TestVal <= 3 THEN 'Top 3'
ELSE 'Other'
END

CASE Statement in ORDER BY Clause - ORDER BY using Variable

The stored procedure takes the variable @OrderBy as an input parameter.

The SP uses EXEC (or sp_executesql) to execute dynamically built SQL.


This was taking a big hit on performance. The issue was how to improve the performance as well as remove the logic of preparing the ORDER BY clause from the application. The solution I came up with uses multiple CASE expressions. It is listed here in a simple version using the AdventureWorks sample database. Another challenge was ordering in ascending or descending direction; the solution to that issue is also shown in the following example. Test the example with different options for @OrderBy and @OrderByDirection.



Database only solution:

USE AdventureWorks
GO
DECLARE @OrderBy VARCHAR(10)
DECLARE @OrderByDirection VARCHAR(1)
SET @OrderBy = 'State' ----Other options: Postal for PostalCode,
---- State for StateProvinceID, City for City
SET @OrderByDirection = 'D' ----Other options: A for ascending,
---- D for descending
SELECT AddressID, City, StateProvinceID, PostalCode
FROM Person.Address
WHERE AddressID < 100
ORDER BY
CASE WHEN @OrderBy = 'Postal'
AND @OrderByDirection = 'D'
THEN PostalCode END DESC,
CASE WHEN @OrderBy = 'Postal'
AND @OrderByDirection != 'D'
THEN PostalCode END,
CASE WHEN @OrderBy = 'State'
AND @OrderByDirection = 'D'
THEN StateProvinceID END DESC,
CASE WHEN @OrderBy = 'State'
AND @OrderByDirection != 'D'
THEN StateProvinceID END,
CASE WHEN @OrderBy = 'City'
AND @OrderByDirection = 'D'
THEN City END DESC,
CASE WHEN @OrderBy = 'City'
AND @OrderByDirection != 'D'
THEN City END
GO
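A note on why the script uses a separate CASE expression per column instead of a single CASE that returns different columns: all branches of one CASE expression are converted to a single data type, following data type precedence. A sketch of the problem, using the same AdventureWorks columns:

```sql
DECLARE @OrderBy VARCHAR(10)
SET @OrderBy = 'State'

-- This single CASE mixes an INT column (StateProvinceID) and a
-- VARCHAR column (PostalCode). All branches are converted to one type
-- by data type precedence (INT wins here), so non-numeric PostalCode
-- values can raise a conversion error at runtime.
SELECT AddressID, City, StateProvinceID, PostalCode
FROM Person.Address
ORDER BY CASE WHEN @OrderBy = 'State' THEN StateProvinceID
              ELSE PostalCode END
```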

Wednesday, April 15, 2009

Custom Paging

Problem
I need to query a large amount of data into my application window and use paging to view it. The query itself takes a long time to process and I do not want to repeat it every time I have to fetch a page. Also, the number of rows in the result set can be huge, so I am often fetching a page from the end of the result set. I can't use the default paging because I would wait a long time to get the data back. What are my options?

Solution
There are a few possible solutions out there for paging through a large result set. In this tip, I am going to focus on three examples and compare their performance implications. The examples are:

  • Example 1 - I use a temporary table (#temp_table) to store the result set for each session.
  • Example 2 - I use a Common Table Expression (CTE) to page through the result set.
  • Example 3 - I populate a global temporary table to store the complete result set.

The first two examples are similar to some of the most commonly used paging stored procedure options; the third example is my own extension, which I wanted to show for comparison in this specific case of a complex query with a large result set.


Example #1 - Using a session temporary table (#temp_table)

In this stored procedure, I create the temporary table and insert only the relevant rows into it based on the input parameters:

CREATE PROCEDURE dbo.proc_Paging_TempTable
(
@Page int,
@RecsPerPage int
)
AS

-- The number of rows affected by the different commands
-- does not interest the application, so turn NOCOUNT ON
SET NOCOUNT ON

-- Determine the first record and last record
DECLARE @FirstRec int, @LastRec int

SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

-- Create a temporary table
CREATE TABLE #TempItems
(RowNum int IDENTITY PRIMARY KEY,
Title nvarchar(100),
Publisher nvarchar(50),
AuthorNames nvarchar(200),
LanguageName nvarchar(20),
FirstLine nvarchar(150),
CreationDate smalldatetime,
PublishingDate smalldatetime,
Popularity int)

-- Insert the rows into the temp table
-- We query @LastRec + 1, to find out if there are more records
INSERT INTO #TempItems (Title, Publisher, AuthorNames, LanguageName,
FirstLine, CreationDate, PublishingDate, Popularity)
SELECT TOP (@LastRec-1)
s.Title, m.Publisher, s.AuthorNames, l.LanguageName,
m.FirstLine, m.CreationDate, m.PublishingDate, m.Popularity
FROM dbo.Articles m
INNER JOIN dbo.ArticlesContent s
ON s.ArticleID = m.ID
LEFT OUTER JOIN dbo.Languages l
ON l.ID = m.LanguageID
ORDER BY m.Popularity desc

-- Return the set of paged records
SELECT *
FROM #TempItems
WHERE RowNum > @FirstRec
AND RowNum < @LastRec

-- Drop the temp table
DROP TABLE #TempItems

-- Turn NOCOUNT back OFF
SET NOCOUNT OFF
GO
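Calling the procedure might look like this (page numbers are 1-based in this scheme):

```sql
-- Fetch the second page, 10 records per page:
-- rows 11 through 20 (by descending Popularity) are returned.
EXEC dbo.proc_Paging_TempTable @Page = 2, @RecsPerPage = 10
```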


Example #2 - Using a Common Table Expression (CTE)

In this example, I use a CTE with the ROW_NUMBER() function to fetch only the relevant rows:

CREATE PROCEDURE dbo.proc_Paging_CTE
(
@Page int,
@RecsPerPage int
)
AS
-- The number of rows affected by the different commands
-- does not interest the application, so turn NOCOUNT ON

SET NOCOUNT ON


-- Determine the first record and last record

DECLARE @FirstRec int, @LastRec int

SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1);

WITH TempResult as
(
SELECT ROW_NUMBER() OVER(ORDER BY Popularity DESC) as RowNum,
s.Title, m.Publisher, s.AuthorNames, l.LanguageName,
m.FirstLine, m.CreationDate, m.PublishingDate, m.Popularity
FROM dbo.Articles m
INNER JOIN dbo.Content s
ON s.ArticleID = m.ID
LEFT OUTER JOIN dbo.Languages l
ON l.ID = m.LanguageID
)
SELECT top (@LastRec-1) *
FROM TempResult
WHERE RowNum > @FirstRec
AND RowNum < @LastRec



-- Turn NOCOUNT back OFF
SET NOCOUNT OFF
GO


Example #3 - Using a global temporary table to hold the whole result

In this example, I use a global temporary table to store the complete result set of the query. In this scenario, this temporary table will be populated during the first execution of the stored procedure. All subsequent executions of the stored procedure will use the same temporary table. The idea behind this approach is that, when using a Global temporary table, other sessions can also use the same table (if they are aware of the GUID and need the same data). In order to drop the temporary table, you will have to either drop it explicitly or disconnect the session.

If this approach does not work for you, you could use the same technique to create "temporary" tables in your user-defined database with a unique extension. One specific scenario where this technique could be useful is when the tempdb database is already a bottleneck. In that case, this approach lets you create a dedicated database for these tables. Just do not forget to drop the temporary objects when they are no longer required.

CREATE PROCEDURE dbo.proc_Paging_GlobalTempTable
(
@Page int,
@RecsPerPage int,
@GUID uniqueidentifier = null OUTPUT -- will output the extension of the table.
-- This parameter should be sent by the application:
-- First time it should be NULL and after, it should be
-- populated by the value that was sent back from the SP.
)
AS
-- The number of rows affected by the different commands
-- does not interest the application, so turn NOCOUNT ON
SET NOCOUNT ON

-- Determine the first record and last record
DECLARE @FirstRec int, @LastRec int, @cmd varchar(2000)

SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

-- If the GUID is null (first execution) -
-- The global table is created, otherwise it will be queried only:
IF @GUID IS NULL
BEGIN

SET @GUID = NEWID()

SET @cmd = 'SELECT RowNum=IDENTITY(INT,1,1),
s.Title, m.Publisher, s.AuthorNames, l.friendlyName,
m.FirstLine, m.CreationDate, m.PublishingDate, m.Popularity
INTO [##tmp_' + CONVERT(VARCHAR(40),@GUID) + ']
FROM dbo.Abstracts m
INNER JOIN dbo.AbstractsContentSearch s ON s.AbstractID = m.ID
LEFT OUTER JOIN dbo.Languages l on l.ID = m.LanguageID
ORDER BY Popularity DESC;
CREATE UNIQUE INDEX [IDX_##tmp_' + CONVERT( VARCHAR(40),@GUID) + ']
ON [##tmp_' + CONVERT(VARCHAR(40),@GUID) + '] (RowNum)'
EXEC (@cmd)
END

-- Fetch the rows of the desired page
SET @cmd = 'SELECT top (' + CONVERT(VARCHAR(20),@LastRec-1) + ') *
FROM [##tmp_' + CONVERT(VARCHAR(40),@GUID) + ']
WHERE RowNum > ' + CONVERT(VARCHAR(20),@FirstRec) +
' AND RowNum < ' + CONVERT(VARCHAR(20),@LastRec)
EXEC (@cmd)

-- Turn NOCOUNT back OFF
SET NOCOUNT OFF
GO
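The calling pattern for this third procedure is different, because the application must keep the GUID between calls. A sketch:

```sql
-- First call: @GUID is NULL, so the global temp table is built.
DECLARE @GUID uniqueidentifier
EXEC dbo.proc_Paging_GlobalTempTable @Page = 1, @RecsPerPage = 10,
     @GUID = @GUID OUTPUT

-- Later calls reuse the same table by passing the GUID back in.
EXEC dbo.proc_Paging_GlobalTempTable @Page = 2, @RecsPerPage = 10,
     @GUID = @GUID OUTPUT
```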

Monday, April 13, 2009

Backup Database

using Microsoft.SqlServer.Management.Smo;
public static void MakeBackup()
{
Server server = new Server("localhost");
Backup backup = new Backup();
backup.Action = BackupActionType.Database;
backup.BackupSetName = "Backup copy";
backup.BackupSetDescription = "Backup copy";
backup.Database = "DemoSQLServer";
backup.Devices.AddDevice("C:\\DemoSQLServer.bak",
DeviceType.File);
backup.SqlBackup(server);
}

Copy Database

public static void CopyDataBaseAsFile() {

//Set Source SQL Server Instance Information
Server server = null;
Microsoft.SqlServer.Management.Smo.Database ddatabase = null;
Microsoft.SqlServer.Management.Smo.Database sdatabase = null;

try {
server = new Server(DBHelper.SourceSQLServer);
server.ConnectionContext.LoginSecure = false;
server.ConnectionContext.Login = Login;
server.ConnectionContext.Password = Password;

ddatabase = new Microsoft.SqlServer.Management.Smo.Database(server, DBHelper.DestinationDatabase);
sdatabase = new Microsoft.SqlServer.Management.Smo.Database(server, DBHelper.SourceDatabase);
}
catch {
FileActions.WriteToLog(@"" + backupLogFileLocation, "Server connection failed.");
FileActions.WriteToLog(@"" + RestoreLogFileLocation, "Server connection failed.");
}

try {
/*
* Backup the target database to a .bak file.
*/
Backup bUp = new Backup();
bUp.Database = DBHelper.SourceDatabase;
bUp.Devices.AddDevice(@"" + BackupFileLocation, DeviceType.File);
bUp.Initialize = true;
bUp.Action = BackupActionType.Database;
bUp.PercentComplete += new PercentCompleteEventHandler(bUp_PercentComplete);
bUp.PercentCompleteNotification = 5;
bUp.SqlBackup(server);
}
catch (Exception ex) {

FileActions.WriteToLog(@"" + backupLogFileLocation, ex.ToString());
return;
}

try {
/*
* Restore the new db from the created db backup.
*/
bool verified = false;
string errorMsg = "";
Restore res = new Restore();
res.Database = DBHelper.DestinationDatabase;
res.Action = RestoreActionType.Database;
res.Devices.AddDevice(@"" + BackupFileLocation, DeviceType.File);
// res.Devices.AddDevice(@"C:\temp\copybakup.bak", DeviceType.File);

verified = res.SqlVerify(server, out errorMsg);

//ddatabase.SetOffline();


if (verified) {
res.PercentCompleteNotification = 5;
res.ReplaceDatabase = true;
res.NoRecovery = false;

res.RelocateFiles.Add(new RelocateFile(DBHelper.SourceDatabase, @"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\" + DBHelper.DestinationDatabase + ".mdf"));
res.RelocateFiles.Add(new RelocateFile(DBHelper.SourceDatabase + "_Log", @"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\" + DBHelper.DestinationDatabase + ".ldf"));

res.PercentComplete += new PercentCompleteEventHandler(res_PercentComplete);
res.SqlRestore(server);
}
else {
FileActions.WriteToLog(@"" + RestoreLogFileLocation, "Backup set could not be verified.");
}

//ddatabase.SetOnline();
}
catch (Exception ex) {
FileActions.WriteToLog(@"" + RestoreLogFileLocation, ex.ToString());
//ddatabase.SetOnline();
return;
}
}

Start with SMO

Getting Connected
The first thing we have to do is to make a connection to our server.
Now you might be thinking, "Hey, there is already a class for connecting to a SQL Server - System.Data.SqlClient.SqlConnection", and you are right - you can use this class to build your connection to the SQL Server.
Microsoft.SqlServer.Management.Smo.Server server;
///
/// Initializes the field 'server'
///

void InitializeServer()
{
// To Connect to our SQL Server -
// we Can use the Connection from the System.Data.SqlClient Namespace.
SqlConnection sqlConnection =
new SqlConnection(@"Integrated Security=SSPI; Data Source=(local)\SQLEXPRESS");

//build a "serverConnection" with the information of the "sqlConnection"
Microsoft.SqlServer.Management.Common.ServerConnection serverConnection =
new Microsoft.SqlServer.Management.Common.ServerConnection(sqlConnection);

//The "serverConnection" is used in the ctor of the Server.
server = new Server(serverConnection);
}
Object Hierarchy
Once you have a connection to your server, accessing databases is very simple. Most SMO objects are organized in a parent/child collection hierarchy:
A Server has a collection of Databases (each Database's parent is the Server),
a Database has a collection of Tables,
a Table has a collection of Columns...
//this code adds all known databases to a ListView

//clean up the listview first.
listView1.Clear();
listView1.Columns.Clear();

//build the columns
listView1.Columns.Add("Name");
listView1.Columns.Add("# of Tables");
listView1.Columns.Add("Size");

//iterate over all Databases
foreach( Database db in server.Databases )
{
//add the Data to the listview.
ListViewItem item = listView1.Items.Add(db.Name);
item.SubItems.Add( db.Tables.Count.ToString() );
item.SubItems.Add(db.Size.ToString());
}
This code shows how to enumerate backup devices:
listView1.Clear();
listView1.Columns.Clear();

listView1.Columns.Add("Name");
listView1.Columns.Add("Location");

foreach (BackupDevice backupDevice in server.BackupDevices)
{
ListViewItem item = listView1.Items.Add(backupDevice.Name);
item.SubItems.Add(backupDevice.PhysicalLocation);
}
Create a new Database
Of course - we are not limited to getting information about our SQL Server - we can also create, drop and alter objects. Most SMO objects have 2 requirements - a valid (unique) Name and a valid Parent.
Database database = new Database();
database.Name = dbName.Text;
database.Parent = server;
database.Create();
You see - SMO uses really compact code :-) Now - let's create a Backup Device.
BackupDevice backupDevice = new BackupDevice();
backupDevice.Parent = server;
backupDevice.Name = "myBackupDevice";
backupDevice.PhysicalLocation = @"C:\myNewBackupDevice.bak";
backupDevice.BackupDeviceType = BackupDeviceType.Disk;
backupDevice.Create();
Scripting with T-SQL!
In some cases you might want to have a T-SQL script of an operation. Let's take the example from above - we want a script for adding a backup device to our SQL Server.
BackupDevice backupDevice = new BackupDevice();
backupDevice.Parent = server;
backupDevice.Name = "myBackupDevice";
backupDevice.PhysicalLocation = @"C:\myNewBackupDevice.bak";
backupDevice.BackupDeviceType = BackupDeviceType.Disk;
StringCollection strings = backupDevice.Script();
//results:
// strings [0] = "EXEC master.dbo.sp_addumpdevice @devtype = N'disk',
// @logicalname = N'myBackupDevice', @physicalname = N'C:\myNewBackupDevice.bak'"
Doing a Backup
Finally, I want to show you how to do a backup of your database. Note that the class Backup doesn't represent a BackupDevice - it represents a "backup operation".
Backup backup = new Backup();
//we assume that there is a logical device with the name "myBackupDevice"
backup.Devices.AddDevice("myBackupDevice", DeviceType.LogicalDevice);
backup.Database = "Master";
backup.SqlBackup(server);
Additional Features
The functional range of SMO is amazing!
SMO supports really everything you will need.
Indexes,
Constraints,
Relationships,
Permissions
Stored Procedures,
Full Text Catalogues,
HTTP Protocol,
Triggers,
Mirroring,
Replication,
Asymmetric Encryption,
...

In short:
Everything you desire :)
And if you understand the basics of a specific feature, you won't have problems implementing it with SMO.

Wednesday, April 8, 2009

SQL SERVER Remove Duplicate Chars From String

CREATE FUNCTION dbo.REMOVE_DUPLICATE_INSTR
(@datalen_tocheck INT, @string VARCHAR(255))
RETURNS VARCHAR(255)
AS
BEGIN
DECLARE @str VARCHAR(255)
DECLARE @count INT
DECLARE @start INT
DECLARE @result VARCHAR(255)
DECLARE @end INT

SET @start = 1
SET @end = @datalen_tocheck
SET @count = @datalen_tocheck
SET @str = @string

WHILE (@count <= 255)
BEGIN
IF (@result IS NULL)
BEGIN
SET @result = ''
END

-- append the next chunk, then remove all its occurrences from the string
SET @result = @result + SUBSTRING(@str, @start, @end)
SET @str = REPLACE(@str, SUBSTRING(@str, @start, @end), '')
SET @count = @count + @datalen_tocheck
END

RETURN @result
END

GO


Usage:

SET CONCAT_NULL_YIELDS_NULL OFF

SELECT dbo.REMOVE_DUPLICATE_INSTR(<character length of a duplicate substring>, <string containing duplicates>)


Example:

To keep the character set in a string unique and remove duplicate 3-character-long substrings, run this UDF as an inline function.

SET CONCAT_NULL_YIELDS_NULL OFF

SELECT dbo.REMOVE_DUPLICATE_INSTR(3, '123456789123456456')

Resultset:

123456789

Tuesday, April 7, 2009

What is NOLOCK ?

Using the NOLOCK query optimizer hint is generally considered good practice in order to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a dirty read, which means that another process could be updating the data at the exact time you are reading it. There are no guarantees that your query will retrieve the most recent data. The advantage to performance is that your reading of data will not block updates from taking place, and updates will not block your reading of data. SELECT statements normally take a Shared (Read) lock. This means that multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates will queue until all the reads have completed, and reads requested after the update will wait for the updates to complete. The result to your system is delay (blocking).
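In practice the hint is attached per table in the FROM clause; the table here is illustrative:

```sql
-- Read without taking shared locks (dirty reads are possible).
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE Status = 'Pending'
```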

What is use of EXCEPT clause?

The EXCEPT clause is similar to the MINUS operator in Oracle: it returns all rows in the first query that are not returned in the second query. Each SQL statement within the EXCEPT query and MINUS query must have the same number of fields in the result sets, with similar data types.
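A small example (the tables are illustrative):

```sql
-- Customers who have never placed an order.
SELECT CustomerID FROM dbo.Customers
EXCEPT
SELECT CustomerID FROM dbo.Orders
```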

What are Isolation Levels?

Transactions specify an isolation level that defines the degree to which one transaction must be isolated from resource or data modifications made by other transactions. Isolation levels are described in terms of which concurrency side-effects, such as dirty reads or phantom reads, are allowed.
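The level is set per session; for example (table name illustrative):

```sql
-- Run the read under a stricter isolation level.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
    SELECT OrderID, Status FROM dbo.Orders WHERE CustomerID = 42
COMMIT TRANSACTION
```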

What is LINQ?

Language Integrated Query (LINQ) adds the ability to query objects using .NET languages. The LINQ to SQL object/relational mapping (O/RM) framework provides the following basic features:
  • Tools to create classes (usually called entities) mapped to database tables
  • Compatibility with LINQ’s standard query operations
  • The DataContext class, with features such as entity record monitoring, automatic SQL statement generation, record concurrency detection, and much more

What are synonyms?

Synonyms give you the ability to provide alternate names for database objects. You can alias object names; for example, using the Employee table as Emp. You can also shorten names. This is especially useful when dealing with three and four part names; for example, shortening server.database.owner.object to object.
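As a sketch, assuming the AdventureWorks sample database:

```sql
-- Alias the three-part name down to a single identifier.
CREATE SYNONYM Emp FOR AdventureWorks.HumanResources.Employee
GO
SELECT TOP (10) * FROM Emp
```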

What is CLR? 

In SQL Server 2008, SQL Server objects such as user‐defined functions can be created using such CLR languages. This CLR language support extends not only to user‐defined functions, but also to stored procedures and triggers. You can develop such CLR add‐ons to SQL Server using Visual Studio 2008.

How can we rewrite sub‐queries into simple select statements or with joins?

Yes, we can rewrite them using a Common Table Expression (CTE). A Common Table Expression (CTE) is an expression that can be thought of as a temporary result set which is defined within the execution of a single SQL statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query.

E.g.
USE AdventureWorks
GO
WITH EmployeeDepartment_CTE AS
(
SELECT EmployeeID, DepartmentID, ShiftID
FROM HumanResources.EmployeeDepartmentHistory
)
SELECT ecte.EmployeeID, ed.DepartmentID, ed.Name, ecte.ShiftID
FROM HumanResources.Department ed
INNER JOIN EmployeeDepartment_CTE ecte ON ecte.DepartmentID = ed.DepartmentID
GO

What are the Advantages of using CTE? 

  • Using CTE improves the readability and makes maintenance of complex queries easy.
  • The query can be divided into separate, simple, logical building blocks which can be then used to build more complex CTEs until final result set is generated.
  • CTE can be defined in functions, stored procedures, triggers or even views.
  • After a CTE is defined, it can be used like a Table or a View, and you can SELECT, INSERT, UPDATE, or DELETE data through it.

Which are new data types introduced in SQL SERVER 2008? 

The GEOMETRY Type: The GEOMETRY data type is a system .NET common language runtime (CLR) data type in SQL Server. This type represents data in a two‐dimensional Euclidean coordinate system.

The GEOGRAPHY Type: The GEOGRAPHY datatype’s functions are the same as with GEOMETRY. The difference between the two is that when you specify GEOGRAPHY, you are usually specifying points in terms of latitude and longitude.

New Date and Time Datatypes:
SQL Server 2008 introduces four new datatypes related to date and time: DATE, TIME, DATETIMEOFFSET, and DATETIME2.
  • DATE: The new DATE type just stores the date itself. It is based on the Gregorian calendar and handles years from 1 to 9999.
  • TIME: The new TIME (n) type stores time with a range of 00:00:00.0000000 through 23:59:59.9999999. The precision is allowed with this type. TIME supports seconds down to 100 nanoseconds. The n in TIME (n) defines this level of fractional second precision, from 0 to 7 digits of precision.
  • The DATETIMEOFFSET Type: DATETIMEOFFSET (n) is the time‐zone‐aware version of a datetime datatype. The name will appear less odd when you consider what it really is: a date + a time + a time‐zone offset. The offset is based on how far behind or ahead you are from Coordinated Universal Time (UTC) time.
  • The DATETIME2 Type: It is an extension of the datetime type in earlier versions of SQL Server. This new datatype has a date range covering dates from January 1 of year 1 through December 31 of year 9999. This is a definite improvement over the 1753 lower boundary of the datetime datatype. DATETIME2 not only includes the larger date range, but also supports the same fractional-second precision that the TIME type provides.
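The four types side by side, as a sketch (the literal values are illustrative):

```sql
DECLARE @d  DATE           = '2009-04-15'
DECLARE @t  TIME(3)        = '23:59:59.123'
DECLARE @o  DATETIMEOFFSET = '2009-04-15 23:59:59 +05:30'
DECLARE @d2 DATETIME2(7)   = '0001-01-01 00:00:00.0000001'
SELECT @d, @t, @o, @d2
```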

What is Filtered Index?

A filtered index indexes only a portion of the rows in a table: it applies a filter (a WHERE clause) to the INDEX, which improves query performance and reduces index maintenance and storage costs compared with full-table indexes. When we see an index created with a WHERE clause, that is actually a FILTERED INDEX.
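For example (table and predicate are illustrative):

```sql
-- Index only the rows that queries actually target.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate)
WHERE Status = 'Open'
```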

What is MERGE Statement?

MERGE is a new feature that provides an efficient way to perform multiple DML operations. In previous versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; now, using the MERGE statement, we can include the logic of such data modifications in one statement that checks whether the data is matched, updating it when matched and inserting it when unmatched. One of the most important advantages of the MERGE statement is that all the data is read and processed only once.
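A minimal upsert sketch (table and column names are illustrative; note that MERGE must end with a semicolon):

```sql
-- Update matched rows, insert unmatched ones, in one statement.
MERGE dbo.TargetProducts AS t
USING dbo.SourceProducts AS s
    ON t.ProductID = s.ProductID
WHEN MATCHED THEN
    UPDATE SET t.Price = s.Price
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, Price) VALUES (s.ProductID, s.Price);
```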

What is CTE?

CTE is an abbreviation for Common Table Expression. A Common Table Expression (CTE) is an expression that can be thought of as a temporary result set which is defined within the execution of a single SQL statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query.

What does TOP Operator Do?

The TOP operator is used to specify the number of rows to be returned by a query. New in SQL SERVER 2008, the TOP operator accepts variables as well as literal values and can be used with INSERT, UPDATE, and DELETE statements.
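A sketch of TOP with a variable, assuming the AdventureWorks sample database:

```sql
USE AdventureWorks
GO
DECLARE @n INT
SET @n = 5
SELECT TOP (@n) AddressID, City
FROM Person.Address
ORDER BY AddressID
```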

What are Sparse Columns?

A sparse column is another tool used to reduce the amount of physical storage used in a database. Sparse columns are ordinary columns that have optimized storage for null values: they reduce the space requirements for null values at the cost of more overhead to retrieve non-null values.
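Declaring one is a single keyword; the table is illustrative:

```sql
-- A column that is mostly NULL takes no space when declared SPARSE.
CREATE TABLE dbo.Products
(ProductID INT PRIMARY KEY,
 Name VARCHAR(100) NOT NULL,
 DiscontinuedDate DATETIME SPARSE NULL)
```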

What is Replication and Database Mirroring?

Database mirroring can be used with replication to provide availability for the publication database. Database mirroring involves two copies of a single database that typically reside on different computers. At any given time, only one copy of the database is currently available to clients which are known as the principal database. Updates made by clients to the principal database are applied on the other copy of the database, known as the mirror database. Mirroring involves applying the transaction log from every insertion, update, or deletion made on the principal database onto the mirror database.

What is Policy Management?

Policy Management in SQL SERVER 2008 allows you to define and enforce policies for configuring and managing SQL Server across the enterprise. Policy-Based Management is configured in SQL Server Management Studio (SSMS). Navigate to the Object Explorer and expand the Management node and the Policy Management node; you will see the Policies, Conditions, and Facets nodes.

What is Service Broker? 

Service Broker is a message-queuing technology in SQL Server that allows developers to integrate SQL Server fully into distributed applications.
Service Broker is a feature that gives SQL Server the ability to send asynchronous, transactional messages.
It allows a database to send a message to another database without waiting for the response, so the application will continue to function if the remote database is temporarily unavailable.

What are the basic functions for master, msdb, model, tempdb and resource databases? (sql server 2008)

SQLAuthority.com - SQL Server 2008 Interview Questions and Answers



The master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master database, you must administer this database with care.


The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.


The tempdb holds temporary objects such as global and local temporary tables and stored procedures.


The model is essentially a template database used in the creation of any new user database created in the instance.


The Resource database is a read-only database that contains all the system objects that are included with SQL Server. SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they logically appear in the sys schema of every database. The Resource database does not contain user data or user metadata.

What is Data Warehousing?

A data warehouse is a database with the following characteristics:

•Subject‐oriented, meaning that the data in the database is organized so that all the data elements relating to the same real‐world event or object are linked together.

•Time‐variant, meaning that the changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time.

•Non‐volatile, meaning that data in the database is never over‐written or deleted, once committed, the data is static, read‐only, but retained for future reporting.

•Integrated, meaning that the database contains data from most or all of an organization's operational applications, and that this data is made consistent.

What is Identity?

Identity (or AutoNumber) is a column that automatically generates numeric values. A start and increment value can be set, but most DBAs leave these at 1. A GUID column also generates numbers; the value of this cannot be controlled. Identity/GUID columns do not need to be indexed.
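Both generators side by side, as a sketch (table name illustrative):

```sql
CREATE TABLE dbo.Orders2
(OrderID INT IDENTITY(1,1) PRIMARY KEY,        -- seed 1, increment 1
 RowGuid UNIQUEIDENTIFIER DEFAULT NEWID(),     -- value not controllable
 OrderDate DATETIME)
```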

What is User Defined Functions? What kind of User‐Defined Functions can be created?

User-Defined Functions allow you to define your own T-SQL functions that can accept 0 or more parameters and return a single scalar data value or a table data type.
Different Kinds of User‐Defined Functions created are:

Scalar User‐Defined Function

A Scalar user‐defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user‐defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.

Inline Table‐Value User‐Defined Function
An Inline Table‐Value user‐defined function returns a table data type and is an exceptional alternative to a view as the user‐defined function can pass parameters into a T‐SQL select command and in essence provide us with a parameterized, non‐updateable view of the underlying tables.
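An inline table-valued function is essentially a single parameterized SELECT; the names here are illustrative:

```sql
CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
)
GO
-- Used directly in a FROM clause, like a parameterized view:
SELECT * FROM dbo.fn_OrdersByCustomer(42)
```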

Multi‐statement Table‐Value User‐Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL select command, or a group of them, gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.

What are primary keys and foreign keys?

Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.
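A minimal DDL sketch of the two key types (the Customers and Orders tables here are hypothetical):

```sql
CREATE TABLE Customers
(
    CustomerID INT NOT NULL PRIMARY KEY,   -- unique, non-null row identifier
    Name       VARCHAR(40)
)

CREATE TABLE Orders
(
    OrderID    INT NOT NULL PRIMARY KEY,
    CustomerID INT NOT NULL
        REFERENCES Customers(CustomerID)   -- foreign key: enforces integrity
                                           -- and expresses the relationship
)
```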

What are different Types of Join?

Cross Join
A cross join without a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. A common example is when a company wants to combine each product with a pricing table to analyze each product at each price.

Inner Join
A join that displays only the rows that have a match in both joined tables is known as inner Join. This is the default type of join in the Query and View Designer.

Outer Join
A join that includes rows even if they do not have related rows in the joined table is an Outer Join. You can create three different types of outer joins to specify which unmatched rows are included:

Left Outer Join: In Left Outer Join all rows in the first‐named table i.e. "left" table, which appears leftmost in the JOIN clause are included. Unmatched rows in the right table do not appear.

Right Outer Join: In Right Outer Join all rows in the second‐named table i.e. "right" table, which appears rightmost in the JOIN clause are included. Unmatched rows in the left table are not included.

Full Outer Join: In Full Outer Join all rows in all joined tables are included, whether they are matched or not.

Self Join
This is a particular case in which a table joins to itself, using one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. A common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another. A self join can be an Outer Join or an Inner Join.
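The join types above can be sketched against a single hypothetical Employees table (EmpID INT, Name VARCHAR(40), ManagerID INT), which also makes each of these a self join:

```sql
-- Inner join: only rows with a match in both sides.
SELECT e.Name, m.Name AS Manager
FROM Employees e
INNER JOIN Employees m ON e.ManagerID = m.EmpID

-- Left outer join: keep all employees, even those with no manager.
SELECT e.Name, m.Name AS Manager
FROM Employees e
LEFT OUTER JOIN Employees m ON e.ManagerID = m.EmpID

-- Cross join: Cartesian product, every employee paired with every employee.
SELECT e.Name, m.Name
FROM Employees e
CROSS JOIN Employees m
```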

What is sub‐query? Explain properties of sub‐query?

Sub‐queries are often referred to as sub‐selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub‐query is executed by enclosing it in a set of parentheses. Sub‐queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword.
A subquery is a SELECT statement that is nested within another T‐SQL statement.

A subquery SELECT statement, if executed independently of the T-SQL statement in which it is nested, will return a result set; that is, a subquery SELECT statement can stand alone and is not dependent on the statement in which it is nested. A subquery SELECT statement can return any number of values and can be found in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery can be used anywhere an expression can be used.
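Two quick sketches against the CSVTest table created earlier in this post:

```sql
-- Subquery as an atomic value in the SELECT list:
SELECT FirstName,
       (SELECT MAX(BirthDate) FROM CSVTest) AS NewestBirthDate
FROM CSVTest

-- Subquery compared against multiple rows with IN:
SELECT *
FROM CSVTest
WHERE ID IN (SELECT ID FROM CSVTest WHERE LastName = 'Smith')
```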

What is Difference between Function and Stored Procedure?

A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas stored procedures cannot be. UDFs that return tables can be treated as just another rowset and can be used in JOINs with other tables. Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other rowset operations.

What is Collation?

Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case sensitivity, accent marks, kana character types and character width.
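For example, the COLLATE clause can override collation for a single comparison; the collation names below are standard SQL Server collations, and the table is the CSVTest table from earlier in this post:

```sql
-- Case-sensitive comparison: 'smith' will NOT match 'Smith'.
SELECT *
FROM CSVTest
WHERE LastName = 'smith' COLLATE SQL_Latin1_General_CP1_CS_AS

-- Case-insensitive comparison: 'smith' WILL match 'Smith'.
SELECT *
FROM CSVTest
WHERE LastName = 'smith' COLLATE SQL_Latin1_General_CP1_CI_AS
```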

What is Cursor?

A cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all rows in the set at one time.
In order to work with a cursor we need to perform some steps in the following order:
•Declare cursor
•Open cursor
•Fetch row from the cursor
•Process fetched row
•Close cursor
•Deallocate cursor
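The steps above can be sketched as follows, using the CSVTest table created earlier in this post (the cursor name is ours):

```sql
DECLARE @FirstName VARCHAR(40), @LastName VARCHAR(40)

DECLARE NameCursor CURSOR FOR                           -- 1. declare
    SELECT FirstName, LastName FROM CSVTest

OPEN NameCursor                                         -- 2. open
FETCH NEXT FROM NameCursor INTO @FirstName, @LastName   -- 3. fetch

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @FirstName + ' ' + @LastName                  -- 4. process the row
    FETCH NEXT FROM NameCursor INTO @FirstName, @LastName
END

CLOSE NameCursor                                        -- 5. close
DEALLOCATE NameCursor                                   -- 6. deallocate
```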

What is a Linked Server?

Linked Servers is a concept in SQL Server by which we can add another SQL Server instance to a group and query databases on both servers using T-SQL statements. With a linked server, you can create very clean, easy-to-follow SQL statements that allow remote data to be retrieved, joined and combined with local data. The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
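A sketch of the setup (the server name REMOTESRV is hypothetical):

```sql
-- Register the remote instance as a linked server.
EXEC sp_addlinkedserver
    @server = 'REMOTESRV',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'REMOTESRV'

-- Remote objects are then referenced by a four-part name:
-- server.database.schema.object
SELECT *
FROM REMOTESRV.TestData.dbo.CSVTest
```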

What is Index?

An index is a physical structure containing pointers to the data. Indexes are created on an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. Users cannot see the indexes; they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query; in a table scan SQL Server examines every row in the table to satisfy the query results. Table scans are sometimes unavoidable, but on large tables, scans have a significant impact on performance.
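A sketch on the CSVTest table created earlier in this post (the index name is ours):

```sql
-- Create a nonclustered index on the search column.
CREATE NONCLUSTERED INDEX IX_CSVTest_LastName
ON CSVTest (LastName)

-- Queries filtering on LastName can now use an index seek
-- instead of scanning every row:
SELECT * FROM CSVTest WHERE LastName = 'Smith'
```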

What is View?

A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using standard T‐SQL select command and can come from one to many different base tables or even other views.
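A sketch over the CSVTest table created earlier in this post (the view name is ours):

```sql
CREATE VIEW dbo.SmithFamily
AS
SELECT ID, FirstName, BirthDate
FROM CSVTest
WHERE LastName = 'Smith'
GO

-- The view is not stored data; it reflects the current
-- table contents each time it is queried.
SELECT * FROM dbo.SmithFamily
```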

What is Trigger?

A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed directly; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure, while triggers are implicitly executed. In addition, triggers can also execute stored procedures.
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
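A sketch on the CSVTest table created earlier in this post (the trigger and audit-table names are hypothetical):

```sql
CREATE TABLE CSVTestAudit (ID INT, ChangedAt DATETIME)
GO

CREATE TRIGGER trg_CSVTest_Insert
ON CSVTest
AFTER INSERT
AS
BEGIN
    -- Fired automatically on INSERT; 'inserted' is the
    -- pseudo-table holding the new rows.
    INSERT INTO CSVTestAudit (ID, ChangedAt)
    SELECT ID, GETDATE() FROM inserted
END
GO
```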

What is Stored Procedure?

A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_depends, sp_helpdb, sp_renamedb etc.
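A minimal user-defined procedure sketch with an input parameter, over the CSVTest table created earlier in this post (the procedure name is ours):

```sql
CREATE PROCEDURE dbo.GetPeopleByLastName
    @LastName VARCHAR(40)
AS
BEGIN
    SELECT ID, FirstName, LastName, BirthDate
    FROM CSVTest
    WHERE LastName = @LastName
END
GO

-- Each client supplies its own input data:
EXEC dbo.GetPeopleByLastName @LastName = 'Smith'
```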

What are different normalization forms?

1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.

2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi‐valued key, remove it to a separate table.

3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.

BCNF: Boyce‐Codd Normal Form
If there are non‐trivial dependencies between candidate key attributes, separate them out into distinct tables.

4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.

5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many-to-many relationships.

ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.

DKNF: Domain‐Key Normal Form
A model free from all modification anomalies is said to be in DKNF.
Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.
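A small sketch of how normalization splits a table (the ClientOrders/Clients tables are hypothetical). Before: a table ClientOrders(OrderID, ClientID, ClientName, ClientPhone) repeats the client's name and phone on every order, because those attributes depend on ClientID rather than on the key OrderID. After normalizing to 3NF:

```sql
-- Client attributes move to their own table, keyed by ClientID.
CREATE TABLE Clients
(
    ClientID    INT PRIMARY KEY,
    ClientName  VARCHAR(40),
    ClientPhone VARCHAR(20)
)

-- Orders keep only the key that relates them to a client.
CREATE TABLE ClientOrders
(
    OrderID  INT PRIMARY KEY,
    ClientID INT REFERENCES Clients(ClientID)
)
```

A change to a client's phone number is now made in exactly one row.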

What is De‐normalization?

De‐normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De‐normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.

What is Normalization?

Database normalization is a data design and organization process applied to data structures, based on rules that help build relational databases. In relational database design, the process of organizing data to minimize redundancy is called normalization. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are the properties of the Relational tables?

Relational tables have six properties:
  • Values are atomic.
  • Column values are of the same kind.
  • Each row is unique.
  • The sequence of columns is insignificant.
  • The sequence of rows is insignificant.
  • Each column must have a unique name.

What is RDBMS?

Relational Data Base Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine the data items from different files, providing powerful tools for data usage.

Monday, April 6, 2009

Paging Records Using a Stored Procedure

CREATE PROCEDURE SP_Paginet
(
@Page int,
@RecsPerPage int
)
AS

-- We don't want to return the # of rows inserted
-- into our temporary table, so turn NOCOUNT ON
SET NOCOUNT ON


--Create a temporary table
CREATE TABLE #TempItems
(
ID int IDENTITY,
Name varchar(50),
Price money
)


-- Insert the rows from tblItems into the temp. table
INSERT INTO #TempItems (Name, Price)
SELECT Name, Price FROM tblItems ORDER BY Price

-- Find out the first and last record we want
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

-- Now, return the set of paged records, plus an indication
-- of whether we have more records or not!
SELECT *,
MoreRecords =
(
SELECT COUNT(*)
FROM #TempItems TI
WHERE TI.ID >= @LastRec
)
FROM #TempItems
WHERE ID > @FirstRec AND ID < @LastRec


-- Turn NOCOUNT back OFF
SET NOCOUNT OFF

SQL SERVER - Logical Query Processing Phases - Order of Statement Execution

What actually sets SQL Server apart from other programming languages is the way SQL Server processes its code. Generally, most programming languages process statements from top to bottom. By contrast, SQL Server processes them in a unique order known as the Logical Query Processing Phases. These phases generate a series of virtual tables, with each virtual table feeding into the next phase (the virtual tables are not viewable). The phases and their order are as follows:

1. FROM
2. ON
3. OUTER (join)
4. WHERE
5. GROUP BY
6. CUBE | ROLLUP
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP
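One practical consequence of this order can be sketched against the CSVTest table created earlier in this post: because WHERE (phase 4) is evaluated before SELECT (phase 8), a column alias defined in the SELECT list is not visible to WHERE, while ORDER BY (phase 10) runs after SELECT and can use it:

```sql
-- This fails: 'BirthYear' does not exist yet when WHERE is evaluated.
-- SELECT YEAR(BirthDate) AS BirthYear FROM CSVTest WHERE BirthYear > 1980

-- Repeat the expression (or use a derived table) instead:
SELECT YEAR(BirthDate) AS BirthYear
FROM CSVTest
WHERE YEAR(BirthDate) > 1980

-- ORDER BY runs after SELECT, so the alias is allowed here:
SELECT YEAR(BirthDate) AS BirthYear
FROM CSVTest
ORDER BY BirthYear
```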