Tuesday, December 22, 2009
Case Sensitive Search
SELECT Col1
FROM Table_XYZ
WHERE Col1 = 'casesearch'
To make the query case sensitive and retrieve only one record ('casesearch') from the query above, the collation used in the predicate needs to be changed as follows.
SELECT Col1
FROM Table_XYZ
WHERE Col1 COLLATE Latin1_General_CS_AS = 'casesearch'
Adding COLLATE Latin1_General_CS_AS makes the search case sensitive.
The default collation of a SQL Server installation, SQL_Latin1_General_CP1_CI_AS, is not case sensitive.
To permanently change the collation of a column in a table, run the following query.
ALTER TABLE Table_XYZ
ALTER COLUMN Col1 VARCHAR(20)
COLLATE Latin1_General_CS_AS
To check the collation of the columns of a table, run sp_help against the table, for example:
EXEC sp_help 'Table_XYZ'
The second result set returned by sp_help includes the collation of each column in the table.
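To confirm which collations are in effect at the server and database level, the following checks can be run (a quick sketch; the database name TestData is borrowed from the CSV import post below, so substitute your own):
SELECT SERVERPROPERTY('Collation') AS ServerCollation
SELECT DATABASEPROPERTYEX('TestData', 'Collation') AS DatabaseCollation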
Thursday, May 14, 2009
To generate INSERT statements for user-defined tables in SQL Server
CREATE PROC InsertGenerator
(@tableName varchar(100)) as
--Declare a cursor to retrieve column specific information for the specified table
DECLARE cursCol CURSOR FAST_FORWARD FOR
SELECT column_name,data_type FROM information_schema.columns WHERE table_name = @tableName
OPEN cursCol
DECLARE @string nvarchar(3000) --for storing the first half of INSERT statement
DECLARE @stringData nvarchar(3000) --for storing the data (VALUES) related statement
DECLARE @dataType nvarchar(1000) --data types returned for respective columns
SET @string='INSERT '+@tableName+'('
SET @stringData=''
DECLARE @colName nvarchar(50)
FETCH NEXT FROM cursCol INTO @colName,@dataType
IF @@fetch_status<>0
begin
print 'Table '+@tableName+' not found, processing skipped.'
close curscol
deallocate curscol
return
END
WHILE @@FETCH_STATUS=0
BEGIN
IF @dataType in ('varchar','char','nchar','nvarchar')
BEGIN
--SET @stringData=@stringData+'''''''''+isnull('+@colName+','''')+'''''',''+'
SET @stringData=@stringData+''''+'''+isnull('''''+'''''+'+@colName+'+'''''+''''',''NULL'')+'',''+'
END
ELSE
if @dataType in ('text','ntext') --if the datatype is text or something else
BEGIN
SET @stringData=@stringData+'''''''''+isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
END
ELSE
IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
BEGIN
SET @stringData=@stringData+'''convert(money,''''''+isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
END
ELSE
IF @dataType='datetime'
BEGIN
--SET @stringData=@stringData+'''convert(datetime,''''''+isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
--SELECT 'INSERT Authorizations(StatusDate) VALUES('+'convert(datetime,'+isnull(''''+convert(varchar(200),StatusDate,121)+'''','NULL')+',121),)' FROM Authorizations
--SET @stringData=@stringData+'''convert(money,''''''+isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
SET @stringData=@stringData+'''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+',121)+'''''+''''',''NULL'')+'',121),''+'
-- 'convert(datetime,'+isnull(''''+convert(varchar(200),StatusDate,121)+'''','NULL')+',121),)' FROM Authorizations
END
ELSE
IF @dataType='image'
BEGIN
SET @stringData=@stringData+'''''''''+isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
END
ELSE --presuming the data type is int,bit,numeric,decimal
BEGIN
--SET @stringData=@stringData+'''''''''+isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
--SET @stringData=@stringData+'''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+',121)+'''''+''''',''NULL'')+'',121),''+'
SET @stringData=@stringData+''''+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+')+'''''+''''',''NULL'')+'',''+'
END
SET @string=@string+@colName+','
FETCH NEXT FROM cursCol INTO @colName,@dataType
END
DECLARE @Query nvarchar(4000)
SET @query ='SELECT '''+substring(@string,0,len(@string)) + ') VALUES(''+ ' + substring(@stringData,0,len(@stringData)-2)+'''+'')'' FROM '+@tableName
exec sp_executesql @query
--select @query
CLOSE cursCol
DEALLOCATE cursCol
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
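To use the procedure, pass it the name of any user table; for example, for the Table_XYZ table from the collation post above:
EXEC InsertGenerator 'Table_XYZ'
The procedure returns one generated INSERT statement per row of the table as a single-column result set.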
Saturday, April 25, 2009
Import CSV File Into SQL Server Using Bulk Insert - Load Comma Delimited File Into SQL Server
This is a very common request these days: How do I import a CSV file into SQL Server? How do I load a CSV file into a SQL Server database table? How do I load a comma delimited file into SQL Server? Let us see the solution in quick steps.
CSV stands for Comma Separated Values, sometimes also called Comma Delimited Values.
Create TestTable
USE TestData
CREATE TABLE CSVTest
(ID INT,FirstName VARCHAR(40),
LastName VARCHAR(40),
BirthDate SMALLDATETIME)
GO
Create a CSV file in drive C: with the name csvtest.txt and the following content. The location of the file is C:\csvtest.txt
1,James,Smith,19750101
2,Meggie,Smith,19790122
3,Robert,Smith,20071101
4,Alex,Smith,20040202
Now run the following script to load all the data from the CSV file into the database table. If there is an error in any row, that row will not be inserted, but the other rows will be inserted.
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n')
GO
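If the CSV file begins with a header row, BULK INSERT can be told to skip it with the FIRSTROW option (a variation of the script above; the sample file here has no header, so this is only needed when one is present):
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH(
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n')
GO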
Check the content of the table.
SELECT *
FROM CSVTest
GO
Drop the table to clean up the database.
DROP TABLE CSVTest
GO
CASE Statement/Expression Examples and Explanation
Two basic formulations for CASE expression
1) Simple CASE expressions
A simple CASE expression checks one expression against multiple values. Within a SELECT statement, a simple CASE expression allows only an equality check; no other comparisons are made. A simple CASE expression operates by comparing the first expression to the expression in each WHEN clause for equivalency. If these expressions are equivalent, the expression in the THEN clause will be returned.
Syntax:
CASE input_expression
WHEN expression1 THEN result1
[[WHEN expression2 THEN result2] [...]]
[ELSE resultN]
END
Example:
DECLARE @TestVal INT
SET @TestVal = 3
SELECT
CASE @TestVal
WHEN 1 THEN 'First'
WHEN 2 THEN 'Second'
WHEN 3 THEN 'Third'
ELSE 'Other'
END
2) Searched CASE expressions
A searched CASE expression allows comparison operators, and the use of AND or OR between each Boolean expression. The simple CASE expression checks only for equivalent values and cannot contain Boolean expressions. The basic syntax for a searched CASE expression is shown below:
Syntax:
CASE
WHEN Boolean_expression1 THEN expression1
[[WHEN Boolean_expression2 THEN expression2] [...]]
[ELSE expressionN]
END
Example:
DECLARE @TestVal INT
SET @TestVal = 5
SELECT
CASE
WHEN @TestVal <=3 THEN 'Top 3'
ELSE 'Other'
END
CASE Statement in ORDER BY Clause - ORDER BY using Variable
The stored procedure used EXEC (or sp_executesql) to execute dynamically built SQL.
This was taking a big hit on performance. The issue was how to improve performance and also remove the logic of preparing the ORDER BY clause from the application. The solution I came up with uses multiple CASE expressions. It is listed here in a simple version using the AdventureWorks sample database. Another challenge was supporting both ascending and descending sort direction; the solution to that is also shown in the following example. Test the example with different options for @OrderBy and @OrderByDirection.
Database only solution:
USE AdventureWorks
GO
DECLARE @OrderBy VARCHAR(10)
DECLARE @OrderByDirection VARCHAR(1)
SET @OrderBy = 'State' ----Other options Postal for PostalCode,
---- State for StateProvinceID, City for City
SET @OrderByDirection = 'D' ----Other options A for ascending,
---- D for descending
SELECT AddressID, City, StateProvinceID, PostalCode
FROM person.address
WHERE AddressID < 100
ORDER BY
CASE WHEN @OrderBy = 'Postal'
AND @OrderByDirection = 'D'
THEN PostalCode END DESC,
CASE WHEN @OrderBy = 'Postal'
AND @OrderByDirection != 'D'
THEN PostalCode END,
CASE WHEN @OrderBy = 'State'
AND @OrderByDirection = 'D'
THEN StateProvinceID END DESC,
CASE WHEN @OrderBy = 'State'
AND @OrderByDirection != 'D'
THEN StateProvinceID END,
CASE WHEN @OrderBy = 'City'
AND @OrderByDirection = 'D'
THEN City END DESC,
CASE WHEN @OrderBy = 'City'
AND @OrderByDirection != 'D'
THEN City END
GO
Wednesday, April 15, 2009
Custom Paging
Problem
I need to query a large amount of data for my application window and use paging to view it. The query itself takes a long time to process and I do not want to repeat it every time I have to fetch a page. Also, the number of rows in the result set could be huge, so I am often fetching a page from the end of the result set. I can't use the default paging because I would wait a long time to get the data back. What are my options?
Solution
There are a few possible solutions out there for paging through a large result set. In this tip, I am going to focus on three examples and compare their performance implications. The examples are:
- Example 1 - I use a temporary table (#temp_table) to store the result set for each session.
- Example 2 - I use a Common Table Expression (CTE) to page through the result set.
- Example 3 - I populate a global temporary table to store the complete result set.
The first two examples are similar to some of the most commonly used paging stored procedure options; the third example is my own extension, which I wanted to show for comparison in this specific case of a complex query with a large result set.
Example #1 - Using a session temporary table (#temp_table)
In this stored procedure, I create the temporary table and insert only the relevant rows into it based on the input parameters:
CREATE PROCEDURE dbo.proc_Paging_TempTable
(
@Page int,
@RecsPerPage int
)
AS
-- The number of rows affected by the different commands
-- does not interest the application, so turn NOCOUNT ON
SET NOCOUNT ON
-- Determine the first record and last record
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)
-- Create a temporary table
CREATE TABLE #TempItems
(
RowNum int IDENTITY PRIMARY KEY,
Title nvarchar(100),
Publisher nvarchar(50),
AuthorNames nvarchar(200),
LanguageName nvarchar(20),
FirstLine nvarchar(150),
CreationDate smalldatetime,
PublishingDate smalldatetime,
Popularity int
)
-- Insert the rows into the temp table
-- We query @LastRec + 1, to find out if there are more records
INSERT INTO #TempItems (Title, Publisher, AuthorNames, LanguageName, FirstLine, CreationDate, PublishingDate, Popularity)
SELECT TOP (@LastRec-1)
s.Title, m.Publisher, s.AuthorNames, l.LanguageName,
m.FirstLine, m.CreationDate, m.PublishingDate, m.Popularity
FROM dbo.Articles m
INNER JOIN dbo.ArticlesContent s ON s.ArticleID = m.ID
LEFT OUTER JOIN dbo.Languages l ON l.ID = m.LanguageID
ORDER BY m.Popularity DESC
-- Return the set of paged records
SELECT *
FROM #TempItems
WHERE RowNum > @FirstRec AND RowNum < @LastRec
-- Drop the temp table
DROP TABLE #TempItems
-- Turn NOCOUNT back OFF
SET NOCOUNT OFF
GO
Example #2 - Using a Common Table Expression (CTE)
In this example, I use a CTE with the ROW_NUMBER() function to fetch only the relevant rows:
CREATE PROCEDURE dbo.proc_Paging_CTE
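The original listing is truncated here. A minimal sketch of such a procedure, assuming the same dbo.Articles, dbo.ArticlesContent and dbo.Languages tables used in Example #1 (this is not the article's exact code, hence the _Sketch suffix):
CREATE PROCEDURE dbo.proc_Paging_CTE_Sketch ( @Page int, @RecsPerPage int ) AS
SET NOCOUNT ON
-- Determine the first record and last record
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)
-- Number the rows once with ROW_NUMBER(), then return only the requested page
;WITH TempResult AS
(
SELECT ROW_NUMBER() OVER (ORDER BY m.Popularity DESC) AS RowNum,
s.Title, m.Publisher, s.AuthorNames, l.LanguageName,
m.FirstLine, m.CreationDate, m.PublishingDate, m.Popularity
FROM dbo.Articles m
INNER JOIN dbo.ArticlesContent s ON s.ArticleID = m.ID
LEFT OUTER JOIN dbo.Languages l ON l.ID = m.LanguageID
)
SELECT *
FROM TempResult
WHERE RowNum > @FirstRec AND RowNum < @LastRec
SET NOCOUNT OFF
GO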
Example #3 - Using a global temporary table to hold the whole result
In this example, I use a global temporary table to store the complete result set of the query. In this scenario, this temporary table will be populated during the first execution of the stored procedure. All subsequent executions of the stored procedure will use the same temporary table. The idea behind this approach is that, when using a Global temporary table, other sessions can also use the same table (if they are aware of the GUID and need the same data). In order to drop the temporary table, you will have to either drop it explicitly or disconnect the session.
If this approach does not work for you, you could use the same technique to create "temporary" tables in a user defined database with a unique extension. One specific scenario where this technique could be useful is when the tempdb database is already a bottleneck; in that case you can create a dedicated database for these tables. Just do not forget to drop the temporary objects when they are no longer required.
CREATE PROCEDURE dbo.proc_Paging_GlobalTempTable
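This listing is truncated as well. The idea can be sketched as follows, again assuming the schema from Example #1 and a hypothetical global temporary table named ##PagingCache (not the article's exact code):
CREATE PROCEDURE dbo.proc_Paging_GlobalTempTable_Sketch ( @Page int, @RecsPerPage int ) AS
SET NOCOUNT ON
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)
-- Populate the global temporary table only on the first execution;
-- later executions (and other sessions) reuse the cached result set
IF OBJECT_ID('tempdb..##PagingCache') IS NULL
BEGIN
SELECT ROW_NUMBER() OVER (ORDER BY m.Popularity DESC) AS RowNum,
s.Title, m.Publisher, s.AuthorNames, l.LanguageName,
m.FirstLine, m.CreationDate, m.PublishingDate, m.Popularity
INTO ##PagingCache
FROM dbo.Articles m
INNER JOIN dbo.ArticlesContent s ON s.ArticleID = m.ID
LEFT OUTER JOIN dbo.Languages l ON l.ID = m.LanguageID
END
-- Return the requested page from the cached result set
SELECT *
FROM ##PagingCache
WHERE RowNum > @FirstRec AND RowNum < @LastRec
-- Remember to drop ##PagingCache explicitly (or disconnect) when it is no longer needed
SET NOCOUNT OFF
GO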
Monday, April 13, 2009
Backup Database
public static void MakeBackup()
{
Server server = new Server("localhost");
Backup backup = new Backup();
backup.Action = BackupActionType.Database;
backup.BackupSetName = "Backup copy";
backup.BackupSetDescription = "Backup copy";
backup.Database = "DemoSQLServer";
backup.Devices.AddDevice("C:\\DemoSQLServer.bak",
DeviceType.File);
backup.SqlBackup(server);
}
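For reference, the equivalent operation in plain T-SQL, using the same database name, backup set name and file path as the snippet above, would be something like:
BACKUP DATABASE DemoSQLServer
TO DISK = 'C:\DemoSQLServer.bak'
WITH NAME = 'Backup copy', DESCRIPTION = 'Backup copy'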
Copy Database
//Set Source SQL Server Instance Information
Server server = null;
Microsoft.SqlServer.Management.Smo.Database ddatabase = null;
Microsoft.SqlServer.Management.Smo.Database sdatabase = null;
try {
server = new Server(DBHelper.SourceSQLServer);
server.ConnectionContext.LoginSecure = false;
server.ConnectionContext.Login = Login;
server.ConnectionContext.Password = Password;
ddatabase = new Microsoft.SqlServer.Management.Smo.Database(server, DBHelper.DestinationDatabase);
sdatabase = new Microsoft.SqlServer.Management.Smo.Database(server, DBHelper.SourceDatabase);
}
catch {
FileActions.WriteToLog(@"" + backupLogFileLocation, "Server connection failed.");
FileActions.WriteToLog(@"" + restoreLogFileLocation, "Server connection failed.");
}
try {
/*
* Backup the target database to a .bak file.
*/
Backup bUp = new Backup();
bUp.Database = DBHelper.SourceDatabase;
bUp.Devices.AddDevice(@"" + BackupFileLocation, DeviceType.File);
bUp.Initialize = true;
bUp.Action = BackupActionType.Database;
bUp.PercentComplete += new PercentCompleteEventHandler(bUp_PercentComplete);
bUp.PercentCompleteNotification = 5;
bUp.SqlBackup(server);
}
catch (Exception ex) {
FileActions.WriteToLog(@"" + backupLogFileLocation, ex.ToString());
return;
}
try {
/*
* Restore the new db from the created db backup.
*/
bool verified = false;
string errorMsg = "";
Restore res = new Restore();
res.Database = DBHelper.DestinationDatabase;
res.Action = RestoreActionType.Database;
res.Devices.AddDevice(@"" + BackupFileLocation, DeviceType.File);
// res.Devices.AddDevice(@"C:\temp\copybakup.bak", DeviceType.File);
verified = res.SqlVerify(server, out errorMsg);
//ddatabase.SetOffline();
if (verified) {
res.PercentCompleteNotification = 5;
res.ReplaceDatabase = true;
res.NoRecovery = false;
res.RelocateFiles.Add(new RelocateFile(DBHelper.SourceDatabase, @"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\" + DBHelper.DestinationDatabase + ".mdf"));
res.RelocateFiles.Add(new RelocateFile(DBHelper.SourceDatabase + "_Log", @"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\" + DBHelper.DestinationDatabase + ".ldf"));
res.PercentComplete += new PercentCompleteEventHandler(res_PercentComplete);
res.SqlRestore(server);
}
else {
FileActions.WriteToLog(@"" + restoreLogFileLocation, "Backup set could not be verified.");
}
//ddatabase.SetOnline();
}
catch (Exception ex) {
FileActions.WriteToLog(@"" + restoreLogFileLocation, ex.ToString());
//ddatabase.SetOnline;
return;
}
}
Start with SMO
The first thing we have to do is to make a connection to our server.
Now you might be thinking, "Hey, there is already a class for connecting to a SQL Server - System.Data.SqlClient.SqlConnection", and you are right - you can use this class to build your connection to the SQL Server.
Microsoft.SqlServer.Management.Smo.Server server;
/// <summary>
/// Initializes the field 'server'
/// </summary>
void InitializeServer()
{
// To Connect to our SQL Server -
// we Can use the Connection from the System.Data.SqlClient Namespace.
SqlConnection sqlConnection =
new SqlConnection(@"Integrated Security=SSPI; Data Source=(local)\SQLEXPRESS");
//build a "serverConnection" with the information of the "sqlConnection"
Microsoft.SqlServer.Management.Common.ServerConnection serverConnection =
new Microsoft.SqlServer.Management.Common.ServerConnection(sqlConnection);
//The "serverConnection" is used in the ctor of the Server.
server = new Server(serverConnection);
}
Object Hierarchy
Once you have got a connection to your server, accessing databases is very simple. Most of the SMO objects are organized in a parent/child collection hierarchy.
A Server has got a collection of Databases (The Databases Parent is the Server),
A Database has got a collection of Tables,
A Table has got a collection of Columns.....
//this code adds all known databases to a ListView
//clean up the listview first.
listView1.Clear();
listView1.Columns.Clear();
//building the Columns
listView1.Columns.Add("Name");
listView1.Columns.Add("# of Tables");
listView1.Columns.Add("Size");
//iterate over all Databases
foreach( Database db in server.Databases )
{
//add the Data to the listview.
ListViewItem item = listView1.Items.Add(db.Name);
item.SubItems.Add( db.Tables.Count.ToString() );
item.SubItems.Add(db.Size.ToString());
}
This code shows how to enumerate the Backup Devices:
listView1.Clear();
listView1.Columns.Clear();
listView1.Columns.Add("Name");
listView1.Columns.Add("Location");
foreach (BackupDevice backupDevice in server.BackupDevices)
{
ListViewItem item = listView1.Items.Add(backupDevice.Name);
item.SubItems.Add(backupDevice.PhysicalLocation);
}
Create a new Database
Of course - we are not limited to getting information about our SQL Server - we can also create, drop and alter objects. Most SMO objects have 2 requirements - a valid (unique) Name and a valid Parent.
Database database = new Database();
database.Name = dbName.Text;
database.Parent = server;
database.Create();
You see - SMO uses really compact code :-) Now - let's create a Backup Device.
BackupDevice backupDevice = new BackupDevice();
backupDevice.Parent = server;
backupDevice.Name = "myBackupDevice";
backupDevice.PhysicalLocation = @"C:\myNewBackupDevice.bak";
backupDevice.BackupDeviceType = BackupDeviceType.Disk;
backupDevice.Create();
Scripting with T-SQL!
In some cases you might want to have a T-SQL script of an operation. Let's take the example from above - we want a script for adding a Backup Device to our SQL Server.
BackupDevice backupDevice = new BackupDevice();
backupDevice.Parent = server;
backupDevice.Name = "myBackupDevice";
backupDevice.PhysicalLocation = @"C:\myNewBackupDevice.bak";
backupDevice.BackupDeviceType = BackupDeviceType.Disk;
StringCollection strings = backupDevice.Script();
//results:
// strings [0] = "EXEC master.dbo.sp_addumpdevice @devtype = N'disk',
// @logicalname = N'myBackupDevice', @physicalname = N'C:\myNewBackupDevice.bak'"
Doing a Backup
Finally, I want to show you how to do a backup of your database. Note that the Backup class doesn't represent a BackupDevice - it represents a "backup operation".
Backup backup = new Backup();
//we assume that there is a Logical Device with the Name "myBackupDevice"
backup.Devices.AddDevice("myBackupDevice", DeviceType.LogicalDevice);
backup.Database = "Master";
backup.SqlBackup(server);
Additional Features
The functional range of SMO is amazing!
SMO supports really everything you will need.
Indexes,
Constraints,
Relationships,
Permissions
Stored Procedures,
Full Text Catalogues,
HTTP Protocol,
Triggers,
Mirroring,
Replication,
Asymmetric Encryption,
.
.
.
In short:
Everything you desire :)
And if you understand the basics of a specific feature, you won't have problems implementing it with SMO.
Wednesday, April 8, 2009
SQL SERVER Remove Duplicate Chars From String
CREATE FUNCTION dbo.Remove_duplicate_instr
(@datalen_tocheck INT, @string VARCHAR(255))
RETURNS VARCHAR(255)
AS
BEGIN
DECLARE @str VARCHAR(255)
DECLARE @count INT
DECLARE @start INT
DECLARE @result VARCHAR(255)
DECLARE @end INT
SET @start=1
SET @end=@datalen_tocheck
SET @count=@datalen_tocheck
SET @str = @string
WHILE (@count <=255)
BEGIN
IF (@result IS NULL)
BEGIN
SET @result=''
END
SET @result=@result+SUBSTRING(@str,@start,@end)
SET @str=REPLACE(@str,SUBSTRING(@str,@start,@end),'')
SET @count=@count+@datalen_tocheck
END
RETURN @result
END
GO
Usage:
SET CONCAT_NULL_YIELDS_NULL OFF
SELECT dbo.Remove_duplicate_instr(<character length of the duplicate substring>, <string containing duplicates>)
Example:
To keep the character set in a string unique and remove duplicate 3-character-long substrings, run this UDF as an inline function.
SET CONCAT_NULL_YIELDS_NULL OFF
SELECT dbo.Remove_duplicate_instr(3,'123456789123456456')
Resultset:
123456789
Tuesday, April 7, 2009
What is NOLOCK ?
What is use of EXCEPT clause?
What is Isolation Levels?
What is LINQ?
LINQ (Language Integrated Query) adds the ability to query objects directly from .NET languages. LINQ to SQL provides:
- Tools to create classes (usually called entities) mapped to database tables
- Compatibility with LINQ’s standard query operations
- The DataContext class, with features such as entity record monitoring, automatic SQL statement generation, record concurrency detection, and much more
What are synonyms?
What is CLR?
How can we rewrite sub‐queries into simple select statements or with joins?
E.g.
USE AdventureWorks
GO
WITH EmployeeDepartment_CTE AS
( SELECT EmployeeID,DepartmentID,ShiftID FROM HumanResources.EmployeeDepartmentHistory )
SELECT ecte.EmployeeId,ed.DepartmentID, ed.Name,ecte.ShiftID
FROM HumanResources.Department ed
INNER JOIN EmployeeDepartment_CTE ecte ON ecte.DepartmentID = ed.DepartmentID
GO
What are the Advantages of using CTE?
- Using CTE improves the readability and makes maintenance of complex queries easy.
- The query can be divided into separate, simple, logical building blocks which can be then used to build more complex CTEs until final result set is generated.
- CTE can be defined in functions, stored procedures, triggers or even views.
- After a CTE is defined, it can be used as a Table or a View and can SELECT, INSERT, UPDATE or DELETE Data.
Which are new data types introduced in SQL SERVER 2008?
The GEOGRAPHY Type: The GEOGRAPHY datatype’s functions are the same as with GEOMETRY. The difference between the two is that when you specify GEOGRAPHY, you are usually specifying points in terms of latitude and longitude.
New Date and Time Datatypes: SQL Server 2008 introduces four new datatypes related to date and time: DATE, TIME, DATETIMEOFFSET, and DATETIME2 (a short declaration example follows this list).
- DATE: The new DATE type just stores the date itself. It is based on the Gregorian calendar and handles years from 1 to 9999.
- TIME: The new TIME(n) type stores time with a range of 00:00:00.0000000 through 23:59:59.9999999. Fractional-second precision can be specified with this type: TIME supports seconds down to 100 nanoseconds, and the n in TIME(n) defines the level of fractional-second precision, from 0 to 7 digits.
- The DATETIMEOFFSET Type: DATETIMEOFFSET (n) is the time‐zone‐aware version of a datetime datatype. The name will appear less odd when you consider what it really is: a date + a time + a time‐zone offset. The offset is based on how far behind or ahead you are from Coordinated Universal Time (UTC) time.
- The DATETIME2 Type: It is an extension of the datetime type in earlier versions of SQL Server. This new datatype has a date range covering dates from January 1 of year 1 through December 31 of year 9999. This is a definite improvement over the 1753 lower boundary of the datetime datatype. DATETIME2 not only includes the larger date range, but also includes a time portion with the same fractional precision that the TIME type provides.
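A quick way to try the new types (the literal values below are made up purely for illustration):
DECLARE @d DATE = '2008-11-01'
DECLARE @t TIME(7) = '23:59:59.9999999'
DECLARE @dto DATETIMEOFFSET(7) = '2008-11-01 10:30:00.0000000 +05:30'
DECLARE @dt2 DATETIME2(7) = SYSDATETIME()
SELECT @d AS DateValue, @t AS TimeValue, @dto AS DateTimeOffsetValue, @dt2 AS DateTime2Value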
What is Filtered Index?
What is MERGE Statement?
What is CTE?
What does TOP Operator Do?
What are Sparse Columns?
What is Replication and Database Mirroring?
What is Policy Management?
What is Service Broker?
Service Broker is a feature that gives SQL Server the ability to send asynchronous, transactional messages.
It allows a database to send a message to another database without waiting for the response, so the application will continue to function if the remote database is temporarily unavailable.
What are the basic functions for master, msdb, model, tempdb and resource databases? (sql server 2008)
The master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in the instance.
The resource database is a read‐only database that contains all the system objects that are included with SQL Server. SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they logically appear in the sys schema of every database. The Resource database does not contain user data or user metadata.
What is Data Warehousing?
•Subject‐oriented, meaning that the data in the database is organized so that all the data elements relating to the same real‐world event or object are linked together.
•Time‐variant, meaning that the changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time.
•Non‐volatile, meaning that data in the database is never over‐written or deleted, once committed, the data is static, read‐only, but retained for future reporting.
•Integrated, meaning that the database contains data from most or all of an organization's operational applications, and that this data is made consistent.
What is Identity?
What is User Defined Functions? What kind of User‐Defined Functions can be created?
Different Kinds of User‐Defined Functions created are:
Scalar User‐Defined Function
A Scalar user‐defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user‐defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.
Inline Table‐Value User‐Defined Function
An Inline Table‐Value user‐defined function returns a table data type and is an exceptional alternative to a view as the user‐defined function can pass parameters into a T‐SQL select command and in essence provide us with a parameterized, non‐updateable view of the underlying tables.
Multi‐statement Table‐Value User‐Defined Function
A Multi‐Statement Table‐Value user‐defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T‐SQL statements to build the final result where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T‐SQL select command, or a group of them, gives us the capability to in essence create a parameterized, non‐updateable view of the data in the underlying tables. Within the create function command you must define the table structure that is being returned. After creating this type of user‐defined function, it can be used in the FROM clause of a T‐SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.
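As a small illustration, a scalar UDF and an inline table-valued UDF might look like this (hypothetical function names, not tied to any table above):
-- Scalar UDF: returns a single value
CREATE FUNCTION dbo.fn_Square (@n INT)
RETURNS INT
AS
BEGIN
RETURN @n * @n
END
GO
-- Inline table-valued UDF: behaves like a parameterized view
CREATE FUNCTION dbo.fn_ObjectsByType (@type CHAR(2))
RETURNS TABLE
AS
RETURN (SELECT name, object_id FROM sys.objects WHERE type = @type)
GO
SELECT dbo.fn_Square(5) AS Squared
SELECT * FROM dbo.fn_ObjectsByType('U')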
What are primary keys and foreign keys?
A primary key is a column (or combination of columns) that uniquely identifies each row in a table and cannot contain NULL values. Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.
What are different Types of Join?
Cross Join
A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.
Inner Join
A join that displays only the rows that have a match in both joined tables is known as inner Join. This is the default type of join in the Query and View Designer.
Outer Join
A join that includes rows even if they do not have related rows in the joined table is an Outer Join. You can create three different outer join to specify the unmatched rows to be included:
•Left Outer Join: In Left Outer Join all rows in the first‐named table i.e. "left" table, which appears leftmost in the JOIN clause are included. Unmatched rows in the right table do not appear.
•Right Outer Join: In Right Outer Join all rows in the second‐named table i.e. "right" table, which appears rightmost in the JOIN clause are included. Unmatched rows in the left table are not included.
•Full Outer Join: In Full Outer Join all rows in all joined tables are included, whether they are matched or not.
Self Join
This is a particular case when one table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. The common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another. A self join can be an outer join or an inner join.
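For illustration, using the AdventureWorks tables referenced elsewhere in these notes (assuming AdventureWorks is installed):
-- Inner join: only departments that have matching history rows
SELECT d.Name, h.EmployeeID
FROM HumanResources.Department d
INNER JOIN HumanResources.EmployeeDepartmentHistory h
ON h.DepartmentID = d.DepartmentID
-- Left outer join: every department, with NULLs where no history row matches
SELECT d.Name, h.EmployeeID
FROM HumanResources.Department d
LEFT OUTER JOIN HumanResources.EmployeeDepartmentHistory h
ON h.DepartmentID = d.DepartmentID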
What is sub‐query? Explain properties of sub‐query?
A subquery is a SELECT statement that is nested within another T‐SQL statement.
A subquery SELECT statement, if executed independently of the T‐SQL statement in which it is nested, will return a result set. This means a subquery SELECT statement can stand alone and is not dependent on the statement in which it is nested. A subquery SELECT statement can return any number of values and can be found in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T‐SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery can be used anywhere an expression can be used.
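A small example of a subquery in a WHERE clause, again against AdventureWorks (assumed installed):
SELECT AddressID, City, StateProvinceID
FROM Person.Address
WHERE StateProvinceID IN
(SELECT StateProvinceID FROM Person.StateProvince WHERE CountryRegionCode = 'US')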
What is Difference between Function and Stored Procedure?
What is Collation?
What is Cursor?
In order to work with a cursor we need to perform some steps in the following order (a minimal sketch follows the list):
•Declare cursor
•Open cursor
•Fetch row from the cursor
•Process fetched row
•Close cursor
•Deallocate cursor
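A minimal sketch of these steps, iterating over the table names in sys.tables purely for illustration:
DECLARE @name sysname
DECLARE table_cursor CURSOR FAST_FORWARD FOR
SELECT name FROM sys.tables
OPEN table_cursor
FETCH NEXT FROM table_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @name -- process the fetched row
FETCH NEXT FROM table_cursor INTO @name
END
CLOSE table_cursor
DEALLOCATE table_cursor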
What is a Linked Server?
What is Index?
What is View?
What is Trigger?
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
What is Stored Procedure?
e.g. sp_depends, sp_helpdb, sp_renamedb etc.
What are different normalization forms?
1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi‐valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
BCNF: Boyce‐Codd Normal Form
If there are non‐trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many‐to‐many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain‐Key Normal Form
A model free from all modification anomalies is said to be in DKNF.
Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.
What is De‐normalization?
What is Normalization?
What are the properties of the Relational tables?
- Values are atomic.
- Column values are of the same kind.
- Each row is unique.
- The sequence of columns is insignificant.
- The sequence of rows is insignificant.
- Each column must have a unique name.
What is RDBMS?
Monday, April 6, 2009
Paging Records Using a Stored Procedure
CREATE PROCEDURE SP_Paginet
(
@Page int,
@RecsPerPage int
)
AS
-- We don't want to return the # of rows inserted
-- into our temporary table, so turn NOCOUNT ON
SET NOCOUNT ON
--Create a temporary table
CREATE TABLE #TempItems
(
ID int IDENTITY,
Name varchar(50),
Price money
)
-- Insert the rows from tblItems into the temp. table
INSERT INTO #TempItems (Name, Price)
SELECT Name,Price FROM tblItem ORDER BY Price
-- Find out the first and last record we want
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)
-- Now, return the set of paged records, plus an indication of whether we
-- have more records or not!
SELECT *,
MoreRecords =
(
SELECT COUNT(*)
FROM #TempItems TI
WHERE TI.ID >= @LastRec
)
FROM #TempItems
WHERE ID > @FirstRec AND ID < @LastRec
-- Turn NOCOUNT back OFF
SET NOCOUNT OFF
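For example, to fetch the second page of ten records with the procedure above:
EXEC SP_Paginet @Page = 2, @RecsPerPage = 10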
SQL SERVER - Logical Query Processing Phases - Order of Statement Execution
What actually sets SQL Server apart from other programming languages is the way it processes its code. Generally, most programming languages process statements from top to bottom. By contrast, SQL Server processes them in a unique order, known as the Logical Query Processing Phases. These phases generate a series of virtual tables, with each virtual table feeding into the next phase (the virtual tables are not viewable). The phases and their order are as follows (a short illustration follows the list):
1. FROM
2. ON
3. OUTER
4. WHERE
5. GROUP BY
6. CUBE | ROLLUP
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP
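One practical consequence of this order: because WHERE (phase 4) is evaluated before SELECT (phase 8), a column alias defined in the SELECT list cannot be referenced in the WHERE clause. A small illustration against AdventureWorks (assumed installed):
-- Fails: the alias Town does not exist yet when the WHERE clause is processed
-- SELECT AddressID, City AS Town FROM Person.Address WHERE Town = 'Bothell'
-- Works: reference the underlying column instead
SELECT AddressID, City AS Town
FROM Person.Address
WHERE City = 'Bothell'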