
ADO.NET - PART II


ADO.NET
When the .NET Framework was released, Microsoft introduced a new data access model called ADO.NET. The ActiveX Data Objects acronym was no longer accurate, as ADO.NET is not based on ActiveX, but Microsoft kept the name because of the huge success of ADO. In reality, ADO.NET is an entirely new data access model written for the .NET Framework.
ADO.NET supports various data sources through both ODBC and OLE-DB, but it also offers the option of using database-specific data providers.
These data providers offer greater performance by being able to take advantage of data-source-specific optimizations.
The original release of ADO.NET included a SQL provider and an OLE-DB provider, with the ODBC and Oracle providers being introduced later. But it supports a number of other providers too.

Standardized Database Objects and Design
Foreign Keys
In the context of relational databases, a foreign key is a referential constraint between two tables. The foreign key identifies a column or a set of columns in one (referencing) table that refers to a column or set of columns in another (referenced) table. The referenced columns must form a primary key or unique key in the referenced table. The values in one row of the referencing columns must occur in a single row in the referenced table.
Thus, a row in the referencing table cannot contain values that don't exist in the referenced table. This way references can be made to link information together and it is an essential part of database normalization. Multiple rows in the referencing table may refer to the same row in the referenced table. Most of the time, it reflects the one (master table, or referenced table) to many (child table, or referencing table) relationship.
The referencing and referenced table may be the same table, i.e. the foreign key refers back to the same table. Such a foreign key is known in SQL:2003 as self-referencing or recursive foreign key.
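For example, an employee/manager hierarchy is commonly modelled with a self-referencing foreign key. A sketch, using illustrative table and column names:

```sql
CREATE TABLE Employees (
   EmployeeID  INTEGER PRIMARY KEY,
   EmpName     CHARACTER VARYING(50),
   ManagerID   INTEGER,
   -- ManagerID points back at another row of this same table
   CONSTRAINT manager_fk FOREIGN KEY (ManagerID)
      REFERENCES Employees (EmployeeID)
)
```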
A table may have multiple foreign keys, and each foreign key can have a different referenced table. Each foreign key is enforced independently by the database system. Therefore, cascading relationships between tables can be established using foreign keys.
Improper foreign key/primary key relationships or not enforcing those relationships are often the source of many database and data modeling problems.
Defining Foreign Keys
Foreign keys are defined in the ANSI SQL standard through a FOREIGN KEY constraint. The syntax to add such a constraint to an existing table is defined in SQL:2003 as shown below. Omitting the column list in the REFERENCES clause implies that the foreign key shall reference the primary key of the referenced table.
ALTER TABLE table_name
   ADD CONSTRAINT constraint_name FOREIGN KEY (column_list)
      REFERENCES referenced_table (column_list)
Likewise, foreign keys can be defined as part of the CREATE TABLE SQL statement.
CREATE TABLE table_name (
   id    INTEGER  PRIMARY KEY,
   col2  CHARACTER VARYING(20),
   col3  INTEGER,
   ...
   CONSTRAINT col3_fk FOREIGN KEY(col3)
      REFERENCES other_table(key_col) ON DELETE CASCADE,
   ... )
If the foreign key is a single column only, the column can be marked as such using the following syntax:
CREATE TABLE table_name (
   id    INTEGER  PRIMARY KEY,
   col2  CHARACTER VARYING(20),
   col3  INTEGER FOREIGN KEY REFERENCES other_table(column_name),
   ... )

Standardized Database Objects and Design
Indexes
While indexes are not essential to database design, they are something worth considering. Indexes enable the database server to locate information in your tables quickly, making search operations much faster than on the same tables without indexes.
When designing your tables, you need to think about how they will be used; especially which tables will be heavily searched or joined into extensively. Once you have worked this out, it’s relatively easy to go into the tables and assign indexes to the columns that are heavily used.
Indexes enable the database to search only a small part of the data, rather than the entire table. Thus, they make searching tables much faster, and the benefits can be huge in terms of performance. Keep in mind that tables with indexes will be slower when running INSERT, UPDATE, and DELETE statements, so indexes should only be added to tables that really need fast access.
The columns that should have indexes are those referenced in JOIN or WHERE clauses. In SQL Server, primary keys automatically have indexes assigned to them, but foreign keys do not, so you will need to create those indexes manually. When you are ready to optimize your current or new systems, always try to apply some indexes to your tables. You could well see dramatic increases in performance.
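For instance, a foreign key column that is joined on frequently can be indexed by hand. A sketch in SQL Server syntax, using illustrative names:

```sql
-- Speeds up joins and WHERE lookups on the foreign key column
CREATE NONCLUSTERED INDEX IX_CustomerPets_CustomerID
   ON CustomerPets (CustomerID)
```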

Standardized Database Objects and Design
Views
More often than not, you will be working with a normalized database schema; and when working with a normalized schema, one of the major drawbacks is the complexity of queries. Views are a great way to hide that complexity by presenting your code with clean, tidy result sets.
Views enable you to look at a table, or a number of tables, in whatever way you want. They act like tables in terms of running CRUD operations against them, but the underlying data might not actually exist in the format in which it’s represented. Views enable you to easily abstract your data from the real database layout or to present compiled views of data.
Views also enable you to encapsulate complex queries across many tables and return just the data you need, which can be very useful when you have a fully normalized database and wish to retrieve data that's ready for the presentation layer.

Standardized Database Objects and Design
Help with Normalization
You can use views to normalize data that's all mashed together in the same tables, such as using a SELECT DISTINCT on a text column of a table that's not normalized to give you a lookup table of the values within:
Create View AllCustomerTypes
As
Select Distinct CustomerType
From Customers
GO
Performing a select on all of the columns in the AllCustomerTypes view returns a normalized list of all possible customer types:
CustomerType
-------------------------------------------------------------------------------------
Adored
Beloved
Tolerated
Enforcing Security
By only allowing access to your database with views, you can completely control the data that your users are allowed to see, while hiding the real data and database schema from them. An example of this might be allowing your sales assistants to see what type of pets someone has, but not the customer’s address:
Create View ViewCustomersListPets
As
Select CustomerID, P.PetName
From CustomerPets CP
Join Pets P
On P.PetID = CP.PetID
GO
In this example, anyone searching for information on the customer can only gain access to the view and thus only the information exposed by the view. The salespeople can only run queries like this:
Select CustomerID,
PetName
From ViewCustomersListPets
Where CustomerID = 1
The query would return data like the following:
CustomerID PetName
---------- -------
1          Dog
1          Rat
As you can see, views can be used to lock down the kind of access you provide to your applications, making your database a much safer environment to let users loose on.
Normalizing
This is a topic that is often overcomplicated, but is actually quite simple when you understand it. The best way to describe normalizing a database is to think of it as breaking your data down into its logical units, i.e., the smallest possible objects that make up your data. Then, for each of them, you need to make sure that it lives in its own table and that the table relates to other tables where appropriate.
Why Normalize Data?
Normalization is probably the most important part of database design. Without it, you can get into a real mess – if not right now, then when you need to make changes to your schema by adding more functionality or by fixing design flaws.
Data should be normalized to remove repeated or redundant data. Doing this instantly reduces the amount of space your databases need to occupy, as well as speeds up scans of your tables – with the free bonuses of gaining increased maintainability and scalability.
By reducing the amount of repeated data, the chance of finding errors in your data is dramatically decreased simply because there is less data to contain errors.
Types of Normalization
There are many different degrees of normalization in both academic and real-world settings. In most cases, the different levels of normalization are linear – for example, there are the nth Normal Forms, whereby First Normal Form is the simplest and Third Normal Form is more complicated.
There are four main forms of normalization. Each one represents a different level of normalization – from not normalized at all to the nirvana of normalization.
First Normal Form
With the first form, the goal is to remove any repeated groups of data from any tables. That’s an easy concept to grasp: If you have got the same few text labels appearing in a table, then they need to be normalized into their own table, and a foreign key should be added to the table that is storing your data. In addition, if you have multiple columns in a table storing the same type of information, this should also be hived off to a separate table. Remove repeated groups of data.
Second Normal Form
Using the second form, all rows containing the same information should have that information hived off to separate tables, with foreign keys linking the data.
Third Normal Form
When trying to achieve third normal form, all columns in the table that are not directly dependent on the primary key should be removed and placed in their own tables, with foreign keys creating relationships between the tables.
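As an illustrative sketch (the table names here are hypothetical), suppose an Orders table also stored the customer's city. City depends on the customer, not on the order's primary key, so third normal form moves it into its own table:

```sql
-- Before: Orders(OrderID, CustomerID, City) -- City depends on CustomerID
-- After: the transitive dependency is removed
CREATE TABLE Customers (
   CustomerID  INTEGER PRIMARY KEY,
   City        CHARACTER VARYING(50)
)

CREATE TABLE Orders (
   OrderID     INTEGER PRIMARY KEY,
   CustomerID  INTEGER,
   CONSTRAINT orders_customers_fk FOREIGN KEY (CustomerID)
      REFERENCES Customers (CustomerID)
)
```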
Domain\Key Normal Form
This form is the nirvana of normalization. As mentioned previously, databases can't be more normalized than they are when they are in domain\key normal form. Each row of every table has its own identifying column.
Real-world Normalization
Outside of the academic world, there is one form of normalization: The one that works for you for the current project in the allowed time frame.
If you are a masochist and have been given enough time to use the domain\key model, give it a whirl. However, if your boss is breathing down your neck with a mandate to finish the application yesterday, sometimes it's better to just go with what works with the least amount of effort, and deal with the consequences later.
In 99 percent of the cases, the Second Normal Form is the level you should try to achieve in your projects. It offers the best level of maintenance without too much of a compromise on performance.
Designing a Normalized Database
To create a normalized database, you first need to understand the applications that will be using your database: how they need to work, their feature sets, and any further modifications or features that are to be implemented in future releases. Once you understand the applications, you can begin to map out the database structure.
First, write down a list of all the “things” in your applications, be they customers, pets, products or countries – each of these items will need their own table. Second, work out the different ways in which you need to pull all the data together to provide the kind of information you will need.
For example, if you want to know which of your customers in England have a pet badger and bought your kitten warmers, you must have a Purchase table. If you don’t, you will need to add one. Third, map out all of your tables on paper or using a database design tool, such as the diagram tool in SQL Server 2000 Enterprise Manager.
Fourth, sit down with each of the application-specification documents again, going through each of the features you required and making sure your database can provide it. If you identify any feature that the database can’t fulfill, rethink your design so that it can. Once you have gone through the fourth step a few times, you should be ready to start creating your tables, so dive in, remember the rules, and follow your design.
Ensuring Quality Data
If you are at the beginning of a project and you are about to start putting the database together, look carefully at your application design documents and make sure you understand the functionality required inside out.
Run through each feature and try to visualize the database; you will soon see where you need one-to-many relationships, where you will need fast tables to search, and where you will need to version your records. All of this needs to be factored into your schema before you dive in and put the database together.
Validating the quality of your data is probably the most important aspect of database design after the layout of your tables. If your data is corrupted or has invalid values, then all of the applications running off the database are susceptible to unknown outcomes, and potentially could crash.
For example, in the following table of products, we don't want any products in the database that are priced less than Rs.10,000 or greater than Rs.1,000,000. You can see the check constraint called CK_ProductPrice that enforces this business rule:
CREATE TABLE [Products] (
[ProductID] [int] IDENTITY (1, 1) NOT NULL,
[ProductName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ProductPrice] [Money] NULL,
CONSTRAINT CK_ProductPrice CHECK
( ProductPrice >= 10000 And ProductPrice <= 1000000 ),
CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
(
 [ProductID]
) ON [PRIMARY]
) ON [PRIMARY]
GO
Now suppose someone tried to insert a row into the database that doesn’t pass our check constraint, such as the following:
Insert Into Products
( ProductName, ProductPrice)
Values 
('My Socks', 5)
The database will very helpfully throw out a check constraint conflict error message, preventing those socks from ever appearing in the products list:
INSERT statement conflicted with COLUMN CHECK constraint 'CK_ProductPrice'. The conflict occurred in database 'ADO.NET', table 'Products', column 'ProductPrice'. The statement has been terminated.
To summarize, the database should be designed around the data and required functionality of your applications. That way, you will get the database that you need, and your boss will shower you with lavish praise.
Making a Flat Database Normalized

ADO.NET Essentials
Overview of .NET Data Provider
.NET data providers form the interface between the application and the data source. The data providers that the .NET Framework supplies allow you to connect to a data source, execute commands, and retrieve results. You can either work directly with the data retrieved by the data provider or store it in an ADO.NET DataSet object. DataSets allow you to store and manipulate data from multiple data sources, and you can use a DataSet to transfer data between different tiers.
The .NET Framework provides the OLE DB .NET data provider to access data from OLE DB data sources and the SQL Server .NET data provider to access data from Microsoft SQL Server (version 7.0 or later). You can also use the OLE DB .NET data provider to access data from SQL Server (version 7.0 or later), but the SQL Server .NET data provider makes native calls to SQL Server, which is why it is more efficient than the OLE DB .NET data provider.
You can also use the ODBC.NET data provider and the .NET data provider for Oracle to access data from the ODBC data source and Oracle database server, respectively.
The .NET data provider consists of these four core objects:
- Connection
- Command
- DataReader
- DataAdapter
Connection object: This object provides a connection to a data source.
Command object: This object executes a command against a data source.
DataReader: This object provides a read-only, forward-only stream of data from a data source.
DataAdapter: This object populates a dataset and resolves updates with the data source.
The Connection Class
The Connection class allows you to create a connection with a data source. To make a connection, you provide the data provider's connection class with information about the data source, such as the name of the database, the source computer, the data provider, the user name, and the password. The syntax for creating a connection using the SqlConnection class is shown in the following code.
using System.Data.SqlClient;
// Creating an instance of the connection object
SqlConnection connection = new SqlConnection();
// Declaration of connection string
connection.ConnectionString =
"Data Source = localhost; user id=sa; pwd=; " +
"Initial Catalog = Northwind";
// Open the connection
connection.Open();

The Command Class
The Command class represents a query statement or a stored procedure that you want to execute against a data source. For example, you can use the SqlCommand object to execute a batch insert or a batch update against a data source. The syntax for creating a SqlCommand object is provided in the following code.
using System.Data.SqlClient;
// Creating an instance of the connection object
SqlConnection myConnection = new SqlConnection();
// Creating an instance of the command object
SqlCommand myCommand = new SqlCommand();
// Create a query string
string mySQL = "Select COUNT(ProductID) from Products";
// Declaration of connection string
myConnection.ConnectionString =
"Data Source = localhost; user id=sa; pwd=; " +
"Initial Catalog = Northwind";
// Open the connection
myConnection.Open();
// Assign the connection to the command object
myCommand.Connection = myConnection;
// Assign the query string to the command object
myCommand.CommandText = mySQL.Trim();
// Show the result of the query string
Console.WriteLine(myCommand.ExecuteScalar().ToString());
After you specify the Connection and CommandText properties of the SqlCommand object, you can execute the specified command against the selected data source using the ExecuteScalar(), ExecuteReader(), or ExecuteNonQuery() method.
The ExecuteScalar Method
This method executes a SQL query statement against a specified data source and returns a single value: the value in the first row and first column of the result. The rest of the values are ignored. The ExecuteScalar() method is commonly used to execute aggregate functions on a table.
For example, if you want to retrieve the number of employees residing in the city 'Delhi', you need to use the COUNT function. The SQL statement might look like this:
SELECT COUNT(*) FROM Employees WHERE City = 'Delhi';
This SQL statement returns just one value that is part of a single row and single column.
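The same query can be run from code with ExecuteScalar(). The sketch below passes the city value as a parameter; the connection string follows the earlier examples and is an assumption:

```csharp
using System;
using System.Data.SqlClient;

SqlConnection myConnection = new SqlConnection(
    "Data Source = localhost; user id=sa; pwd=; Initial Catalog = Northwind");
SqlCommand myCommand = new SqlCommand(
    "SELECT COUNT(*) FROM Employees WHERE City = @City", myConnection);
// Passing the value as a parameter keeps the literal out of the SQL text
myCommand.Parameters.AddWithValue("@City", "Delhi");
myConnection.Open();
// ExecuteScalar returns the value in the first row and first column
int employeeCount = (int)myCommand.ExecuteScalar();
Console.WriteLine(employeeCount);
myConnection.Close();
```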
The ExecuteReader Method
The ExecuteReader method executes a SQL statement or a stored procedure against the available data source and returns a SqlDataReader object. The following code shows how to use the ExecuteReader method to retrieve a SqlDataReader object.
SqlCommand myCommand = new SqlCommand();
SqlConnection myConnection = new SqlConnection();
string mySQL = "Select * from Products";
myConnection.ConnectionString = "Data Source = localhost; " +
"user id=sa; pwd=; Initial Catalog = Northwind";
myConnection.Open();
myCommand.Connection = myConnection;
myCommand.CommandText = mySQL.Trim();
SqlDataReader myReader;
myReader = myCommand.ExecuteReader();
Console.WriteLine(myReader.FieldCount.ToString());
The ExecuteReader method returns a SqlDataReader object that contains a read-only, forward-only copy of the data from the data source.
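Because the reader is forward-only, you typically consume it row by row with Read() before closing it. A minimal sketch, assuming the myCommand object set up above:

```csharp
SqlDataReader myReader = myCommand.ExecuteReader();
// Read() advances to the next row and returns false when no rows remain
while (myReader.Read())
{
    Console.WriteLine("{0} - {1}",
        myReader["ProductID"], myReader["ProductName"]);
}
// Close the reader so the connection can be used for other commands
myReader.Close();
```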
The ExecuteNonQuery Method
The ExecuteNonQuery method is used to execute INSERT, UPDATE, or DELETE statements. It executes these statements against the data source and returns an integer specifying the number of rows affected. The following code shows how to use the ExecuteNonQuery method.
SqlCommand myCommand = new SqlCommand();
SqlConnection myConnection = new SqlConnection();
string mySQL = "INSERT INTO PRODUCT VALUES(…)";
myConnection.ConnectionString = "Data Source = localhost; " +
"user id=sa; pwd=; Initial Catalog = Northwind";
myConnection.Open();
myCommand.Connection = myConnection;
myCommand.CommandText = mySQL.Trim();
int NoOfRows = 0;
NoOfRows = myCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", NoOfRows);
Data Source
A data source is a database server for which the .Net Framework provides a data provider or an XML file. To access the data source, you create a connection to the data source using the data providers of ADO.NET. The ADO.NET data providers enable you to establish a connection with a data source and perform other tasks, such as executing SQL commands on data sources.

ADO.NET Essentials
Connection Strings
Everybody needs connection strings at one time or another, and while they are incredibly simple to construct and use, they are also often misunderstood. ADO.NET offers three ways to create your connection strings. First, you can specify a simple string, like this:
String ConnectionString = "server=localhost; uid=eBIZ; pwd=eBIZHost; Initial Catalog=MyDb";
The second way is to use the new ConnectionStringBuilder classes built into each of the providers. The third way is to use the ConnectionStrings collection exposed through the configuration API and the connectionStrings section of the configuration files.
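The builder approach for the SqlClient provider can be sketched as follows; the builder exposes the provider's keywords as properties and handles the escaping for you:

```csharp
using System.Data.SqlClient;

SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = "localhost";
builder.UserID = "eBIZ";
builder.Password = "eBIZHost";
builder.InitialCatalog = "MyDb";
// ConnectionString assembles the keyword/value pairs in provider syntax
string connectionString = builder.ConnectionString;
```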
Before moving on, it’s important to understand the different options open to you regarding the connection strings for your provider. Familiarize yourself with the documentation for your chosen provider.
Connection String Collection
With .NET 2.0, there's a new way to store and use your connection strings in the Framework. Microsoft has brought together the configuration files (app.config and web.config) and the managed System.Configuration API to give us a really neat way of loading our connection strings.
Under the configuration section in the config files is a new element called connectionStrings. This is where all the connection strings for your application can live:
<connectionStrings>
 <add name="UserDatabase"
      providerName="System.Data.SqlClient"
      connectionString="server=localhost; uid=myuser; pwd=mypassword"
 />
</connectionStrings>
Once your connection strings are stored inside the configuration files, it's simple to actually get at them. The following line of code shows the retrieval of a connection string:
ConfigurationManager.ConnectionStrings["UserDatabase"].ConnectionString;
The ConfigurationManager class lives inside the System.Configuration namespace.
Remember: If you are adding a connection string, create it in the connectionStrings section rather than adding another entry to the appSettings section of your configuration files.
Note: Always try to encrypt your connection strings. That way, if someone does manage to take a peek inside your configuration files, they still have to decrypt the strings before they can be used.
Provider-Specific Connection Strings
Each database server allows connections in different ways, and while they all share common features such as "username" and "password", the way those features are spelled varies among providers. Some common examples of the differences for the four main providers are listed in the following table.
| Feature | Odbc | OleDb | SqlClient | OracleClient |
|---|---|---|---|---|
| Database server | Server | Data Source | Server or Data Source | Server or Data Source |
| Username | UID | User ID | UID or User ID | User ID |
| Password | PWD | Password | PWD or Password | Password |
| Should integrated security be used? | Trusted_Connection | Integrated Security | Trusted_Connection or Integrated Security | Integrated Security |
| Database/catalog to connect to | Database | Initial Catalog | Database or Initial Catalog | |
| Use connection pooling | | OLE DB Services | Pooling | Pooling |

ADO.NET Essentials
Connections
Opening a Connection (and Making Sure It’s Open)
Before you start working on a database, you will need to open a connection to it.
string constr =
    "Data Source=UJJWAL22;Initial Catalog=calendar;Integrated Security=True";
SqlConnection con = new SqlConnection(constr);
try
{
    if (con.State == ConnectionState.Closed)
        con.Open();
    // ... do your database work here ...
    con.Close();
}
catch (Exception ee)
{
    MessageBox.Show(ee.ToString(), "eBIZ",
        MessageBoxButtons.OK, MessageBoxIcon.Information);
}

When you are doing any database work, it’s important to only have the connection open as long as it’s absolutely necessary so that it can be used by others when needed. Get in, do your stuff, and get right out again.
Closing a Connection (and Making Sure It’s Closed)
Many developers don't give a thought to closing their connections, but closing a connection is just as important as opening one – if not more so. If your code for closing connections is buggy or faulty, you might run out of available connections and your applications can grind to a halt under heavy use.
string constr =
    "Data Source=UJJWAL22;Initial Catalog=calendar;Integrated Security=True";
SqlConnection con = new SqlConnection(constr);
try
{
    con.Open();
    // ... do your database work here ...
}
catch (Exception ee)
{
    MessageBox.Show(ee.ToString(), "eBIZ",
        MessageBoxButtons.OK, MessageBoxIcon.Information);
}
finally
{
    // Guarantee the connection is closed, even after an exception
    if (con.State == ConnectionState.Open)
        con.Close();
}
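In C#, a using block is the simplest way to guarantee the close: the connection is disposed (and therefore closed) when the block ends, even if an exception is thrown. A sketch:

```csharp
string constr =
    "Data Source=UJJWAL22;Initial Catalog=calendar;Integrated Security=True";
using (SqlConnection con = new SqlConnection(constr))
{
    con.Open();
    // ... do your database work here ...
}   // Dispose closes the connection, even if an exception occurred
```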


ADO.NET Essentials
DataSet
ADO.NET DataSets enable you to store data from multiple data sources. You can create relationships between the tables in a DataSet, even though those relationships might not exist between the tables in the data sources. In addition, you can create different data views from the data contained within the DataSets.
Understanding ADO.NET DataSet
ADO.NET DataSets support an in-memory, disconnected data architecture. You establish a connection to a data source, retrieve records into a DataSet, and then close the connection. This enables you to work with the data without requiring a live connection to the database. In-memory, disconnected data enables you to minimize the number of live connections to a database.
In addition, the disconnected data architecture reduces the number of round trips to a database server and allows the database server to fulfill other requests. The in-memory, disconnected architecture provides quick access to data because the DataSet is present in the memory of a client computer, and all data operations, such as updating tables, occur at the client level.
An ADO.NET DataSet is a collection of tables and information about the relationships between tables. Each table further consists of collections of rows, columns, and constraints. You use the following classes to work with Datasets in ADO.NET:
DataSet Class: This class includes the Tables collection of the tables in a DataSet. In addition, the DataSet class contains the Relations collection that represents the relationships between the tables in a DataSet.
DataTable class: This class includes the Rows and Columns collections that represent rows and columns. In addition, the DataTable class contains the ChildRelations and ParentRelations collections that represent relationships between tables.
DataRow class: This class includes the RowState property. The RowState property indicates whether a row has changed after loading the data table from a database. The RowState property can take the following values: Added, Deleted, Detached, Modified, and Unchanged.
Figure: The structure of ADO.NET DataSet
A DataTable also contains the Constraints collection. The Constraints collection allows DataSets to implement all constraints, including the constraints defined at the database level. When you modify the data of a table in a DataSet, the DataSet enforces all the constraints at the client level. Therefore, constraint violations are caught before a DataSet connects to a database to perform an update operation, and a wasted round trip from the client to the database server is avoided. The above figure displays the structure of an ADO.NET DataSet.
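The RowState transitions described above can be observed with a small in-memory table; the column and table names here are illustrative:

```csharp
using System;
using System.Data;

DataTable table = new DataTable("Employees");
table.Columns.Add("EmployeeID", typeof(string));

DataRow row = table.Rows.Add("E1");
Console.WriteLine(row.RowState);   // Added
table.AcceptChanges();
Console.WriteLine(row.RowState);   // Unchanged
row["EmployeeID"] = "E2";
Console.WriteLine(row.RowState);   // Modified
```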
Creating ADO.NET DataSets
You can create an ADO.NET DataSet by creating an object of the DataSet class, which is present in the System.Data namespace. When you create an object of the DataSet class by calling the DataSet constructor, you can optionally pass the name of the DataSet as an argument to the constructor. For example:
// Create a new instance of the DataSet class
DataSet EmployeeDS = new DataSet("Employees");
After you create a DataSet, you can populate the DataSet by adding tables to the DataSet. To add tables to a DataSet, you create DataTable objects and add them to a DataSet.
DataSet EmployeeDS = new DataSet("Employees");
DataTable EmpTable = EmployeeDS.Tables.Add("Employees");
Adding Columns to a Table in a DataSet
After you add a table to a DataSet, you add columns to the table according to your requirements. The following code shows how to populate a table with columns.
DataSet EmployeeDS = new DataSet("Employees");
DataTable EmpTable = EmployeeDS.Tables.Add("Employees");
// Add columns to the table
EmpTable.Columns.Add("EmployeeID", Type.GetType("System.String"));
EmpTable.Columns.Add("LastName", Type.GetType("System.String"));
EmpTable.Columns.Add("FirstName", Type.GetType("System.String"));
Setting Constraints on a Table in a DataSet
When you add columns to a table in a DataSet, you can set constraints, such as a primary key, by using the PrimaryKey property of the DataTable object and the Unique property of the DataColumn object. The following code shows how to add a primary key constraint on a column.
DataSet EmployeeDS = new DataSet("Employees");
DataTable EmpTable = EmployeeDS.Tables.Add("Employees");

// Specifying an identifier for the primary key column
DataColumn pkCol = EmpTable.Columns.Add
("EmployeeID", Type.GetType("System.String"));
EmpTable.Columns.Add("LastName", Type.GetType("System.String"));
EmpTable.Columns.Add("FirstName", Type.GetType("System.String"));

// Specifying the EmployeeID column as the primary key
EmpTable.PrimaryKey = new DataColumn[] {pkCol};
Creating Relationships Between the Tables in a DataSet
A DataSet object can contain multiple tables. You can also create relationships between the tables in a DataSet. You can use various methods of the DataRelation object to create and manage relationships between the tables in a DataSet.
To create a relationship between two tables in a DataSet, you use the Add method of the DataSet's Relations collection. The Add method takes a name for the relationship being created and the DataColumn references of the columns that you want to define as the parent and child columns in the relationship. The following code shows how to create a relationship between two tables in a DataSet.
EmployeeDS.Relations.Add("Employees_Territories",
 EmployeeDS.Tables["Employees"].Columns["EmployeeID"],
 EmployeeDS.Tables["EmployeeTerritories"].Columns["EmployeeID"]);
Note: When you create a relationship between tables, a UniqueConstraint is added to the parent table, and a ForeignKeyConstraint is added to the child table by default.
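Once the relationship exists, you can navigate from a parent row to its child rows with GetChildRows. A sketch, assuming the EmployeeDS built above:

```csharp
foreach (DataRow employee in EmployeeDS.Tables["Employees"].Rows)
{
    // GetChildRows follows the Employees_Territories relation
    DataRow[] territories = employee.GetChildRows("Employees_Territories");
    Console.WriteLine("{0}: {1} territories",
        employee["EmployeeID"], territories.Length);
}
```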
Manipulating an ADO.NET DataSet
After you create and populate a DataSet, you can manipulate it by performing tasks such as merging DataSet contents and copying the contents of one DataSet to another DataSet. Each of these tasks is explained in the following sections.
Merging the contents of DataSets
You can merge the contents of a DataSet, DataTable, or DataRow object with another DataSet, DataTable, or DataRow object. When you merge the contents of two DataSets, the data in one DataSet is added to another DataSet. However, you must keep certain rules in mind before you merge the contents of two DataSets. The following sections describe these rules.
Check Primary Keys
When you merge the contents of DataSets, the table receiving the new data or schema ensures data integrity by checking the primary key values. The table receiving the new data matches the primary key values of its existing rows with those of the rows from the incoming table.
The data in the existing rows is modified only if the columns from the incoming schema match those of the existing schema. In addition, the new rows containing primary key values that don’t match any existing rows are added to the existing table.
However, if the table receiving new data doesn't have a primary key, new rows from the incoming data are added to the table. If the incoming and existing tables have primary keys on different columns, an exception is thrown, and the DataSet raises the MergeFailed event. An exception is also thrown when the incoming and existing tables contain columns that have the same name but different data types.
Preserve Changes
When you merge a DataSet, DataTable or DataRow object with a DataSet, you can specify whether to preserve the changes in the existing DataSet and how to manage the new schema elements in the incoming data. You use the PreserveChanges property to specify whether to preserve the changes in the existing DataSet.
If the PreserveChanges property is set to True, the existing values in the table receiving data are not overwritten with the incoming values. However, if the PreserveChanges flag is set to False, the existing values are overwritten by the incoming values in the current row version of the existing row.
Apply Constraints
When you merge the contents of two DataSets, constraints are not checked while the Merge method is executing. After the data is added to the receiving table, constraints are enforced on the current values in the DataSet. Therefore, you need to ensure that you can handle the resulting exceptions in your code.
You use the Merge method of a DataSet object to combine the contents of a DataSet, DataTable or DataRow object with another DataSet, DataTable or DataRow object.
static public string constr =
 "Data Source=UJJWAL22;Initial Catalog=calendar;Integrated Security=True";
SqlConnection con = new SqlConnection(constr);
SqlDataAdapter EmpDA = new SqlDataAdapter(
 "Select EmployeeID, FirstName from Employees", con);
con.Open();
DataSet EmpDS = new DataSet();
EmpDA.FillSchema(EmpDS, SchemaType.Source, "Employees");
EmpDA.Fill(EmpDS, "Employees");
DataSet TerDS = new DataSet();
TerDS.ReadXml("Territories.xml", XmlReadMode.ReadSchema);
TerDS.AcceptChanges();
con.Close();
EmpDS.Merge(TerDS, true, MissingSchemaAction.AddWithKey);
Copying DataSet Contents
Sometimes, you might need to work with data without affecting the original data in a DataSet. Alternatively, you might have to create a subset of the data present in a DataSet. In such cases, you can create a copy of an existing DataSet instead of connecting to the database server and creating a new DataSet.
ADO.NET allows you to create a copy of a DataSet. You can also create a DataSet that is a subset of an existing DataSet. ADO.NET enables you to:
• Create an exact copy of a DataSet including the schema, data, row state information, and row version.

• Create a DataSet that contains the schema of an existing DataSet containing only the modified rows. You can specify that the copy contain all the modified rows or only the rows with the specified DataRowState.

• Copy only the schema or relational structure of a DataSet but not any rows from the original DataSet.
DataSet EmployeeDS = new DataSet("Employees");
DataSet copyDS = EmployeeDS.Copy();
As mentioned earlier, you can also create a copy of a DataSet object that includes only the schema and the modified data. To create a copy of a DataSet that contains only the modified records, you use the GetChanges method of the DataSet object. The GetChanges method also allows you to retrieve the rows with specified row states.
DataSet changeDS=EmployeeDS.GetChanges();
ADO.NET also allows you to copy only the schema of an existing DataSet object. You use the Clone method to create a copy of the schema of a DataSet. The following code shows how to use the Clone method to copy the schema of a DataSet.
DataSet cloneDS = EmployeeDS.Clone();
Creating Data Views
There are two ways to create a DataView. You can use the DataView constructor, or you can create a reference to the DefaultView property of the DataTable. The DataView constructor can be empty, or it can take either a DataTable as a single argument, or a DataTable along with filter criteria, sort criteria, and a row state filter.
Because the index for a DataView is built both when the DataView is created, and when any of the Sort, RowFilter, or RowStateFilter properties are modified, you will achieve best performance by supplying any initial sort order or filtering criteria as constructor arguments when you create the DataView.
Creating a DataView without specifying sort or filter criteria and then setting the Sort, RowFilter, or RowStateFilter properties later results in the index being built at least twice: once when the DataView is created, and again when any of the sort or filter properties are modified.
Note that if you create a DataView using the constructor that does not take any arguments, you will not be able to use the DataView until you have set the Table property.
The following code example demonstrates how to create a DataView using the DataView constructor. A RowFilter, Sort column, and DataViewRowState are supplied along with the DataTable.
DataView custDV = new DataView(custDS.Tables["Customers"], 
                               "Country = 'USA'", 
                               "ContactName", 
                               DataViewRowState.CurrentRows);
The following code example demonstrates how to obtain a reference to the default DataView of a DataTable using the DefaultView property of the table.
DataView custDV = custDS.Tables["Customers"].DefaultView;
Handling ADO.NET DataSet Events
The DataSet object provides three events: Disposed, Initialized, and MergeFailed.
The MergeFailed Event
The most commonly used event of the DataSet object is MergeFailed, which is raised when the schemas of the DataSet objects being merged are in conflict. This occurs when a target and source DataRow have the same primary key value, and the EnforceConstraints property is set to true.
For example, if the primary key columns of a table being merged differ between the tables in the two DataSet objects, an exception is thrown and the MergeFailed event is raised. The MergeFailedEventArgs object passed to the MergeFailed event has a Conflict property that identifies the conflict in schema between the two DataSet objects, and a Table property that identifies the name of the table in conflict.
The following code fragment demonstrates how to add an event handler for the MergeFailed event.
workDS.MergeFailed += new MergeFailedEventHandler(DataSetMergeFailed);

private static void DataSetMergeFailed(
  object sender, MergeFailedEventArgs args)
{
  Console.WriteLine("Merge failed for table " + args.Table.TableName);
  Console.WriteLine("Conflict = " + args.Conflict);
}
The Initialized Event
The Initialized event occurs after the DataSet constructor initializes a new instance of the DataSet. The IsInitialized property returns true if the DataSet has completed initialization; otherwise it returns false.
The BeginInit method, which begins the initialization of a DataSet, sets IsInitialized to false. The EndInit method, which ends the initialization of the DataSet, sets it to true. These methods are used by the Visual Studio design environment to initialize a DataSet that is being used by another component. You will not commonly use them in your code.
The Disposed Event
DataSet is derived from the MarshalByValueComponent class, which exposes both the Dispose method and the Disposed event. The Disposed event lets you add an event handler that listens for the disposal of the component. You can use the Disposed event of a DataSet if you want to execute code when the Dispose method is called. Dispose releases the resources used by the MarshalByValueComponent.
Typed DataSet
A typed DataSet is a generated class, derived from the DataSet class, that applies the information contained in an XSD schema. The information from the schema, which comprises the tables, columns, and rows, is compiled into this new DataSet-derived class. The typed DataSet class features all the functionality of the DataSet class, so it may be used with any method that takes an instance of the DataSet class as a parameter.
Note that an untyped DataSet has no built-in schema; its contents are exposed simply as collections.
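The practical difference shows up in how you access the data. Assuming a typed DataSet class named NorthwindDataSet has been generated from an XSD containing a Products table (both names are hypothetical, for illustration only), the two access styles look like this:

```csharp
// Untyped access: everything is a string- or index-based lookup,
// checked only at run time.
DataSet untyped = new DataSet();
// ... fill untyped ...
// string name = (string)untyped.Tables["Products"].Rows[0]["ProductName"];

// Typed access: tables and columns are strongly typed members,
// checked at compile time (NorthwindDataSet is the generated class).
// NorthwindDataSet typed = new NorthwindDataSet();
// ... fill typed ...
// string name = typed.Products[0].ProductName;
```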
Creating a Typed ADO.NET DataSet
Right click your project in the Solution Explorer.
Click Add New Item.
Select DataSet.
This adds a new XSD to the project. The schema created may be viewed as XML. When this xsd file is compiled, two files are created by Visual Studio. The first file, which has the .vb or .cs extension, contains the information about the proxy class. This class contains the methods and properties that are required to access the database data. The second file has the extension .xsx and contains information about the layout of the XSD.
string mySQL = "Select * from Products";
SqlDataAdapter myAdapter = new SqlDataAdapter();
SqlCommand myCommand = new SqlCommand();
SqlConnection myConnection = new SqlConnection();
myConnection.ConnectionString =
 "Data Source=localhost; user id=sa; pwd=; Initial Catalog=Northwind";
myConnection.Open();
myCommand.Connection = myConnection;
myCommand.CommandText = mySQL.Trim();
myAdapter.SelectCommand = myCommand;
DataSet myDataSet = new DataSet();
myAdapter.Fill(myDataSet, "Products");
foreach (DataRow myRow in myDataSet.Tables[0].Rows)
{
 Console.WriteLine(myRow[0]);
}

//InsertCommand
mySQL = "Insert into Products(ProductName) values ('Nuts')";
myCommand.CommandText = mySQL;
myAdapter.InsertCommand = myCommand;
int count;
count = myAdapter.InsertCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", count);

//UpdateCommand
mySQL = "Update Products set ProductName = 'Caps' where ProductID=78";
myCommand.CommandText = mySQL;
myAdapter.UpdateCommand = myCommand;
count = myAdapter.UpdateCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", count);

//DeleteCommand
mySQL = "Delete from Products where ProductID=78";
myCommand.CommandText = mySQL;
myAdapter.DeleteCommand = myCommand;
count = myAdapter.DeleteCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", count);

ADO.NET Essentials
Using Data Adapters
A data adapter forms an interface between the application and the data source for retrieving and saving data. The SqlDataAdapter object represents a connection and a set of command objects that help populate a DataSet object.
You use the SqlDataAdapter object to fill the DataSet object with data. When a DataSet object is filled, the primary key information available on the table is not implicitly imported to the DataSet object. Therefore, when you populate a DataSet, you have to import the information explicitly to the DataSet.
To use the SqlDataAdapter object for filling the DataSet, you specify properties such as which Command and Connection object to use. The SqlDataAdapter object consists of four properties that you use to specify Command objects:
SelectCommand
InsertCommand
UpdateCommand
DeleteCommand
The SelectCommand Property
You use the SelectCommand property to set the SQL statement or stored procedure that selects data from the data source. You can set the SelectCommand property to an existing SqlCommand object, or you can set the CommandText property of the default Command object that is created when you use the SelectCommand object. When the SelectCommand does not return any rows, no table is added to the DataSet and no exception is thrown.
The InsertCommand Property
You use the InsertCommand property to set the SQL statement or stored procedure that inserts data into the data source. You can set the InsertCommand property to an existing SqlCommand object, or you can set the CommandText property of the default Command object that is created when you use the InsertCommand object. Executing the InsertCommand returns an integer value that specifies the number of rows affected by the corresponding insert operation.
The UpdateCommand Property
You use the UpdateCommand property to set the SQL statement or a stored procedure that updates data in the data source. You can set the UpdateCommand property to an existing SqlCommand object, or you can set the CommandText property of the default Command object that is created when you use the UpdateCommand object. When you execute an UpdateCommand object, it returns the number of rows affected by the update operation.
The DeleteCommand Property
You use the DeleteCommand property to set the SQL statement or stored procedure that deletes data from the data source. You can set the DeleteCommand property to an existing SqlCommand object, or you can set the CommandText property of the default Command object that is created when you use the DeleteCommand object. When you execute the DeleteCommand, it returns the number of rows affected by the delete operation.
Example:
string mySQL = "Select * from Products";
SqlDataAdapter myAdapter = new SqlDataAdapter();
SqlCommand myCommand = new SqlCommand();
SqlConnection myConnection = new SqlConnection();
myConnection.ConnectionString =
 "Data Source=localhost; user id=sa; pwd=; Initial Catalog=Northwind";
myConnection.Open();
myCommand.Connection = myConnection;
myCommand.CommandText = mySQL.Trim();
myAdapter.SelectCommand = myCommand;
DataSet myDataSet = new DataSet();
myAdapter.Fill(myDataSet, "Products");
foreach (DataRow myRow in myDataSet.Tables[0].Rows)
{
 Console.WriteLine(myRow[0]);
}

//InsertCommand
mySQL = "Insert into Products(ProductName) values ('Nuts')";
myCommand.CommandText = mySQL;
myAdapter.InsertCommand = myCommand;
int count;
count = myAdapter.InsertCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", count);

//UpdateCommand
mySQL = "Update Products set ProductName = 'Caps' where ProductID=78";
myCommand.CommandText = mySQL;
myAdapter.UpdateCommand = myCommand;
count = myAdapter.UpdateCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", count);

//DeleteCommand
mySQL = "Delete from Products where ProductID=78";
myCommand.CommandText = mySQL;
myAdapter.DeleteCommand = myCommand;
count = myAdapter.DeleteCommand.ExecuteNonQuery();
Console.WriteLine("({0}) rows affected.", count);

The DataReader Class
The DataReader class provides read-only and forward-only access to data from a data source. The DataReader object reduces the system overhead at any given time because only one row of data exists in memory.
To fetch the next record, the DataReader object uses the open connection to the data source and retrieves the next row. You create a DataReader object by executing the ExecuteReader() method of the Command object, which returns an object of type DataReader. There are two data readers:
The SqlDataReader class
The OleDbDataReader class
The SqlDataReader Class
The SqlDataReader class allows you to read data in a read-only, forward-only manner from a SQL Server database. To create a SqlDataReader object, you execute the ExecuteReader() method of the SqlCommand class. You need to call the Open() method on the connection object that the SqlCommand object uses before calling ExecuteReader(). You close the SqlDataReader object by using the Close() method to release the SqlConnection object.
string myString;
myString = "Data Source=localhost; user id=sa; pwd=; Initial Catalog=Northwind";
string mySQL = "Select lastname, firstname from Employees";
SqlConnection myConnection = new SqlConnection(myString);
SqlCommand myCommand = new SqlCommand();
myCommand.CommandText = mySQL;
myCommand.Connection = myConnection;
myConnection.Open();
SqlDataReader myReader;
myReader = myCommand.ExecuteReader();
while (myReader.Read())
{
 Console.WriteLine(myReader.GetString(0) + ", " + myReader.GetString(1));
}
myReader.Close();
myConnection.Close();
In the above code, the SqlDataReader object is initialized to the object returned by the ExecuteReader() method. You use the Read() method of the SqlDataReader object to read a row from the data source. The DataReader object provides only one row at a time.
You call the Read() method of the SqlDataReader object to fetch subsequent rows. After reading from the SqlDataReader object, you close it. After closing the SqlDataReader object, you can still inspect the IsClosed property and the RecordsAffected property.
The IsClosed Property
The IsClosed property of the SqlDataReader class returns a value indicating whether the SqlDataReader is closed.
The RecordsAffected Property
The RecordsAffected property of the SqlDataReader class returns the number of rows that were changed, inserted, or deleted by the Transact-SQL statement that was executed. Always call the Close() method before reading the RecordsAffected property to ensure that an accurate value is returned.
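A minimal sketch of both members, assuming the open myConnection object and the Products table from the earlier examples:

```csharp
// Assumes an open SqlConnection named myConnection (see earlier example).
SqlCommand delCmd = new SqlCommand(
    "Delete from Products where ProductID=78", myConnection);
SqlDataReader delReader = delCmd.ExecuteReader();
delReader.Close();                              // close before reading RecordsAffected
Console.WriteLine(delReader.IsClosed);          // True
Console.WriteLine(delReader.RecordsAffected);   // rows deleted by the statement
```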
The OleDbDataReader Class
The OleDbDataReader class provides read-only and forward-only access to data from any data source. To create an OleDbDataReader object, you execute the ExecuteReader () method of the OleDbCommand class.
While the OleDbDataReader object is in use, the associated OleDbConnection is busy serving it, and no other operation can use the OleDbConnection object to retrieve or send data from or to the data source. You close the OleDbDataReader object by using the Close() method to release the OleDbConnection object.
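The usage mirrors the SqlDataReader example above; a sketch with an assumed OLE DB connection string (adjust the provider and catalog for your own data source):

```csharp
// Hypothetical OLE DB connection string for illustration only.
string oleStr = "Provider=SQLOLEDB; Data Source=localhost; " +
    "Integrated Security=SSPI; Initial Catalog=Northwind";
OleDbConnection oleCon = new OleDbConnection(oleStr);
OleDbCommand oleCmd = new OleDbCommand(
    "Select lastname, firstname from Employees", oleCon);
oleCon.Open();
OleDbDataReader oleReader = oleCmd.ExecuteReader();
while (oleReader.Read())
{
    Console.WriteLine(oleReader.GetString(0) + ", " + oleReader.GetString(1));
}
oleReader.Close();   // releases the connection for other operations
oleCon.Close();
```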

DataTable
A DataTable is an innovative addition to the .NET Framework. A new developer may often confuse it with a database table. Typically, one or more DataTables are used in a DataSet to construct a detached mini-database in the client's cache. However, a DataTable may stand alone in its own right, and we may use it in many situations that do not necessarily warrant database connectivity or DataSets.
A DataTable is a relational database-like table in memory. It has a structural definition (like a database table definition) and constraints, such as unique constraints. We may also develop database-like views, and we may create hierarchical relationships among many DataTables dynamically in a DataSet.
It has many properties, methods, and events like Columns, Rows, Clear, NewRow, PrimaryKey, GetRowType, RowChanged, RowChanging, OnRemoveColumn, OnColumnChanging, Select, etc. The major properties and collection objects involved in a DataTable are:
• DataRow: stored in a collection named Rows
• DataColumn: stored in a collection named Columns
• PrimaryKey: references a specified DataColumn object of the DataTable, or a group of DataColumn objects (in the case of a concatenated key)
• Constraint: stored in a collection named Constraints
• DefaultView: gets a customized view of the table, which may include a filtered or sorted view
• DataRelation: stored in either the ParentRelations collection or the ChildRelations collection (in the context of a DataSet)
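The pieces listed above can be sketched with a standalone, in-memory DataTable; no database connection is involved (the table and column names are illustrative):

```csharp
// Build a standalone DataTable with a primary key and a unique constraint.
DataTable animals = new DataTable("Animals");
DataColumn id = animals.Columns.Add("AnimalID", typeof(int));
DataColumn name = animals.Columns.Add("Name", typeof(string));
animals.PrimaryKey = new DataColumn[] { id };                // single-column key
animals.Constraints.Add(new UniqueConstraint("UQ_Name", name));
animals.Rows.Add(1, "Badger");
animals.Rows.Add(2, "Mushroom");
DataView view = animals.DefaultView;                         // customized view of the table
view.Sort = "Name";
Console.WriteLine(view[0]["Name"]);                          // "Badger" sorts first
```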
RowState
While it goes completely against all best practices and defies most of the rules laid down by the ADO.NET team, people often want a way to manually set the RowState for specific rows in their DataTables. In ADO.NET 2.0, new methods have been added to enable this.
You now have explicit control over RowState using the SetAdded() and SetModified() methods, meaning you can override the DataRow's default behavior.
The following example shows this new explicit control and how you can use it:
public void SetModified()
{
 DataTable OriginalTable = new DataTable();
 OriginalTable.TableName = "InternetSettings";
 OriginalTable.Columns.Add("AnimalOrFungiName");
 OriginalTable.Columns.Add("TimesSpotted");
 DataRow BadgerRow = OriginalTable.NewRow();
 BadgerRow["AnimalOrFungiName"] = "Badger";
 BadgerRow["TimesSpotted"] = "More than you can possibly imagine.";
 OriginalTable.Rows.Add(BadgerRow);
 OriginalTable.AcceptChanges();
 if (DataRowState.Unchanged != OriginalTable.Rows[0].RowState)
  MessageBox.Show("Not Equal", "eBIZ",
   MessageBoxButtons.OK, MessageBoxIcon.Information);
 OriginalTable.Rows[0].SetModified();
 if (DataRowState.Modified != OriginalTable.Rows[0].RowState)
  MessageBox.Show("Not Equal", "eBIZ",
   MessageBoxButtons.OK, MessageBoxIcon.Information);
}
It is the current state of the row with respect to its relationship to the DataRowCollection. The value of RowState depends on the DataRow.AcceptChanges method. A newly created DataRow has its RowState set to Detached until it is added to the DataRowCollection via the DataRowCollection::Add method.
The RowState is then Added until the DataRow::AcceptChanges method is called, at which point the RowState becomes Unchanged. If you call DataRow::Delete, the row's RowState becomes Deleted, and subsequently calling DataRowCollection::Remove (or DataRowCollection::RemoveAt) will not affect its RowState. The opposite is also true: removing a DataRow from a DataRowCollection sets the row's RowState to Detached, but subsequently deleting it will not alter its RowState value.
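The lifecycle described above can be traced directly in a small, database-free sketch:

```csharp
// Trace RowState through the lifecycle described above.
DataTable t = new DataTable("T");
t.Columns.Add("Name");
DataRow r = t.NewRow();
Console.WriteLine(r.RowState);   // Detached (created, not yet added)
t.Rows.Add(r);
Console.WriteLine(r.RowState);   // Added
t.AcceptChanges();
Console.WriteLine(r.RowState);   // Unchanged
r["Name"] = "Badger";
Console.WriteLine(r.RowState);   // Modified
r.Delete();
Console.WriteLine(r.RowState);   // Deleted
```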
DataView
It’s now possible to take the data from a DataView and convert it directly to a standalone DataTable object, which reflects exactly the schema and content of the DataView. To achieve this, you use the ToTable() method of the DataView.
public void ToTable()
{
 DataTable OriginalTable = new DataTable();
 OriginalTable.TableName = "InternetSettings";
 OriginalTable.Columns.Add("AnimalOrFungiName");
 OriginalTable.Columns.Add("TimesSpotted");
 DataRow BadgerRow = OriginalTable.NewRow();
 BadgerRow["AnimalOrFungiName"] = "Badger";
 BadgerRow["TimesSpotted"] = "More than you can possibly imagine.";
 OriginalTable.Rows.Add(BadgerRow);
 DataRow MushroomRow = OriginalTable.NewRow();
 MushroomRow["AnimalOrFungiName"] = "Mushroom";
 MushroomRow["TimesSpotted"] = "Just the twice, normally after spotting Badgers.";
 OriginalTable.Rows.Add(MushroomRow);
 DataRow SnakeRow = OriginalTable.NewRow();
 SnakeRow["AnimalOrFungiName"] = "Snake";
 SnakeRow["TimesSpotted"] = "Just the once, and crikey, what a scare I had!";
 OriginalTable.Rows.Add(SnakeRow);
 DataView Dv = new DataView();
 Dv.Table = OriginalTable;
 Dv.RowFilter = "AnimalOrFungiName = 'Badger'";
 DataTable ToTableTable = Dv.ToTable();
 int expected = 1;
 int actual = ToTableTable.Rows.Count;
 if (expected != actual)
  MessageBox.Show("The number of rows returned was not the number expected.",
   "eBIZ", MessageBoxButtons.OK, MessageBoxIcon.Information);
 if ("Badger" != ToTableTable.Rows[0]["AnimalOrFungiName"].ToString())
  MessageBox.Show("The AnimalOrFungiName was not the value we expected.",
   "eBIZ", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
The preceding example shows you how to create a table from a DataView. This, in and of itself, isn’t very useful, but the table can then be used anywhere else, such as on a Web service or saved to disk.
Serialization
In ADO.NET 1.0 and 1.1, DataSets would always serialize into XML text format, even when you explicitly told them to binary serialize. In ADO.NET 2.0, the Framework now listens to your demands as a developer and actually binary serializes the DataSets when you tell it to.
The benefits of this serialization format are massive, especially in the areas of remoting and Web services, where it is quite common to pass DataSets between endpoints across the wire. There is less processing time at both ends, but the greatest gain is on the receiving end of any transfer, as the payload does not need to be parsed and is deserialized directly into memory.
In addition to experiencing reduced processing time, less memory is used both at runtime and when saving DataSets to disk. This is because the footprint of the DataSet is much smaller when expressed in a binary format.
Because the DataSet is smaller in its binary serialized format, it will clearly use less bandwidth when it’s moved over the wire than it would with XML serialization. This is true in almost all real-world scenarios; however, be aware that this is not always the case.
Binary serialization will actually make the DataSet larger when serialized than with XML serialization when you have a single-row, single-table DataSet. The majority of the information that is serialized in this case is metadata describing the structure of the DataSet and its tables.
In general, using binary serialization makes it possible to get up to about 80 times the current performance from the same applications, depending on the amount of data they move across the wire.
Keep in mind that you don’t have to use binary serialization. In fact, the default will still be XML so that all of your existing .NET 1.0 and 1.1 applications continue to work even when hosted under the .NET 2.0 Framework.
Note: If you try to send a binary serialized DataSet to an application hosted under .NET 1.0 or .NET 1.1, the process will fail. Those Frameworks can’t deserialize binary format DataSets!
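The switch is a single property on the DataSet; a minimal sketch using the standard BinaryFormatter (the file name is illustrative):

```csharp
// Opt a DataSet into binary serialization (ADO.NET 2.0).
DataSet ds = new DataSet("Sample");
ds.Tables.Add(new DataTable("T"));
ds.RemotingFormat = SerializationFormat.Binary;   // default is SerializationFormat.Xml

BinaryFormatter formatter = new BinaryFormatter();
using (FileStream fs = new FileStream("sample.bin", FileMode.Create))
{
    formatter.Serialize(fs, ds);                  // now emits a true binary payload
}
```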
DataTableReader
Microsoft has come up with a little class called the DataTableReader, which is essentially a very thin object sitting on top of the DataTable; it’s a representation of the table’s data as a DbDataReader, meaning you can stream data straight out of the DataTable.
Streaming
With the new streaming functionality built into the DataTable, not only can you stream into a DataTable using the Load(IDataReader) method:
myDataTable.Load(Reader);
but you can also get a DataReader out of a DataTable, enabling you to stream in and out of tables:
DbDataReader Reader= myDataTable.CreateDataReader();
There is no longer a need to use a StreamReader in your code when dealing with DataTables.
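Both directions can be combined into a small round trip; no database is required (the table names are illustrative):

```csharp
// Stream rows out of one DataTable and into another via DataTableReader.
DataTable source = new DataTable("Source");
source.Columns.Add("Name");
source.Rows.Add("Badger");
source.Rows.Add("Mushroom");

DataTableReader reader = source.CreateDataReader();  // DbDataReader over the table
DataTable copy = new DataTable("Copy");
copy.Load(reader);                                   // stream straight back in
Console.WriteLine(copy.Rows.Count);                  // 2
```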
DataSet, DataTable, DataReader, or an Object?
Use the right tool for the right job – that’s been the mantra in engineering shops for many years, and the same applies to the world of ADO.NET. The DataSet is almost always larger than a collection of objects that match the DataSet’s structure in memory.
To help you choose the right tool for the right job, we have drawn up a simple task-oriented guide.

Showing Data to Users without any Manipulation
Use a Reader if all you need to do is bind data to a grid or show the details of a product. Why bother with a bulky DataSet? You don’t need relationships, you don’t need state management and you certainly don’t need multiple tables. A simple forward-only Reader will do the job in the most efficient way.
Editing Data That Lives in One Table
This one isn’t quite as clear-cut. If you don’t really need the change management that the DataSet offers, and usually you won’t, then just use a Reader to get the data back from your data source and use ExecuteNonQuery() to update any data that the user changes.
Editing Data Spread across More Than One Table
Use a DataSet. It’s exactly what it was designed for – managing the complex relationships between many tables and their respective rows of data.
Editing Continuously Updated Data for Which the Chance of Collisions Is High
Again, use the DataSet. It has very cool change management and tracking systems built into it. Combine a DataSet with the DataAdapter and you will be laughing all the way to the bank.
Getting One Value from a Database
Use a simple object: employ ExecuteScalar(), or use an output parameter to get your value. There is no need for a DataReader or a DataSet. The extra code they both require isn’t worth considering for a single value, not to mention the fact that DataSets are usually slower.
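A minimal ExecuteScalar() sketch, assuming the localhost/Northwind connection used in the earlier examples:

```csharp
// Fetch a single value with ExecuteScalar(); no reader or DataSet needed.
using (SqlConnection con = new SqlConnection(
    "Data Source=localhost; Integrated Security=SSPI; Initial Catalog=Northwind"))
{
    SqlCommand cmd = new SqlCommand("Select count(*) from Products", con);
    con.Open();
    int productCount = (int)cmd.ExecuteScalar();   // first column of the first row
    Console.WriteLine(productCount);
}
```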

Standard Data Types
Data Types in SQL Server
First we will consider the way in which SQL Server stores character data. There are two basic variations in this type, fixed-length and variable-length storage, but there is a bit more to it than that.
In this and the following sections, specifications relate to SQL Server 2005; however, most of the statements hold true for the existing, equivalent SQL Server 2000 types. Where available, the equivalent SQL-92 synonym is provided in parentheses next to the SQL Server type.
Binary datatypes
Binary data is similar to hexadecimal data and consists of the characters 0 through 9 and A through F, in groups of two characters each. You should specify 0x before a binary value when you input it.
There are two binary datatypes:
• binary[(n)]
• varbinary[(n)]
Binary[(n)] datatype can store up to 8000 bytes of fixed-length binary data. You can specify the maximum byte length with n.
Varbinary[(n)] datatype can store up to 8000 bytes of variable-length binary data. You can specify the maximum byte length with n. Variable-length means that binary data can contain less than n bytes, and the storage size will be the actual length of the data entered.
You should use the varbinary datatype instead of the binary datatype when you expect null values or a variation in data size.
Character datatypes
Character datatypes are used to store any combination of letters, symbols, and numbers. You should enclose character data in quotation marks when you enter it.
There are two character datatypes:
• char[(n)]
• varchar[(n)]
Char[(n)] datatype can store up to 8000 bytes of fixed-length character data. You can specify the maximum byte length with n.
Varchar[(n)] datatype can store up to 8000 bytes of variable-length character data. You can specify the maximum byte length with n. Variable-length means that character data can contain less than n bytes, and the storage size will be the actual length of the data entered.
You should use the varchar datatype instead of the char datatype when you expect null values or a variation in data size.
Date and Time datatypes
There are two datetime datatypes:
• datetime
• smalldatetime
Datetime is stored in 8 bytes as two 4-byte integers: 4 bytes for the number of days before or after the base date of January 1, 1900, and 4 bytes for the number of milliseconds after midnight.
Datetime datatype can store dates from January 1, 1753, to December 31, 9999, with accuracy of 3.33 milliseconds.
If you do not specify the date portion of a datetime value, January 1, 1900 is supplied; if you do not specify the time portion, 12:00:00:000AM is supplied.
Smalldatetime is stored in 4 bytes as two 2-byte integers: 2 bytes for the number of days after the base date of January 1, 1900, and 2 bytes for the number of minutes after midnight.
Smalldatetime datatype can store dates from January 1, 1900, to June 6, 2079, with accuracy to the minute.
If you do not specify the date portion of a smalldatetime value, January 1, 1900 is supplied; if you do not specify the time portion, 12:00AM is supplied.
Numeric datatypes
There are two kinds of the numeric datatypes:
• Exact Numeric Data
• Approximate Numeric Data
The difference between Exact Numeric Data and Approximate Numeric Data is that Exact Numeric Data can store all decimal numbers with complete accuracy, and Approximate Numeric Data cannot.
Exact Numeric Data are:
• decimal[(p[, s])]
• numeric[(p[, s])]
The decimal and numeric datatypes are synonyms in SQL Server 2000. Exact Numeric Data holds values from -10^38 + 1 through 10^38 - 1. The storage size varies based on the specified precision, ranging from a minimum of 5 bytes to a maximum of 17 bytes.
p is the precision, which specifies the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The maximum precision is 38 digits. s is the scale, which specifies the maximum number of decimal digits that can be stored to the right of the decimal point; it must be less than or equal to the precision.
Approximate Numeric Data are:
• float[(n)]
• real
Float[(n)] datatype is stored in 8 bytes and is used to hold positive or negative floating-point numbers. By default, this column has a 15-digit precision. Float[(n)] datatype can store positive values from 2.23E-308 to 1.79E308 and negative values from -2.23E-308 to -1.79E308.
Real datatype is stored in 4 bytes and is used as float datatype to hold positive or negative floating-point numbers. This column has a 7-digit precision. Real datatype can store positive values from 1.18E-38 to 3.40E38 and negative values from -1.18E-38 to -3.40E38.
Integer datatypes
There are four integer datatypes:
• tinyint
• smallint
• int
• bigint
Tinyint is stored in 1 byte and is used to hold integer values from 0 through 255.
Smallint is stored in 2 bytes and is used to hold integer values from -32,768 through 32,767.
Int is stored in 4 bytes and is used to hold integer values from -2,147,483,648 through 2,147,483,647.
Bigint is stored in 8 bytes and is used to hold integer values from -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807.
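A quick sketch showing each integer type at its upper limit (the variable names are illustrative):

```sql
DECLARE @age tinyint, @qty smallint, @id int, @total bigint
SET @age   = 255                    -- maximum tinyint value
SET @qty   = 32767                  -- maximum smallint value
SET @id    = 2147483647             -- maximum int value
SET @total = 9223372036854775807    -- maximum bigint value
SELECT @age, @qty, @id, @total
-- SET @age = 256                   -- would fail with an overflow error
```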
Monetary datatypes
Monetary datatypes are usually used to store monetary values. There are two monetary datatypes:
• money
• smallmoney
The money datatype is stored in 8 bytes and is used to hold monetary values from -922,337,203,685,477.5808 through 922,337,203,685,477.5807.
The smallmoney datatype is stored in 4 bytes and is used to hold monetary values from -214,748.3648 through 214,748.3647.
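Both types keep four decimal places; a minimal sketch (the variable names are illustrative):

```sql
DECLARE @m money, @sm smallmoney
SET @m  = 922337203685477.5807     -- maximum money value
SET @sm = 214748.3647              -- maximum smallmoney value
SELECT @m AS MoneyMax, @sm AS SmallMoneyMax
```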
Special datatypes
These are the special datatypes:
• bit
• cursor
• sql_variant
• table
• timestamp
• uniqueidentifier
• XML
The bit datatype is usually used for true/false or yes/no types of data, because it holds either 1 or 0. Any integer value other than 0 is interpreted as 1. A single bit column is stored in 1 byte, but up to 8 bit columns in a table can be packed into the same byte. Bit columns cannot be NULL and cannot have indexes on them.
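The conversion of nonzero values to 1 can be demonstrated with a bit variable (the variable name is illustrative):

```sql
DECLARE @flag bit
SET @flag = 5    -- any nonzero integer value is converted to 1
SELECT @flag     -- returns 1
SET @flag = 0
SELECT @flag     -- returns 0
```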
Cursor datatype is used for variables or stored procedure OUTPUT parameters that contain a reference to a cursor. The variables created with the cursor data type are nullable. You cannot use this datatype for a column in a CREATE TABLE statement.
The sql_variant datatype is used to store values of the various data types supported by SQL Server, except text, ntext, timestamp, and sql_variant. The maximum length of a sql_variant value is 8,016 bytes. A single column of type sql_variant can hold values of different data types in different rows, for example int, char, and varchar values.
This is an example of using the sql_variant datatype:
SET NOCOUNT ON
GO
if object_id('tbTest') is not null drop table tbTest
GO
CREATE TABLE tbTest (
  id int primary key,
  sql_v sql_variant
)
GO
INSERT INTO tbTest VALUES (1, 1)
INSERT INTO tbTest VALUES (2, 1.0)
INSERT INTO tbTest VALUES (3, '1')
INSERT INTO tbTest VALUES (4, 0x01)
GO
SELECT sql_v FROM tbTest
GO
Here is the result set:
sql_v                   
------------------------
1
1.0
1
0x01
The table datatype is used to store a result set for later processing. You cannot use this datatype for a column in a CREATE TABLE statement. You should use DECLARE @local_variable to declare variables of type table.
Table variables should be used instead of temporary tables whenever possible, because table variables are cleaned up automatically at the end of the function, stored procedure, or batch in which they are defined, and they require fewer locking and logging resources.
This is an example of using the table datatype:
DECLARE @tbl table (id int)
INSERT INTO @tbl VALUES (1)
INSERT INTO @tbl VALUES (2)
SELECT * FROM @tbl
Here is the result set:
id
-----------
1
2
The timestamp datatype is stored in 8 bytes, like the binary(8) datatype. The timestamp value is automatically updated every time a row containing a timestamp column is inserted or updated.
A timestamp value is a monotonically increasing counter whose values are always unique within a database; the current value for the database can be retrieved by querying the global variable @@DBTS.
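The automatic assignment can be sketched as follows (the table name tbRowVersion is illustrative):

```sql
CREATE TABLE tbRowVersion (
  id int PRIMARY KEY,
  ts timestamp                 -- assigned and updated automatically
)
INSERT INTO tbRowVersion (id) VALUES (1)   -- ts is filled in by SQL Server
SELECT ts FROM tbRowVersion                -- the counter value for this row
UPDATE tbRowVersion SET id = 2             -- ts is updated automatically
SELECT @@DBTS                              -- last-used timestamp value in the database
DROP TABLE tbRowVersion
```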
Uniqueidentifier is a GUID (globally unique identifier). A GUID is a 16-byte binary number that is guaranteed to be unique in the world. This datatype is usually used in replication or as a primary key to uniquely identify rows in a table.
You can get a new uniqueidentifier value by calling the NEWID function.
Note: you should use the IDENTITY property instead of uniqueidentifier if global uniqueness is not necessary, because uniqueidentifier values are longer and slower to generate.
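This is a minimal sketch of the NEWID function; the returned GUID differs on every call, so no sample value is shown:

```sql
DECLARE @guid uniqueidentifier
SET @guid = NEWID()   -- generates a new globally unique value
SELECT @guid
```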
The XML datatype is a new feature of SQL Server 2005. It enables SQL Server to work with XML in a much more meaningful manner than previous versions could. This new type enables querying and modification of XML in place in SQL Server. If an XML column has a schema specified, it can efficiently be used in queries and joined upon. The maximum storage size for items in a column of this type is 2 GB.
Text and image datatypes
Text and image data are stored on Text/Image pages, not on the Data pages as other SQL Server 2000 data is.
There are three datatypes in this category:
• text
• ntext
• image
The text datatype is a variable-length datatype that can hold up to 2,147,483,647 characters. This datatype is used when you want to store character values with a total length of more than 8,000 bytes.
The ntext datatype is a variable-length unicode datatype that can hold up to 1,073,741,823 characters. This datatype is used when you want to store variable-length unicode data with a total length of more than 4,000 bytes.
The image datatype is a variable-length datatype that can hold up to 2,147,483,647 bytes of binary data. This datatype is used when you want to store binary values with a total length of more than 8,000 bytes. This datatype is also used to store pictures.
Unicode Character datatypes
A column with a unicode character datatype can store all of the characters that are defined in the various character sets, not only the characters from the particular character set that was chosen during SQL Server Setup. Unicode datatypes take twice as much storage space as non-Unicode datatypes.
Unicode character data, like character data, can be used to store any combination of letters, symbols, and numbers. You should enclose unicode character data in quotation marks prefixed with N (for example, N'text') when entering it.
There are two unicode character datatypes:
• nchar[(n)]
• nvarchar[(n)]
The nchar[(n)] datatype can store up to 4,000 characters of fixed-length unicode character data. You can specify the maximum character length with n.
The nvarchar[(n)] datatype can store up to 4,000 characters of variable-length unicode character data. You can specify the maximum character length with n. Variable-length means that the data can contain fewer than n characters, and the storage size will be the actual length of the data entered.
You should use the nvarchar datatype instead of the nchar datatype when you expect null values or a variation in data size.
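The storage difference between the fixed-length and variable-length types can be observed with DATALENGTH; the variable names are illustrative:

```sql
DECLARE @fixed nchar(10), @variable nvarchar(10)
SET @fixed = N'abc'      -- padded with spaces to 10 characters
SET @variable = N'abc'   -- stored as 3 characters
SELECT DATALENGTH(@fixed)    AS FixedBytes,     -- 20 (10 characters x 2 bytes)
       DATALENGTH(@variable) AS VariableBytes   -- 6  (3 characters x 2 bytes)
```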
User-Defined datatypes
You can create your own User-Defined datatypes by executing the sp_addtype system stored procedure. Once a User-Defined datatype is created, you can use it in CREATE TABLE or ALTER TABLE statements, just like the built-in SQL Server 2000 datatypes.
This is the syntax of sp_addtype stored procedure:
sp_addtype [ @typename = ] type, 
    [ @phystype = ] system_data_type 
    [ , [ @nulltype = ] 'null_type' ] 
    [ , [ @owner = ] 'owner_name' ]
where,
typename - is the User-Defined datatype name.
phystype - is the SQL Server 2000 datatype on which the User-Defined datatype is based.
nulltype - specifies the default nullability: 'NULL' or 'NOT NULL'.
owner - is the owner of the User-Defined datatype.
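The parameters above can be sketched with a hypothetical phone type (the type and table names are illustrative):

```sql
-- create a User-Defined datatype based on varchar(20) that disallows NULLs
EXEC sp_addtype @typename = 'phone',
                @phystype = 'varchar(20)',
                @nulltype = 'NOT NULL'
GO
CREATE TABLE tbCustomer (
  id int PRIMARY KEY,
  contact phone          -- used exactly like a built-in datatype
)
```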
Mapping SQL Server Data Types to .NET
SQL Server Type | SqlDbType | DbType | SqlType | .NET Primitive
--------------- | --------- | ------ | ------- | --------------
BIGINT | BigInt | Int64 | SqlInt64 | Int64
BINARY | Binary | Binary | SqlBinary, SqlBytes | Byte[]
BIT | Bit | Boolean | SqlBoolean | Boolean
CHAR | Char | AnsiStringFixedLength | SqlString, SqlChars | String, Char[]
DATETIME | DateTime | Date, DateTime, Time | SqlDateTime | DateTime
DECIMAL | Decimal | Decimal, UInt32, UInt64, VarNumeric | SqlDecimal | Decimal
FLOAT | Float | Double | SqlDouble | Double
IMAGE | Image | Binary | SqlBinary, SqlBytes | Byte[]
INT | Int | Int32, UInt16 | SqlInt32 | Int32
MONEY | Money | Currency | SqlMoney | Decimal
NCHAR | NChar | StringFixedLength | SqlString, SqlChars | String, Char[]
NTEXT / NVARCHAR(MAX) | NText | String | SqlString, SqlChars | String, Char[]
NUMERIC | Decimal | Decimal | SqlDecimal | Decimal
NVARCHAR | NVarChar | String | SqlString, SqlChars | String, Char[]
REAL | Real | Single | SqlSingle | Single
SMALLDATETIME | SmallDateTime | DateTime | SqlDateTime | DateTime
SMALLINT | SmallInt | Int16, SByte | SqlInt16 | Int16
SMALLMONEY | SmallMoney | Currency | SqlMoney | Decimal
SQL_VARIANT | Variant | Object | Object | Object
SYSNAME | NVarChar | String | SqlString | String
TEXT / VARCHAR(MAX) | Text | String | SqlString, SqlChars | String, Char[]
TIMESTAMP | Timestamp | Binary | SqlBinary | Byte[]
TINYINT | TinyInt | Byte | SqlByte | Byte
UNIQUEIDENTIFIER | UniqueIdentifier | Guid | SqlGuid | Guid
VARBINARY | VarBinary | Binary | SqlBinary, SqlBytes | Byte[]
VARCHAR | VarChar | AnsiString | SqlString, SqlChars | String, Char[]
XML | Xml | Xml | SqlXml | XmlDocument, XmlNode, XmlElement, String
