| Tree |
|
| Counting Nodes of Binary Tree |
|
| To count the number of nodes in a given binary tree, the tree is traversed recursively until a leaf node is encountered. When a leaf node is encountered, a count of 1 is returned to the activation for its parent. The parent's activation takes the counts returned by the activations for both of its children, adds 1 to their sum, and returns this value to the activation of its own parent. In this way, when the activation for the root of the tree returns, it returns the total number of nodes in the tree. |
|
|
|
| Program |
|
| A complete C program to count the number of nodes is as follows: |
|
#include <stdio.h>
#include <stdlib.h>
struct tnode
{
int data;
struct tnode *lchild, *rchild;
};
int count(struct tnode *p)
{
if( p == NULL)
return(0);
else
if( p->lchild == NULL && p->rchild == NULL)
return(1);
else
return(1 + (count(p->lchild) + count(p->rchild)));
}
struct tnode *insert(struct tnode *p,int val)
{
struct tnode *temp1,*temp2;
if(p == NULL)
{
p = (struct tnode *) malloc(sizeof(struct tnode)); /* insert the new node as root node*/
if(p == NULL)
{
printf("Cannot allocate\n");
exit(0);
}
p->data = val;
p->lchild=p->rchild=NULL;
}
else
{
temp1 = p;
/* traverse the tree to get a pointer to that node whose child will be the newly created node*/
while(temp1 != NULL)
{
temp2 = temp1;
if( temp1 ->data > val)
temp1 = temp1->lchild;
else
temp1 = temp1->rchild;
}
if( temp2->data > val)
{
temp2->lchild = (struct tnode*)malloc(sizeof(struct tnode)); /* insert the newly created node as left child */
temp2 = temp2->lchild;
if(temp2 == NULL)
{
printf("Cannot allocate\n");
exit(0);
}
temp2->data = val;
temp2->lchild=temp2->rchild = NULL;
}
else
{
temp2->rchild = (struct tnode*)malloc(sizeof(struct tnode)); /* insert the newly created node as right child */
temp2 = temp2->rchild;
if(temp2 == NULL)
{
printf("Cannot allocate\n");
exit(0);
}
temp2->data = val;
temp2->lchild=temp2->rchild = NULL;
}
}
return(p);
}
/* a function to traverse the binary tree in inorder */
void inorder(struct tnode *p)
{
if(p != NULL)
{
inorder(p->lchild);
printf("%d\t",p->data);
inorder(p->rchild);
}
}
int main()
{
struct tnode *root = NULL;
int n,x;
printf("Enter the number of nodes\n");
scanf("%d",&n);
while( n-- > 0)
{
printf("Enter the data value\n");
scanf("%d",&x);
root = insert(root,x);
}
inorder(root);
printf("\nThe number of nodes in tree are :%d\n",count(root));
}
|
|
|
| Explanation |
|
| Input: |
- The number of nodes that the tree to be created should have
- The data values of each node in the tree to be created
|
|
| Output: |
- The data values of the nodes of the tree in inorder
- The count of the number of nodes in the tree
|
|
| Example |
|
|
|
|
|
|
|
| Tree |
|
| Deleting Node from BST |
|
| Of course, if we are trying to delete a leaf, there is no problem. We just delete it and the rest of the tree is exactly as it was, so it is still a BST. |
|
| There is another simple situation: suppose the node we're deleting has only one subtree. In the following example, `3' has only 1 subtree. |
|
|
|
|
| To delete a node with 1 subtree, we just `link past' the node, i.e. connect the parent of the node directly to the node's only subtree. This always works, whether the one subtree is on the left or on the right. |
|
|
| Deleting `3' gives us: |
|
|
|
|
|
| which we normally draw: |
|
|
|
|
| Finally, let us consider the only remaining case: how to delete a node having two subtrees. For example, how to delete `6'? We'd like to do this with minimum amount of work and disruption to the structure of the tree. |
|
| The standard solution is based on this idea: we leave the node containing `6' exactly where it is, but we get rid of the value 6 and find another value to store in the `6' node. This value is taken from a node below the `6' node, and it is that node that is actually removed from the tree. |
|
|
| Deletion of a node with two children |
|
| To delete a node from a binary search tree, the method to be used depends on whether a node to be deleted has one child, two children, or no child. |
|
|
|
|
| To delete a node pointed to by x, we start by letting y be a pointer to the parent of the node pointed to by x. We store the pointer to the left child of the node pointed to by x in a temporary pointer temp. We then make the left child of the node pointed to by y the left child of the node pointed to by x. We then traverse the tree whose root is the node pointed to by temp to get its rightmost leaf, and make the right child of this rightmost leaf the right child of the node pointed to by x, as shown in the image below: |
|
|
|
|
| Another method is to store the pointer to the right child of the node pointed to by x in a temporary pointer temp. We then make the left child of the node pointed to by y the right child of the node pointed to by x. We then traverse the tree whose root is the node pointed to by temp to get its leftmost leaf, and make the left child of this leftmost leaf the left child of the node pointed to by x, as shown in the image below: |
|
|
|
|
|
| Deletion of a Node with One Child |
|
| Consider the following binary tree: |
|
|
|
| If we want to delete a node pointed to by x, we can do that by letting y be a pointer to the parent of the node pointed to by x. Make the left child of the node pointed to by y the right child of the node pointed to by x, and dispose of the node pointed to by x, as shown in the image below: |
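|
| The cases above can be combined into a single routine. The following is a minimal C sketch, reusing the tnode structure from the node-counting program earlier in this chapter; the function name delete_node and the parent-tracking details are illustrative choices, not taken from the text: |
|
#include <stdlib.h>
struct tnode
{
int data;
struct tnode *lchild, *rchild;
};
struct tnode *delete_node(struct tnode *root, int val)
{
    struct tnode *x = root, *y = NULL, *subtree, *t;
    while (x != NULL && x->data != val)   /* find x and its parent y */
    {
        y = x;
        x = (val < x->data) ? x->lchild : x->rchild;
    }
    if (x == NULL)
        return root;                      /* value not present */
    if (x->lchild == NULL)
        subtree = x->rchild;              /* leaf, or only a right subtree: link past x */
    else
    {
        subtree = x->lchild;              /* x is replaced by its left subtree */
        if (x->rchild != NULL)
        {
            t = subtree;                  /* find the rightmost node of the left subtree */
            while (t->rchild != NULL)
                t = t->rchild;
            t->rchild = x->rchild;        /* hang x's right subtree off it */
        }
    }
    if (y == NULL)
        root = subtree;                   /* x was the root */
    else if (y->lchild == x)
        y->lchild = subtree;
    else
        y->rchild = subtree;
    free(x);
    return root;
}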
|
|
|
|
|
|
|
| Graph |
|
| Introduction |
|
| Graphs are natural models that are used to represent arbitrary relationships among data objects. We often need to represent such arbitrary relationships among the data objects while dealing with problems in computer science, engineering, and many other disciplines. Therefore, the study of graphs as one of the basic data structures is important. |
|
| A Graph is a kind of data structure, specifically an abstract data type (ADT), that consists of a set of nodes (also called vertices) and a set of edges that establish relationships (connections) between the nodes. The graph ADT follows directly from the graph concept from mathematics. |
|
| Informally, G=(V,E) consists of vertices, the elements of V, which are connected by edges, the elements of E. Formally, a graph G is defined as an ordered pair G=(V,E), where V is a finite set and E is a set consisting of two-element subsets of V. |
|
|
| Comparison with other data structures |
|
| Graph data structures are non-hierarchical and therefore suitable for data sets where the individual elements are interconnected in complex ways. For example, a computer network can be modeled with a graph. |
|
| Hierarchical data sets can be represented by a binary or nonbinary tree. It is worth mentioning, however, that trees can be seen as a special form of graph. |
|
|
|
|
| Graph |
|
| Representations of Graph |
|
| Choices of representation |
|
| Two main data structures for the representation of graphs are used in practice. The first is called an adjacency list, and is implemented by representing each node as a data structure that contains a list of all adjacent nodes. The second is an adjacency matrix, in which the rows and columns of a two-dimensional array represent source and destination vertices, and entries in the array indicate whether an edge exists between the vertices. Adjacency lists are preferred for sparse graphs; otherwise, an adjacency matrix is a good choice. Finally, for very large graphs with some regularity in the placement of edges, a symbolic graph is a possible choice of representation. |
|
|
| Array Representation |
|
| One way of representing a graph with n vertices is to use an n × n matrix (that is, a matrix with n rows and n columns, so there is a row as well as a column corresponding to every vertex of the graph). If there is an edge from vi to vj, then the entry in the matrix with row index vi and column index vj is set to 1 (adj[vi, vj] = 1, if (vi, vj) is an edge of graph G). If e is the total number of edges in the graph, then 2e entries will be set to 1, as long as G is an undirected graph; if G were a directed graph, only e entries would be set to 1 in the adjacency matrix. The adjacency matrix representation of an undirected as well as a directed graph is shown in the image below: |
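|
| A minimal C sketch of this representation follows; the vertex count and edge list are invented for the illustration: |
|
#include <stdio.h>
#define N 4                               /* number of vertices, chosen for illustration */
int main()
{
    int adj[N][N] = {0};                  /* adj[i][j] = 1 iff there is an edge from i to j */
    int edges[4][2] = {{0,1},{0,2},{1,2},{2,3}};
    int i, j, e = 4;
    for (i = 0; i < e; i++)
    {
        /* each undirected edge (u,v) sets two entries, giving 2e ones in all */
        adj[edges[i][0]][edges[i][1]] = 1;
        adj[edges[i][1]][edges[i][0]] = 1; /* omit this line for a directed graph */
    }
    for (i = 0; i < N; i++)               /* print the matrix row by row */
    {
        for (j = 0; j < N; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }
    return 0;
}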
|
|
|
|
|
| Example |
| The adjacency matrix representation of the following digraph (directed graph), along with the indegree and outdegree of each node, is shown here: |
|
|
|
|
|
|
|
|
| Linked List Representation |
|
| Another way of representing a graph G is to maintain a list for every vertex containing all vertices adjacent to that vertex, as shown in image below: |
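|
| A minimal C sketch of the adjacency list representation follows; the structure and function names (adjnode, add_edge) are illustrative, and allocation checks are omitted for brevity: |
|
#include <stdio.h>
#include <stdlib.h>
#define N 4
struct adjnode                            /* one list node per adjacent vertex */
{
    int vertex;
    struct adjnode *next;
};
void add_edge(struct adjnode *list[], int u, int v)
{
    /* prepend vertex v to u's adjacency list */
    struct adjnode *p = (struct adjnode *) malloc(sizeof(struct adjnode));
    p->vertex = v;
    p->next = list[u];
    list[u] = p;
}
int main()
{
    struct adjnode *list[N] = {NULL};
    struct adjnode *p;
    int i;
    add_edge(list, 0, 1);  add_edge(list, 1, 0);   /* undirected edge 0-1 */
    add_edge(list, 0, 2);  add_edge(list, 2, 0);   /* undirected edge 0-2 */
    add_edge(list, 2, 3);  add_edge(list, 3, 2);   /* undirected edge 2-3 */
    for (i = 0; i < N; i++)
    {
        printf("%d:", i);                          /* vertices adjacent to i */
        for (p = list[i]; p != NULL; p = p->next)
            printf(" %d", p->vertex);
        printf("\n");
    }
    return 0;
}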
|
|
|
|
|
|
|
| Graph |
|
| Traversing a Graph |
|
| Perhaps the most fundamental graph problem is to traverse every edge and vertex in a graph in a systematic way. Indeed, most of the basic algorithms you will need for bookkeeping operations on graphs are applications of graph traversal. |
|
| These include: |
- Printing or validating the contents of each edge and/or vertex.
- Copying a graph, or converting between alternate representations.
- Counting the number of edges and/or vertices.
- Identifying the connected components of the graph.
- Finding paths between two vertices, or cycles if they exist.
|
|
| Since any maze can be represented by a graph, where each junction is a vertex and each hallway an edge, any traversal algorithm must be powerful enough to get us out of an arbitrary maze. For efficiency, we must make sure we don't get lost in the maze and visit the same place repeatedly. By being careful, we can arrange to visit each edge exactly twice. For correctness, we must do the traversal in a systematic way to ensure that we don't miss anything. To guarantee that we get out of the maze, we must make sure our search takes us through every edge and vertex in the graph. |
|
| The key idea behind graph traversal is to mark each vertex when we first visit it and keep track of what we have not yet completely explored. Although bread crumbs or unraveled threads are used to mark visited places in fairy-tale mazes, we will rely on Boolean flags or enumerated types. |
|
|
| Each vertex will always be in one of the following three states: |
- undiscovered - the vertex in its initial, virgin state.
- discovered - the vertex after we have encountered it, but before we have checked out all its incident edges.
- completely-explored - the vertex after we have visited all its incident edges.
|
|
| Obviously, a vertex cannot be completely-explored before we discover it, so over the course of the traversal the state of each vertex progresses from undiscovered to discovered to completely-explored. |
|
| We must also maintain a structure containing all the vertices that we have discovered but not yet completely explored. Initially, only a single start vertex is considered to have been discovered. To completely explore a vertex, we must evaluate each edge going out of it. If an edge goes to an undiscovered vertex, we mark it discovered and add it to the list of work to do. If an edge goes to a completely-explored vertex, we will ignore it, since further contemplation will tell us nothing new about the graph. We can also ignore any edge going to a discovered but not completely-explored vertex, since the destination must already reside on the list of vertices to completely explore. |
|
| Regardless of which order we use to fetch the next vertex to explore, each undirected edge will be considered exactly twice, once when each of its endpoints is explored. Directed edges will be considered only once, when exploring the source vertex. Every edge and vertex in the connected component must eventually be visited. Why? Suppose the traversal didn't visit everything, meaning that there exists a vertex u that remains unvisited whose neighbor v was visited. This neighbor v will eventually be explored, and we will certainly visit u when we do so. Thus we must find everything that is there to be found. |
|
| The order in which we explore the vertices depends upon the container data structure used to store the discovered but not completely-explored vertices. |
|
| There are two important possibilities: |
- Queue - by storing the vertices in a first in, first out (FIFO) queue, we explore the oldest unexplored vertices first. Thus our explorations radiate out slowly from the starting vertex, defining a so-called breadth-first search (a C sketch of which appears after this list).
- Stack - by storing the vertices in a last in, first out (LIFO) stack, we explore the vertices by lurching along a path, visiting a new neighbor if one is available, and backing up only when we are surrounded by previously discovered vertices. Thus our explorations quickly wander away from our starting point, defining a so-called depth-first search.
|
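|
| A minimal C sketch of the queue-based (breadth-first) variant follows, using an adjacency matrix and a simple array queue; the graph and names are invented for the example: |
|
#include <stdio.h>
#define N 5
void bfs(int adj[N][N], int start)
{
    int discovered[N] = {0};              /* 0 = undiscovered, 1 = discovered */
    int queue[N], front = 0, rear = 0;    /* discovered but not yet explored */
    int v, w;
    discovered[start] = 1;                /* only the start vertex is discovered initially */
    queue[rear++] = start;
    while (front < rear)
    {
        v = queue[front++];               /* oldest discovered, unexplored vertex */
        printf("visiting %d\n", v);
        for (w = 0; w < N; w++)           /* evaluate each edge going out of v */
            if (adj[v][w] && !discovered[w])
            {
                discovered[w] = 1;        /* mark it and add it to the work list */
                queue[rear++] = w;
            }
    }
}
int main()
{
    int adj[N][N] = {
        {0,1,1,0,0},
        {1,0,0,1,0},
        {1,0,0,1,0},
        {0,1,1,0,1},
        {0,0,0,1,0}
    };
    bfs(adj, 0);
    return 0;
}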
|
|
|
|
| Graph |
|
| DAG (Directed Acyclic Graph) |
|
| A Directed Acyclic Graph, also called a dag or DAG, is a directed graph with no directed cycles; that is, for any vertex v, there is no nonempty directed path that starts and ends on v. DAGs appear in models where it doesn't make sense for a vertex to have a path to itself; for example, if an edge u → v indicates that v is a part of u, such a path would indicate that u is a part of itself, which is impossible. Informally speaking, a DAG "flows" in a single direction. |
|
|
| Concept |
|
| A directed acyclic graph (DAG) is a directed graph with no cycles. A DAG represents more general relationships than trees but less general than arbitrary directed graphs. An example of a DAG is given in image below |
|
|
|
|
| DAGs are useful in representing the syntactic structure of arithmetic expressions with common sub-expressions. |
|
| For example, consider the following expression: |
| (a+b)*c + ((a+b)+e) |
|
|
| In this expression, the term (a + b) is a common sub-expression, and is therefore represented in the DAG by a vertex with more than one incoming edge, as shown in the image below: |
|
|
|
|
| Each directed acyclic graph gives rise to a partial order ≤ on its vertices, where u ≤ v exactly when there exists a directed path from u to v in the DAG. However, many different DAGs may give rise to this same reachability relation. Among all such DAGs, the one with the fewest edges is the transitive reduction of each of them, and the one with the most is their transitive closure. In particular, the transitive closure is the reachability order ≤. |
|
|
| Properties |
|
| Every directed acyclic graph has a topological sort, an ordering of the vertices such that each vertex comes before all vertices it has edges to. In general, this ordering is not unique. Any two graphs representing the same partial order have the same set of topological sort orders. |
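|
| A minimal C sketch of one way to compute a topological sort (the queue-based method usually attributed to Kahn) is shown below; the graph is invented for the illustration: |
|
#include <stdio.h>
#define N 6
void topsort(int adj[N][N])
{
    int indegree[N] = {0};
    int queue[N], front = 0, rear = 0;
    int v, w;
    for (v = 0; v < N; v++)               /* count incoming edges of every vertex */
        for (w = 0; w < N; w++)
            indegree[w] += adj[v][w];
    for (v = 0; v < N; v++)               /* vertices with no incoming edges go first */
        if (indegree[v] == 0)
            queue[rear++] = v;
    while (front < rear)
    {
        v = queue[front++];
        printf("%d ", v);                 /* v is printed before everything it points to */
        for (w = 0; w < N; w++)           /* removing v may free up its successors */
            if (adj[v][w] && --indegree[w] == 0)
                queue[rear++] = w;
    }
    printf("\n");
}
int main()
{
    int adj[N][N] = {0};                  /* edges: 0->1, 0->2, 1->3, 2->3, 3->4, 4->5 */
    adj[0][1] = adj[0][2] = adj[1][3] = adj[2][3] = adj[3][4] = adj[4][5] = 1;
    topsort(adj);                         /* prints one valid order: 0 1 2 3 4 5 */
    return 0;
}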
|
| DAGs can be considered to be a generalization of trees in which certain subtrees can be shared by different parts of the tree. In a tree with many identical subtrees, this can lead to a drastic decrease in space requirements to store the structure. Conversely, a DAG can be expanded to a forest of rooted trees using this simple algorithm: |
|
- While there is a vertex v with in-degree n > 1,
- Make n copies of v, each with the same outgoing edges but no incoming edges.
- Attach one of the incoming edges of v to each copy.
- Delete v.
|
|
| If we explore the graph without modifying it or comparing nodes for equality, this forest will appear identical to the original DAG. |
|
| Some algorithms become simpler when used on DAGs instead of general graphs. For example, search algorithms like depth-first search without iterative deepening normally must mark vertices they have already visited and not visit them again. If they fail to do this, they may never terminate because they follow a cycle of edges forever. Such cycles do not exist in DAGs. |
|
|
|
|
| Hashing, Searching & Sorting |
|
| Introduction |
|
| There are many applications requiring a search for a particular element. Searching refers to finding out whether a particular element is present in the list. The method that we use for this depends on how the elements of the list are organized. If the list is an unordered list, then we use linear or sequential search, whereas if the list is an ordered list, then we use binary search. |
|
| The search proceeds by sequentially comparing the key with elements in the list, and continues until either we find a match or the end of the list is encountered. If we find a match, the search terminates successfully by returning the index of the element in the list which has matched. If the end of the list is encountered without a match, the search terminates unsuccessfully. |
|
|
| Searching Algorithm |
|
| A search algorithm, broadly speaking, is an algorithm that takes a problem as input and returns a solution to the problem, usually after evaluating a number of possible solutions. Most of the algorithms studied by computer scientists that solve problems are kinds of search algorithms. The set of all possible solutions to a problem is called the search space. Brute-force search or "naïve"/uninformed search algorithms use the simplest, most intuitive method of searching through the search space, whereas informed search algorithms use heuristic functions to apply knowledge about the structure of the search space to try to reduce the amount of time spent searching. |
|
| Types of Searching Algorithm: |
|
| 1 | Uninformed search |
| 2 | List search |
| 3 | Tree search |
| 4 | Graph search |
| 5 | SQL search |
| 6 | Tradeoff-based search |
| 7 | Informed search |
| 8 | Adversarial search |
|
|
|
| Uninformed search |
|
| An uninformed search algorithm is one that does not take into account the specific nature of the problem. As such, they can be implemented in general, and then the same implementation can be used in a wide range of problems thanks to abstraction. The drawback is that most search spaces are extremely large, and an uninformed search (especially of a tree) will take a reasonable amount of time only for small examples. As such, to speed up the process, sometimes only an informed search will do. |
|
|
| List search |
|
| List search algorithms are perhaps the most basic kind of search algorithm. The goal is to find one element of a set by some key (perhaps containing other information related to the key). As this is a common problem in computer science, the computational complexity of these algorithms has been well studied. The simplest such algorithm is linear search, which simply examines each element of the list in order. It has expensive O(n) running time, where n is the number of items in the list, but can be used directly on any unprocessed list. A more sophisticated list search algorithm is binary search; it runs in O(log n) time. This is significantly better than linear search for large lists of data, but it requires that the list be sorted before searching (see sorting algorithm) and also be random access. Interpolation search is better than binary search for large sorted lists with fairly even distributions, but has a worst-case running time of O(n). |
|
| Grover's algorithm is a quantum algorithm that offers quadratic speedup over the classical linear search for unsorted lists. However, it requires a currently non-existent quantum computer on which to run. |
|
| Hash tables are also used for list search, requiring only constant time for search in the average case, but more space overhead and terrible O(n) worst-case search time. Another search based on specialized data structures uses self-balancing binary search trees and requires O(log n) time to search; these can be seen as extending the main ideas of binary search to allow fast insertion and removal. See associative array for more discussion of list search data structures. |
|
| Most list search algorithms, such as linear search, binary search, and self-balancing binary search trees, can be extended with little additional cost to find all values less than or greater than a given key, an operation called range search. The glaring exception is hash tables, which cannot perform such a search efficiently. |
|
|
| Tree search |
|
| Tree search algorithms are the heart of searching techniques. These algorithms search trees of nodes, whether the tree is explicit or implicit (generated on the fly). The basic principle is that a node is taken from a data structure, and its successors are examined and added to the data structure. By manipulating the data structure, the tree is explored in different orders, for instance level by level (breadth-first search) or by reaching a leaf node first and backtracking (depth-first search). Other examples of tree searches include iterative-deepening search, depth-limited search, bidirectional search, and uniform-cost search. |
|
|
| Graph search |
|
| Many of the problems in graph theory can be solved using graph traversal algorithms, such as Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm. These can be seen as extensions of the tree-search algorithms. |
|
|
| SQL search |
|
| Many of the problems in tree search can be solved using SQL-type searches. SQL typically works best on structured data. It offers one advantage over hierarchical search in that it allows access to the data in many different ways. In a hierarchical search, your path is forced by the branches of the tree (for example, names in alphabetical order), while with SQL you have the flexibility of accessing the data along multiple dimensions (name, address, income, etc.). |
|
|
| Tradeoff-based search |
|
| While SQL search offers great flexibility to search the data, it still operates as a computer does: by constraints. In SQL, constraints are used to eliminate data, while tradeoff-based search uses a more "human" metaphor. For example, if you are searching for a car in a dataset, your SQL statement looks like: select car from dataset where price < $30,000, and Consumption > 30MPG, and Color = 'RED'. A tradeoff-type query, by contrast, would look like "I like the red car, but I will settle for the blue if it is $2,000 cheaper". |
|
|
| Informed search |
|
| In an informed search, a heuristic that is specific to the problem is used as a guide. A good heuristic will make an informed search dramatically out-perform any uninformed search. |
|
| There are few prominent informed list-search algorithms. A possible member of that category is a hash table with a hashing function that is a heuristic based on the problem at hand. Most informed search algorithms explore trees. These include Best-first search and A*. Like the uninformed algorithms, they can be extended to work for graphs as well. |
|
|
| Adversarial search |
|
| In games such as chess, there is a game tree of all possible moves by both players and the resulting board configurations, and we can search this tree to find an effective playing strategy. This type of problem has the unique characteristic that we must account for any possible move our opponent might make. To account for this, game-playing computer programs, as well as other forms of artificial intelligence like machine planning, often use search algorithms like the minimax algorithm, search tree pruning, and alpha-beta pruning. |
|
|
| Sorting Algorithm |
|
| A sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important to optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions: |
|
- The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
- The output is a permutation, or reordering, of the input.
|
|
| Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956. Although many consider it a solved problem, useful new sorting algorithms are still being invented to this day (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and lower bounds. |
|
|
| Popular Sorting Algorithms |
|
- Bubble sort
- Selection sort
- Insertion sort
- Shell sort
- Merge sort
- Heapsort
- Quicksort
- Bucket sort
- Radix sort
|
|
| 1. Bubble Sort |
| Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. While simple, this algorithm is highly inefficient and is rarely used except in education. A slightly better variant, cocktail sort, works by inverting the ordering criteria and the pass direction on alternating passes. Its average case and worst case are both O(n²). |
|
| 2. Selection sort |
| Selection sort is a simple sorting algorithm that improves on the performance of bubble sort. It works by first finding the smallest element using a linear scan and swapping it into the first position in the list, then finding the second smallest element by scanning the remaining elements, and so on. Selection sort is unique compared to almost any other algorithm in that its running time is not affected by the prior ordering of the list: it performs the same number of operations because of its simple structure. Selection sort also requires only n swaps, and hence just Θ(n) memory writes, which is optimal for any sorting algorithm. Thus it can be very attractive if writes are the most expensive operation, but otherwise selection sort will usually be outperformed by insertion sort or the more complicated algorithms. |
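|
| A minimal C sketch, with one swap per pass as described (the function name is illustrative): |
|
/* selection sort: (n-1) passes, each scanning for the smallest remaining element */
void selection_sort(int a[], int n)
{
    int i, j, min, t;
    for (i = 0; i < n - 1; i++)
    {
        min = i;
        for (j = i + 1; j < n; j++)       /* linear scan of the unsorted tail */
            if (a[j] < a[min])
                min = j;
        t = a[i]; a[i] = a[min]; a[min] = t;   /* one swap per pass */
    }
}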
|
| 3. Insertion sort |
| Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures - the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists. This method is much more efficient than the bubble sort, though it has more constraints. |
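|
| A minimal C sketch of the in-place variant described above (the function name is illustrative): |
|
/* insertion sort: shift larger sorted elements right, then drop the key in place */
void insertion_sort(int a[], int n)
{
    int i, j, key;
    for (i = 1; i < n; i++)
    {
        key = a[i];                       /* next item to place */
        for (j = i - 1; j >= 0 && a[j] > key; j--)
            a[j + 1] = a[j];              /* shift the sorted prefix over by one */
        a[j + 1] = key;
    }
}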
|
| 4. Shell sort |
| Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with less than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory. |
|
| 5. Merge sort |
| Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). |
|
| 6. Heapsort |
| Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time. |
|
| 7. Quicksort |
| Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower (O(n²)) performance, but if at each step we choose the median as the pivot then it works in O(n log n). |
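|
| A minimal C sketch using the last element as the pivot (the simplest choice, and exactly the kind that degrades to O(n²) on already-sorted input); call it as quicksort(a, 0, n-1): |
|
/* quicksort with in-place partitioning around the last element */
void quicksort(int a[], int lo, int hi)
{
    int pivot, i, j, t;
    if (lo >= hi)
        return;                           /* zero or one element: already sorted */
    pivot = a[hi];
    i = lo;
    for (j = lo; j < hi; j++)             /* move smaller elements before the pivot */
        if (a[j] < pivot)
        {
            t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    t = a[i]; a[i] = a[hi]; a[hi] = t;    /* pivot lands in its final position */
    quicksort(a, lo, i - 1);              /* recursively sort the lesser sublist */
    quicksort(a, i + 1, hi);              /* and the greater sublist */
}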
|
| 8. Bucket sort |
| Bucket sort is a sorting algorithm that works by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. A variation of this method, called the single buffered count sort, is faster than quicksort and takes about the same time to run on any set of data. |
|
| 9. Radix sort |
| Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n · k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is small. |
|
|
|
|
| Hashing, Searching & Sorting |
|
| Hashing Function |
|
| A data object called a symbol table is required to be defined and implemented in many applications, such as compiler/assembler writing. A symbol table is nothing but a set of pairs (name, value), where value represents a collection of attributes associated with the name, and the collection of attributes depends on the program element identified by the name. |
|
| For example, if a name x is used to identify an array in a program, then the attributes associated with x are the number of dimensions, lower bound and upper bound of each dimension, and element type. Therefore, a symbol table can be thought of as a linear list of pairs (name, value), and we can use a list data object for realizing a symbol table. |
|
| A symbol table is referred to or accessed frequently for adding a name, or for storing or retrieving the attributes of a name. Therefore, accessing efficiency is a prime concern when designing a symbol table. The most common method of implementing a symbol table is to use a hash table. |
|
| Hashing is a method of directly computing the index of the table by using a suitable mathematical function called a hash function. |
|
| Note |
| The hash function operates on the name to be stored in the symbol table, or whose attributes are to be retrieved from the symbol table. |
|
| If h is a hash function and x is a name, then h(x) gives the index of the table where x, along with its attributes, can be stored. If x is already stored in the table, then h(x) gives the index of the table where it is stored, in order to retrieve the attributes of x from the table. There are various methods of defining a hash function. One is the division method. In this method, we take the sum of the values of the characters, divide it by the size of the table, and take the remainder. This gives us an integer value lying in the range of 0 to (n-1), if the size of the table is n. |
|
| Another method is the mid-square method. In this method, the identifier is first squared and then the appropriate number of bits from the middle of the square is used as the hash value. Since the middle bits of the square usually depend on all the characters in the identifier, it is expected that different identifiers will result in different values. The number of middle bits that we select depends on the table size. Therefore, if r is the number of middle bits that we are using to form the hash value, then the table size will be 2^r. So when we use this method, the table size is required to be a power of 2. |
|
| A third method is folding, in which the identifier is partitioned into several parts, all but the last part being of the same length. These parts are then added together to obtain the hash value. |
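|
| Minimal C sketches of the division and folding methods follow; the part length of four characters in the folding version is an assumed choice, not specified in the text: |
|
/* division method: sum the character values and take the remainder mod the
   table size n, giving an index in the range 0 to (n-1) */
unsigned hash_division(const char *name, unsigned n)
{
    unsigned sum = 0;
    int i;
    for (i = 0; name[i] != '\0'; i++)
        sum += (unsigned char) name[i];
    return sum % n;
}
/* folding: partition the identifier into parts of equal length (four characters
   here, an assumed choice), add the parts, and reduce the sum mod the table size */
unsigned hash_folding(const char *name, unsigned n)
{
    unsigned sum = 0, part = 0;
    int i;
    for (i = 0; name[i] != '\0'; i++)
    {
        part = part * 256u + (unsigned char) name[i];  /* pack characters into a part */
        if ((i + 1) % 4 == 0)
        {
            sum += part;
            part = 0;
        }
    }
    return (sum + part) % n;              /* the last, possibly shorter, part is added too */
}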
| To store the name or to add attributes of the name, we compute the hash value of the name, and place the name or attributes, as the case may be, at that place in the table whose index is the hash value of the name. |
|
| To retrieve the attribute values of a name kept in the symbol table, we apply the hash function to the name to obtain the index of the table at which its attributes are stored. So we find that no comparisons are required to be done; the time required for the retrieval is independent of the table size. The retrieval is possible in a constant amount of time, which is the time taken for computing the hash function. |
|
| Therefore a hash table seems to be the best structure for realizing a symbol table, but there is one problem associated with hashing, and that is collision. |
|
| A hash collision occurs when two identifiers are mapped to the same hash value. This happens because a hash function defines a mapping from a set of valid identifiers to the set of those integers that are used as indices of the table. |
|
| Therefore we see that the domain of the mapping defined by the hash function is much larger than the range of the mapping, and hence the mapping is of a many-to-one nature. Therefore, when we implement a hash table, a suitable collision-handling mechanism is to be provided, which will be activated when there is a collision. |
|
| Collision handling involves finding an alternative location for one of the two colliding symbols. For example, if x and y are different identifiers and h(x) = h(y) = i, then x and y are the colliding symbols. If x is encountered before y, then the ith entry of the table will be used for accommodating the symbol x, but later on, when y comes, there is a hash collision. Therefore we have to find a suitable alternative location either for x or for y. This means we can either accommodate y in that alternative location, or we can move x to that location and place y in the ith location of the table. |
|
| Various methods are available to obtain an alternative location to handle the collision. They differ from each other in the way in which a search is made for an alternative location. |
|
|
| The following are commonly used collision-handling techniques: |
|
| Linear Probing or Linear Open Addressing |
|
| In this method, if for an identifier x, h(x) = i, and the ith location is already occupied, we search for a location close to the ith location by doing a linear search, starting from the (i+1)th location, to accommodate x. This means we start from the (i+1)th location and do the linear search until we get an empty location; once we get an empty location, we accommodate x there. |
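|
| A minimal C sketch of this idea with integer keys; the sentinel value EMPTY and the wrap-around past the end of the table are implementation choices left implicit in the text: |
|
#define TABLE_SIZE 11
#define EMPTY (-1)                        /* sentinel marking a free slot */
/* insert a nonnegative key by linear probing; the caller must ensure the table
   is not full, otherwise this loop would never find an empty slot */
void insert_linear(int table[], int key)
{
    int i = key % TABLE_SIZE;             /* h(key) */
    while (table[i] != EMPTY)             /* ith slot occupied: try the next one */
        i = (i + 1) % TABLE_SIZE;         /* wrap around past the last slot */
    table[i] = key;
}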
|
|
| Rehashing |
|
| In rehashing, we find an alternative empty location by modifying the hash function and applying the modified hash function to the colliding symbol. For example, if x is the symbol and h(x) = i, and the ith location is already occupied, then we modify the hash function h to h1 and compute h1(x). If h1(x) = j and the jth location is empty, then we accommodate x in the jth location. Otherwise, we once again modify h1 to some h2 and repeat the process until the collision is handled. Once the collision is handled, we revert to the original hash function before considering the next symbol. |
|
|
| Overflow chaining |
|
| Overflow chaining is a method of implementing a hash table in which the collisions are handled automatically. In this method, we use two tables: a symbol table to accommodate identifiers and their attributes, and a hash table, which is an array of pointers pointing to symbol table entries. Each symbol table entry is made of three fields: the first for holding the identifier, the second for holding the attributes, and the third for holding the link or pointer that can be made to point to any symbol table entry. |
|
| The insertions into the symbol table are done as follows: |
| If x is the symbol to be inserted, it will be added to the next available entry of the symbol table. The hash value of x is then computed. If h(x) = i and the ith hash table pointer does not point to any symbol table entry, then the ith hash table pointer is made to point to the symbol table entry in which x is stored. If the ith hash table pointer is already pointing to some symbol table entry, then the link field of the symbol table entry containing x is made to point to the symbol table entry to which the ith hash table pointer is pointing, and the ith hash table pointer is made to point to the symbol table entry containing x. This is equivalent to building a linked list on the ith index of the hash table. |
|
| The retrieval of attributes is done as follows: |
If x is a symbol, then we obtain h(x), use this value as the index of the hash table, and traverse the list built on this index to get the entry that contains x. A typical hash table implemented using this technique is shown here.
The symbols to be stored are x1,y1,z1,x2,y2,z2. The hash function that we use is h(symbol) = (value of first letter of the symbol) mod n, where n is the size of the table. |
if h(x1) = i
h(y1) = j
h(z1) = k |
|
| then |
h(x2) = i
h(y2) = j
h(z2) = k |
|
|
| Therefore, the contents of the symbol table will be as shown in the image below: |
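|
| A C sketch of the chaining scheme is given below; it assumes lowercase identifiers, a fixed-size name field, and illustrative function names, none of which are prescribed by the text: |
|
#include <stdlib.h>
#include <string.h>
#define TABLE_SIZE 26
struct symentry                           /* identifier, its attributes, and a link field */
{
    char name[32];
    int attr;
    struct symentry *link;
};
struct symentry *hashtab[TABLE_SIZE];     /* array of pointers into the symbol table */
int h(const char *name)
{
    return (name[0] - 'a') % TABLE_SIZE;  /* hash on the first letter; lowercase assumed */
}
void insert(const char *name, int attr)
{
    int i = h(name);
    struct symentry *p = (struct symentry *) malloc(sizeof(struct symentry));
    strcpy(p->name, name);                /* names assumed shorter than 32 characters */
    p->attr = attr;
    p->link = hashtab[i];                 /* old chain hangs off the new entry */
    hashtab[i] = p;                       /* collisions are handled automatically */
}
struct symentry *lookup(const char *name)
{
    struct symentry *p;
    for (p = hashtab[h(name)]; p != NULL; p = p->link)
        if (strcmp(p->name, name) == 0)   /* traverse the chain built on h(name) */
            return p;
    return NULL;
}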
|
|
|
|
|
|
|
| Hashing, Searching & Sorting |
|
| Linear Search |
|
| Linear search is a search algorithm, also known as sequential search, that is suitable for searching a set of data for a particular value. |
|
| It operates by checking every element of a list one at a time in sequence until a match is found. Linear search runs in O(N). If the data are distributed randomly, on average (N+1)/2 comparisons will be needed. The best case is that the value is equal to the first element tested, in which case only 1 comparison is needed. The worst case is that the value is not in the list (or is the last item in the list), in which case N comparisons are needed. |
|
| The simplicity of the linear search means that if just a few elements are to be searched it is less trouble than more complex methods that require preparation such as sorting the list to be searched or more complex data structures, especially when entries may be subject to frequent revision. Another possibility is when certain values are much more likely to be searched for than others and it can be arranged that such values will be amongst the first considered in the list. |
|
|
|
| Example Program: |
|
#include <stdio.h>
#define MAX 10
/*Linear Search Function*/
void lsearch(int list[],int n,int element)
{
int i, flag = 0;
for(i=0;i<n;i++)
if( list[i] == element)
{
printf(" The element whose value is %d is present at position %d in list\n", element,i+1);
flag =1;
break;
}
if( flag == 0)
printf("The element whose value is %d is not present in theist\n", element);
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
/*Function to print content of list */
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
int main()
{
int list[MAX], n, element;
printf("Enter the number of elements in the list max = 10\n");
scanf("%d",&n);
readlist(list,n);
printf("\nThe list before sorting is:\n");
printlist(list,n);
printf("\nEnter the element to be searched\n");
scanf("%d",&element);
lsearch(list,n,element);
}
|
|
| Output: |
|
|
|
|
|
|
|
| Hashing, Searching & Sorting |
|
| Binary Search |
|
| A Binary Search algorithm (or binary chop) is a technique for finding a particular value in a sorted list. It makes progressively better guesses, and closes in on the sought value by selecting the median element in a list, comparing its value to the target value, and determining whether the selected value is greater than, less than, or equal to the target value. A guess that turns out to be too high becomes the new top of the list, and a guess that is too low becomes the new bottom of the list. Pursuing this strategy iteratively, it narrows the search by a factor of two each time, and finds the target value. A binary search is an example of a dichotomic divide and conquer search algorithm. |
|
| The prerequisite for using binary search is that the list must be a sorted one. We compare the element to be searched with the element placed approximately in the middle of the list. |
|
| If a match is found, the search terminates successfully. Otherwise, we continue the search for the key in a similar manner either in the upper half or the lower half. If the elements of the list are arranged in ascending order, and the key is less than the element in the middle of the list, the search is continued in the lower half. If the elements of the list are arranged in descending order, and the key is greater than the element in the middle of the list, the search is continued in the upper half of the list. The procedure for the binary search is given in the following program. |
|
|
| The algorithm |
|
| The most common application of binary search is to find a specific value in a sorted list. To cast this in the frame of the guessing game (see Example below), realize that we are now guessing the index, or numbered place, of the value in the list. This is useful because, given the index, other data structures will contain associated information. Suppose a data structure containing the classic collection of name, address, telephone number and so forth has been accumulated, and an array is prepared containing the names, numbered from one to N. A query might be: what is the telephone number for a given name X? To answer this, the array would be searched and the index (if any) corresponding to that name determined, whereupon it would be used to report the associated telephone number and so forth. Appropriate provision must be made for the name not being in the list (typically by returning an index value of zero); indeed, the question of interest might be only whether X is in the list or not. |
|
| If the list of names is in sorted order, a binary search will find a given name with far fewer probes than the simple procedure of probing each name in the list, one after the other in a linear search, and the procedure is much simpler than organizing a hash table though that would be faster still, typically averaging just over one probe. This applies for a uniform distribution of search items but if it is known that some few items are much more likely to be sought for than the majority then a linear search with the list ordered so that the most popular items are first may do better. |
|
| The binary search begins by comparing the sought value X to the value in the middle of the list; because the values are sorted, it is clear whether the sought value would belong before or after that middle value, and the search then continues through the correct half in the same way. Only the sign of the difference is inspected: there is no attempt at an interpolation search based on the size of the differences. |
|
|
| Example Program |
|
#include <stdio.h>
#define MAX 10
void bsearch(int list[],int n,int element)
{
int l,u,m, flag = 0;
l = 0;
u = n-1;
while(l <= u)
{
m = (l+u)/2;
if( list[m] == element)
{
printf(" The element whose value is %d is present at position %d in list\n", element,m+1);
flag =1;
break;
}
else
if(list[m] < element)
l = m+1;
else
u = m-1;
}
if( flag == 0)
printf("The element whose value is %d is not present in the list\n", element);
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
int main()
{
int list[MAX], n, element;
printf("Enter the number of elements in the list max = 10\n");
scanf("%d",&n);
readlist(list,n);
printf("\nThe list before sorting is:\n");
printlist(list,n);
printf("\nEnter the element to be searched\n");
scanf("%d",&element);
bsearch(list,n,element);
}
|
|
| Output |
|
|
|
|
|
|
|
| Hashing, Searching & Sorting |
|
| Bubble Sort |
|
| Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. |
|
| Bubble sorting is a simple sorting technique in which we arrange the elements of the list by forming pairs of adjacent elements. That means we form the pair of the ith and (i+1)th element. If the order is ascending, we interchange the elements of the pair if the first element of the pair is greater than the second element. That means for every pair |
| (list[i],list[i+1]) for i :=1 to (n-1) if list[i] > list[i+1], |
|
| we need to interchange list[i] and list[i+1]. Carrying this out once will move the element with the highest value to the last or nth position. Therefore, we repeat this process the next time with the elements from the first to (n-1)th positions. This will bring the highest value from among the remaining (n-1) values to the (n-1)th position. We repeat the process with the remaining (n-2) values, and so on. Finally, we arrange the elements in ascending order. This requires (n-1) passes. In the first pass we have (n-1) pairs, in the second pass we have (n-2) pairs, and in the last (or (n-1)th) pass, we have only one pair. Therefore, the number of probes or comparisons that are required to be carried out is (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is O(n²). |
|
| example: |
5 1 4 2 8 - unsorted array
1 4 2 5 8 - after one pass
1 2 4 5 8 - sorted array |
|
| The algorithm gets its name from the way smaller elements "bubble" to the top (i.e. the beginning) of the list via the swaps. (Another opinion: it gets its name from the way greater elements "bubble" to the end.) Because it only uses comparisons to operate on elements, it is a comparison sort. This is the easiest comparison sort to implement. |
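|
| A complete C sketch of the algorithm as described, with (n-1) passes and the pass length shrinking as the largest values settle at the end; the array is the one from the example above: |
|
#include <stdio.h>
/* bubble sort: pass i bubbles the largest remaining value to position n-i */
void bubble_sort(int list[], int n)
{
    int i, j, t;
    for (i = 0; i < n - 1; i++)           /* (n-1) passes */
        for (j = 0; j < n - 1 - i; j++)   /* pairs (list[j], list[j+1]) */
            if (list[j] > list[j + 1])
            {
                t = list[j];              /* out of order: interchange them */
                list[j] = list[j + 1];
                list[j + 1] = t;
            }
}
int main()
{
    int a[] = {5, 1, 4, 2, 8};            /* the unsorted array from the example */
    int i, n = 5;
    bubble_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);             /* prints: 1 2 4 5 8 */
    printf("\n");
    return 0;
}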
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|