Chap 1. Analysis
Syllabus: Algorithm definition, space complexity, time complexity, worst-case / best-case / average-case complexity, asymptotic notation (O, Ω, Θ), sorting algorithms (insertion sort, heap sort), sorting in linear time, searching algorithms, recursive algorithms (Tower of Hanoi, permutations). [6]
Algorithms: An algorithm is a finite set of instructions in a specific sequence which, if followed, solves a particular problem. In short, it is a step-by-step solution to the problem.
Properties / Characteristics of the algorithm:
- Input: every algorithm can have 0 or more externally supplied quantities.
- Output: every algorithm must produce at least one output.
- Definiteness: Every step must be clear and unambiguous.
- Effectiveness: Every instruction must be simple/basic so that, it can be carried out by a person using paper and pencil.
- Finiteness: Algorithm must be terminated after executing finite number of steps.
Advantages:
- It is very easy to write and easy to understand.
- The logic to solve the problem becomes easy to understand with the help of algorithms.
Disadvantages:
- Writing algorithms is time-consuming.
- It is difficult to show branching and looping.
Note: Algorithms that are definite and effective are also called computational procedures, e.g., the operating system of a digital computer.
To help us achieve the criterion of definiteness, algorithms are written in a programming language. Such languages are designed so that each legitimate sentence has a unique meaning.
A program is the expression of an algorithm in a programming language. Sometimes the words such as procedure, function and subroutine are used synonymously for program.
Some of the areas of the study of algorithms:
- How to devise algorithms: Creating an algorithm is an art which may never be fully automated. There are different design techniques such as divide and conquer, greedy method, dynamic programming, backtracking or branch and bound methods.
- How to validate algorithms: Once an algorithm is devised, it is necessary to show that it computes the correct answer for all possible legal inputs. This process is called algorithm validation.
Once the validity of the method has been shown, a program can be written and the second phase begins. This phase is referred to as program proving or sometimes as program verification. There are two methods to do this: the first is known as inserting assertions (a set of statements related to the input and output variables of the program), and the second is known as specification (the definition of what a computer program is expected to do; specifications are most important for external interfaces that must remain stable).
Assertions and specifications can be expressed in the predicate calculus.
- How to analyze algorithms (Analysis of algorithms / performance analysis): This is the task of determining how much computing time and storage an algorithm requires. The measure used for this is known as complexity; it is a quantitative analysis. This method can be used to choose the best among different algorithms for the same problem, and also to check whether the software will meet any efficiency constraints that exist.
- How to test a program: Testing a program consists of two phases:
- Debugging: is the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them. (E. Dijkstra has pointed out, “debugging can only point to the presence of errors, but not to their absence”)
- Profiling or performance measurement: is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results.
Computational Complexity:
The efficiency or complexity of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
Time Complexity: is the amount of computer time it needs to run to completion.
Space Complexity: is the amount of memory it needs to run to completion.
Performance evaluation can be loosely divided into two phases.
- A priori analysis: The algorithm is analyzed before it is executed, so no CPU time is used; the estimate is machine-independent.
- A posteriori analysis: The algorithm is analyzed by actually executing it, so CPU time is used; the measurement is machine-dependent.
The performance analysis in both phases is expressed using a special terminology, asymptotic notation. There are 5 asymptotic notations, which are as below:
- O (Big Oh) (upper bound): f(n) = O(g(n)) (read as “f of n is big oh of g of n”) if and only if there exist positive constants c and n0 such that f(n) ≤ c * g(n) for all n ≥ n0.
- Ω (Omega) (lower bound): f(n) = Ω(g(n)) (read as “f of n is omega of g of n”) if and only if there exist positive constants c and n0 such that f(n) ≥ c * g(n) for all n ≥ n0.
- Θ (Theta) (tight bound): f(n) = Θ(g(n)) (read as “f of n is theta of g of n”) if and only if there exist positive constants c1, c2 and n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
- o (Little Oh): f(n) = o(g(n)) (read as “f of n is little oh of g of n”) if and only if lim (n→∞) f(n)/g(n) = 0.
- ω (Little Omega): f(n) = ω(g(n)) (read as “f of n is little omega of g of n”) if and only if lim (n→∞) g(n)/f(n) = 0.
Examples:
- 3n + 2 = O(n)
- 10n² + 3n + 2 = O(n²)
- 6·2ⁿ + n² = O(2ⁿ)
- 3n + 2 = Ω(n)
- 10n² + 3n + 2 = Ω(n²)
- 6·2ⁿ + n² = Ω(2ⁿ)
- 10n² + 3n + 2 = Ω(n)
- 6·2ⁿ + n² = Ω(n²)
- 3n + 2 = Θ(n)
- 10n² + 3n + 2 = Θ(n²)
- 6·2ⁿ + n² = Θ(2ⁿ)
- 3n + 2 = o(n²)
- 10n² + 3n + 2 = o(n³)
- 6·2ⁿ + n² = o(3ⁿ)
- 6·2ⁿ + n² = o(2ⁿ log n)
- 6·2ⁿ + n² ≠ o(2ⁿ)
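The first example above can be checked numerically: to show 3n + 2 = O(n) one exhibits witness constants c and n0. The sketch below is illustrative (the helper name `is_bounded` and the witnesses c = 4, n0 = 2 are one valid choice, not the only one); it tests the inequality over a finite range rather than proving it.

```python
def is_bounded(f, g, c, n0, upto=1000):
    """Check f(n) <= c*g(n) for every n in [n0, upto]."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

# 3n + 2 <= 4n holds for every n >= 2, so c = 4, n0 = 2 are witnesses.
assert is_bounded(lambda n: 3*n + 2, lambda n: n, c=4, n0=2)
# ...but c = 3 is not a witness: 3n + 2 > 3n for every n.
assert not is_bounded(lambda n: 3*n + 2, lambda n: n, c=3, n0=1)
```

A finite check like this cannot replace the algebraic argument, but it is a quick sanity test when picking constants.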
There are different measurements of the speed of any given algorithm (time complexity). Given an input size of N, they can be described as follows, from fastest to slowest:
Name | Description | Formula | Example |
constant time (fastest) | takes a fixed amount of time, no matter how large the input is | K | Array index lookup |
logarithmic time | takes an amount of time proportional to the logarithm of N | log(N) | Binary search |
linear time | takes an amount of time directly proportional to N | K * N | Iterating through an array |
linearithmic time | takes an amount of time between linear and quadratic | N * log(N) | The linearithmic sorts (quicksort, heapsort, mergesort) |
subquadratic time | running time is o(N²), i.e. grows strictly more slowly than N² | o(N²) | Shell sort |
quadratic time | takes an amount of time proportional to the square of N | N² | Selection or bubble sort |
cubic time | takes an amount of time proportional to the cube of N | N³ | Naive matrix multiplication |
polynomial time | takes an amount of time proportional to N raised to some constant power | N^K | Comparison sorts (bubble, insertion, selection sort) |
sub-exponential time (slow) | runs in time greater than polynomial ("super-polynomial") but less than exponential | between N^K and K^N | |
exponential time (slower) | takes an amount of time proportional to a constant raised to the Nth power | K^N | Brute-force solution to Rubik's Cube |
factorial time (slowest) | takes an amount of time proportional to N factorial | N! | Brute-force solution to the Traveling Salesman Problem |
Complexity Analysis
A given operation can have different time complexities with different orders/sets of input. The different methods of time complexity analysis are as follows:
Name | Description | |
best-case | A case where the operation executes as fast as it possibly can | |
average-case | A case where the operation executes in a time comparable to the majority of possible cases | |
worst-case | A case where the operation executes as slowly as it possibly can | |
amortized worst-case | The worst-case cost averaged over a long sequence of operations | |
Choosing the right algorithm depends upon which cases you expect your application to encounter. For example, an application that must protect itself from malicious input will avoid naive implementations of quicksort, which has a worst-case time complexity of O(N²) despite having one of the fastest average-case time complexities of all comparison sorts.
Analyze the following algorithms:
Q 1) Write an algorithm to find addition of first n numbers.
Algorithm add_first_n(n)
{
sum=0;
for i = 1 to n do
sum = sum + i;
return sum;
}
Q 2) Write an algorithm to find addition of any n numbers.
Algorithm add_any_n(n)
{
sum=0;
for i = 1 to n do
{
read no;
sum = sum + no;
}
return sum;
}
Q 3) Write an algorithm to find addition of elements of given array with n integers.
Algorithm add_array_elements(a, n)
{
sum=0;
for i = 1 to n do
sum = sum + a[i];
return sum;
}
Q 4) Let a[1..n] be an array of positive and negative integers. Write an algorithm to count all positive integers.
Algorithm count(a, n)
{ count=0;
for i = 1 to n do
if (a[i] > 0) then
count = count + 1;
return count;
}
Q 5) Let a[1..n] be an array of integers. Write an algorithm to find an index such that a[i] = i.
Algorithm find_index_1(a, n)
{
for i = 1 to n do
if (a[i] = i) then
return i;
return 0;
}
Q 6) Let a[1..n] be an array of integers. Write an algorithm to find an index such that a[i] ≠ i.
Algorithm find_index_2(a, n)
{
for i = 1 to n do
if (a[i] ≠ i) then
return i;
return 0;
}
Q 7) Write an algorithm to add two matrices of order m X n.
Algorithm add_matrices(a, b, c, m, n)
{
sum=0;
for i = 1 to m do
for j = 1 to n do
c[i, j] = a[i, j] + b[i, j];
return c;
}
Q 8) Write an algorithm to multiply two matrices of order m X n and n X p.
Algorithm multiply(a, b, c, m, n, p)
{
sum=0;
for i = 1 to m do
for j = 1 to p do
{
c[i, j] =0;
for k = 1 to n do
c[i, j] = c[i, j] + a[i, k] * b[k, j];
}
return c;
}
Q 9) Write an algorithm to check whether the given number is perfect square or not.
Algorithm per_sqr(n)
{
for i = 1 to n/2 do
if (i * i = n) then
break;
if (i > n/2) then
write “The given number is not a perfect square”;
else
write “The given number is a perfect square”;
}
Q 10) Write an algorithm to check whether the given number is prime or not.
Algorithm prime (n)
{
for i = 2 to n/2 do
if ((n % i) = 0) then
return false;
return true;
}
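The loop in the algorithm above must start at i = 2, since i = 1 divides every n (starting at 1 would report every number as composite). A minimal executable sketch of the corrected test:

```python
def is_prime(n):
    """Trial division, mirroring the corrected pseudocode:
    divisors are tried from 2 up to n/2."""
    if n < 2:          # 0 and 1 are not prime
        return False
    for i in range(2, n // 2 + 1):
        if n % i == 0:
            return False
    return True
```

Trial division up to n/2 matches the handout; stopping at √n would be asymptotically faster but is not what the pseudocode does.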
Q 11) Write an algorithm to print first n even numbers.
Algorithm even_nu_1(n)
{
ev=0;
for i = 1 to n do
{
write ev;
ev = ev + 2;
}
}
Q 12) Write an algorithm to print even numbers up to n.
Algorithm even_nu_2(n)
{
ev=0;
while (ev <= n) do
{
write ev;
ev = ev + 2;
}
}
Q 13) Write an algorithm to read an integer and reverse the number.
Algorithm reverse_num(n, num)
{
num=0;
while (n > 0)
{
num = num * 10 + (n mod 10);
n = n div 10;
}
return num;
}
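Note that the accumulator must be multiplied by 10 before each digit is added; without the multiplication the algorithm merely sums the digits. A short Python transcription:

```python
def reverse_num(n):
    """Reverse the decimal digits of a non-negative integer:
    shift the accumulator left one digit, then append n mod 10."""
    num = 0
    while n > 0:
        num = num * 10 + n % 10
        n //= 10
    return num
```

Trailing zeros are necessarily dropped (e.g. 120 reverses to 21), which matters for the palindrome check that follows.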
Q 14) Write an algorithm to check whether it is a palindrome or not.
Boolean Algorithm palindrome(n)
{
n1=n;
num=0;
while (n > 0)
{
num = num * 10 + (n mod 10);
n = n div 10;
}
if (n1 = num) then
return true;
else
return false;
}
Q 15) Write an algorithm to read integers until we have read n even numbers, and store them in an array.
Algorithm even_nu_3(a, n)
// a[1..n] is the output array.
{
count=0;
while (count < n)
{
read num;
if ((num % 2) = 0) then
{
count = count + 1;
a[count] = num;
}
}
return a;
}
Recursive algorithm: An algorithm that calls itself is called a recursive algorithm.
Algorithm RSum_firstn(n)
{ if ( n<=0) then return 0;
else
return (n + RSum_firstn(n-1));
}
Algorithm RSum_array(a,n)
{ if ( n<=0) then return 0;
else
return (a[n] + RSum_array(a, n-1));
}
Algorithm Expo(x,y)
{ if (y == 0) then return 1;
else
if (y == 1) then return x;
else
return (x * Expo(x, y-1));
}
Algorithm TowersOfHanoi(n, x, y, z)
{ if(n>=1) then
{
TowersOfHanoi(n-1, x, z, y);
Write(“move top disk from tower”, x,” to top of tower ”, y);
TowersOfHanoi(n-1, z, y, x);
}
}
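The Towers of Hanoi recursion can be transcribed directly into Python; here the moves are collected in a list instead of printed, so the result can be inspected (the parameter order n, source, destination, spare mirrors the pseudocode):

```python
def towers_of_hanoi(n, x, y, z, moves=None):
    """Move n disks from peg x to peg y using peg z as spare,
    recording each move as a (from, to) pair."""
    if moves is None:
        moves = []
    if n >= 1:
        towers_of_hanoi(n - 1, x, z, y, moves)  # clear the top n-1 disks onto z
        moves.append((x, y))                    # move the largest disk
        towers_of_hanoi(n - 1, z, y, x, moves)  # bring the n-1 disks onto y
    return moves
```

For n disks the recursion always produces 2ⁿ − 1 moves, which is why the algorithm's time complexity is O(2ⁿ).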
Algorithm Perm(a, k, n)
{ if(k=n) then write (a[1:n]); //output permutation.
else
For i := k to n do
{
t:= a[k]; a[k] := a[i]; a[i] := t ;
perm(a, k+1, n);
t := a[k]; a[k] := a[i]; a[i]: = t;
}
}
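Algorithm Perm works by swapping each remaining element into position k, recursing on k+1, and swapping back to restore the array. A 0-based Python sketch of the same swap / recurse / swap-back scheme (here the permutations are collected and returned rather than written out):

```python
def perm(a, k=0):
    """Collect all permutations of a[k:] in place, restoring a
    after each recursive call, as in Algorithm Perm (0-based)."""
    out = []
    if k == len(a) - 1:
        out.append(list(a))          # a complete permutation
    else:
        for i in range(k, len(a)):
            a[k], a[i] = a[i], a[k]  # place a[i] at position k
            out.extend(perm(a, k + 1))
            a[k], a[i] = a[i], a[k]  # swap back (restore)
    return out
```

There are n! permutations of n elements, so this algorithm is O(n!).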
Different Sorting Algorithms :
- Sorting: It is an arrangement of set of data in some given order.
There are two types of sorting:
- Internal Sorting: If all the data that is to be sorted can be adjusted at a time in main memory, then internal sorting methods are used.
- External Sorting: If all the data to be sorted cannot fit in main memory at one time and some must be kept in auxiliary memory (hard disk, etc.), then external sorting methods are used.
- Stable sort: A sorting algorithm is stable if it preserves the order of duplicate keys.
There are different methods of internal sorting which are as below:
Algorithm SeqSort(a,n)
{
For i = 1 to n-1 do
For j = (i+1) to n do
If (a[i] > a[j]) then
{
t = a[i]; a[i] = a[j]; a[j] = t;
}
Return a;
}
Time Complexity: Worst case=O(n2), Average case=O(n2), best case=O(n2)
-----------------------------------------------------------------------------------------------
Algorithm bubbleSort(a,n)
{
For i = n to 2 step -1 do
For j = 1 to (i-1) do
If (a[j] > a[j+1]) then
{
t = a[j]; a[j] = a[j+1]; a[j+1] = t;
}
Return a;
}
Time Complexity: Worst case=O(n2), Average case=O(n2), best case=O(n2)
-----------------------------------------------------------------------------------------------
Algorithm SelectionSort(a, n)
{
For i = 1 to n-1 do
{
j = i;
For k = i+1 to n do
if (a[k] < a[ j]) then j = k;
t = a[ i];
a[ i] = a[ j];
a[ j] = t;
}
}
Time Complexity: Worst case=O(n2), Average case=O(n2), best case=O(n2)
----------------------------------------------------------------------------------------------
Algorithm InsertionSort(a, n)
{
For j = 2 to n do
{
Item = a[ j];
i = j-1;
While ((i >= 1) and (item < a[i])) do
{
a[i+1] = a[i];
i = i-1;
}
a[i+1] = item;
}
}
Time Complexity: Worst case=O(n2), Average case=O(n2), best case=O(n)
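A 0-based Python transcription of Algorithm InsertionSort; the while loop shifts larger elements right until the saved item's position is found, which is why an already-sorted input gives the O(n) best case:

```python
def insertion_sort(a):
    """In-place insertion sort, mirroring the pseudocode above."""
    for j in range(1, len(a)):
        item = a[j]
        i = j - 1
        while i >= 0 and item < a[i]:
            a[i + 1] = a[i]   # shift the larger element one slot right
            i -= 1
        a[i + 1] = item       # drop item into its place
    return a
```

Because equal elements are never moved past each other (`item < a[i]` is strict), insertion sort is a stable sort.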
----------------------------------------------------------------------------------------------
Algorithm shell_sort(a,n)
{
inc=round(n/2);
while(inc>0)
{
for i = inc to n-1 do
{
temp=a[i];
j = i;
while ((j>=inc) and (a[j-inc] > temp))
{
a[ j] = a[ j –inc];
j = j-inc;
}
a[ j ] = temp;
}
inc = round(inc / 2.2);
}
return a;
}
Time Complexity: depends on the gap sequence; worst case = O(n (log n)²) for the best-known simple sequences, best case = O(n log n).
-----------------------------------------------------------------------------------------------
Algorithm bucketSort(a, n)
{
For i = 1 to n do
Insert a[i] into list B[ a[i] ];
For i = 0 to n-1 do
Sort list B[i] with insertion sort
Return the concatenation of the lists B[0], B[1], B[2], …… , B[n-1] together in order.
}
Time Complexity: Worst case=O(n²) (all keys in one bucket), Average case=O(n+k), best case=O(n+k), where k is the number of buckets.
----------------------------------------------------------------------------------------------
Algorithm CountSort(a,b,n)
{
max=a[1];
for i = 1 to n do
if (a[i] > max)
max=a[i];
for i = 1 to max do
c[i] = 0;
for i = 1 to n do
c[a[i]] ++;
for i = 2 to max do
c[i] = c[i] + c[i-1];
for i = n down to 1 do
{
b[c[a[i]]] = a[ i];
c[a[i]] --;
}
}
Time Complexity: O(n + k), where k = max. If k = O(n), then
Worst case=O(n), Average case=O(n), best case=O(n)
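A Python sketch of the algorithm above for positive integer keys (the function name `counting_sort` is mine). The right-to-left placement pass is what makes the sort stable: equal keys keep their original relative order.

```python
def counting_sort(a):
    """Counting sort: count each key, take prefix sums so that
    c[x] = number of keys <= x, then place from right to left."""
    if not a:
        return []
    k = max(a)
    c = [0] * (k + 1)
    for x in a:                 # counting pass
        c[x] += 1
    for i in range(1, k + 1):   # prefix sums
        c[i] += c[i - 1]
    b = [0] * len(a)
    for x in reversed(a):       # stable placement
        c[x] -= 1
        b[c[x]] = x
    return b
```

Running time is O(n + k), so the sort is linear whenever the key range k is O(n).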
----------------------------------------------------------------------------------------------
Radix Sort: There are two classifications of radix sort, which are as below:
- Least Significant Digit (LSD) Radix Sort: The sorting starts at the unit's place and proceeds up to the most significant place.
Consider the list of keys viewed in a different way:
170, 045, 075, 090, 002, 024, 802, 066
The first counting pass starts on the least significant digit of each key, producing an array of bucket sizes:
2 (bucket size for digits of 0: 170, 090)
2 (bucket size for digits of 2: 002, 802)
1 (bucket size for digits of 4: 024)
2 (bucket size for digits of 5: 045, 075)
1 (bucket size for digits of 6: 066)
A second counting pass on the next more significant digit of each key will produce an array of bucket sizes:
2 (bucket size for digits of 0: 002, 802)
1 (bucket size for digits of 2: 024)
1 (bucket size for digits of 4: 045)
1 (bucket size for digits of 6: 066)
2 (bucket size for digits of 7: 170, 075)
1 (bucket size for digits of 9: 090)
A third and final counting pass on the most significant digit of each key will produce an array of bucket sizes:
6 (bucket size for digits of 0: 002, 024, 045, 066, 075, 090)
1 (bucket size for digits of 1: 170)
1 (bucket size for digits of 8: 802)
The sorted elements are 002, 024, 045, 066, 075, 090, 170, 802
- Most Significant Digit (MSD) Radix Sort: The sorting starts at the most significant place and proceeds down to the unit's place.
Sort the list:
170, 045, 075, 090, 002, 024, 802, 066
- Sorting by most significant digit (100s place) gives:
Zero hundreds bucket: 045, 075, 090, 002, 024, 066
One hundreds bucket: 170
Eight hundreds bucket: 802
- Sorting by the next digit (10s place) is only needed for the numbers in the zero hundreds bucket (no other bucket contains more than one item):
Zero tens bucket: 002
Twenties bucket: 024
Forties bucket: 045
Sixties bucket: 066
Seventies bucket: 075
Nineties bucket: 090
- Sorting by the least significant digit (1s place) is not needed, as no tens bucket contains more than one number. Therefore, the now-sorted zero hundreds bucket is concatenated, joined in sequence, with the one hundreds bucket and the eight hundreds bucket to give:
002, 024, 045, 066, 075, 090, 170, 802
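The LSD procedure from the worked example can be sketched in Python (the helper name `radix_sort` is illustrative): on each pass the keys are distributed into digit buckets and concatenated back in bucket order, and because each pass is stable the list is fully sorted after the last pass.

```python
def radix_sort(a, base=10):
    """LSD radix sort for non-negative integers: bucket by each
    digit in turn, least significant first."""
    if not a:
        return []
    digits = len(str(max(a)))          # number of passes needed
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in a:
            buckets[(x // base ** d) % base].append(x)
        a = [x for b in buckets for x in b]  # stable concatenation
    return a
```

With d digits and base b, the running time is O(d * (n + b)); for fixed d and b this is linear in n.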
Elementary Data structures
Stack:
There are two types of operations on stack as: push and pop
When stack is represented as an array, algorithms can be written as follows.
Algorithm Push(item)
{
if (top >= n) then
write “Stack is full”;
else
{
top = top + 1;
stack[top] = item;
}
}
Algorithm Pop(item)
{
if (top <= 0) then
write “Stack is empty”;
else
{
item = stack[top];
top = top - 1;
}
}
Time Complexity: Push O(1), Pop O(1)
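An executable sketch of the array stack (here 0-based, with `top = -1` meaning empty, rather than the pseudocode's 1-based convention; the class name `Stack` is illustrative):

```python
class Stack:
    """Fixed-capacity array stack; push and pop are both O(1)."""
    def __init__(self, n):
        self.data = [None] * n
        self.top = -1                  # -1 means the stack is empty

    def push(self, item):
        if self.top == len(self.data) - 1:
            raise OverflowError("Stack is full")
        self.top += 1
        self.data[self.top] = item

    def pop(self):
        if self.top < 0:
            raise IndexError("Stack is empty")
        item = self.data[self.top]
        self.top -= 1
        return item
```

Raising an exception stands in for the pseudocode's "write" on overflow/underflow; a real implementation might instead return a status flag.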
When the stack is represented as a linked list, assume that the node is declared as
struct node
{
data type data;
struct node *next;
}
And the algorithms are as follows.
Algorithm Push(item)
{
temp = new node;
if (temp = NULL) then
write “Stack is full”;
else
{
temp->data = item;
temp->next = top;
top = temp;
}
}
Algorithm Pop(item)
{
if (top = NULL) then
write “Stack is empty”;
else
{
item = top->data;
temp = top;
top = top->next;
free(temp);
}
}
Time Complexity: Push O(1), Pop O(1)
Queue:
There are two types of operations on queue as: add and delete
When queue is represented as an array, the algorithms are as follows.
Algorithm AddQ(item)
{
if (rear >= n) then
write “Queue is full”;
else
{
rear = rear + 1;
que[rear] = item;
}
}
Algorithm DeleteQ(item)
{
if (rear = front) then
write “Queue is empty”;
else
{
front = front + 1;
item = que[front];
}
}
Time Complexity: AddQ O(1), DeleteQ O(1)
When the queue is represented as a linked list, assume that the node is declared as
struct node
{
data type data;
struct node *next;
}
And the algorithms are as follows.
Algorithm AddQ(item)
{
temp = new node;
if (temp = NULL) then
write “Queue is full”;
else
{
temp->data = item;
temp->next = NULL;
if (rear != NULL)
{
rear->next = temp;
rear = temp;
}
else
front = rear = temp;
}
}
Algorithm DeleteQ(item)
{
if (front = NULL) then
write “Queue is empty”;
else
{
item = front->data;
temp = front;
front = front->next;
if (front = NULL) then
rear = NULL;
temp->next = NULL;
free(temp);
}
}
Time Complexity: AddQ O(1), DeleteQ O(1)
Circular Queue:
There are two types of operations on circular queue as: add and delete
When queue is represented as an array, the algorithms are as follows.
Algorithm AddCQ(item)
{
rear = (rear + 1) mod n;
if (rear = front) then
{
write “Queue is full”;
if (rear = 0) then // undo the increment
rear = n - 1;
else
rear = rear - 1;
}
else
q[rear] = item;
}
Algorithm DeleteCQ(item)
{
if (rear = front) then
write “Queue is empty”;
else
{
front = (front + 1) mod n;
item = q[front];
}
}
Time Complexity: AddCQ O(1), DeleteCQ O(1)
When the circular queue is represented as a linked list, assume that the node is declared as
struct node
{
data type data;
struct node *next;
}
And the algorithms are as follows.
Algorithm AddCQ(item)
{
temp = new node;
if (temp = NULL) then
write “Queue is full”;
else
{
temp->data = item;
if (rear != NULL)
{
rear->next = temp;
rear = temp;
rear->next = front;
}
else
{
front = rear = temp;
rear->next = front;
}
}
}
Algorithm DeleteCQ(item)
{
if (front = NULL) then
write “Queue is empty”;
else
{
item = front->data;
temp = front;
if (front = rear)
front = rear = NULL;
else
{
front = front->next;
rear->next = front;
}
temp->next = NULL;
free(temp);
}
}
Time Complexity: AddCQ O(1), DeleteCQ O(1)
Trees: A tree is a finite set of one or more nodes such that there is a specially designated node called root and the remaining are partitioned into n>=0 disjoint sets T1, T2, …….., Tn (where every Ti is a sub-tree).
A tree is a connected acyclic graph (or sometimes, a connected directed acyclic graph in which every vertex has indegree 0 or 1).
A tree is a simple graph: it contains neither self-loops nor parallel edges.
Consider the following tree:
Terminologies related to tree:
- The number of sub-trees of a node is called its degree.
- The degree of node A is 3, and of node C is 1.
- The degree of a tree is the maximum degree of the nodes in the tree.
- The root node is node A.
- A rooted tree has a top node as root.
- A node with zero degree is known as leaf node or terminal node or external node.
- The other nodes are non-terminals or interior node or internal node.
- An edge in tree is called as Branch.
- If there are n nodes in a tree, then there are total n-1 branches.
- There is exactly one path from root to every other vertex.
- The length of a path is the number of branches on the path.
- The roots of the sub-trees of a node X are the children of X. The node X is the parent node.
- Children of the same parent are said to be siblings.
- If there is a path from a vertex x to a vertex y, then x is an ancestor of y and y is a descendant of x.
- The size of a node is the number of descendants it has including itself.
- The depth (level) of a node v is the length of the path from root to node v. The root node is at level zero.
- The height of a node v is the length of the longest path from node v to a leaf node. The nodes at highest level are at height 0(zero).
- The max depth (level) of any leaf node is the depth of the tree.
- The Eccentricity of vertex v in a tree is the distance of farthest vertex from v.
- The Centre of a tree is a vertex of minimum eccentricity. There are one or two vertices as centre. It is also known as root node.
- The radius of a tree is the eccentricity of centre of a tree. (height of root node)
- The Diameter of a tree is the maximum eccentricity of any vertex.
- A forest is a set of n>=0 disjoint trees.
Binary Tree: It is a tree data structure in which each node has at most two children, called the left child and the right child. Binary trees are commonly used to implement binary search trees and binary heaps.
Properties of binary trees
- The number of nodes n in a perfect binary tree: n = 2^(h+1) − 1, where h is the height of the tree.
- The number of nodes n in a complete binary tree is minimum n = 2^h and maximum n = 2^(h+1) − 1, where h is the height of the tree.
- If a complete binary tree has n nodes, then the minimum height is log2(n+1) − 1 and the maximum height is (n−1)/2.
- The number of nodes n in a perfect binary tree can also be found using the formula n = 2L − 1, where L is the number of leaf nodes in the tree.
- The number of leaf nodes L in a perfect binary tree is L = 2^h, where h is the height of the tree.
- The number of leaf nodes in a Complete Binary Tree of n-internal-nodes is (n+1). Or in other words For any non-empty binary tree with n0 leaf nodes and n2 nodes of degree 2, n0 = n2 + 1.
- The number of leaf node in a Complete Binary Tree of n-node is round(n/2).
- The maximum number of vertices at the kth level of a binary tree is 2^k.
- The height of the root is the height of the tree.
- The max depth of any leaf node is the depth of the tree.
- Generally the depth of a tree and the height of a tree are the same: both equal the height of the root node (equivalently, the maximum depth of any leaf node).
Types of a binary tree:
- A rooted binary tree is a rooted tree in which every node has at most two children.
- A full binary tree (sometimes proper/strictly binary tree or 2-tree) is a tree in which every node other than the leaves has two children.
- A perfect binary tree (sometimes also called complete) is a full binary tree in which all leaves are at the same depth (same level).
- An almost complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
- An infinite complete binary tree is a tree with infinitely many levels, where for each level d the number of nodes at level d is equal to 2^d.
- A degenerate tree is a tree in which each parent node has only one child node. This means that, in performance terms, the tree behaves like a linked list. It is also called a skewed binary tree; there are two kinds: left-skewed and right-skewed.
- Binary Search Tree: a binary search tree (BST) is a node based binary tree data structure which has the following properties:
- The left subtree of a node contains only nodes with keys less than the node's key.
- The right subtree of a node contains only nodes with keys greater than the node's key.
- Both the left and right sub-trees must also be binary search trees.
- Each node (item in the tree) has a distinct key.
The operations commonly performed on a binary search tree are: create, insert, delete, and search.
Let the binary search tree be implemented using a linked representation whose node structure is as below:
struct node
{
int data;
struct node *lchild, *rchild;
}
Algorithm create_BST(root, n)
{
For i = 1 to n do
{
temp = new node;
read temp->data;
temp->lchild = temp->rchild=NULL;
if ( root = NULL)
root = temp;
else
{
temp1 = root;
while (temp1 != NULL)
{
temp2 = temp1;
if (temp1-> data = temp->data)
break;
else
if (temp1->data < temp->data)
temp1=temp1->rchild;
else
temp1 = temp1->lchild;
}
if (temp2->data < temp->data)
temp2 -> rchild = temp;
else
if (temp2->data > temp->data)
temp2 -> lchild = temp;
} // else
} // for
} // algorithm
Algorithm insert_BST(root, x)
{
temp2=NULL;
temp1 = root;
while (temp1 != NULL)
{
temp2 = temp1;
if (temp1->data = x)
break;
else
if (temp1->data < x)
temp1 = temp1->rchild;
else
temp1 = temp1->lchild;
} // while
If (temp1 = NULL)
{
temp = new node;
temp->data =x;
temp->lchild = temp->rchild=NULL;
if (root == NULL)
root = temp;
else
if (temp2->data < x)
temp2 -> rchild = temp;
else
temp2 -> lchild = temp;
} // if
} // algorithm
Boolean Algorithm search_BST(root,x)
{
if ( root = NULL)
return 0;
else
{
temp1 = root;
while (temp1 != NULL)
{
if (temp1-> data = x)
return 1;
else
if (x < temp1->data)
temp1=temp1->lchild;
else
temp1 = temp1->rchild;
}
return 0;
} // else
} // algorithm
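The insert and search algorithms above translate to Python as follows (0-based, iterative, duplicates ignored; the class name `Node` is illustrative). Insert walks down from the root exactly as search does, remembering the last node visited so the new node can be attached to it:

```python
class Node:
    def __init__(self, data):
        self.data, self.lchild, self.rchild = data, None, None

def insert_bst(root, x):
    """Iterative BST insert mirroring insert_BST; duplicates are ignored."""
    if root is None:
        return Node(x)
    t1, t2 = root, None
    while t1 is not None:
        t2 = t1                      # remember the parent-to-be
        if t1.data == x:
            return root              # duplicate key: nothing to do
        t1 = t1.rchild if t1.data < x else t1.lchild
    if t2.data < x:
        t2.rchild = Node(x)
    else:
        t2.lchild = Node(x)
    return root

def search_bst(root, x):
    """Iterative search mirroring search_BST."""
    while root is not None:
        if root.data == x:
            return True
        root = root.lchild if x < root.data else root.rchild
    return False
```

Both operations follow a single root-to-leaf path, so they cost O(h) where h is the tree height: O(log n) for a balanced tree, O(n) for a skewed one.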
Algorithm delete_BST(rt, x)
{
// Search the element
t2 = NULL;
t1 = rt;
while (t1 != NULL)
{
t2= t1;
if (t1-> data = x)
break;
else
if (x > t1->data)
t1=t1->rchild;
else
t1 = t1->lchild;
}
// Delete the element if present
if (t1 = NULL)
write “element cannot be deleted (either not found or tree is empty)”;
else
{
if ((t1->rchild=NULL) and (t1->lchild=NULL)) // leaf node
{
if (t2->rchild = t1)
t2->rchild = NULL;
else
t2->lchild = NULL;
free(t1);
}
else
if ((t1->rchild != NULL) and (t1->lchild == NULL)) // with right child
{
if (t2->rchild = t1)
t2->rchild = t1->rchild;
else
t2->lchild = t1->rchild;
t1->lchild=t1->rchild=NULL;
free(t1);
}
else
if ((t1->rchild == NULL) and (t1->lchild != NULL)) //with left child
{
if (t2->rchild = t1)
t2->rchild = t1->lchild;
else
t2->lchild = t1->lchild;
t1->lchild=t1->rchild=NULL;
free(t1);
}
else
// With both children, the element is replaced by the largest element in its left subtree (or the smallest element in its right subtree).
if ((t1->rchild != NULL) and (t1->lchild != NULL)) //with 2 child
{
t3=t1->lchild;
if ((t3->rchild == NULL) and (t3->lchild==NULL))
{
t1->data=t3->data;
t1->lchild=NULL;
free(t3);
}
else
if ((t3->lchild != NULL) and (t3->rchild == NULL))
{
t1->data=t3->data;
t1->lchild=t3->lchild;
free(t3);
}
else
if (t3->rchild != NULL)
{
t4=t3->rchild;
while (t4->rchild!=NULL)
{
t3=t4;
t4=t4->rchild;
}
t1->data = t4->data;
t3->rchild=t4->lchild;
free(t4);
}
}
} //algorithm
Tree Traversal Techniques
There are six ways for tree traversals : LDR, LRD, DLR, RDL, RLD, DRL.
Three main are given below :
Inorder Traversal
Procedure inorder(input: T)
begin
if T = nil then
return;
endif
inorder(T->left);
visit(T);
inorder(T->right);
end
Preorder Traversal
Procedure preorder(input: T)
begin
if T = nil then
return;
endif
visit(T);
preorder(T->left);
preorder(T->right);
end
Postorder Traversal
Procedure postorder(input: T)
begin
if T = nil then
return;
endif
postorder(T->left);
postorder(T->right);
visit(T);
end
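The three traversals can be sketched in Python on a tuple representation of a tree (a node is `(data, left, right)` and `None` is the empty tree; this representation is illustrative, chosen only to keep the sketch self-contained):

```python
# A node is a tuple (data, left, right); None denotes an empty tree.

def inorder(t, out):
    """LDR: left subtree, then the node, then the right subtree."""
    if t is not None:
        inorder(t[1], out)
        out.append(t[0])
        inorder(t[2], out)
    return out

def preorder(t, out):
    """DLR: the node first, then the left and right subtrees."""
    if t is not None:
        out.append(t[0])
        preorder(t[1], out)
        preorder(t[2], out)
    return out

def postorder(t, out):
    """LRD: both subtrees first, then the node."""
    if t is not None:
        postorder(t[1], out)
        postorder(t[2], out)
        out.append(t[0])
    return out
```

For a binary search tree, `inorder` yields the keys in ascending order, which is the basis of the tree sort described later.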
Time complexity for all three traversals is O(n).
- Heap: a heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B) (for max heap) or key(A) ≤ key(B) (for min heap).
The operations commonly performed with a heap are
- delete-max or delete-min: removing the root node of a max- or min-heap, respectively
- increase-key or decrease-key: updating a key within a max- or min-heap, respectively
- insert: adding a new key to the heap
- merge: joining two heaps to form a valid new heap containing all the elements of both.
Algorithm Insert(a,n)
{ i = n;
item = a[n];
while ((i > 1) and (a[ i/2 ] < item)) do
{ a[i] = a[ i/2 ];
i = i/2;
}
a[i] = item;
return true;
}
Algorithm Adjust(a,i,n)
{ j = 2i;
item = a[i];
while (j <= n) do
{
if ((j < n) and (a[j] < a[j+1])) then
j = j + 1;
if (item >= a[j]) then
break;
a[ j/2 ] = a[j]; j = 2j;
}
a[ j/2 ] = item;
}
Algorithm DelMax(a,n,x)
{
if (n = 0) then
{
Write(“Heap is empty”);
Return false;
}
x = a[1]; a[1] = a[n];
Adjust(a,1,n-1); return true;
}
Heap Sort:
Algorithm Heapify(a,n)
{
For i = n/2 to 1 step -1 do Adjust(a, i, n);
}
Algorithm Heapsort(a,n)
{
Heapify(a,n);
For i = n to 2 step -1 do
{
t = a[i];
a[i] = a[1];
a[1] = t;
Adjust(a, 1, i-1);
}
}
Time complexity: O(n log n)
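Heapify and Heapsort combine with Adjust into a short Python sketch; 1-based indexing is emulated by padding index 0, so the parent/child arithmetic (j = 2i, j/2) matches the pseudocode exactly:

```python
def adjust(a, i, n):
    """Sift a[i] down into the max-heap a[1..n] (index 0 unused),
    mirroring Algorithm Adjust."""
    item = a[i]
    j = 2 * i
    while j <= n:
        if j < n and a[j] < a[j + 1]:
            j += 1                    # pick the larger child
        if item >= a[j]:
            break                     # heap property restored
        a[j // 2] = a[j]              # pull the child up
        j *= 2
    a[j // 2] = item

def heapsort(a):
    """Heapify, then repeatedly swap the max (a[1]) to the end."""
    a = [None] + list(a)              # shift to 1-based indexing
    n = len(a) - 1
    for i in range(n // 2, 0, -1):    # Heapify: adjust from n/2 down to 1
        adjust(a, i, n)
    for i in range(n, 1, -1):         # move max into its final slot
        a[1], a[i] = a[i], a[1]
        adjust(a, 1, i - 1)
    return a[1:]
```

Heapify is O(n) and each of the n − 1 adjust calls is O(log n), giving the O(n log n) total stated above.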
-----------------------------------------------------------------------------------------
Tree Sort: A tree sort is a sorting algorithm that builds a binary search tree from the keys to be sorted and then traverses the tree in-order so that the keys come out in sorted order. Time complexity: O(n log n) on average (O(n²) in the worst case, when the tree becomes skewed).
----------------------------------------------------------------------------------------------
Priority Queues:
Any data structure that supports the operations search-min (or max), insert, and delete-min (or max) is called a priority queue.
A priority queue can be implemented using a stack, queue, heap, etc. Generally it is implemented using a max-heap or min-heap.
---------------------------------------------------------------------------------------------
Comparing analysis of algorithms or Ranking the functions:
As the value of n increases, the time required to execute an algorithm changes; generally it increases. The rate of this increase is called the growth rate.
When selecting among algorithms with respect to time, we should select the one whose growth rate is smaller, i.e., the one capable of handling the largest value of n.
The growth rate can be calculated by taking increasing value of n.
Question: Find growth rate of following functions
log n, n², 3n, 2n²+5, n!
Answer: First of all we will construct the table as below:
n | log n | n² | 3n | 2n²+5 | n! |
1 | 0 | 1 | 3 | 7 | 1 |
2 | 1 | 4 | 6 | 13 | 2 |
4 | 2 | 16 | 12 | 37 | 24 |
8 | 3 | 64 | 24 | 133 | 40320 |
16 | 4 | 256 | 48 | 517 | 2E+13 |
32 | 5 | 1024 | 96 | 2053 | 3E+35 |
64 | 6 | 4096 | 192 | 8197 | 1E+89 |
Ranking of the functions:
Rank 1: log n
Rank 2: 3n
Rank 3: n²
Rank 4: 2n²+5
Rank 5: n!
Functions in increasing order of their growth rate:
log n, 3n, n², 2n²+5, n!
Functions in decreasing order of their growth rate:
n!, 2n²+5, n², 3n, log n
Note:
Increasing Order: n1 < n2 < n3 < n4….
Non-decreasing Order: n1 <= n2 <= n3 <= n4….
Decreasing Order: n1 > n2 > n3 > n4….
Non-increasing Order: n1 >= n2 >= n3 >= n4….