What is an Algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a
required output for any legitimate input in a finite amount of time.
State Euclid's algorithm for finding the GCD of two given numbers.
ALGORITHM Euclid(m, n)
// Computes gcd(m, n) by Euclid's algorithm
// Input: Two nonnegative, not-both-zero integers m and n
// Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
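A direct translation of this pseudocode into Python (a minimal sketch; the function name euclid_gcd is illustrative, not from the text):

def euclid_gcd(m, n):
    """Compute gcd(m, n) by Euclid's algorithm (m, n nonnegative, not both zero)."""
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m

# Example: euclid_gcd(60, 24) returns 12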
What are Sequential Algorithms?
The central assumption of the RAM model is that instructions are executed one after another, one
operation at a time. Accordingly, algorithms designed to be executed on such machines are called
Sequential algorithms.
What are Parallel Algorithms?
The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms.
What is Exact and Approximation algorithm?
An algorithm that solves the problem exactly is called an exact algorithm. An algorithm that solves the problem only approximately is called an approximation algorithm.
What is Algorithm Design Technique?
An algorithm design technique is a general approach to solving problems algorithmically that is
applicable to a variety of problems from different areas of computing.
Define Pseudo code.
Pseudocode is a mixture of a natural language and programming-language-like constructs. Pseudocode is usually more precise than natural language, and its usage often yields more succinct algorithm descriptions.
Define Flowchart.
A method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm’s steps.
Explain Algorithm’s Correctness
Proving an algorithm's correctness means proving that the algorithm yields a required result for every legitimate input in a finite amount of time.
Example: Correctness of Euclid’s algorithm for computing the greatest common divisor stems from
correctness of the equality gcd (m, n) = gcd (n, m mod n).
What is Efficiency of algorithm?
Efficiency of an algorithm can be precisely defined and investigated with mathematical rigor. There are two kinds of algorithm efficiency:
i. Time Efficiency – Indicates how fast the algorithm runs
ii. Space Efficiency – Indicates how much extra memory the algorithm needs.
What is generality of an algorithm?
It is a desirable characteristic of an algorithm. Generality refers to the generality of the problem the algorithm solves and of the set of inputs it accepts; it is sometimes easier to design an algorithm for a problem posed in more general terms.
What is algorithm’s Optimality?
Optimality is about the complexity of the problem the algorithm solves: what is the minimum amount of effort any algorithm will need to exert to solve the problem in question?
What do you mean by "Sorting" problem?
The sorting problem asks us to rearrange the items of a given list in ascending order (or descending order).
What do you mean by "Searching" problem?
The searching problem deals with finding a given value, called a search key, in a given set.
What do you mean by "Worst case-Efficiency" of an algorithm?
The "Worst case-Efficiency" of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
Ex: Sorting a list of numbers in ascending order when the numbers are given in descending order. In this case the running time will be the longest.
What do you mean by "Best case-Efficiency" of an algorithm?
The "Best-case efficiency" of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.
Ex: Sorting a list of numbers in ascending order when the numbers are already given in ascending order. In this case the running time will be the smallest.
Define the "Average-case efficiency" of an algorithm?The "Average-case efficiency" of an algorithm is i ts efficiency for the input of size n, for which the algorithm runs between the best case and the worst case among all possible inputs of that size.
What do you mean by "Amortized efficiency"?
"Amortized efficiency" applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure. It turns out that in some situations a single operation can be expensive, but the total time for an entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n. This is known as "amortized efficiency".
How to measure the algorithm’s efficiency?
It is logical to investigate the algorithm’s efficiency as a function of some parameter n indicating the algorithm’s input size.
Example: It will be the size of the list for problems of sorting, searching, finding the list’s smallest
element, and most other problems dealing with lists.
What is called the basic operation of an algorithm?
The operation contributing the most to the algorithm's total running time is called the basic operation of the algorithm.
How to measure an algorithm’s running time?
Let c_op be the execution time of an algorithm's basic operation on a particular computer and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula
T(n) ≈ c_op · C(n)
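For a rough, illustrative calculation (the numbers here are assumed, not from the text): if c_op ≈ 10^-9 seconds and C(n) ≈ (1/2)n(n-1) ≈ (1/2)n^2, as for a basic operation inside a double loop, then T(n) ≈ (1/2)·10^-9·n^2 seconds, and T(2n)/T(n) ≈ (2n)^2/n^2 = 4. So doubling the input size makes the program run roughly four times longer, regardless of the exact value of c_op.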
Define order of growth.
The efficiency analysis framework concentrates on the order of growth of an algorithm’s basic operation count as the principal indicator of the algorithm’s efficiency. To compare and rank such orders of growth we use three notations
i. O (Big oh) notation
ii. Ω (Big Omega) notation &
iii. Θ (Big Theta) notation
Define Big oh notation .
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
Prove that 100n + 5 ∈ O(n²).
Clearly 100n + 5 ≤ 100n + n = 101n ≤ 101n² (for all n ≥ 5).
By choosing n0 = 5 and c = 101 we find that 100n + 5 ∈ O(n²).
Define Ω notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0.
What is the use of Asymptotic Notations?
The notations are used to indicate and compare the asymptotic orders of growth of functions expressing algorithm efficiency.
Explain divide and conquer algorithms .
Divide and conquer is probably the best-known general algorithm design technique. It works according to the following general plan:
- A problem's instance is divided into several smaller instances of the same problem, ideally of about the same size.
- The smaller instances are solved.
- If necessary, the solutions obtained for the smaller instances are combined to get a solution to the original problem.
Merge sort is a perfect example of a successful application of the divide-and-conquer technique. It sorts a given array A[0..n-1] by dividing it into two halves A[0..⌊n/2⌋-1] and A[⌊n/2⌋..n-1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
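A minimal Python sketch of merge sort along these lines (the names merge_sort and merge are illustrative, not taken from the text):

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # sort first half A[0..floor(n/2)-1]
    right = merge_sort(a[mid:])   # sort second half A[floor(n/2)..n-1]
    return merge(left, right)     # combine the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example: merge_sort([5, 2, 4, 1, 3]) returns [1, 2, 3, 4, 5]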
Define Binary Search
Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops;
otherwise, the same operation is repeated recursively for the first half of the array if K < A[m] and for the second half if K > A[m]:
A[0] ... A[m-1]   A[m]   A[m+1] ... A[n-1]
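A recursive Python sketch following this description (binary_search is an illustrative name; it returns the index of K, or -1 if K is absent):

def binary_search(a, key, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo > hi:
        return -1                       # key not present
    m = (lo + hi) // 2
    if key == a[m]:
        return m                        # match: stop
    elif key < a[m]:
        return binary_search(a, key, lo, m - 1)   # search first half
    else:
        return binary_search(a, key, m + 1, hi)   # search second half

# Example: binary_search([1, 3, 5, 7, 9], 7) returns 3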
What can we say about the average case efficiency of binary search?
A sophisticated analysis shows that the average number of key comparisons made by binary search is only slightly smaller than that in the worst case:
Cavg(n) ≈ log2 n
Define Binary Tree
A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and two disjoint binary trees TL and TR, called, respectively, the left and right subtree of the root.
How divide and conquer technique can be applied to binary trees?
Since the binary tree definition itself divides a binary tree into two smaller structures of the same type, the left subtree and the right subtree, many problems about binary trees can be solved by applying the divide-and-conquer technique.
Explain Internal and External Nodes.
A binary tree can be extended by replacing its empty subtrees with special nodes. The extra nodes, shown as little squares, are called external; the original nodes, shown as little circles, are called internal.
Define Preorder, inorder and postorder Traversal.
PREORDER - The root is visited before the left and right subtrees are visited (in that order).
INORDER - The root is visited after visiting its left subtree but before visiting the right subtree.
POSTORDER - The root is visited after visiting the left and right subtrees (in that order).
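A sketch of the three traversals on a simple node class (the class and function names are illustrative only):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(t):
    return [] if t is None else [t.key] + preorder(t.left) + preorder(t.right)

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

def postorder(t):
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.key]

# Example: for the tree Node(2, Node(1), Node(3)),
# preorder gives [2, 1, 3], inorder gives [1, 2, 3], postorder gives [1, 3, 2].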
Define the Internal Path Length.
The internal path length I of an extended binary tree is defined as the sum of the lengths of the paths, taken over all internal nodes, from the root to each internal node.
Define the External Path Length.
The external path length E of an extended binary tree is defined as the sum of the lengths of the paths, taken over all external nodes, from the root to each external node.
Explain about greedy technique.
The greedy technique suggests constructing a solution to an optimization problem through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached. On each step, the choice made must be feasible, locally optimal and irrevocable.
Define Spanning Tree
A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph.
Define Minimum Spanning Tree
A minimum spanning tree of a weighted connected graph is its spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges.
Define min-heap
A min-heap is a complete binary tree in which every element is less than or equal to its children. All
the principal properties of heaps remain valid for min-heaps, with some obvious modifications.
Define Kruskal's Algorithm
Kruskal's algorithm looks at a minimum spanning tree for a weighted connected graph G = (V, E) as an acyclic subgraph with |V| - 1 edges for which the sum of the edge weights is the smallest.
Define Prim's Algorithm
Prim's algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted connected graph. It works by attaching to a previously constructed subtree the nearest vertex not yet in the tree, i.e., the vertex connected to a tree vertex by the smallest-weight edge.
Explain Dijkstra's Algorithm
Dijkstra's algorithm solves the single-source shortest-paths problem of finding shortest paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph. It works like Prim's algorithm but compares path lengths rather than edge lengths. Dijkstra's algorithm always yields a correct solution for a graph with nonnegative weights.
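A compact sketch of Dijkstra's algorithm using Python's heapq module (the graph representation is an assumption: a dict mapping each vertex to a list of (neighbor, weight) pairs):

import heapq

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                      # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale queue entry, skip it
        for v, w in graph[u]:
            if d + w < dist[v]:             # compare path lengths, not edge lengths
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Example: dijkstra({'a': [('b', 3), ('c', 1)], 'b': [], 'c': [('b', 1)]}, 'a')
# returns {'a': 0, 'b': 2, 'c': 1}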
Define Dynamic Programming
Dynamic programming is a technique for solving problems with overlapping subproblems. Typically, these subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems of the same type. Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which we can then obtain a solution to the original problem.
Define Binomial Coefficient
The binomial coefficient, denoted C(n, k), is the number of combinations (subsets) of k elements from an n-element set (0 ≤ k ≤ n). The name "binomial coefficients" comes from the participation of these numbers in the so-called binomial formula:
(a + b)^n = C(n, 0)a^n + ... + C(n, i)a^(n-i)b^i + ... + C(n, n)b^n
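A dynamic programming sketch for C(n, k), using the recurrence C(i, j) = C(i-1, j-1) + C(i-1, j) with C(i, 0) = C(i, i) = 1 (the function name binomial is illustrative):

def binomial(n, k):
    # Table C[i][j] holds C(i, j); filled bottom-up from the recurrence above.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

# Example: binomial(5, 2) returns 10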
Define Transitive closure
The transitive closure of a directed graph with n vertices can be defined as the n-by-n Boolean matrix T = {t_ij}, in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path from the ith vertex to the jth vertex; otherwise, t_ij is 0.
Explain Warshalls algorithm
Warshall's algorithm constructs the transitive closure of a given digraph with n vertices through a series of n-by-n Boolean matrices R(0), ..., R(k-1), R(k), ..., R(n).
Each of these matrices provides certain information about directed paths in the digraph.
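A sketch of Warshall's algorithm on an adjacency matrix of 0s and 1s (updating a single matrix in place is an implementation convenience, not a requirement of the algorithm):

def warshall(adj):
    # adj: n-by-n list of lists with adj[i][j] = 1 if there is an edge i -> j, else 0.
    n = len(adj)
    R = [row[:] for row in adj]            # R starts as R(0), the adjacency matrix
    for k in range(n):                     # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R                               # R(n), the transitive closure

# Example: warshall([[0, 1], [0, 0]]) returns [[0, 1], [0, 0]]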
Explain the all-pairs shortest-paths problem.
Given a weighted connected graph (undirected or directed), the all-pairs shortest-paths problem asks to find the distances (the lengths of the shortest paths) from each vertex to all other vertices.
Explain Floyd's algorithm
It is convenient to record the lengths of shortest paths in an n-by-n matrix D called the distance matrix: the element d_ij in the ith row and the jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex. We can generate the distance matrix with an algorithm that is very similar to Warshall's algorithm; it is called Floyd's algorithm.
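A sketch of Floyd's algorithm, assuming the weight matrix uses float('inf') for missing edges and 0 on the diagonal:

def floyd(W):
    # W: n-by-n weight matrix; W[i][j] = edge weight, inf if no edge, 0 if i == j.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):                     # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D                               # the distance matrix

# Example (3 vertices): floyd([[0, 4, 11], [6, 0, 2], [3, float('inf'), 0]])
# returns [[0, 4, 6], [5, 0, 2], [3, 7, 0]]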
What does Floyd’s algorithm do?
It is used to find the distances (the lengths of the shortest paths) from each vertex to all other vertices of a weighted connected graph.
Explain principle of Optimality
It says that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
Explain Optimal Binary Search Trees
One of the principal applications of a binary search tree is to implement the operation of searching. If the probabilities of searching for elements of a set are known, it is natural to pose the question of an optimal binary search tree, for which the average number of comparisons in a search is the smallest possible.
Explain Knapsack problem
Given n items of known weights w1, w2, ..., wn and values v1, v2, ..., vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack. (Assume all the weights and the knapsack's capacity are positive integers; the item values do not have to be integers.)
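A bottom-up dynamic programming sketch for this problem (knapsack is an illustrative name; V[i][j] holds the best value achievable using the first i items with capacity j):

def knapsack(weights, values, W):
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if weights[i - 1] <= j:
                # either skip item i, or take it and fill the remaining capacity
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:
                V[i][j] = V[i - 1][j]       # item i does not fit
    return V[n][W]

# Example: knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5) returns 37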
Explain the Memory Function technique
The memory function technique seeks to combine the strengths of the top-down and bottom-up approaches to solving problems with overlapping subproblems. It does this by solving, in a top-down fashion but only once, just the necessary subproblems of a given problem and recording their solutions in a table.
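A memory-function (top-down, memoized) sketch for the same knapsack problem, under the same assumptions as the bottom-up version above; -1 marks table entries that have not been computed yet:

def knapsack_memo(weights, values, W):
    n = len(weights)
    F = [[-1] * (W + 1) for _ in range(n + 1)]   # -1 = "not computed yet"

    def mf(i, j):
        if i == 0 or j == 0:
            return 0
        if F[i][j] < 0:                          # compute each entry only once
            if weights[i - 1] > j:
                F[i][j] = mf(i - 1, j)
            else:
                F[i][j] = max(mf(i - 1, j),
                              values[i - 1] + mf(i - 1, j - weights[i - 1]))
        return F[i][j]

    return mf(n, W)

# Gives the same result as the bottom-up version, e.g. 37 for the example above.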
Explain "Traveling salesman problem"?
A salesman has to travel n cities starting from any one of the cities and visit the remaining cities exactly once and come back to the city where he started his journey in such a manner that either the distance is minimum or cost is minimum. This is known as traveling salesman problem.
Explain what is an algorithm in computing?
An algorithm is a well-defined computational procedure that takes some value as input and generates some value as output. In simple words, it is a sequence of computational steps that converts an input into an output.
Explain what is the Quick Sort algorithm?
The Quick Sort algorithm can sort lists quickly. It is based on the principle of partition-exchange sorting, i.e., divide and conquer. It occupies little extra space, and it segregates the list into three main parts (a sketch follows the list):
- Elements less than the Pivot element
- Pivot element
- Elements greater than the Pivot element
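A minimal quicksort sketch following this three-way partition idea (quick_sort is an illustrative name; this copy-based version favors clarity over the in-place partitioning used in practice):

def quick_sort(a):
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    less    = [x for x in a if x < pivot]    # elements less than the pivot
    equal   = [x for x in a if x == pivot]   # the pivot element(s)
    greater = [x for x in a if x > pivot]    # elements greater than the pivot
    return quick_sort(less) + equal + quick_sort(greater)

# Example: quick_sort([3, 6, 1, 8, 2, 8]) returns [1, 2, 3, 6, 8, 8]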
Explain what is time complexity of an algorithm?
The time complexity of an algorithm indicates the total time needed by the program to run to completion. It is usually expressed using big O notation.
Mention what are the types of notation used for time complexity?
The types of notation used for time complexity include:
- Big Oh: It indicates "fewer than or the same as" <expression> iterations
- Big Omega: It indicates "more than or the same as" <expression> iterations
- Big Theta: It indicates "the same as" <expression> iterations
- Little Oh: It indicates "fewer than" <expression> iterations
- Little Omega: It indicates "more than" <expression> iterations
Explain how binary search works?
It is an efficient method of finding a required item in a given list, provided the list is in order. The process is:
1. First, the middle item of the sorted list is found.
2. Compare the item with this element.
3. If they are equal, the search is complete.
4. If the middle element is greater than the item being searched for, the process is repeated on the half of the list before the middle element.
5. If the middle element is less than the item being searched for, the process is repeated on the half of the list after the middle element.
Explain whether it is possible to use binary search for linked lists?
Since random access is not possible in a linked list, it is impossible to reach the middle element in O(1) time. Thus, binary search is not practical for linked lists.
Explain what is heap sort?
Heap sort works in two stages: it first constructs a heap out of the given list and then repeatedly removes the root key to output the elements in sorted order. A heap can be defined as a binary tree with keys assigned to its nodes (one key per node), provided the following two conditions are met:
1. The tree's shape requirement - the binary tree is essentially complete (or simply complete), that is, all its levels are full except possibly the last level, where only some rightmost leaves may be missing.
2. The parental dominance requirement - the key at each node is greater than or equal to the keys at its children.
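A short heap-based sort sketch using Python's heapq module (heapq maintains a min-heap and pops the smallest key first, which is the mirror image of the max-heap description above):

import heapq

def heap_sort(a):
    heap = list(a)
    heapq.heapify(heap)                    # stage 1: build the heap
    return [heapq.heappop(heap)            # stage 2: repeatedly remove the root
            for _ in range(len(heap))]

# Example: heap_sort([9, 4, 7, 1, 3]) returns [1, 3, 4, 7, 9]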
Explain what is a Skip list?
A skip list is a method of data structuring that allows the algorithm to search, delete, and insert elements in a symbol table or dictionary. In a skip list, each element is represented by a node. The search operation returns the value associated with a given key. The insert operation associates a specified key with a new value, while the delete operation deletes the specified key.
Explain what is the space complexity of the insertion sort algorithm?
Insertion sort is an in-place sorting algorithm, which means that it requires no (or very little) extra storage. Insertion sort requires only a single list element to be stored outside the initial data, making the space complexity O(1).
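An in-place insertion sort sketch showing the single temporary element (key) stored outside the list:

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]                 # the single element stored outside the list
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]        # shift larger elements one position to the right
            j -= 1
        a[j + 1] = key
    return a

# Example: insertion_sort([4, 2, 5, 1]) returns [1, 2, 4, 5]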
Explain what a "Hash Algorithm" is and what it is used for?
A "hash algorithm" is a hash function that takes a string of any length and reduces it to a fixed-length string. It is used for password validation, message and data integrity, and many other cryptographic systems.
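As an illustration (not part of the original answer), Python's standard hashlib module exposes such hash functions:

import hashlib

# SHA-256 maps input of any length to a fixed 256-bit (64 hex character) digest.
digest = hashlib.sha256(b"hello world").hexdigest()
print(len(digest), digest)   # prints 64 followed by the hex digest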
Explain how to find whether a linked list has a loop?
To know whether a linked list has a loop, we use the two-pointer approach. If we maintain two pointers, advancing one pointer by two nodes and the other by one node at each step, we will eventually reach a situation where both pointers point to the same node. This will only occur if the linked list has a loop.
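A sketch of this fast/slow pointer idea (often attributed to Floyd); the ListNode class is assumed for illustration:

class ListNode:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def has_loop(head):
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next            # moves one node per step
        fast = fast.next.next       # moves two nodes per step
        if slow is fast:
            return True             # pointers met: there is a loop
    return False                    # fast reached the end: no loop

# Example: for a list a -> b -> c -> a (a cycle), has_loop returns True.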
Explain how an encryption algorithm works?
Encryption is the process of converting plaintext into a secret coded format referred to as "ciphertext". To convert the text, the algorithm uses a string of bits referred to as a "key" for its calculations. The larger the key, the greater the number of potential patterns for creating ciphertext. Most encryption algorithms use fixed blocks of input of about 64 to 128 bits in length, while some use the stream method.
List out some of the commonly used cryptographic algorithms?
Some of the commonly used cryptographic algorithms are:
- 3-way
- Blowfish
- CAST
- CMEA
- GOST
- DES and Triple DES
- IDEA
- LOKI and so on
Explain what is the difference between the best-case scenario and worst-case scenario of an algorithm?
- Best-case scenario: The best-case scenario for an algorithm is the arrangement of data for which the algorithm performs best. For example, take binary search, for which the best case is when the target value is at the very center of the data you are searching. Its best-case time complexity is O(1).
- Worst-case scenario: This refers to the worst set of inputs for a given algorithm. For example, quicksort can perform worst if you select the largest or smallest element of a sublist as the pivot value; this causes quicksort to degenerate to O(n²).
Explain what is the Radix Sort algorithm?
Radix sort puts the elements in order by comparing the digits of the numbers. It is one of the linear sorting algorithms for integers.
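A least-significant-digit radix sort sketch for nonnegative integers (radix_sort is an illustrative name):

def radix_sort(nums):
    if not nums:
        return nums
    exp, max_val = 1, max(nums)
    while max_val // exp > 0:
        buckets = [[] for _ in range(10)]                  # one bucket per decimal digit
        for x in nums:
            buckets[(x // exp) % 10].append(x)             # distribute by current digit
        nums = [x for bucket in buckets for x in bucket]   # collect in digit order
        exp *= 10
    return nums

# Example: radix_sort([170, 45, 75, 90, 802, 24, 2, 66])
# returns [2, 24, 45, 66, 75, 90, 170, 802]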
Explain what is a recursive algorithm?
A recursive algorithm is a method of solving a complicated problem by breaking it down into smaller and smaller subproblems until the problem is small enough to be solved easily. Usually, it involves a function calling itself.
Mention what are the three laws of a recursion algorithm?
All recursive algorithms must follow three laws (a small example follows the list):
- It should have a base case
- A recursive algorithm must call itself
- A recursive algorithm must change its state and move towards the base case
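A tiny example (factorial, purely illustrative) annotated with the three laws:

def factorial(n):
    if n <= 1:                        # law 1: a base case
        return 1
    return n * factorial(n - 1)       # laws 2 and 3: calls itself with a smaller n,
                                      # moving its state toward the base case

# Example: factorial(5) returns 120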
Explain what is the bubble sort algorithm?
The bubble sort algorithm is also referred to as sinking sort. In this type of sorting, adjacent pairs of items in the list to be sorted are compared; if a pair is in the wrong order, the values are swapped to put them in the correct order.
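A basic bubble sort sketch (with the common early-exit refinement when a pass makes no swaps):

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:            # adjacent pair in the wrong order
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break                          # list already sorted: stop early
    return a

# Example: bubble_sort([5, 1, 4, 2]) returns [1, 2, 4, 5]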
What is the time complexity of linear search?
Θ(n)
What is the time complexity of binary search?
Θ(log2 n)
What is the major requirement for binary search?
The given list should be sorted.
What is parental dominance?
The key at each node is greater than or equal to the keys at its children.