In this section we consider backtracking algorithms. As in the preceding section, we view the problem to be solved as a sequence of decisions. A backtracking algorithm systematically considers all possible outcomes for each decision. In this sense, backtracking algorithms are like the brute-force algorithms discussed in the preceding section. However, backtracking algorithms are distinguished by the way in which the space of possible solutions is explored. Sometimes a backtracking algorithm can detect that an exhaustive search is unnecessary and, therefore, it can perform much better.
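As a minimal illustration (our own sketch, not from the original post), a backtracking search for a subset of non-negative numbers summing to a target can prune any branch whose running total already exceeds the target, rather than exploring it exhaustively:

```python
def subset_sum(nums, target):
    """Backtracking search for a subset of nums summing to target.

    Each level of recursion decides whether to include nums[i]; when the
    running total exceeds the target (assuming non-negative numbers),
    the branch is pruned instead of being explored to the bottom.
    Returns one satisfying subset, or None if none exists.
    """
    def backtrack(i, total, chosen):
        if total == target:
            return list(chosen)
        if i == len(nums) or total > target:  # prune: no point continuing
            return None
        chosen.append(nums[i])                # decision 1: take nums[i]
        found = backtrack(i + 1, total + nums[i], chosen)
        if found is not None:
            return found
        chosen.pop()                          # undo, then decision 2: skip it
        return backtrack(i + 1, total, chosen)

    return backtrack(0, 0, [])
```

The `total > target` test is the "detect that an exhaustive search is unnecessary" step: without it, the function degenerates into plain brute force.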
[Read More]
Optimal Substructure
A problem is said to have optimal substructure if the globally optimal solution can be constructed from locally optimal solutions to subproblems. The general form of problems in which optimal substructure plays a role goes something like this: we have a collection of objects A, and for each object o in A a "cost" c(o). The task is to find the subset of A with the maximum (or minimum) total cost, perhaps subject to certain constraints.
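As a concrete example (our own choice, not from the original post), the coin-change problem has optimal substructure: the fewest coins needed for an amount is built directly from the optimal answers for smaller amounts:

```python
def min_coins(coins, amount):
    """Fewest coins needed to make `amount`; None if impossible.

    Optimal substructure: best[a] is built from the optimal answers
    best[a - c] for smaller amounts, one candidate per denomination c.
    """
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None
```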
[Read More]
Overlapping subproblems in Dynamic programming
A problem is said to have overlapping subproblems if it can be broken down into subproblems which are reused multiple times. This is closely related to recursion. To see the difference consider the factorial function, defined as follows (in Python):
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)
Thus the problem of calculating factorial(n) depends on calculating the subproblem factorial(n-1). This problem does not exhibit overlapping subproblems, since factorial is called exactly once for each non-negative integer less than n.
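By contrast (our own illustration, not from the post), the naive Fibonacci recursion does exhibit overlapping subproblems, and memoization exploits exactly that:

```python
from functools import lru_cache

def fib_naive(n):
    # fib_naive(n - 2) is recomputed inside fib_naive(n - 1): the
    # subproblems overlap, so the call tree grows exponentially.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Caching each subproblem result once makes the same recursion linear.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Overlap is what makes caching pay off: factorial gains nothing from memoization, while Fibonacci goes from exponential to linear time.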
[Read More]
Common characteristics of dynamic programming
Dynamic Programming is an algorithm design technique for optimization problems: often minimizing or maximizing.
Like divide and conquer, DP solves problems by combining solutions to subproblems.
Unlike divide and conquer, subproblems are not independent.
Subproblems may share subsubproblems.
However, the solution to one subproblem does not affect the solutions to other subproblems of the same problem. (More on this later.)
DP reduces computation by solving subproblems in a bottom-up fashion.
[Read More]
Introduction to Dynamic programming
What is dynamic programming?
Dynamic programming is a kind of careful brute force: you try all possibilities, but carefully.
Before going into the definition, let's first take a problem, so that we can motivate the definition of dynamic programming.
Why was it named this way?
Check out the history here.
Problem - Nth Fibonacci Number
Dynamic programming is a method for efficiently solving a broad range of search and optimization problems which exhibit the characteristics of overlapping subproblems and optimal substructure.
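For the Fibonacci problem mentioned above, a bottom-up sketch (our own, assuming the convention F(0) = 0, F(1) = 1) keeps only the last two subproblem answers:

```python
def fib(n):
    """Nth Fibonacci number, computed bottom-up with F(0)=0, F(1)=1.

    Each subproblem is solved exactly once, so the loop runs in O(n)
    time and O(1) space instead of the exponential naive recursion.
    """
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```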
[Read More]
Types of algorithms
Algorithm types we will consider include:
- Simple recursive algorithms
- Backtracking algorithms
- Divide and conquer algorithms
- Dynamic programming algorithms
- Greedy algorithms
- Branch and bound algorithms
- Brute force algorithms
- Randomized algorithms
Important DS questions
a) Write a function that validates whether a tree is a BST or not.
b) Write a function to find the common ancestor of two given nodes of a binary tree.
c) In a BST, value of two of the nodes got exchanged by mistake. Write a function to correct the BST.
d) In an array, there exists a majority element (a number repeated at least n/2 + 1 times) and the rest of the numbers
[Read More]
Longest common subsequence : Dynamic programming
X (length m): A B C B D A B
Y (length n): B D C A B A
An LCS is B C B A
For an arbitrary number of input sequences, the problem is NP-hard. For a constant number of sequences, DP gives the solution in polynomial time, e.g. O(n^k). Also, there can be more than one LCS, and enumerating all of them can take exponential time, e.g. O(k^n).
Here we maintain an LCS matrix of size (m+1) x (n+1). Initially, for i = 0 or j = 0, LCS[i][j] = 0.
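Filling that matrix bottom-up gives the LCS length; a sketch (ours) of the standard recurrence:

```python
def lcs(x, y):
    """Length of the longest common subsequence of x and y.

    L[i][j] holds the LCS length of the prefixes x[:i] and y[:j];
    row 0 and column 0 stay zero as the base case.
    """
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # drop one character
    return L[m][n]
```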
[Read More]
GCD and LCM
Finding the GCD
The GCD of 60 and 40 is 20, by the Euclidean algorithm:
60 = 40 * 1 + 20
40 = 20 * 2 + 0
The LCM of 60 and 40 is 120:
60 * 40 / 20 = 120
Recursive solution for GCD:
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)
Iterative solution for GCD:
def gcd(a, b):
    while b != 0:
        # swap a with b, and b with a % b
        temp = a % b
        a = b
        b = temp
    return a
Finding the LCM
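Putting the pieces together (our own sketch), the LCM follows directly from the GCD; dividing before multiplying keeps intermediate values small, which matters in fixed-width languages:

```python
def gcd(a, b):
    # Iterative Euclidean algorithm.
    while b != 0:
        a, b = b, a % b
    return a

def lcm(a, b):
    # lcm(a, b) * gcd(a, b) == a * b; divide first to avoid overflow.
    return a // gcd(a, b) * b
```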
[Read More]
External Sorting - How do you sort a million 32-bit integers in 2MB RAM?
A million 32-bit integers is 4MB, so we cannot fit them all in the given RAM and have to sort externally. It is an open-ended, interactive question.
Let’s assume that -
The list of numbers is on disk (as we cannot fit them all in RAM).
There is a sort function which sorts a given in-memory list efficiently, i.e. we do not have to worry about a particular sorting algorithm.
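Under those assumptions, a chunk-and-merge sketch (our own; the names `external_sort` and `_spill` and the chunk size are made up) sorts RAM-sized chunks, spills each sorted run to disk, then streams a k-way merge:

```python
import heapq
import os
import tempfile

def _spill(sorted_chunk):
    # Write one sorted run to a temp file, one integer per line.
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    f.write("\n".join(map(str, sorted_chunk)))
    f.close()
    return f.name

def external_sort(numbers, chunk_size):
    """Sort an iterable too large for RAM: sort fixed-size chunks,
    spill each to a file, then k-way merge the sorted runs.
    heapq.merge keeps only one value per run in memory at a time."""
    runs, chunk = [], []
    for x in numbers:
        chunk.append(x)
        if len(chunk) == chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    files = [open(r) for r in runs]
    try:
        yield from heapq.merge(*(map(int, f) for f in files))
    finally:
        for f in files:
            f.close()
        for r in runs:
            os.unlink(r)
```

With 2MB of RAM, the chunk size would be chosen so each run fits in memory (roughly 500K integers per run here), giving around 8 runs to merge.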
[Read More]