This is one of the classical problems of computer science. There is a rat trapped in a maze. There are multiple paths in the maze from the starting point to the ending point. There is some cheese at the exit. The rat starts from the entrance of the maze and wants to get to the cheese.
This problem can be attacked as follows.
Maintain an m*m matrix that represents the maze.
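A minimal backtracking sketch of this approach (assuming 0 marks an open cell and 1 a wall, with the entrance at the top-left and the cheese at the bottom-right; the function and marker values are illustrative):

```python
def solve_maze(maze, r=0, c=0, path=None):
    """Backtracking search from (r, c) to the bottom-right corner.

    maze is an m*m list of lists; 0 is an open cell, 1 is a wall.
    Returns the list of cells on a successful path, or None.
    """
    m = len(maze)
    if path is None:
        path = []
    # Out of bounds, wall, or already-visited cell: dead end.
    if not (0 <= r < m and 0 <= c < m) or maze[r][c] != 0:
        return None
    path.append((r, c))
    if (r, c) == (m - 1, m - 1):          # reached the cheese
        return path
    maze[r][c] = 2                        # mark this cell as visited
    for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):
        if solve_maze(maze, r + dr, c + dc, path) is not None:
            return path
    maze[r][c] = 0                        # unmark: backtrack
    path.pop()
    return None
```

When every move from a cell fails, the cell is unmarked and removed from the path, which is exactly the backtracking step.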
[Read More]
Representing the Solution Space
This section presents an interface for the nodes of a solution space. By using an interface, we hide the details of the specific problem to be solved from the backtracking algorithm. In so doing, it is possible to implement completely generic backtracking problem solvers.
Although a backtracking algorithm behaves as if it is traversing a solution tree, it is important to realize that it is not necessary to have the entire solution tree constructed at once.
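One way such an interface might look in Python (a sketch; the class and method names here are illustrative, not from any particular library) is an abstract base class that a generic solver drives, constructing nodes of the solution tree only as they are visited:

```python
from abc import ABC, abstractmethod

class Solution(ABC):
    """A node in the solution space of some problem."""

    @abstractmethod
    def is_feasible(self):
        """Can this partial solution still lead to a complete one?"""

    @abstractmethod
    def is_complete(self):
        """Does this node represent a complete solution?"""

    @abstractmethod
    def successors(self):
        """Yield child nodes, one per outcome of the next decision."""

def backtrack(node):
    """Generic depth-first search over the solution tree."""
    if not node.is_feasible():
        return None          # prune this subtree
    if node.is_complete():
        return node
    for child in node.successors():
        result = backtrack(child)
        if result is not None:
            return result
    return None
```

Because `successors` generates children on demand, only the current root-to-node path of the solution tree ever exists in memory.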
[Read More]
Balancing Scales
Consider the set of scales shown in the figure. Suppose we are given a collection of n weights, w_1, w_2, ..., w_n, and we are required to place all of the weights onto the scales so that they are balanced.
Figure: A set of scales.
The problem can be expressed mathematically as follows: let x_i represent the pan in which weight w_i is placed, such that x_i = +1 if weight i is placed in the left pan and x_i = -1 if it is placed in the right pan.
The scales are balanced when the sum of the weights in the left pan equals the sum of the weights in the right pan, i.e., when x_1 w_1 + x_2 w_2 + ... + x_n w_n = 0.
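A small backtracking sketch of this formulation (each decision assigns one weight a sign, +1 for the left pan or -1 for the right; an exact balance may not exist, in which case None is returned):

```python
def balance(weights, i=0, delta=0):
    """Assign each weight to the left (+1) or right (-1) pan.

    delta is the running sum of x_j * w_j over the first i weights.
    Returns a list of +1/-1 signs that balances the scales, or None.
    """
    if i == len(weights):
        return [] if delta == 0 else None
    for x in (+1, -1):
        rest = balance(weights, i + 1, delta + x * weights[i])
        if rest is not None:
            return [x] + rest
    return None
```

Each level of the recursion corresponds to one decision x_i, and a complete assignment is accepted only when the running sum is zero.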
[Read More]
Backtracking Algorithms
In this section we consider backtracking algorithms. As in the preceding section, we view the problem to be solved as a sequence of decisions. A backtracking algorithm systematically considers all possible outcomes for each decision. In this sense, backtracking algorithms are like the brute-force algorithms discussed in the preceding section. However, backtracking algorithms are distinguished by the way in which the space of possible solutions is explored. Sometimes a backtracking algorithm can detect that an exhaustive search is unnecessary and, therefore, it can perform much better.
[Read More]
Divide and Conquer
In this section we consider divide-and-conquer algorithms. A divide-and-conquer algorithm solves a problem by splitting it into smaller subproblems of the same kind, solving those subproblems recursively, and then combining their solutions into a solution of the original problem. When a subproblem becomes small enough, it is solved directly. Merge sort and binary search are classic examples of this strategy.
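A classic divide-and-conquer example is merge sort, sketched here to show the three phases explicitly:

```python
def merge_sort(xs):
    """Divide: split the list in half; conquer: sort each half
    recursively; combine: merge the two sorted halves."""
    if len(xs) <= 1:                      # base case: already sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The merge step does linear work per level, and halving gives log-many levels, for O(n log n) overall.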
[Read More]
Optimal Substructure
A problem is said to have optimal substructure if the globally optimal solution can be constructed from locally optimal solutions to subproblems. The general form of problems in which optimal substructure plays a role goes something like this: we have a collection of objects A, and for each object o in A we have a cost c(o). The task is to find a subset of A with the maximum (or minimum) total cost, perhaps subject to certain constraints.
[Read More]
Overlapping subproblems in Dynamic programming
A problem is said to have overlapping subproblems if it can be broken down into subproblems which are reused multiple times. This is closely related to recursion, but not every recursive problem has overlapping subproblems. To see the difference, consider the factorial function, defined as follows (in Python):
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)
Thus calculating factorial(n) depends on calculating the subproblem factorial(n-1). This problem does not exhibit overlapping subproblems, since factorial(k) is called exactly once for each nonnegative integer k less than n; no subproblem is ever reused.
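By contrast, the Fibonacci numbers do exhibit overlapping subproblems: a naive recursion recomputes the same values many times, and memoization removes that redundancy. A sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each subproblem is solved once
    and then reused, turning exponential recursion into linear time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, fib(n-1) and fib(n-2) both recompute fib(n-3), and the duplication compounds at every level; with the cache, each fib(k) is computed exactly once.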
[Read More]
Types of algorithms
Algorithm types we will consider include:
- Simple recursive algorithms
- Backtracking algorithms
- Divide and conquer algorithms
- Dynamic programming algorithms
- Greedy algorithms
- Branch and bound algorithms
- Brute force algorithms
- Randomized algorithms
Longest common subsequence : Dynamic programming
X (length m): A B C B D A B
Y (length n): B D C A B A
An LCS is B C B A.
For an arbitrary number of input sequences the problem is NP-hard, but for a constant number k of sequences dynamic programming gives the solution in polynomial time, e.g. O(n^k). Also, there can be more than one LCS, and enumerating all of them can take exponential time, e.g. O(k^n), since there may be exponentially many.
Here we maintain an LCS matrix of size (m+1) * (n+1). Initially, for i = 0 or j = 0, LCS[i][j] = 0.
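A sketch of the tabulation just described, filling the (m+1) x (n+1) matrix row by row with row 0 and column 0 initialized to zero:

```python
def lcs(X, Y):
    """Length of the longest common subsequence of X and Y,
    computed bottom-up in O(m*n) time and space."""
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                # Characters match: extend the LCS of the prefixes.
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                # No match: take the better of dropping one character.
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]
```

On the example above, lcs("ABCBDAB", "BDCABA") returns 4, the length of B C B A.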
[Read More]
GCD and LCM
Finding the GCD
GCD of 60 and 40 is 20:
60 = 40 * 1 + 20
40 = 20 * 2 + 0
LCM of 60 and 40 is 120:
60 * 40 / 20 = 120
Recursive solution for GCD
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)
Iterative solution for GCD
def gcd(a, b):
    while b != 0:
        # replace a with b, and b with a % b
        temp = a % b
        a = b
        b = temp
    return a
Finding the LCM
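The LCM follows directly from the GCD via the identity lcm(a, b) * gcd(a, b) = a * b. A sketch (gcd is repeated here so the snippet is self-contained):

```python
def gcd(a, b):
    """Euclid's algorithm, iterative form."""
    while b != 0:
        a, b = b, a % b
    return a

def lcm(a, b):
    # For positive integers, lcm(a, b) * gcd(a, b) == a * b,
    # so divide first by the GCD to keep intermediates small.
    return a // gcd(a, b) * b
```

For the example above, lcm(60, 40) gives 60 * 40 / 20 = 120.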
[Read More]