We have already seen the array implementation of a stack, which has one limitation: if the data size grows bigger than the array size, an overflow occurs. To solve this we have to grow the array. But how?
Growing the Array
Approach 1 - Repeated doubling, i.e. doubling the array every time it is full
We can grow the array by doubling its size and copying the elements from the old array into the new one.
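A minimal sketch of this doubling strategy (class and method names here are my own, not from the post):

```java
import java.util.Arrays;

// A stack backed by an int array that doubles its capacity when full.
public class GrowableStack {
    private int[] data = new int[2]; // start small so the growth is visible
    private int size = 0;

    public void push(int x) {
        if (size == data.length) {
            // Full: allocate a new array of twice the size and copy the
            // old elements across.
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = x;
    }

    public int pop() {
        return data[--size];
    }

    public int size() { return size; }
    public int capacity() { return data.length; }
}
```

Copying on each doubling costs O(n), but because the capacity doubles, the copies happen rarely enough that a push is O(1) amortized.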
[Read More]
DFS (Depth first search) for graph
What is DFS?
This algorithm explores aggressively and backtracks only when necessary, much like the way we play a maze game.
To code DFS iteratively, we can mimic BFS but use a Stack instead of a Queue. Another way is to code it recursively; DFS is very naturally phrased as a recursive algorithm. We will see both implementations later.
```
DFS(graph G, vertex s):
  mark s as explored
  for every edge (s, v):
    if v unexplored:
      DFS(G, v)
```

Example: Consider the graph:
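As a hedged sketch of the iterative variant (a Stack where BFS would use a Queue), on a made-up adjacency-list graph:

```java
import java.util.*;

// Iterative DFS over an adjacency-list graph. The graph representation
// and method names are illustrative, not from the original post.
public class DfsDemo {
    public static List<Integer> dfs(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> explored = new HashSet<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int v = stack.pop();
            if (explored.contains(v)) continue;
            explored.add(v);  // mark v as explored
            order.add(v);
            // push unexplored neighbours; the most recently pushed one is
            // visited next, which gives the "go deep first" behaviour
            for (int w : graph.getOrDefault(v, List.of())) {
                if (!explored.contains(w)) stack.push(w);
            }
        }
        return order;
    }
}
```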
[Read More]
Integer Multiplication using Karatsuba algorithm
Normal Integer method
Consider integer multiplication: the input is two integers, and the output is also an integer.
Suppose we have to multiply 5678 * 1234. Doing it the normal way, we get the following:
```
    5678
  X 1234
--------
   22712    // first: multiply the number as is, i.e. 4 * 5678
  17034-    // 2nd time, shift (left) and then multiply: 3 * 5678
 11356--    // shift twice: 2 * 5678
 5678---    // shift thrice: 1 * 5678
--------
 7006652
```

So, each time we multiply, depending on the digit of the 2nd number, we shift and then multiply.
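The shift-and-multiply steps above can be sketched in code; the digit representation and names below are my own choices, not from the post:

```java
// Grade-school multiplication over decimal digit arrays, mirroring the
// worked example above: one shifted partial product per digit of y.
public class LongMultiply {
    // x and y hold decimal digits, most significant first,
    // e.g. 5678 -> {5, 6, 7, 8}.
    public static long multiply(int[] x, int[] y) {
        long xValue = 0;
        for (int d : x) xValue = xValue * 10 + d; // reassemble the first number
        long result = 0;
        long shift = 1; // 10^k for the k-th digit of y, counted from the right
        for (int i = y.length - 1; i >= 0; i--) {
            // one partial product: digit * first number, shifted k places left
            result += y[i] * xValue * shift;
            shift *= 10;
        }
        return result;
    }
}
```

With n-digit inputs this does n partial products of n digits each, i.e. Theta(n^2) single-digit operations, which is exactly what Karatsuba improves on.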
[Read More]
Meaning behind Master Method
After establishing the running time of an algorithm in terms of the three variables a, b, and d, we might wonder what the meaning behind this method is. Where do these logs and exponents come from?
If we look at the amount of work done at a single level j of the recursion tree, we get that it is a^j * c * (n / b^j)^d,
where a^j is the number of level-j subproblems and c * (n / b^j)^d is the amount of work per level-j subproblem.
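As a small numeric sketch, the per-level work can be computed directly from those parameters (the merge-sort values used in the checks are illustrative):

```java
// Evaluates the per-level work a^j * c * (n / b^j)^d of the recursion
// tree. For merge sort (a = 2, b = 2, d = 1) every level does the same
// c * n amount of work, which is where the n log n bound comes from.
public class LevelWork {
    public static double levelWork(int a, int b, int d, double c, double n, int j) {
        return Math.pow(a, j) * c * Math.pow(n / Math.pow(b, j), d);
    }
}
```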
[Read More]
Master Method for Solving Recurrences
Why the Master method? - The Master method helps us analyse the running time of recursive algorithms.
So, let's consider the integer multiplication - here.
Now we get equation 1.
The way we reason about recursive algorithms like these is with the help of recurrences.
Recurrences
What is a recurrence?
T(n) = the maximum number of operations this algorithm needs to multiply two n-digit numbers. A recurrence expresses T(n) in terms of T on the recursive calls.
[Read More]
Dijkstra's Shortest Path Algorithm
It’s Dijkstra’s algorithm that lets you find single-source shortest paths.
Problem
Problem - Single source shortest paths
Input -

- Directed graph G = (V, E)
- Source vertex s

Output - for each v ∈ V, compute L(v) = the length of a shortest s-v path in G.
Assumptions:
- There is a path from vertex s to vertex v (otherwise, the path length is infinity)
- Edge lengths are non-negative (there are other algorithms for graphs with negative edge lengths, the most famous being the Bellman-Ford algorithm, which uses dynamic programming)

Example of shortest path
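A compact sketch of the algorithm using a priority queue (the graph representation and names are illustrative; it assumes non-negative edge lengths, as stated above):

```java
import java.util.*;

// Dijkstra's single-source shortest paths with a binary heap.
public class Dijkstra {
    record Edge(int to, int length) {}

    public static int[] shortestPaths(List<List<Edge>> graph, int s) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE); // "infinity" for unreachable v
        dist[s] = 0;
        // min-heap of {vertex, tentative distance}, ordered by distance
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
        pq.add(new int[]{s, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int v = top[0];
            if (top[1] > dist[v]) continue; // stale queue entry, skip
            for (Edge e : graph.get(v)) {
                if (dist[v] + e.length < dist[e.to]) {
                    dist[e.to] = dist[v] + e.length;
                    pq.add(new int[]{e.to, dist[e.to]});
                }
            }
        }
        return dist; // dist[v] = L(v), the length of a shortest s-v path
    }
}
```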
[Read More]
Operation time complexity for a LINKED LIST
The time complexity of handling the operations in a data structure like a LINKED LIST are as follows:
i. Most of the operations are O(n) [i.e. order n]
ii. Remove/insert operations on an element between other elements require O(1) [i.e. constant time]. This is because each node holds references/pointers to its neighbours on both ends, so the elements don't need to be shifted. However, because of these references/pointers, this data structure requires additional memory!
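A sketch of why such an insertion is O(1): only a handful of pointer updates, with no shifting, regardless of the list's length (node and method names below are my own):

```java
// Insertion between two known nodes of a doubly linked list.
public class DoublyLinkedInsert {
    static class Node {
        int value;
        Node prev, next; // the extra references that cost memory
        Node(int value) { this.value = value; }
    }

    // Insert a new node holding `value` immediately after `node`.
    public static Node insertAfter(Node node, int value) {
        Node fresh = new Node(value);
        fresh.prev = node;
        fresh.next = node.next;
        if (node.next != null) node.next.prev = fresh;
        node.next = fresh; // four pointer updates total: constant work
        return fresh;
    }
}
```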
[Read More]
Operation time complexity for an ARRAY
The time complexity of handling the operations in a data structure like an ARRAY are as follows:
i. Almost all the operations are O(1) [i.e. constant time]
ii. Remove/insert operations on an element between other elements require O(n) [i.e. order n] because the elements need to be shifted.
Note! Hence, an array is the data structure of choice where remove/insert operations are not required.
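A sketch of that O(n) shifting cost (names and the spare-capacity convention are my own choices):

```java
// Inserting into the middle of an array: every element to the right of
// the insertion point must shift one slot, so the cost grows with n.
public class ArrayInsert {
    // Insert `value` at `index` in `a`, which has `size` used slots and
    // spare capacity at the end; returns the new size.
    public static int insertAt(int[] a, int size, int index, int value) {
        // shift elements [index, size) right by one -- up to n moves
        System.arraycopy(a, index, a, index + 1, size - index);
        a[index] = value;
        return size + 1;
    }
}
```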
Time complexity of algorithms - In particular Big O of algorithms
Big O, the so-called asymptotic upper bound, of constructs such as sequential statements, if statements and looping constructs
(i) Sequential statements:
The Big O of this construct equals the sum of the maximum (worst-case) time complexities of the individual statements.
Therefore, O[g(n)] = O[f1(n) + f2(n) + … + fk(n)]
Conclusion: the overall complexity of this construct is O[g(n)]
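A tiny illustration of summing sequential costs (the counts are illustrative, not from the post):

```java
// Two sequential phases: f1(n) = n steps, then f2(n) = n^2 steps.
// Total work is n + n^2, so the overall complexity is O(n^2),
// the dominant term of the sum.
public class SequentialCost {
    public static long work(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) steps++;           // phase 1: O(n)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) steps++;       // phase 2: O(n^2)
        return steps;
    }
}
```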
(ii) If statements:
The Big O of this construct equals the maximum of the time complexities of the alternative statements, i.
[Read More]
Heap Sort in java
The sorting problem
Input: an array of numbers, unsorted. Eg.
Output: the same numbers sorted in some order, say increasing order. Eg.
Heap sort Algorithm
This algorithm assumes an understanding of heaps - see Heap - Introduction. It also uses ExtractMin from the heap.
Algorithm
The essence of heap sort (in-place) is this:
1. Convert the array to be sorted into a heap (see BuildHeap; it uses O(n) time).
2. Build the sorted array back to front by repeatedly removing the minimum value in the heap (the value at position 1), putting that value into the sorted array, and rebuilding the heap with one fewer element.
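The steps above can be sketched as follows. Note this sketch uses the symmetric max-heap variant (repeatedly moving the maximum to the back), which sorts into increasing order in place; the post's min-heap/ExtractMin description is the mirror image of the same idea:

```java
// In-place heap sort: build a max-heap, then repeatedly swap the root
// (current maximum) to the end and restore the heap on the prefix.
public class HeapSort {
    public static void sort(int[] a) {
        int n = a.length;
        // BuildHeap: heapify bottom-up, O(n) time overall
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
        // Move the maximum to the end and shrink the heap by one
        for (int end = n - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp;
            siftDown(a, 0, end);
        }
    }

    // Restore the max-heap property for the subtree rooted at i,
    // considering only the first n elements.
    private static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;
            if (child + 1 < n && a[child + 1] > a[child]) child++;
            if (a[i] >= a[child]) return;
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }
}
```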
[Read More]