- Brute Force
- Divide and Conquer
- Randomized
- Greedy
- Dynamic Programming
- Local Search
Divide and Conquer
Recipe for Divide and Conquer Algorithms
- Divide the input into smaller sub problems
- Conquer the sub problems recursively
- Combine the solutions of the sub problems into a final solution
Examples
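A classic instance of this recipe is merge sort (a representative example of my own choosing, not taken from the notes): divide the array in half, conquer each half recursively, and combine by merging the two sorted halves.

```python
def merge_sort(a):
    """Sort a list by divide and conquer (merge sort)."""
    # Base case: lists of length 0 or 1 are already sorted
    if len(a) <= 1:
        return a
    # Divide: split the input into two halves
    mid = len(a) // 2
    # Conquer: recursively sort each half
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```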
Greedy
Recipe for Greedy Algorithms
- Construct a solution iteratively, via a sequence of myopic decisions, and hope that everything works out in the end
Features of Greedy Algorithms
- Easy to come up with one or more greedy algorithms
- Easy to analyse the running time
- Hard to establish correctness
- Most greedy algorithms are not always correct, use with caution
Examples
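One greedy algorithm that is provably correct is interval scheduling by earliest finish time (a standard example; the sketch below is my own, not from the notes): at each step, myopically pick the compatible interval that finishes first.

```python
def max_nonoverlapping(intervals):
    """Greedy interval scheduling: repeatedly take the compatible
    interval with the earliest finish time. Intervals are (start, end) pairs."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # myopic decision: if compatible, take it
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_nonoverlapping([(1, 3), (2, 5), (4, 6), (5, 9), (7, 8)]))
# [(1, 3), (4, 6), (7, 8)]
```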
Dynamic Programming
Dynamic programming was pioneered in the 1950s by Richard E. Bellman of the RAND Corporation.
Recipe for Dynamic Programming
- Identify a relatively small collection of sub problems
- Show how to quickly and correctly solve “larger” sub problems given the solutions to “smaller” sub problems
- Show how to quickly and correctly infer the final solutions from the solutions to all of the sub problems.
Dynamic Programming algorithm runtime
- $f(n)$ = number of sub problems
- $g(n)$ = time per sub problem (including combining previous solutions)
- $h(n)$ = post processing (inferring final solution)
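Assuming each sub problem is solved at most once (e.g. via memoization or a bottom-up table), the total runtime is roughly $O(f(n) \cdot g(n) + h(n))$. As a standard illustration (not from the notes): memoized Fibonacci has $f(n) = \Theta(n)$ sub problems, $g(n) = O(1)$ time per sub problem, and $h(n) = O(1)$ post processing, giving an $O(n)$ algorithm.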
Difference between Divide and Conquer and Dynamic Programming
- Divide and conquer commits to a single way of dividing the input into smaller sub problems. Dynamic programming considers multiple ways.
- Divide and conquer sub problems are typically distinct, while dynamic programming sub problems overlap, so their solutions can often be reused via memoization.
- Divide and conquer typically speeds up an already polynomial-time algorithm. Dynamic programming typically takes an exponential-time (e.g. $O(k^n)$) brute-force approach and reduces it to polynomial time (e.g. $O(n^2)$).
- Divide and conquer identifies sub problems in order to improve the running time. Dynamic programming identifies sub problems with an emphasis on correctness.
- Divide and conquer algorithms typically recurse on sub problems of half the size. Dynamic programming will recurse on only slightly smaller sub problems.
- Divide and conquer is a special case of Dynamic programming.
Examples
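A small sketch of the recipe in action (longest increasing subsequence, my choice of example rather than one from the notes): sub problem $i$ is the length of the longest increasing subsequence ending at position $i$, larger sub problems are solved from smaller ones, and post processing takes the maximum over all sub problems. Here $f(n) = n$, $g(n) = O(n)$, and $h(n) = O(n)$, giving $O(n^2)$ overall.

```python
def lis_length(a):
    """Length of the longest increasing subsequence of a, by dynamic programming."""
    if not a:
        return 0
    # best[i] = length of the longest increasing subsequence ending at a[i]
    best = [1] * len(a)
    for i in range(1, len(a)):          # solve "larger" sub problems...
        for j in range(i):              # ...from the solutions to "smaller" ones
            if a[j] < a[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)                    # post processing: infer the final answer

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. [1, 4, 5, 6]
```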
Local Search
- A popular method of dealing with $NP$-hard problems
let s be any initial solution
while there is some solution s' in the neighborhood of s
for which cost(s') < cost(s): replace s by s'
return s
If the probability of reaching a good local optimum on any given run is $p$, then such a solution is likely to be found within $O(\frac{1}{p})$ runs.
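A minimal Python sketch of the template above, with random restarts (the bit-string neighborhood and the toy cost function are assumptions for illustration, not part of the notes):

```python
import random

def local_search(cost, n_bits, rng=random):
    """Start from a random bit string and repeatedly move to a strictly
    cheaper neighbor (single-bit flip) until no neighbor improves the cost."""
    s = [rng.randint(0, 1) for _ in range(n_bits)]
    current = cost(s)
    improved = True
    while improved:
        improved = False
        for i in range(n_bits):      # neighborhood of s: all single-bit flips
            s[i] ^= 1                # tentatively move to the neighbor s'
            candidate = cost(s)
            if candidate < current:  # cost(s') < cost(s): keep the move
                current = candidate
                improved = True
            else:
                s[i] ^= 1            # otherwise undo the flip
    return s, current

def restarted_local_search(cost, n_bits, runs=20):
    """Random restarts: keep the best local optimum found over several runs."""
    return min((local_search(cost, n_bits) for _ in range(runs)), key=lambda r: r[1])

# Toy cost function: number of bits that differ from a hidden target
target = [1, 0, 1, 1, 0, 1, 0, 0]
print(restarted_local_search(lambda s: sum(a != b for a, b in zip(s, target)), len(target)))
```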
Local search with simulated annealing
let $s$ be any starting solution
repeat
randomly choose a solution $s'$ in the neighborhood of $s$
if $\Delta = cost(s') - cost(s)$ is negative:
replace $s$ by $s'$
else:
replace $s$ by $s'$ with probability $e^{-\frac{\Delta}{T}}$
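In the pseudocode, $T$ is a temperature parameter that is gradually decreased over time, so worsening moves become less and less likely to be accepted. A minimal sketch (the geometric cooling schedule and parameter values are assumptions for illustration):

```python
import math
import random

def simulated_annealing(cost, neighbor, s, t0=1.0, cooling=0.995, steps=10_000, rng=random):
    """Accept improving moves always; accept worsening moves with probability
    exp(-delta / T); decrease the temperature T geometrically each step."""
    current = cost(s)
    best, best_cost = s, current
    t = t0
    for _ in range(steps):
        s_new = neighbor(s, rng)             # random solution in the neighborhood of s
        delta = cost(s_new) - current
        if delta < 0 or rng.random() < math.exp(-delta / t):
            s, current = s_new, current + delta
        if current < best_cost:
            best, best_cost = s, current
        t *= cooling                         # gradually decrease the temperature
    return best, best_cost

# Toy usage: minimise (x - 7)^2 over the integers with +/-1 moves
print(simulated_annealing(lambda x: (x - 7) ** 2, lambda x, rng: x + rng.choice([-1, 1]), s=0))
```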
- Difficult to analyse
- Typically no guaranteed runtime or correctness bound
- Not guaranteed to converge quickly
- Locally optimal solutions are generally NOT globally optimal
- Does work well in many real-world scenarios
Search for “smoothed analysis”, a framework that helps explain why local search often performs well in practice despite weak worst-case guarantees.