Designing Algorithms

Prof. Dr. Mirco Schoenfeld

A statement

Computer programming is a creative task.

A reference

The Art of Computer Programming

Art project

https://xkcd.com/1496/

Today

There is no recipe for developing algorithms.

Deriving an algorithm from a specification cannot be automated.

Today

That’s why we use algorithm design paradigms as inspiration for possible ways of approaching a given problem.

These paradigms are often called design patterns. Do not confuse them with software design patterns, though!

Design patterns

There are a few main patterns:

  • recursion
  • divide and conquer
  • greedy algorithms
  • backtracking
  • dynamic programming
  • and brute force

Divide and Conquer

A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly.

The solutions to the sub-problems are then combined to give a solution to the original problem.

Divide and Conquer

Principle of divide and conquer:

  • split task in multiple smaller sub-tasks
  • use the same algorithm recursively on all sub-tasks
  • recombine solutions to sub-tasks to overall solution

Divide and Conquer: MergeSort

An example of an algorithm following the divide-and-conquer pattern: MergeSort

Divide and Conquer: MergeSort

function merge_sort(list m) is
    // Base case. A list of zero or
    // one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m

    // Recursive case. First, divide
    // the list into equal-sized sublists
    // consisting of the first half
    // and second half of the list.
    // This assumes lists start at index 0.
    var left := empty list
    var right := empty list
    for each x with index i in m do
        if i < (length of m)/2 then
            add x to left
        else
            add x to right

    // Recursively sort both sublists.
    left := merge_sort(left)
    right := merge_sort(right)

    // Then merge the now-sorted sublists.
    return merge(left, right)

Divide and Conquer: MergeSort

function merge(left, right) is
    var result := empty list

    while left is not empty and
          right is not empty do
        if first(left) ≤ first(right) then
            append first(left) to result
            left := rest(left)
        else
            append first(right) to result
            right := rest(right)

    // Either left or right may have
    // elements left; consume them.
    // (Only one of the following loops
    // will actually be entered.)
    while left is not empty do
        append first(left) to result
        left := rest(left)
    while right is not empty do
        append first(right) to result
        right := rest(right)
    return result
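The two pseudocode slides above translate almost directly into Python. The following sketch combines the split step and the merge step into one self-contained function:

```python
def merge_sort(m):
    """Sort a list using the divide-and-conquer pattern."""
    # Base case: a list of zero or one elements is sorted, by definition.
    if len(m) <= 1:
        return m

    # Divide: split the list into two halves.
    mid = len(m) // 2
    left = merge_sort(m[:mid])    # recursively sort the first half
    right = merge_sort(m[mid:])   # recursively sort the second half

    # Combine: merge the two now-sorted sublists.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Either left or right may still have elements; consume them.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```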

Divide and Conquer

Developing an algorithm following the
divide-and-conquer pattern can be difficult.

Brute Force

In the meantime, let’s visit the simplest possible approach:

a.k.a. brute force

Brute Force

c ← first(P)
while c ≠ Λ do
    if c is valid solution for P then
        return c
    c ← next(P, c)
end while

Brute Force

Principles of brute force:

  • enumerate all possible candidates
  • check for each candidate whether it is a valid solution
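These two principles can be sketched in Python. The `check` function and the `secret` value below are hypothetical stand-ins for a real validity test; the structure mirrors the first/next candidate loop from the pseudocode:

```python
from itertools import product

def brute_force_pin(check):
    """Try every candidate in turn until one is a valid solution."""
    for digits in product("0123456789", repeat=4):  # all 10^4 candidates
        candidate = "".join(digits)
        if check(candidate):      # is this candidate a valid solution?
            return candidate
    return None                   # solution space exhausted, no solution

# Hypothetical stand-in for a real validity test:
secret = "2718"
print(brute_force_pin(lambda c: c == secret))  # 2718
```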

Brute Force

Brute force is very easy to implement.

It is also very slow.

Brute Force

The runtime of a brute-force approach is proportional to the number of candidate solutions.

Brute Force

Let’s do a brute force attack.

Consider a 256-bit encryption key.

This gives us \(2^{256}\) possible bit patterns to check.

A supercomputer from 2019 had a speed of 100 petaFLOPS,
enabling it to check 100 trillion ( \(10^{14}\) ) bit patterns per second.

The time needed for that computer to exhaust the \(2^{256}\) possible bit patterns:

\(3.67\times 10^{55}\) years.
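A quick back-of-the-envelope check of that figure, using the numbers from the slide:

```python
# Rough check of the figures above:
patterns = 2**256                      # possible 256-bit keys
per_second = 10**14                    # checks per second (100 trillion)
seconds = patterns / per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e}")                  # ~3.67e+55 years
```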

Brute Force

You should be able to judge whether
brute-force exhaustive search is a viable option
or whether its runtime is simply unacceptable…

Greedy Algorithms

Take what you can get!

A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage.

Greedy Algorithms

Principles of greedy algorithms:

  • build up a solution by extending it iteratively
  • at each step, choose the option that currently appears best

In many problems, a greedy strategy does not produce an optimal solution.

Greedy Algorithms: Change making

A well-known problem solved with a greedy method is the change-making problem.

Greedy Algorithms: Change making

A change-making example: an amount of 1.11€ is paid with a 2€ coin.
What is the minimal number of coins to return 89ct?

\[ 89ct = 50ct + 20ct + 10ct + 5ct + 2ct + 2ct \]

Greedy Algorithms: Change making

Input: amount b

make_change(b)
  var count := 0;
  while count is smaller than b
    // make greedy choice
    choose highest valued coin s with count + s <= b
    count += s;

What if…

  • available coins: 5ct, 4ct, 1ct
  • amount \(b\): 8ct
  • greedy solution: 8ct = 5ct + 1ct + 1ct + 1ct
  • optimal solution: 8ct = 4ct + 4ct

The Euro currency is designed such that you can find an optimal solution greedily.
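A minimal Python sketch of the greedy change-making strategy, covering both the Euro example from the slide and the 5ct/4ct/1ct counterexample:

```python
def make_change(amount, coins):
    """Greedy change-making: always take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:      # greedy choice: highest-valued coin first
            result.append(coin)
            amount -= coin
    return result

euro = [200, 100, 50, 20, 10, 5, 2, 1]   # Euro coin values in cents
print(make_change(89, euro))             # [50, 20, 10, 5, 2, 2] -- optimal

# With a different coin system, the greedy choice can fail:
print(make_change(8, [5, 4, 1]))         # [5, 1, 1, 1] -- optimal is [4, 4]
```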

Backtracking

A systematic search technique to exhaustively
process a given solution space

Backtracking

Räcke (2019)

  • systematically search a labyrinth
  • go back whenever the mouse hits a dead end
  • going back is the backtracking part

Backtracking

Räcke (2019)

Backtracking

procedure backtrack(P, c) is
    if reject(P, c) then return NULL
    if accept(P, c) then return c
    s ← first(P, c)
    while s ≠ NULL do
        r ← backtrack(P, s)
        if r ≠ NULL then return r
        s ← next(P, s)
    return NULL

A candidate \(c\) could be a path for the mouse.

Backtracking

Observations:

  • backtracking only terminates if the solution space is finite
  • and only if no candidate is tested more than once
  • complexity depends on the size of the solution space
  • often exponential ( \(O(2^n)\) ) or worse

Backtracking

Example application from textbooks: Eight queens puzzle
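A compact backtracking sketch for the eight queens puzzle: place one queen per row, reject placements that attack an earlier queen (same column or diagonal), and backtrack when no column remains. The well-known count of solutions for \(n=8\) is 92:

```python
def queens(n):
    """Count solutions to the n-queens puzzle by backtracking."""
    count = 0

    def backtrack(cols, diag1, diag2, row=0):
        nonlocal count
        if row == n:                       # accept: all rows filled
            count += 1
            return
        for col in range(n):
            # reject: column or diagonal already attacked -- prune
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            backtrack(cols | {col}, diag1 | {row - col},
                      diag2 | {row + col}, row + 1)

    backtrack(set(), set(), set())
    return count

print(queens(8))  # 92
```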

Backtracking: Travelling Salesman

An application with more practical relevance:
the Travelling Salesman Problem
(although backtracking isn’t the best choice here)

Räcke (2019)

Backtracking: Travelling Salesman

The problem:

We have \(n\) cities.

And a salesperson who wants to make the shortest round-trip visiting each city exactly once and returns to start.

Räcke (2019)

Backtracking: Travelling Salesman

Input: n cities, partial roundtrip trip
TSP(trip)
  if (trip visits each city)
    extend trip with route to start;
    return trip and cost;
  else
    foreach (unvisited city c)
      trip’ = trip extended with c
      TSP(trip’);
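The pseudocode above returns every completed trip; in practice we want to keep only the cheapest one. A Python sketch along the same lines, tracking the best roundtrip found so far (the distance matrix is an illustrative example, not from the slides):

```python
def tsp(dist):
    """Backtracking TSP: extend partial trips, keep the cheapest roundtrip."""
    n = len(dist)
    best = [float("inf"), None]            # [best cost, best roundtrip]

    def extend(trip, cost):
        if len(trip) == n:                 # trip visits each city
            total = cost + dist[trip[-1]][0]  # extend with route to start
            if total < best[0]:
                best[0], best[1] = total, trip + [0]
            return
        for c in range(n):
            if c not in trip:              # each unvisited city c
                extend(trip + [c], cost + dist[trip[-1]][c])

    extend([0], 0)                         # fixed start at city 0
    return best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(tsp(dist))  # [23, [0, 1, 3, 2, 0]]
```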

Backtracking: Travelling Salesman

With \(n\) cities and a fixed start and end,
there are up to \((n-1)!\) possible roundtrips.

Here \(n=5\), so
\(4! = 4 \cdot 3 \cdot 2 \cdot 1 = 24\)

The runtime complexity of this TSP algorithm is \(O((n-1)!)\).

Thanks

https://xkcd.com/399/

Acknowledgement

Acknowledgement:

This script is based on a script by Räcke (2019). Additional inspiration is drawn from Tantau (2016).

References

Räcke, Harald. 2019. “Lecture Notes to the Lecture Algorithmen und Datenstrukturen.” TUM. https://wwwalbers.in.tum.de/lehre/2019SS/ad/index.html.de.
Tantau, Till. 2016. Einführung in Die Informatik. Online. https://www.tcs.uni-luebeck.de/de/mitarbeiter/tantau/lehre/lectures/EinfuehrungindieInformatik-2016.pdf.