Quick Sort

Sorting algorithm that is prevalent in practice. Although its average running time matches merge sort's, quick sort typically executes faster because it sorts in place and therefore needs far less additional memory. Additionally, it is far easier to implement than merge sort.

One important consideration is the implementation of choose_pivot. Choosing a pivot element equal to the median value in the array is ideal because it splits the work evenly. However, the overhead associated with computing the median outweighs the benefit in practice. There are mathematical proofs demonstrating that choosing the pivot uniformly at random yields $O(n \log_2 n)$ running time on average.

Asymptotic Complexity

$O(n \log_2 n)$ on average
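
The average-case claim can be sketched with the standard indicator-variable argument (a general textbook argument, not one specific to this repository): writing $z_1 < z_2 < \dots < z_n$ for the elements in sorted order, $z_i$ and $z_j$ are compared at most once, and only if one of them is the first pivot drawn from $\{z_i, \dots, z_j\}$, which happens with probability $2/(j - i + 1)$ under uniformly random pivots. By linearity of expectation,

$$
E[\text{comparisons}] \;=\; \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{2}{j - i + 1}
\;\le\; \sum_{i=1}^{n-1} \sum_{k=2}^{n} \frac{2}{k}
\;\le\; 2n \ln n \;=\; O(n \log_2 n).
$$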

Pseudo Code

sort:
    // side effect: rearranges the values in A
    A = input array
    n = length of A

    if n <= 1:
        return

    pivot = choose_pivot(n)
    swap A[0] and A[pivot]

    pivot_index = partition(A)

    // The array is never copied, all swaps happen in place.
    // Recursive calls pass a reference to a portion of the array.
    sort(A[0 thru pivot_index - 1])
    sort(A[pivot_index + 1 thru n - 1])

partition:
    // side effects:
    // - All items in A that are less than the pivot value end up before A[return value]
    // - All items in A that are greater than the pivot value end up after A[return value]
    // - The pivot value (originally at A[0]) is moved to A[return value]
    // Assumption: A[0] is the value to partition around
    returns: new position of the pivot value after the partition
    A = input array
    n = number of elements in A

    index = 1
    for i = 1 to n - 1:
        if A[i] < A[0]:
            swap A[i] and A[index]
            increment index

    // move the pivot into its final position (no-op when index is still 1)
    swap A[0] and A[index - 1]

    return index - 1
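
For example (illustrative values only), calling partition on A = [3, 8, 2, 5, 1, 4] partitions around the pivot value 3 at A[0]: the loop swaps 2 and 1 forward as it encounters them, leaving A = [3, 2, 1, 5, 8, 4] with index = 3; the final swap of A[0] and A[index - 1] produces A = [1, 2, 3, 5, 8, 4] and the function returns 2, the new position of the pivot.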

choose_pivot:
    returns: index of the element to partition around, chosen at random
    n = number of elements in the partition

    return uniformly random integer between 0 and n - 1 inclusive
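
A minimal C sketch of the pseudo code above. The names quick_sort, partition, and choose_pivot, and the use of rand for pivot selection, are my own choices for illustration and are not taken from the repository's sources:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void swap(int *a, int *b)
    {
        int tmp = *a;
        *a = *b;
        *b = tmp;
    }

    /* Returns an index in [0, n - 1], chosen uniformly at random.
     * rand() % n is slightly biased for large n, but is fine for a sketch. */
    static size_t choose_pivot(size_t n)
    {
        return (size_t)rand() % n;
    }

    /* Partitions A[0 .. n - 1] around the value at A[0].
     * Returns the final position of that value; everything before it is
     * smaller, everything after it is greater or equal. */
    static size_t partition(int *A, size_t n)
    {
        size_t index = 1;

        for (size_t i = 1; i < n; i++) {
            if (A[i] < A[0]) {
                swap(&A[i], &A[index]);
                index++;
            }
        }

        swap(&A[0], &A[index - 1]);
        return index - 1;
    }

    /* Sorts A[0 .. n - 1] in place. */
    static void quick_sort(int *A, size_t n)
    {
        if (n <= 1)
            return;

        size_t pivot = choose_pivot(n);
        swap(&A[0], &A[pivot]);

        size_t pivot_index = partition(A, n);

        /* The array is never copied; the recursive calls operate on
         * sub-ranges of the same buffer. */
        quick_sort(A, pivot_index);
        quick_sort(A + pivot_index + 1, n - pivot_index - 1);
    }

    int main(void)
    {
        int A[] = { 3, 8, 2, 5, 1, 4 };
        size_t n = sizeof A / sizeof A[0];

        srand((unsigned)time(NULL));
        quick_sort(A, n);

        for (size_t i = 0; i < n; i++)
            printf("%d ", A[i]);
        printf("\n");

        return 0;
    }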

Source Code

Full Repo

Relevant Files:

Click here for build and run instructions