Just How Fast Is Your Algorithm?
Methods of analysing the run-time complexity of algorithms.
When designing and using algorithms, a common question is which algorithm is more efficient at accomplishing a certain task.
There are multiple strategies for determining efficiency, and in this essay we will look at a couple of approaches.
Empirical measurements
Perhaps the simplest way of determining the efficiency of an algorithm is by measuring how much time an implementation takes to run for a given input size and then comparing the result to that of other algorithms.
Most programming languages have some sort of timing function that you can use to measure how long it takes to run a function. Python notebooks have the built-in %%timeit magic command, which you can use to time the execution of a code cell.
For example, we can time the execution of this implementation of insertion sort on an input of 500 random elements.
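Here, to_sort is assumed to be prepared in a separate cell, so that generating the data is not included in the timing; a minimal sketch of such a setup might be:

import random

# Assumed setup cell: 500 random integers to sort (exact range and seed are arbitrary)
random.seed(0)
to_sort = [random.randint(0, 1000) for _ in range(500)]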
%%timeit
def insertion_sort(array):
    # Walk through the array, swapping any earlier element that is larger than array[i]
    for i in range(len(array)):
        for j in range(len(array[:i])):
            if array[j] > array[i]:
                array[j], array[i] = array[i], array[j]
    return array
insertion_sort(to_sort)
Outputs: 100 loops, best of 3: 7.57 ms per loop.
We see that the best average time per loop is 7.57 ms over 100 loops. Averaging is useful as it smooths out variation between runs caused by whatever else the machine happens to be doing at the time.
While this is dead-simple to implement, empirical validation has several drawbacks.
Firstly, it can be difficult to determine how fast an algorithm is when the dataset becomes very large, too large to run on a laptop. This is problematic if you're trying to figure out whether your algorithm is fast enough to be run on all your data on a supercomputer.
Just because your algorithm runs quickly on small datasets doesn't guarantee that it will scale well to large ones.
Secondly, it's difficult to compare run-times of different algorithms found by different people as the results are dependent on machine-specific hardware, programming languages, and even compiler versions for the same programming language.
Since we would really like a robust, universal way of determining the run-time of an algorithm, and to be able to tell whether an algorithm is fast enough before actually running it, empirical validation is seldom used in practice.
Instead, we use analytical reasoning, where we look at what an algorithm does and how it scales to large datasets by counting the number of operations performed, rather than directly measuring the time it takes to run.
This solves the reproducibility problem of empirical measurements, and it also provides a way of determining whether an algorithm will be too expensive to run, even on a supercomputer, before running it.
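To make the idea of counting operations concrete, here is a small sketch that counts comparisons in the insertion sort from earlier instead of timing it; the counting wrapper is an illustration, not part of the original implementation:

# A sketch of operation counting: tally comparisons instead of measuring wall-clock time
def insertion_sort_counted(array):
    comparisons = 0
    for i in range(len(array)):
        for j in range(i):
            comparisons += 1
            if array[j] > array[i]:
                array[j], array[i] = array[i], array[j]
    return array, comparisons

# The comparison count depends only on the input size, not on the hardware
for n in (100, 200, 400):
    _, count = insertion_sort_counted(list(range(n, 0, -1)))
    print(n, count)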
Analytical reasoning
Another method of assessing the efficiency of an algorithm is to analyse it analytically, though this comes at the cost of being more difficult than simply measuring how long some code takes to run.
The most common method of analytically determining the efficiency of an algorithm is to calculate its asymptotic complexity with big-O notation.
Big-O
Big-O is concerned with the asymptotic time complexity of an algorithm, or in other words the upper bound for how many steps it will take in comparison to how much data you're processing. Formally, big-O is defined as:
$$f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0 : \forall n \geq n_0,\ f(n) \leq c \cdot g(n)$$
If the above seems like mathematical garble, you might want to read the piece on formal logic (coming soon) which explains what all those symbols mean, and more importantly why formal logic is useful.
In any case, what the above equation says is that there exist two constants, $c$ and $n_0$, such that for every $n$ larger than $n_0$, $f(n)$ is at most $c \cdot g(n)$.
Or in simpler terms: if, after a certain point, the number of steps the algorithm takes grows no faster than a constant multiple of another function $g(n)$, then the algorithm is $O(g(n))$.
As an example, let's suppose you have an algorithm that runs in $3n^2 + 5n + 2$ steps for an input of size $n$. For large $n$, the $3n^2$ term dominates, so the algorithm is $O(n^2)$.
Mathematically, this can be written as:
$$3n^2 + 5n + 2 = O(n^2)$$
Or more generally as:
$$an^2 + bn + c = O(n^2) \quad \text{for any constants } a > 0,\ b,\ c$$
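To check this against the formal definition, using the example above, one possible choice of constants is $c = 10$ and $n_0 = 1$:
$$3n^2 + 5n + 2 \leq 3n^2 + 5n^2 + 2n^2 = 10n^2 \quad \text{for all } n \geq 1$$
so $3n^2 + 5n + 2 = O(n^2)$ by the definition above.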
Below are some graphs of common running time complexities.
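A minimal matplotlib sketch for producing such graphs might look like the following (the particular complexity classes plotted are an assumed selection):

import numpy as np
import matplotlib.pyplot as plt

n = np.linspace(1, 20, 200)
# A few common complexity classes, plotted over small n so they fit on one axis
curves = [
    ("O(1)", np.ones_like(n)),
    ("O(log n)", np.log2(n)),
    ("O(n)", n),
    ("O(n log n)", n * np.log2(n)),
    ("O(n^2)", n ** 2),
    ("O(2^n)", 2 ** n),
]
for label, values in curves:
    plt.plot(n, values, label=label)
plt.ylim(0, 120)  # keep the slower-growing curves visible next to 2^n
plt.xlabel("input size n")
plt.ylabel("number of steps")
plt.legend()
plt.show()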
An efficient algorithm is one that has a low running time for large input sizes $n$.
Because big-O is unconcerned with small inputs and constant factors, it is ideal for analytically determining the efficiency of algorithms. With that said, there are other methods of analysing efficiency.
In practice, you generally want to avoid algorithms that run in exponential time, such as $O(2^n)$, since they become infeasible for all but the smallest inputs.
Furthermore, searching problems tend to be $O(\log n)$ when the data is kept sorted, as in binary search.
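As a sketch of that logarithmic behaviour, a standard binary search halves the remaining search range on every iteration (this example is illustrative, not taken from the original text):

def binary_search(sorted_array, target):
    # Each iteration halves the remaining search range, giving O(log n) comparisons
    lo, hi = 0, len(sorted_array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_array[mid] == target:
            return mid
        elif sorted_array[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

# Example: search a sorted list of 500 even numbers
print(binary_search(list(range(0, 1000, 2)), 512))  # prints 256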
Big-O of matrix multiplication
To see Big-O in action, let's try to compute the Big-O asymptotic runtime of a naive matrix multiplication algorithm.
We assume the hardware on which we do the matrix multiplication has an integer multiplication instruction in its ALU, which lets us treat a single multiplication as one constant-time operation.
Assume two matrices $A$, of size $n \times m$, and $B$, of size $m \times p$, whose product is the $n \times p$ matrix $C = AB$.
Furthermore, due to the way matrix multiplication is defined, every element $C_{ij}$ is the dot product of row $i$ of $A$ and column $j$ of $B$:
$$C_{ij} = \sum_{k=1}^{m} A_{ik} B_{kj}$$
Since each such dot product involves $m$ multiplications and $m - 1$ additions, computing a single element takes $O(m)$ operations.
All the elements in the matrix $C$ have to be computed, and there are $n \cdot p$ of them.
Here, we see that we have to compute $n \cdot p$ elements at a cost of $O(m)$ each, giving a total running time of $O(n \cdot m \cdot p)$.
When analyzing matrices, it's common practice to assume there are the same number of rows and columns. Using this assumption here, we find that the running time for multiplying two $n \times n$ matrices with the naive algorithm is $O(n^3)$.
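A minimal sketch of the naive algorithm makes the three nested loops, and hence the $O(n^3)$ behaviour for square matrices, explicit (this implementation is illustrative rather than taken from the original):

def matmul(A, B):
    # A is n x m, B is m x p; the result C is n x p
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):          # n iterations
        for j in range(p):      # p iterations
            for k in range(m):  # m multiplications per element
                C[i][j] += A[i][k] * B[k][j]
    return C

# Two small square matrices: the triple loop performs n * n * n multiplications
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]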
Alternatives to big-O
Big-O is not the only way of analytically determining the complexity of an algorithm.
Another method is by using the tilde ($\sim$) notation.
Using tilde notation, we write $f(n) \sim g(n)$ when $f(n) / g(n)$ tends to $1$ as $n$ grows towards infinity.
For example, suppose we have a function that takes $3n^2 + 5n + 2$ steps to run. In big-O notation this is simply $O(n^2)$, whereas in tilde notation it is $\sim 3n^2$.
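The tilde claim can be checked directly from the definition (using the same assumed example as above):
$$\lim_{n \to \infty} \frac{3n^2 + 5n + 2}{3n^2} = \lim_{n \to \infty} \left(1 + \frac{5}{3n} + \frac{2}{3n^2}\right) = 1$$
so $3n^2 + 5n + 2 \sim 3n^2$, while big-O discards the leading factor of $3$ entirely.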
The main difference between tilde notation and big-O notation is that tilde notation preserves the coefficient of the most significant term.
These are just two asymptotic notation schemes; there exist many more. An overview can be found on OEIS's wiki.
Conclusion
We have discussed different methods of determining the efficiency of an algorithm, and have seen why analytical approaches, such as the popular big-O notation, are preferable to empirical measurements.
Furthermore, we have looked at an alternative to big-O, namely tilde notation, which preserves the coefficient of the leading term.