Basics of Asymptotic Complexity - Big O, Theta and Omega
Asymptotic complexity can seem mystifying at first, but it’s a great tool for comparing algorithms and finding where to improve your code next. It’s good to be able to derive the complexity yourself, but knowing enough to understand the general cases is a start.
Asymptotic Complexity
Asymptotic complexity is used to describe how the time and storage requirements of an algorithm, or an operation on a data structure, increase with the size of the input (or the size of the data structure). It’s often used to compare two or more algorithms without running real-world data through them, which is especially useful when doing so is time consuming, or when the code hasn’t been written yet.
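To make that concrete, here’s a minimal sketch in Python (the functions themselves are made up for illustration): one operation whose running time grows linearly with the input, and one whose time grows with the square of it.

```python
def contains(items, target):
    # Looks at each element at most once, so time grows
    # linearly with the input size: O(n).
    for item in items:
        if item == target:
            return True
    return False


def has_duplicate(items):
    # Compares every pair of elements, so time grows with
    # the square of the input size: O(n²).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input roughly doubles the work in the first function, but roughly quadruples it in the second; asymptotic complexity captures that relationship without timing a single run.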
It’s possible to derive the asymptotic complexity of any code you write, but if you’re (correctly) following a tried and tested algorithm then you can easily look it up. Actually deriving complexity is outside the scope of this post; the intention right now is to help you understand enough to decode the information from somewhere like BigOCheatSheet.
For those unaware, an asymptote is a mathematical concept where one value gets incrementally closer to another without ever actually equalling it. Take, for example, the sequence of numbers generated by repeated halving: if you start with 1 and halve it to 0.5, then to 0.25, 0.125, etc. you get incredibly close to 0. But the value you compute never actually reaches 0; instead, it is asymptotically close to it. Asymptotic complexity is, in the same spirit, an estimation of the actual complexity of some code.
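You can watch that halving sequence approach 0 with a few lines of Python (purely illustrative):

```python
# Repeatedly halve 1: the values get ever closer to 0
# without (mathematically) ever reaching it.
value = 1.0
for _ in range(10):
    value /= 2
    print(value)  # 0.5, 0.25, 0.125, ..., 0.0009765625
```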
The actual complexity of an algorithm could be written out as a precise formula. However, because we’re concerned with asymptotic complexity rather than actual complexity, we only have to pay attention to the most dominant term in the formula. This can be confusing when first deriving complexity yourself, because parts of the formula seem to just disappear with no effect on the others. That’s different from ordinary maths, where removing a term means performing the same operation on all the others.
If you are ever asked to derive a complexity for yourself, and end up with something like n log n + n², then the asymptotic complexity is actually n², because that’s the dominant term. The n² is dominant because, as n gets larger, squaring it produces much bigger numbers than n log n. For a quick idea of how dominant common terms are, refer to the graph at the top of BigOCheatSheet; the steeper the line, the more dominant the term.
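If you’d rather see that dominance numerically than on a graph, here’s a quick sketch (using log base 2, though the base only changes things by a constant factor):

```python
import math

# Compare the two terms of n log n + n² as n grows.
for n in (10, 100, 1000, 10000):
    print(f"n={n}: n log n ≈ {round(n * math.log2(n))}, n² = {n ** 2}")

# n=10:    n log n ≈ 33,      n² = 100
# n=100:   n log n ≈ 664,     n² = 10000
# n=1000:  n log n ≈ 9966,    n² = 1000000
# n=10000: n log n ≈ 132877,  n² = 100000000
```

By n = 10,000 the n² term is already several hundred times larger, which is why the n log n term can safely be dropped.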
Big O, Omega (Ω) and Theta (Θ)
Once you know the above, Big O and the related omega and theta are pretty simple; they just tell you in which circumstances the estimation applies.
Big O is the part of an algorithm’s complexity that describes its worst case performance. People will often just say “Big O” when they mean asymptotic complexity more widely; that saves syllables, but it’s also because the worst-case limit of an algorithm is generally the most useful figure. If somebody just refers to “complexity”, then it’s fairly safe to assume they mean Big O.
Omega is used when referring to the best case performance of an algorithm, which is useful when you’re certain about the data your algorithm will encounter. Theta is the special case where the O and omega bounds match (to within constant factors), so the same estimate applies whatever the input.
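As a sketch of the distinction, linear search is the textbook example of an algorithm whose best and worst cases differ, while summing a list is one where they coincide:

```python
def linear_search(items, target):
    # Best case: target is the first element -> one comparison, Ω(1).
    # Worst case: target is last or missing -> n comparisons, O(n).
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1


def total(items):
    # Touches every element no matter what the data looks like,
    # so the best and worst cases coincide: Θ(n).
    result = 0
    for item in items:
        result += item
    return result
```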
Not So Complex
The knowledge required to compare two algorithms, given their complexities, isn’t that complex in itself. There’s a lot of value in understanding just the basics of asymptotic complexity; it’s one of those concepts that lets you communicate and debate an idea better, all without actually writing any code!
If you’re considering building a system using your own data structures and algorithms, be sure to pick ones that best suit your complexity requirements. If you start coding with these in mind, then it should also encourage you to keep the code simple enough to analyse later, which will come more easily with this background.