Big O


🔗 a linked post to samwho.dev »

Big O notation is a way of describing the performance of a function without using time. Rather than timing a function from start to finish, big O describes how its running time grows as the input size increases. It's used to understand how a program will perform across a range of inputs.
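
As a quick illustration (my own sketch, not taken from the linked article), you can count a function's steps instead of timing it and watch the count scale with the input:

```python
def count_comparisons(items, target):
    """Linear search, instrumented to count comparisons instead of time."""
    steps = 0
    for item in items:
        steps += 1              # one comparison per element
        if item == target:
            break
    return steps

# Worst case (target absent): the count grows in lockstep with input size.
print(count_comparisons(range(10), -1))    # 10
print(count_comparisons(range(1000), -1))  # 1000
```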

In this post I'm going to cover four frequently used categories of big O notation: constant, logarithmic, linear, and quadratic. Don't worry if these words mean nothing to you right now. I'm going to talk about them in detail, as well as visualise them, throughout this post.
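
To make those four categories concrete, here's a minimal Python sketch (my examples, not the article's) with one function per category:

```python
import bisect

def first(items):
    """O(1), constant: one step regardless of input size."""
    return items[0]

def contains_sorted(items, target):
    """O(log n), logarithmic: binary search halves the range each step."""
    i = bisect.bisect_left(items, target)
    return i < len(items) and items[i] == target

def total(items):
    """O(n), linear: touches each element exactly once."""
    s = 0
    for x in items:
        s += x
    return s

def has_duplicate(items):
    """O(n^2), quadratic: compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```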

I have a minor in computer science, and I remember sitting through many explanations of the importance of Big O notation, yet it hadn't really mattered much in my career until recently.

If you have heard of Big O but aren’t clear on how it works, give this post a shot. It contains a lot of great visualizations to help drive the point home.

Continue to the full article