What’s up with big O?

Jennifer Yoo
2 min read · Jan 5, 2021

Big O Notation is a mathematical way of describing how an algorithm's running time grows as its input size grows. In other words, it classifies algorithms by how they respond to increases in input size, giving us a common language for comparing the performance of one piece of code to another. When used to describe running time, it is commonly referred to as Time Complexity.

Big O Notation is a common topic in technical interviews. Two pieces of code that do the same thing can differ in speed, memory usage, readability, and sometimes brevity. The ideal code performs faster, takes up less memory, and is easy to decipher.

Big O Notation can be grouped into a handful of common complexity classes. Below is a list from most to least efficient.

O(1)

O(1), or constant time, is the ideal notation. It covers most single operations, such as pushing an element onto the end of an array or evaluating a basic arithmetic expression.
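For instance, returning the first element of an array takes the same amount of work whether the array holds ten items or ten million. Here is a minimal sketch; the function name is just for illustration:

```typescript
// O(1): constant time, regardless of how many elements the array holds.
function firstElement(arr: number[]): number | undefined {
  return arr[0]; // a single index lookup, no iteration
}

// Pushing onto the end of an array is also (amortized) constant time.
const nums: number[] = [1, 2, 3];
nums.push(4);
```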

O(log n)

Logarithmic growth is the inverse of exponential growth. For an input of size n, O(log n) roughly measures how many times you can divide n by 2 before the result is less than or equal to one. Binary search is the classic example, and efficient sorting algorithms such as merge sort build on this idea to reach O(n log n). Some recursive functions that halve their input on each call also have O(log n) time complexity.
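Binary search illustrates this well: each comparison discards half of the remaining elements. Below is a minimal sketch, assuming the input array is already sorted:

```typescript
// O(log n): each pass halves the search range of a sorted array.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;  // found it
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                     // discard the right half
  }
  return -1; // not found
}
```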

O(n)

O(n) describes linear growth. A single loop over the input is the typical case: each element is visited once, so the time it takes to complete the function grows in a one-to-one relationship with the size of the data set.
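For example, summing an array requires visiting every element exactly once, so doubling the input doubles the work. A minimal sketch:

```typescript
// O(n): one pass over the array, so the work grows linearly with its length.
function sum(arr: number[]): number {
  let total = 0;
  for (const value of arr) {
    total += value; // one constant-time operation per element
  }
  return total;
}
```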

O(n²)

O(n²) is quadratic growth. The best example is a nested loop: for each of the n iterations of the outer loop, the inner loop runs another n times, for roughly n² operations in total. O(n²) should be avoided if possible, since on large inputs a nested loop can become slow enough to freeze your browser.
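A common quadratic pattern is comparing every element against every other element, such as checking for a duplicate pair. A minimal sketch:

```typescript
// O(n²): the inner loop runs up to n times for each of the n outer iterations.
function hasDuplicate(arr: number[]): boolean {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true; // found a matching pair
    }
  }
  return false;
}
```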

Developers aim to write code that is efficient and performant. Still, every once in a while we all write a nested loop simply because it works. Keeping time complexity in mind helps developers think about the efficiency of their code.
