A self-taught guide to Big O 👨‍💻
Being a self-taught developer can be rough sometimes: you don't have the formal CS background, and you're getting used to a whole new vocabulary and way of using language. It's like learning a couple of new foreign languages all at once (the only upside is that at least our tongues aren't getting twisted and bitten while we're at it).
If you are a fan of knowing the basis and origin of things before fully understanding and using them comfortably, then you are in the right place. I had always shied away when the term Big O got mentioned, but as fate would have it, I came across it again while completing a JavaScript interview course by Andrei Neagoie.
There was nowhere to dash to this time; trust me, I tried. So here we are: "Big Oooo".
The main purpose of Big O analysis is to help us write good, scalable code. Good code is measured mainly on how readable, scalable, and uncomplicated it is, and on how well it does what it is intended to do.
What is the Big O notation?
Big O notation is mainly concerned with the scalability of code. When we talk about scalability, we focus on how much a function or algorithm slows down as the amount of input grows larger.
Let's say you have an application with 100 users, and you use a loop to go through your list of users to get each of their names. That function will get the job done in a matter of milliseconds. But the bigger the number of users, the slower our runtime gets (the exact figures also depend on the machine being used: processor speed, RAM, and available space).
Big O notation (also called "asymptotic growth" notation) is a relative representation of the complexity of an algorithm: it describes how an algorithm's cost scales as the input size grows.
Big O complexity is often visualized with a graph that plots the number of operations against the input size for each runtime class.
How does Big O notation relate to algorithms?
It is difficult to determine the exact runtime of an algorithm, since it depends on the speed of the processor and other machine factors. Instead, we use Big O notation to talk about how quickly the runtime grows as the input size grows. In Big O notation, the size of the input is represented as n.
So we can say things like the runtime grows "on the order of the size of the input" ( O(n) ) or "on the order of the square of the size of the input" ( O(n²) ). Our algorithm may have steps that seem expensive when n is small but are eventually eclipsed by other steps as n gets larger.
For Big O Notation analysis, we care more about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as n gets very large.
Rules To Follow For Big O Analysis
- Worst Case
- Remove Constant
- Different Terms For Input
- Drop Non-Dominants
Worst-Case
Worst-case analysis gives the maximum number of operations, assuming the input is in the worst possible state, and Big O notation expresses that maximum. This means that when working out the Big O of an operation or program, you have to imagine the worst-case scenario. For example, when searching a list for a value, in the worst case we need to traverse all n of the elements, so the search has complexity O(n).
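To make the worst case concrete, here is a small sketch (the `countComparisons` helper is hypothetical, written only for illustration) that counts how many comparisons a linear search performs:

```javascript
// Counts comparisons so the worst case is visible:
// searching for a missing value forces all n comparisons.
function countComparisons(elements, value) {
  let comparisons = 0;
  for (const element of elements) {
    comparisons++;
    if (element === value) break; // found it, stop early
  }
  return comparisons;
}

countComparisons([1, 2, 3], 1); // best case: 1 comparison
countComparisons([1, 2, 3], 9); // worst case: 3 comparisons, and this drives the Big O
```

The best case (1 step) is irrelevant to the Big O; only the worst case (n steps) counts.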
Remove Constant
With Big O, we only care about the growth rate. How does this algorithm scale as the input size n gets very large? Well, multiplying by a constant factor doesn't make much of a difference for large numbers, so it's left out. For example,
O(3n) = O(n)
O(12 log n) = O(log n)
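As a sketch (the function name is made up for illustration), here is code that passes over its input three times: roughly 3n steps, but still O(n) once the constant is dropped.

```javascript
// Three separate passes over the same input: about 3n steps total.
// The constant 3 is dropped, so this is still O(n).
function sumMinMax(numbers) {
  let sum = 0;
  for (const n of numbers) sum += n; // pass 1
  const min = Math.min(...numbers);  // pass 2
  const max = Math.max(...numbers);  // pass 3
  return { sum, min, max };
}
```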
Different Terms For Input (use differing variables for differing inputs)
You may be wondering: what if our algorithm depends on the sizes of two different inputs? In that case, use a different variable for each input, like
O(s + t) or O(n*m)
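A minimal sketch of this rule (both helper names are hypothetical): sequential loops over two different inputs add their sizes, while nested loops multiply them.

```javascript
// listA has s items, listB has t items.
// One loop after the other: O(s + t).
function countBoth(listA, listB) {
  let steps = 0;
  for (const a of listA) steps++;
  for (const b of listB) steps++;
  return steps;
}

// Nested loops over two different inputs: O(s * t).
function countPairs(listA, listB) {
  let steps = 0;
  for (const a of listA) {
    for (const b of listB) steps++;
  }
  return steps;
}
```

Neither of these is O(n), because there is no single n: each input gets its own variable.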
Drop the non-dominant terms
Suppose a function contains a nested loop followed by a single loop, giving a runtime of O(n² + n). Since Big O is not concerned with non-dominant terms, we drop the n (the quadratic term wins, since it grows faster than linear time). Throwing out non-dominant terms is the fourth rule to follow when analyzing the runtime of an algorithm. In the end, O(n² + n) becomes O(n²).
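The rule can be sketched like this (the function is hypothetical, written only to make the step counts visible):

```javascript
// A nested loop (n² steps) followed by a single loop (n steps):
// n² + n steps in total, but the n is non-dominant, so this is O(n²).
function pairsThenSingles(elements) {
  let steps = 0;
  for (const a of elements) {
    for (const b of elements) steps++; // n² steps
  }
  for (const e of elements) steps++;   // + n steps (dropped)
  return steps;
}

pairsThenSingles([1, 2, 3]); // 9 + 3 = 12 steps; Big O keeps only the n² part
```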
Common Runtime Complexities In Big O
1. O(1) - Constant Runtime
In this case, your algorithm runs in the same time regardless of the size of the input data set. Constant time: given an input of size n, it takes only a single step for the algorithm to accomplish the task. An example is returning the first element of the given data, as in the example below.
function returnFirst(elements) {
  return elements[0]; // a single step, no matter how many elements there are
}
The runtime is constant no matter the size of the input given.
2. O(n) - Linear Runtime
Linear runtime occurs when the runtime grows in proportion to the size of the input data set, where n is the size of that data set. Linear time: given an input of size n, the number of steps required is directly proportional to it (1 to 1). A good example of this is searching for a particular value in a data set using iteration, as in the example below.
function containsValue(elements, value) {
  // for...of iterates the values; for...in would give the indexes
  for (const element of elements) {
    if (element === value) return true;
  }
  return false;
}
We see that the time taken to loop through all elements in the array grows with an increase in the size of the array. But what if the element is found before it reaches the last element in the array? Does the runtime complexity change?
Remember that the Big O notation considers the worst-case scenario. In this instance, it's the case where the loops run through all elements in the array. So that is what determines the runtime complexity of the algorithm.
3. O(n²) - Quadratic Runtime
O(n²) denotes an algorithm whose runtime is directly proportional to the square of the size of the input data set. Quadratic time: given an input of size n, the number of steps it takes to accomplish a task is the square of n. An example of this is a nested iteration or loop to check if the data set contains duplicates, as in the example below.
function containsDuplicate(elements) {
  for (let i = 0; i < elements.length; i++) {
    for (let j = 0; j < elements.length; j++) {
      // skip comparing an element with itself, otherwise every
      // non-empty array would count as having a duplicate
      if (i !== j && elements[i] === elements[j]) return true;
    }
  }
  return false;
}
Deeper nested iterations will produce runtime complexities of O(n³), O(n⁴), etc.
4. O(log n) - Logarithmic runtime
In this case, the runtime grows very slowly as the input data set gets larger. Logarithmic time: given an input of size n, the amount of remaining work is cut down by some factor with each step. A common example of this is a search algorithm like binary search. The idea of binary search is not to work with the entire data set, but to reduce the amount of work done by half with each iteration. The number of operations required to arrive at the desired result will be log base 2 of the input size.
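A minimal binary search sketch (assuming the input array is already sorted):

```javascript
// Binary search on a sorted array: each comparison halves the
// remaining range, so at most about log2(n) comparisons -> O(log n).
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                     // discard the right half
  }
  return -1; // not found
}

binarySearch([1, 3, 5, 7, 9], 7); // finds the target in a few halvings
```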
For further information on this runtime complexity, you can check some of the resources at the end of the article.
5. O(n log n) - Linearithmic runtime
Here, the runtime of the algorithm depends on running a logarithmic operation n times. Most efficient comparison-based sorting algorithms, such as merge sort, have a runtime complexity of O(n log n).
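As a sketch of one such algorithm, here is a minimal merge sort: the list is split about log n times, and each level of splitting does O(n) merge work, giving O(n log n) overall.

```javascript
// Merge sort: log n levels of splitting, O(n) merge work per level.
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  // Merge two sorted halves in linear time.
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i)).concat(right.slice(j));
}
```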
6. O(2ⁿ) - Exponential runtime
This occurs in algorithms where each increase in the size of the data set doubles the runtime. For a small data set this might not look bad, but as the size of the data increases, the time taken to execute the algorithm grows rapidly. Exponential time: given an input of size n, the number of steps it takes to accomplish a task is a constant raised to the power n (a pretty large number). A common example of this is a naive recursive solution for finding Fibonacci numbers.
function fibonacci(num) {
  if (num <= 1) return 1; // base cases
  // two recursive calls per step: roughly 2ⁿ calls in total
  return fibonacci(num - 2) + fibonacci(num - 1);
}
7. O(n!) - Factorial runtime
In this case, the algorithm runs in factorial time. The factorial of a non-negative integer (n!) is the product of all positive integers less than or equal to n. This is a pretty terrible runtime.
Any algorithm that generates all permutations of a given data set is an example of O(n!).
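A minimal sketch of such an algorithm (the function name is made up for illustration), generating every permutation of a list:

```javascript
// Generating every permutation of n items produces n! results,
// so both the runtime and the output size are O(n!).
function permutations(elements) {
  if (elements.length <= 1) return [elements]; // one permutation of 0 or 1 items
  const result = [];
  for (let i = 0; i < elements.length; i++) {
    // fix elements[i] first, then permute the rest recursively
    const rest = elements.slice(0, i).concat(elements.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([elements[i], ...perm]);
    }
  }
  return result;
}

permutations([1, 2, 3]).length; // 3! = 6 permutations
```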
Difference between O(1) vs O(n) space complexities
Let's consider an algorithm for traversing a list. O(1) denotes constant space use: the algorithm allocates the same number of variables regardless of the list size. O(n) denotes linear space use: the algorithm's space use grows in proportion to the input size n. This happens if, say, the algorithm needs to allocate n pointers (or other variables) while traversing the list.
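A small sketch contrasting the two (both helper names are hypothetical):

```javascript
// O(1) space: a fixed number of variables, regardless of input size.
function sumList(numbers) {
  let total = 0; // one variable, no matter how long the list is
  for (const n of numbers) total += n;
  return total;
}

// O(n) space: allocates a new array that grows with the input.
function doubledList(numbers) {
  const doubled = []; // grows to n elements
  for (const n of numbers) doubled.push(n * 2);
  return doubled;
}
```

Note that both functions run in O(n) time; the difference here is purely in how much extra memory they allocate.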
Common Big O interview Questions:
- What is Big O notation?
- What is Worst Case?
- What the heck does it mean if an operation is O(log n)?
- Why do we use Big O notation to compare algorithms?
- What exactly would an O(n²) operation do?
- Explain the difference between O(1) vs O(n) space complexities
Big O helps us develop the skill to see time and space optimizations, as well as the wisdom to judge if those optimizations are worthwhile.
Hopefully, this post was helpful in breaking down Big O notation for you. Here are some additional resources you can check out (and that I checked out while writing this post) to learn more about Big O.