Big O notation can seem like an intimidating way of saying runtime analysis. To anyone who has ever programmed, the idea of runtime analysis is not all that surprising; there's an intuitive component to it. When you write code, you want to understand how complicated it's going to be, or how frustratingly slow it might run. Big O formalizes that intuition: it describes how a piece of code's running time grows with the size of its input, conventionally in the worst case. There are a lot of quirks that crop up when assigning Big O notation to a given piece of code, but I wanted to share some tips that have helped me learn the concept.

### Tip 1: Understand the input

Always start by thinking about the nature of the input. As a self-taught, intuitive programmer, I often don't think about the nature or the complexity of an input; it can seem like a given. However, the input determines the runtime complexity of the code.

When analyzing or writing code, consider what the input is and what operations you perform on it. Is the input a list or a dictionary? Does the input feed into a for loop? If there are nested loops, do both iterate over the same input? Think critically about the nature of the input.
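As a minimal illustration of why the input's data structure matters, here is a hypothetical example: the exact same membership check has different complexity depending on whether it runs against a list or a set.

```python
def contains_list(items, target):
    # 'in' on a list scans elements one by one: O(N) in the worst case
    return target in items

def contains_set(items, target):
    # 'in' on a set uses hashing: O(1) on average
    return target in items

nums_list = [1, 2, 3, 4, 5]
nums_set = set(nums_list)

print(contains_list(nums_list, 4))  # True, found via an O(N) scan
print(contains_set(nums_set, 4))    # True, found via an O(1) hash lookup
```

The bodies are identical; only the type of the input changes, and with it the Big O of the operation.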

### Tip 2: Step through the code

A lot of times, I feel the temptation to look at a piece of code as a unified block and just say "This is what this does". This doesn't translate well to Big O analysis, where there can be subtle wrinkles that change the complexity of an operation.

Instead of thinking about code as a block, I now try to regard it as a set of operations on the input. To do this at a tactical level, I suggest stepping through the code line by line, much as you would with a debugger. On each line, clarify what the input is and what the operations on that line do to yield the output. In doing so, you'll see which operations dominate the complexity and which can be disregarded, and you'll avoid overlooking hidden complexity by moving too fast.
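Here's a sketch of what that line-by-line annotation can look like, assuming the input is a list of $N$ numbers (the function itself is a made-up example):

```python
def max_pair_sum(nums):
    best = float("-inf")                      # O(1): constant-time assignment
    for i in range(len(nums)):                # O(N): outer loop runs N times
        for j in range(i + 1, len(nums)):     # up to O(N) per outer iteration
            best = max(best, nums[i] + nums[j])  # O(1) work on each pass
    return best                               # total: O(N) * O(N) * O(1) = O(N^2)

print(max_pair_sum([1, 2, 3]))  # 5, from the pair (2, 3)
```

Annotating each line makes it obvious that the nested loop dominates: everything else is constant-time noise that Big O disregards.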

This is easier said than done for the self-taught programmer, but it's worth forcing yourself to start with pieces of code that are too easy, so that you'll be ready to do it on tougher pieces of code.

### Tip 3: Identify patterns

Many times, common design patterns yield specific complexities. Make an effort to identify these patterns and how they are expressed in problems. Some of the ones I've identified are:

- Recursion (e.g. Fibonacci-style problems), which has a complexity of $O(b^d)$, where $b$ is the branching factor (the number of recursive calls each invocation makes) and $d$ is the depth of the recursion (how many levels of calls are needed to reach a base case for the given input)
- Search in a balanced binary search tree, which has a complexity of $O(\log N)$
- Comparison-based sorting algorithms (e.g. merge sort), which have a complexity of $O(N \log N)$
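The recursion pattern is easiest to see in the classic naive Fibonacci, sketched below:

```python
def fib(n):
    if n <= 1:          # base case: the depth bottoms out here
        return n
    # Each call branches into b = 2 further calls, and the chain of
    # calls can run d = n levels deep, so the work is roughly O(2^n).
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Memoizing the function (for example with `functools.lru_cache`) collapses the call tree so each subproblem is computed only once, dropping the complexity to $O(n)$.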

On a similar note, learn the common rules for working with logs and exponents, which come up constantly in Big O analysis.
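A few standard identities do most of the work here:

- $\log(ab) = \log a + \log b$
- $\log(a^k) = k \log a$
- $\log_b N = \frac{\log_c N}{\log_c b}$, so changing the base only multiplies by a constant, which is why we can write $O(\log N)$ without specifying a base

For example, the last rule is what lets us say a binary search is $O(\log N)$ rather than $O(\log_2 N)$: the factor separating the two is a constant, and Big O drops constants.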