Big O Notation and Time Complexity - Easily Explained (2024)

The big O notation¹ is used to describe the complexity of algorithms.

On Google and YouTube, you can find numerous articles and videos explaining the big O notation. But to understand most of them (like this Wikipedia article), you should have studied mathematics as a preparation. ;-)

That's why, in this article, I will explain the big O notation (and the time and space complexity described with it) only using examples and diagrams – and entirely without mathematical formulas, proofs and symbols like θ, Ω, ω, ∈, ∀, ∃ and ε.

You can find all the source code from this article in this GitHub repository.

¹ also known as "Bachmann-Landau notation" or "asymptotic notation"

Contents

1 Types of Complexity

1.1 Computational Time Complexity

1.2 Space Complexity

2 Complexity Classes

2.1 O(1) – Constant Time

2.2 O(n) – Linear Time

2.3 O(n²) – Quadratic Time

2.4 O(log n) – Logarithmic Time

2.5 O(n log n) – Quasilinear Time

2.6 Big O Notation Order

2.7 Other Complexity Classes

3 Summary

Types of Complexity

Computational Time Complexity

Computational time complexity describes the change in the runtime of an algorithm, depending on the change in the input data's size.

In other words: "How much does an algorithm degrade when the amount of input data increases?"

Examples:

  • How much longer does it take to find an element within an unsorted array when the size of the array doubles? (Answer: twice as long)
  • How much longer does it take to find an element within a sorted array when the size of the array doubles? (Answer: one more step)
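The difference between those two answers can be reproduced by counting comparison steps. The following sketch (class and method names are my own, not from the article's repository) counts the worst-case steps of a linear search and a binary search as the array size doubles:

```java
import java.util.Arrays;

public class SearchStepsDemo {

    // Linear search: one comparison per element until the target is found.
    static int linearSearchSteps(int[] array, int target) {
        int steps = 0;
        for (int value : array) {
            steps++;
            if (value == target) break;
        }
        return steps;
    }

    // Binary search: halves the search range in each step.
    static int binarySearchSteps(int[] array, int target) {
        int steps = 0, low = 0, high = array.length - 1;
        while (low <= high) {
            steps++;
            int mid = (low + high) >>> 1;
            if (array[mid] == target) break;
            if (array[mid] < target) low = mid + 1;
            else high = mid - 1;
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 8_000; n *= 2) {
            int[] sorted = new int[n];
            Arrays.setAll(sorted, i -> i);
            // Worst case: the target is the last element.
            System.out.printf("n = %d -> linear: %d steps, binary: %d steps%n",
                    n, linearSearchSteps(sorted, n - 1), binarySearchSteps(sorted, n - 1));
        }
    }
}
```

Doubling n doubles the linear search's step count but adds only a single step to the binary search.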

Space Complexity

Space complexity describes how much additional memory an algorithm needs depending on the size of the input data.

This does not refer to the memory required for the input data itself (i.e., that twice as much space is naturally needed for an input array twice as large), but the additional memory needed by the algorithm for loop and helper variables, temporary data structures, and the call stack (e.g., due to recursion).
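A simple illustration: two ways to sum an array that have the same time complexity, O(n), but different space complexity. The class and method names below are my own, not from the article's repository:

```java
public class SpaceComplexityDemo {

    // O(1) extra space: just the accumulator and the loop variable,
    // regardless of how large the array is.
    static long sumIterative(int[] array) {
        long sum = 0;
        for (int value : array) {
            sum += value;
        }
        return sum;
    }

    // O(n) extra space: one stack frame per array element due to recursion.
    static long sumRecursive(int[] array, int index) {
        if (index == array.length) {
            return 0;
        }
        return array[index] + sumRecursive(array, index + 1);
    }

    public static void main(String[] args) {
        int[] numbers = {1, 2, 3, 4, 5};
        System.out.println(sumIterative(numbers));    // prints 15
        System.out.println(sumRecursive(numbers, 0)); // prints 15
    }
}
```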

Complexity Classes

We divide algorithms into so-called complexity classes. A complexity class is identified by the Landau symbol O ("big O").

In the following section, I will explain the most common complexity classes, starting with those that are easiest to understand and moving on to the harder ones. This means the classes are not listed in order of their complexity.

O(1) – Constant Time

Pronounced: "Order 1", "O of 1", "big O of 1"

The runtime is constant, i.e., independent of the number of input elements n.

In the following graph, the horizontal axis represents the number of input elements n (or more generally: the size of the input problem), and the vertical axis represents the time required.

Since complexity classes can only be used to classify algorithms, but not to calculate their exact running time, the axes are not labeled.

[Diagram: O(1) – constant time]

O(1) Examples

The following two problems are examples of constant time:

  • Accessing a specific element of an array of size n: No matter how large the array is, accessing it via array[index] always takes the same time².
  • Inserting an element at the beginning of a linked list: This always requires setting one or two (for a doubly linked list) pointers (or references), regardless of the list's size. (In an array, on the other hand, this would require moving all values one field to the right, which takes longer with a larger array than with a smaller one).

² This statement is not one hundred percent correct. Effects from CPU caches also come into play here: If the data block containing the element to be read is already (or still) in the CPU cache (which is more likely the smaller the array is), then access is faster than if it first has to be read from RAM.

O(1) Example Source Code

The following source code (class ConstantTimeSimpleDemo in the GitHub repository) shows a simple example to measure the time required to insert an element at the beginning of a linked list:

public static void main(String[] args) {
    for (int n = 32; n <= 8_388_608; n *= 2) {
        LinkedList<Integer> list = createLinkedListOfSize(n);

        long time = System.nanoTime();
        list.add(0, 1);
        time = System.nanoTime() - time;

        System.out.printf("n = %d -> time = %d ns%n", n, time);
    }
}

private static LinkedList<Integer> createLinkedListOfSize(int n) {
    LinkedList<Integer> list = new LinkedList<>();
    for (int i = 0; i < n; i++) {
        list.add(i);
    }
    return list;
}

On my system, the times are between 1,200 ns and 19,000 ns, unevenly distributed over the various measurements. This is sufficient for a quick test. But we don't get particularly good measurement results here, as both the HotSpot compiler and the garbage collector can kick in at any time.

The test program TimeComplexityDemo with the ConstantTime class provides better measurement results. The test program first runs several warmup rounds to allow the HotSpot compiler to optimize the code. Only after that are measurements performed five times, and the median of the measured values is displayed.

Here is an extract of the results:

--- ConstantTime (results 5 of 5) ---
ConstantTime, n = 32 -> fastest: 31,700 ns, median: 44,900 ns
ConstantTime, n = 16,384 -> fastest: 14,400 ns, median: 40,200 ns
ConstantTime, n = 8,388,608 -> fastest: 34,000 ns, median: 51,100 ns

The effort remains about the same, regardless of the size of the list. The complete test results can be found in the file test-results.txt.

O(n) – Linear Time

Pronounced: "Order n", "O of n", "big O of n"

The time grows linearly with the number of input elements n: If n doubles, then the time approximately doubles, too.

"Approximately" because the effort may also include components with lower complexity classes. These become insignificant if n is sufficiently large so they are omitted in the notation.

In the following diagram, I have demonstrated this by starting the graph slightly above zero (meaning that the effort also contains a constant component):

[Diagram: O(n) – linear time]

O(n) Examples

The following problems are examples of linear time:

  • Finding a specific element in an array: All elements of the array have to be examined – if there are twice as many elements, it takes twice as long.
  • Summing up all elements of an array: Again, all elements must be looked at once – if the array is twice as large, it takes twice as long.

It is essential to understand that the complexity class makes no statement about the absolute time required, but only about the change in the time required depending on the change in the input size. The two examples above would take much longer with a linked list than with an array – but that is irrelevant for the complexity class.

O(n) Example Source Code

The following source code (class LinearTimeSimpleDemo) measures the time for summing up all elements of an array:

public static void main(String[] args) {
    for (int n = 32; n <= 536_870_912; n *= 2) {
        int[] array = createArrayOfSize(n);

        long sum = 0;
        long time = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sum += array[i];
        }
        time = System.nanoTime() - time;

        System.out.printf("n = %d -> time = %d ns%n", n, time);
    }
}

private static int[] createArrayOfSize(int n) {
    int[] array = new int[n];
    for (int i = 0; i < n; i++) {
        array[i] = i;
    }
    return array;
}

On my system, the time grows approximately linearly from 1,100 ns to 155,911,900 ns. Better measurement results are again provided by the test program TimeComplexityDemo and the LinearTime algorithm class. Here is an extract of the results:

--- LinearTime (results 5 of 5) ---
LinearTime, n = 512 -> fastest: 300 ns, median: 300 ns
LinearTime, n = 524,288 -> fastest: 159,300 ns, median: 189,400 ns
LinearTime, n = 536,870,912 -> fastest: 164,322,600 ns, median: 168,681,700 ns

You can find the complete test results again in test-results.txt.

What is the Difference Between "Linear" and "Proportional"?

A function is linear if it can be represented by a straight line, e.g. f(x) = 5x + 3.

Proportional is a particular case of linear, where the line passes through the point (0,0) of the coordinate system, for example, f(x) = 3x.

As O(n) may contain a constant component, its time is linear, but not necessarily proportional.
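This can be illustrated with the linear function f(x) = 5x + 3 from above, treated as a cost function: because of the constant component, doubling n does not exactly double the cost, but the ratio approaches 2 as n grows. A small sketch (my own, not from the article's repository):

```java
public class LinearVsProportionalDemo {

    // A linear, but not proportional, cost function: constant component 3.
    static long linearCost(long n) {
        return 5 * n + 3;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n *= 100) {
            // The ratio t(2n)/t(n) starts at 1.625 and approaches 2.
            double ratio = (double) linearCost(2 * n) / linearCost(n);
            System.out.printf("n = %,d -> t(2n)/t(n) = %.4f%n", n, ratio);
        }
    }
}
```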

O(n²) – Quadratic Time

Pronounced: "Order n squared", "O of n squared", "big O of n squared"

The time grows with the square of the number of input elements: If the number of input elements n doubles, then the time roughly quadruples. (And if the number of elements increases tenfold, the effort increases by a factor of one hundred!)

[Diagram: O(n²) – quadratic time]

O(n²) Examples

Examples of quadratic time are simple sorting algorithms like Insertion Sort, Selection Sort, and Bubble Sort.

O(n²) Example Source Code

The following example (QuadraticTimeSimpleDemo) shows how the time for sorting an array using Insertion Sort changes depending on the size of the array:

public static void main(String[] args) {
    for (int n = 32; n <= 262_144; n *= 2) {
        int[] array = createRandomArrayOfSize(n);

        long time = System.nanoTime();
        insertionSort(array);
        time = System.nanoTime() - time;

        System.out.printf("n = %d -> time = %d ns%n", n, time);
    }
}

private static int[] createRandomArrayOfSize(int n) {
    ThreadLocalRandom random = ThreadLocalRandom.current();
    int[] array = new int[n];
    for (int i = 0; i < n; i++) {
        array[i] = random.nextInt();
    }
    return array;
}

private static void insertionSort(int[] elements) {
    for (int i = 1; i < elements.length; i++) {
        int elementToSort = elements[i];
        int j = i;
        while (j > 0 && elementToSort < elements[j - 1]) {
            elements[j] = elements[j - 1];
            j--;
        }
        elements[j] = elementToSort;
    }
}

We can obtain better results with the test program TimeComplexityDemo and the QuadraticTime class. Here is an excerpt of the results, where you can see the approximate quadrupling of the effort each time the problem size doubles:

QuadraticTime, n = 8,192 -> fastest: 4,648,400 ns, median: 4,720,200 ns
QuadraticTime, n = 16,384 -> fastest: 19,189,100 ns, median: 19,440,400 ns
QuadraticTime, n = 32,768 -> fastest: 78,416,700 ns, median: 79,896,000 ns
QuadraticTime, n = 65,536 -> fastest: 319,905,300 ns, median: 330,530,600 ns
QuadraticTime, n = 131,072 -> fastest: 1,310,702,600 ns, median: 1,323,919,500 ns

You can find the complete test results in test-results.txt.

O(n) vs. O(n²)

At this point, I would like to point out again that the effort can contain components of lower complexity classes and constant factors. Both are irrelevant for the big O notation since they are no longer of importance if n is sufficiently large.

It is therefore possible that, for example, O(n²) is faster than O(n) – at least up to a certain size of n.

The following diagram compares three fictitious algorithms: one with complexity class O(n²) and two with O(n), one of which is faster than the other. You can see clearly that, up to n = 4, the orange O(n²) algorithm takes less time than the yellow O(n) algorithm, and up to n = 8, even less time than the cyan O(n) algorithm.

From a sufficiently large n onwards (here, n = 9), O(n²) is, and remains, the slowest algorithm.

[Diagram: one O(n²) algorithm compared with two O(n) algorithms]
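This crossover is easy to reproduce numerically. The following sketch compares two made-up cost functions; the constant factors are my own and differ from those behind the diagram, so the crossover point lands elsewhere:

```java
public class CrossoverDemo {

    // Fictitious cost functions: the quadratic algorithm has a much
    // smaller constant factor than the linear one.
    static long quadraticCost(long n) {
        return n * n;
    }

    static long linearCost(long n) {
        return 25 * n + 50;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 64; n *= 2) {
            String faster = quadraticCost(n) < linearCost(n) ? "O(n^2)" : "O(n)";
            System.out.printf("n = %2d -> O(n^2): %5d, O(n): %5d, faster: %s%n",
                    n, quadraticCost(n), linearCost(n), faster);
        }
    }
}
```

For small n the quadratic algorithm wins; from a certain n onwards, the linear one is and remains faster.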

Let's move on to two not-so-intuitive complexity classes.

O(log n) – Logarithmic Time

Pronounced: "Order log n", "O of log n", "big O of log n"

The effort increases approximately by a constant amount when the number of input elements doubles.

For example, if the time increases by one second when the number of input elements grows from 1,000 to 2,000, it increases by only one more second when the number grows to 4,000. And by another second when the number grows to 8,000.

[Diagram: O(log n) – logarithmic time]

O(log n) Example

An example of logarithmic growth is the binary search for a specific element in a sorted array of size n.

Since we halve the area to be searched with each search step, we can, in turn, search an array twice as large with only one more search step.

(The older ones among us may remember searching the telephone book or an encyclopedia.)
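The "one more step per doubling" rule can be verified by counting how often an array of size n can be halved until a single element remains. A small sketch (my own helper, not from the article's repository):

```java
public class HalvingStepsDemo {

    // How many times can n be halved until one element remains?
    // That is the worst-case number of binary search steps.
    static int halvingSteps(int n) {
        int steps = 0;
        while (n > 1) {
            n /= 2;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n = 1_024; n <= 1_048_576; n *= 2) {
            System.out.printf("n = %,d -> %d halving steps%n", n, halvingSteps(n));
        }
    }
}
```

Each doubling of n adds exactly one halving step.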

O(log n) Example Source Code

The following example (LogarithmicTimeSimpleDemo) measures how the time for binary search changes in relation to the array size.

public static void main(String[] args) {
    for (int n = 32; n <= 536_870_912; n *= 2) {
        int[] array = createArrayOfSize(n);

        long time = System.nanoTime();
        Arrays.binarySearch(array, 0);
        time = System.nanoTime() - time;

        System.out.printf("n = %d -> time = %d ns%n", n, time);
    }
}

private static int[] createArrayOfSize(int n) {
    int[] array = new int[n];
    for (int i = 0; i < n; i++) {
        array[i] = i;
    }
    return array;
}

We get better measurement results with the test program TimeComplexityDemo and the class LogarithmicTime. Here are the results:

LogarithmicTime, n = 32 -> fastest: 77,800 ns, median: 107,200 ns
LogarithmicTime, n = 2,048 -> fastest: 173,500 ns, median: 257,400 ns
LogarithmicTime, n = 131,072 -> fastest: 363,400 ns, median: 413,100 ns
LogarithmicTime, n = 8,388,608 -> fastest: 661,100 ns, median: 670,800 ns
LogarithmicTime, n = 536,870,912 -> fastest: 770,500 ns, median: 875,700 ns

In each step, the problem size n increases by a factor of 64. The time does not always increase by exactly the same amount, but it does so closely enough to demonstrate that logarithmic time is significantly cheaper than linear time (for which the time required would also increase by a factor of 64 in each step).

As before, you can find the complete test results in the file test-results.txt.

O(n log n) – Quasilinear Time

Pronounced: "Order n log n", "O of n log n", "big O of n log n"

The effort grows slightly faster than linear because the linear component is multiplied by a logarithmic one. For clarification, you can also insert a multiplication sign: O(n × log n).

This is best illustrated by the following graph. We see a curve whose gradient is visibly growing at the beginning, but soon approaches a straight line as n increases:

[Diagram: O(n log n) – quasilinear time]

O(n log n) Example

Efficient sorting algorithms like Quicksort, Merge Sort, and Heapsort are examples of quasilinear time.

O(n log n) Example Source Code

The following sample code (class QuasiLinearTimeSimpleDemo) shows how the time for sorting an array with Quicksort³ grows in relation to the array size:

public static void main(String[] args) {
    for (int n = 64; n <= 536_870_912; n *= 2) {
        int[] array = createRandomArrayOfSize(n);

        long time = System.nanoTime();
        Arrays.sort(array);
        time = System.nanoTime() - time;

        System.out.printf("n = %d -> time = %d ns%n", n, time);
    }
}

private static int[] createRandomArrayOfSize(int n) {
    ThreadLocalRandom random = ThreadLocalRandom.current();
    int[] array = new int[n];
    for (int i = 0; i < n; i++) {
        array[i] = random.nextInt();
    }
    return array;
}

The test program TimeComplexityDemo with the class QuasiLinearTime delivers more precise results. Here is an extract:

QuasiLinearTime, n = 256 -> fastest: 12,200 ns, med.: 12,500 ns
QuasiLinearTime, n = 4,096 -> fastest: 228,600 ns, med.: 234,200 ns
QuasiLinearTime, n = 65,536 -> fastest: 4,606,500 ns, med.: 4,679,800 ns
QuasiLinearTime, n = 1,048,576 -> fastest: 93,933,500 ns, med.: 95,216,300 ns
QuasiLinearTime, n = 16,777,216 -> fastest: 1,714,541,900 ns, med.: 1,755,715,000 ns

The problem size increases each time by a factor of 16, and the time required by a factor of 18.5 to 20.3. You can find the complete test result, as always, in test-results.txt.

³ More precisely: Dual-Pivot Quicksort, which switches to Insertion Sort for arrays with fewer than 44 elements. For this reason, this test starts at 64 elements, not at 32 like the others.

Big O Notation Order

Here are, once again, the complexity classes, sorted in ascending order of complexity:

  • O(1) – constant time
  • O(log n) – logarithmic time
  • O(n) – linear time
  • O(n log n) – quasilinear time
  • O(n²) – quadratic time

And here is the comparison in graphical form:

[Diagram: the five complexity classes in comparison]

I intentionally shifted the curves along the time axis so that the worst complexity class, O(n²), is fastest for low values of n, and the best complexity class, O(1), is slowest. This shows how, for sufficiently large values of n, the efforts shift into the expected order.
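The ranking can also be illustrated with a few sample values. The following sketch (my own, not from the article's repository) prints the five growth functions side by side; even without constant factors, the gap between the columns widens dramatically as n grows:

```java
public class GrowthComparisonDemo {

    // Base-2 logarithm, as used when analyzing halving algorithms.
    static double log2(double n) {
        return Math.log(n) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.printf("%10s %8s %12s %14s %16s%n",
                "n", "log n", "n", "n log n", "n^2");
        for (int n = 16; n <= 65_536; n *= 16) {
            System.out.printf("%10d %8.0f %12d %14.0f %16d%n",
                    n, log2(n), n, n * log2(n), (long) n * n);
        }
    }
}
```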

Other Complexity Classes

Further complexity classes are, for example:

  • O(nᵐ) – polynomial time
  • O(2ⁿ) – exponential time
  • O(n!) – factorial time

However, these are so bad that we should avoid algorithms with these complexities, if possible.

I have included these classes in the following diagram (O(nᵐ) with m = 3):

[Diagram: all complexity classes, including polynomial, exponential, and factorial time]

I had to compress the y-axis by a factor of 10 compared to the previous diagram to display the three new curves.
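How drastic the growth of these classes is can be seen with a few sample values. The following sketch (my own, not from the article's repository) computes 2ⁿ and n! exactly using BigInteger, since both overflow a long very quickly:

```java
import java.math.BigInteger;

public class ExpensiveClassesDemo {

    // n! computed exactly with BigInteger to avoid overflow.
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 30; n += 10) {
            System.out.printf("n = %d -> 2^n = %s, n! = %s%n",
                    n, BigInteger.valueOf(2).pow(n), factorial(n));
        }
    }
}
```

Already at n = 30, n! has 33 digits; no constant factor can save an algorithm growing that fast.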

Summary

Time complexity describes how the runtime of an algorithm changes depending on the amount of input data. The most common complexity classes are (in ascending order of complexity): O(1), O(log n), O(n), O(n log n), O(n²).

Algorithms with constant, logarithmic, linear, and quasilinear time usually complete in a reasonable time for input sizes up to several billion elements. Algorithms with quadratic time can quickly reach theoretical execution times of several years for the same problem sizes⁴. You should, therefore, avoid them as far as possible.

⁴ Quicksort, for example, sorts a billion items in 90 seconds on my laptop; Insertion Sort, on the other hand, needs 85 seconds for a million items; that would be 85 million seconds for a billion items – or in other words: two years and eight months!
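The footnote's extrapolation follows directly from the quadratic growth: a thousand times the input means a million times the runtime. As a sketch (my own, not from the article's repository):

```java
public class ExtrapolationDemo {

    // Quadratic time: k times the input size -> k * k times the runtime.
    static long extrapolateQuadratic(long measuredSeconds, long sizeFactor) {
        return measuredSeconds * sizeFactor * sizeFactor;
    }

    public static void main(String[] args) {
        // 85 s measured for one million items, scaled to one billion items.
        long seconds = extrapolateQuadratic(85, 1_000);
        double years = seconds / (365.25 * 24 * 60 * 60);
        System.out.printf("%,d s = about %.1f years%n", seconds, years);
    }
}
```

85 million seconds come out to roughly 2.7 years, matching the "two years and eight months" above.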

If you liked the article, please share it using one of the share buttons at the end or leave me a comment.

Do you want to be informed when new articles are published on HappyCoders.eu? Then click here to sign up for the HappyCoders.eu newsletter.

