Big O Notation | OCR A Level Computer Science Revision Notes 2017 (2024)

Big O Notation

What is Big O Notation?

  • Big O Notation is used to express the complexity of an algorithm

  • Describes the scalability of an algorithm as its order (O), referred to in Computer Science as Big O Notation

  • Named after the German mathematician Edmund Landau

  • Has two simple rules

    • Remove all terms except the one with the largest factor or exponent

    • Remove any constants

  • There are several classifications used in Big O Notation, ranging from fast and efficient to slow and inefficient. These are all covered below

Constant

  • O(1), read as Big-O One, describes an algorithm that takes constant time, that is, requires the same number of instructions to be executed, regardless of the size of the input data

  • For example, to calculate the length of a string ‘s’ the following code could be used:

    • totalLength = len(s)

  • No matter how long the string is, it will always take the same number of instructions to find the length
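As a minimal Python sketch (the variable names are illustrative, not from the specification):

```python
# Constant time: Python stores a string's length with the object,
# so len() does the same small amount of work whatever the size of s.
s = "hello"
total_length = len(s)
print(total_length)                  # 5
print(len("a" * 1_000_000))          # 1000000 -- still a single O(1) lookup
```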

Linear

  • O(n), read as Big-O n, describes an algorithm that takes linear time, that is, as the input size increases, the number of instructions to be executed grows proportionally

  • For example, a linear search of an array of 1000 items will take 10 times longer than a linear search of an array of 100 items

  • Implementing linear time requires the use of iteration, i.e. for loops and while loops

  • In the example of sumNumbers1 (shown later in this section), a basic for i = 1 to n loop is used

  • Below is an example of an algorithm that uses multiple for loops

    function loop
        x = input
        for i = 1 to x
            print(x)
        next i
        a = 0
        for j = 1 to x
            a = a + 1
        next j
        print(a)
        b = 1
        for k = 1 to x
            b = b + 1
        next k
        print(b)
    endfunction

Linear Time: Number of Instructions Executed

x         | 3x        | 5 | 3x + 5
1         | 3         | 5 | 8
10        | 30        | 5 | 35
100       | 300       | 5 | 305
10,000    | 30,000    | 5 | 30,005
1,000,000 | 3,000,000 | 5 | 3,000,005

  • The constant term (5) contributes less and less to the overall time complexity as x increases. 3x is the significant contributing term and causes the overall time complexity to grow linearly as x increases
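A linear search, the standard example of linear time mentioned above, can be sketched in Python (the function name is illustrative):

```python
# Linear search: in the worst case every element is examined once,
# so the number of comparisons grows in direct proportion to the list size -- O(n).
def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1    # target not present

print(linear_search([4, 8, 15, 16, 23, 42], 23))   # 4
print(linear_search([4, 8, 15, 16, 23, 42], 99))   # -1
```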

Polynomial

  • O(n²), read as Big-O n-squared, describes an algorithm that takes squared time to complete. This means the performance is proportional to the square of the number of data inputs

  • For example, a bubble sort is a polynomial time complexity algorithm. n passes are made, and for each pass n comparisons are made, hence n × n or O(n²)

  • Implementing polynomial time requires the use of nested iteration i.e. for loops or while loops, where each loop executes n times

  • An example of a polynomial algorithm is shown below:

    function timesTable
        x = 12
        for i = 1 to x
            for j = 1 to x
                print(i, " x ", j, " = ", i*j)
            next j
        next i
        for k = 1 to x
            print(k)
        next k
        print("Hello World!")
    endfunction
  • The above algorithm outputs the times tables from the 1 times table to the 12 times table. An arbitrary for loop (for k = 1 to x) and output (print("Hello World!")) have been added for demonstration purposes

  • The total number of instructions executed by this algorithm are as follows:

    • +1 for (x = 12)

    • +x * x for the nested for loop

    • +x for the non-nested for loop

    • +1 for (print(“Hello World!”))

  • The total number of instructions executed would therefore be: x × x + x + 2, or x² + x + 2

  • As x increases the total number of instructions executed also increases as shown in the table below

Polynomial Time: Number of Instructions Executed

x         | x²                | 2 | x² + x + 2
1         | 1                 | 2 | 4
10        | 100               | 2 | 112
100       | 10,000            | 2 | 10,102
10,000    | 100,000,000       | 2 | 100,010,002
1,000,000 | 1,000,000,000,000 | 2 | 1,000,001,000,002

  • The constant term (2) and linear term (x) contribute very little as x increases, making x² the significant contributing term

    • To put this algorithm into perspective, assuming it took one second per instruction, printing up to the 1,000,000 times table would take roughly 32,000 years

  • Loops can be nested any number of times. In the example above, there are two nested loops. Each additional level of nesting raises the degree of the polynomial, as shown below. Regardless of the number of loops nested, all are still referred to as polynomial time!

Big-O of Nested Loops

Number of loops nested | Big-O time
2 | O(n²)
3 | O(n³)
4 | O(n⁴)
5 | O(n⁵)
6 | O(n⁶)

  • Below is an example of an algorithm with six nested loops. Note that the algorithm does nothing useful except demonstrate this nesting

    for a = 1 to n
        for b = 1 to n
            for c = 1 to n
                for d = 1 to n
                    for e = 1 to n
                        for f = 1 to n
                        next f
                    next e
                next d
            next c
        next b
    next a
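Returning to the bubble sort mentioned at the start of this section, a minimal Python sketch shows where the n × n comparisons come from (the implementation details are illustrative, not from the specification):

```python
# Bubble sort: n passes over the list, each pass making up to n - 1
# comparisons, giving roughly n * n comparisons in total -- O(n^2).
def bubble_sort(items):
    items = list(items)               # sort a copy
    n = len(items)
    for _ in range(n):                # n passes
        for j in range(n - 1):        # up to n - 1 comparisons per pass
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```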

Exponential

  • Exponential time, O(2ⁿ), read as Big-O 2-to-the-n, describes an algorithm that takes double the time to execute for each additional data input value. The number of instructions executed becomes very large, very quickly for each input value

  • Few everyday algorithms use exponential time due to its sheer inefficiency. Some problems however can only be solved in exponential time

Exponential Time: Number of Instructions Executed

x   | 2ˣ
1   | 2
10  | 1,024
100 | 1,267,650,600,228,229,401,496,703,205,376
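The specification does not require a specific exponential example, but a classic illustration is the naive recursive Fibonacci function, sketched here in Python with a call counter added purely for demonstration:

```python
# Naive recursive Fibonacci: each call spawns two further calls,
# so the number of calls roughly doubles each time n increases by one -- O(2^n).
call_count = 0

def fib(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))      # 55
print(call_count)   # 177 calls just to compute fib(10)
```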

Exam Tip

You do not need to know any specific examples of exponential time complexity algorithms, but you do need to know how these algorithms compare to other time complexities.

Logarithmic

  • Logarithmic time, O(log n), read as Big-O log-n, describes an algorithm whose running time grows very slowly as the size of the input grows larger

  • Logarithmic time in computer science is usually measured in base 2. A logarithm is the power the base must be raised to in order to equal another number, for example:

    • 2³ = 8, log₂(8) = 3, i.e. to get the value 8, 2 must be raised to the power of 3

  • A binary search is an example of logarithmic time complexity. Doubling the size of the input data has very little effect on the overall number of instructions executed by the algorithm, as shown below:

Logarithmic Time: Number of Instructions Executed

x                   | log₂(x)
1 (2⁰)              | 0
8 (2³)              | 3
1,024 (2¹⁰)         | 10
1,048,576 (2²⁰)     | 20
1,073,741,824 (2³⁰) | 30

  • As shown above, increasing the number of input values to over one billion did very little to the number of instructions taken to complete the algorithm
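The binary search mentioned above can be sketched in Python with a comparison counter (added purely for demonstration) to show the log₂(n) behaviour:

```python
# Binary search: each comparison halves the remaining search space,
# so a sorted list of n items needs at most about log2(n) + 1 comparisons -- O(log n).
def binary_search(items, target):
    low, high = 0, len(items) - 1
    comparisons = 0
    while low <= high:
        comparisons += 1
        mid = (low + high) // 2
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons            # target not present

data = list(range(1_048_576))         # 2^20 sorted items
index, comps = binary_search(data, 3)
print(index, comps)                   # found within ~21 comparisons
```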

Comparison of Complexity

  • All algorithms have best case, average case and worst case complexities

  • These are defined as the best possible outcome, the average outcome and the worst possible outcome in terms of the number of instructions required to complete the algorithm

  • Best case complexity is where the algorithm performs most efficiently, such as a linear search or binary search finding the correct item at the first position examined, O(1), or a bubble sort being supplied with an already sorted list, O(n). A bubble sort must always make at least one pass, hence O(n)

  • Average case complexity is where the algorithm performs neither at its best nor its worst on any given data. A linear search may find the item in the middle of the list and therefore have a Big-O notation of O(n/2), or a bubble sort sorting only half a list, O(n²/2)

  • Worst case complexity is where an algorithm is at its least efficient, such as a linear search finding the item in the last position, or not at all, O(n), or a bubble sort sorting a reversed list, O(n²), i.e. a list whose items are ordered highest to lowest

  • Big-O notation is almost always measured by the worst case performance of an algorithm. This allows programmers to select appropriate algorithms for problems with a guaranteed upper bound on running time

  • The graphical representation of constant, linear, polynomial, exponential and logarithmic times is shown below

Big-O Time Complexity Graph


Figure 1: Graph of Time Complexity for Big-O Notation Functions

  • From the graph, exponential time (2ⁿ) grows the quickest and is therefore the most inefficient form of algorithm, followed by polynomial time (n²), linear time (n), logarithmic time (log n) and finally constant time (1)

  • Further time complexities exist, such as factorial time (n!), which performs worse than exponential time, and linear-logarithmic time (n log n), which sits between polynomial and linear time. Merge sort is an example of an algorithm with O(n log n) time complexity

  • The Big-O time complexity of an algorithm can be derived by inspecting its component parts

  • It is important to know that when determining an algorithm's Big-O, only the dominant term is used

  • This means the time complexity that contributes the most to the algorithm is used to describe the algorithm. For example:

    • 3x² + 4x + 5, which describes three nested loops, four standard loops and five statements, is described as O(n²)

    • x² is the dominating term. As the input size grows, x² contributes proportionally more than the other terms

    • Coefficients are also removed from the Big-O notation. The 3 in 3x² multiplies the running time by a constant factor: it changes how long the algorithm takes, but not how the running time grows as the input size increases.

      • Big-O classifies algorithms by their growth rate, so constant multipliers such as the 3 are dropped from the overall time complexity
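Why the dominant term wins can be checked numerically with the 3x² + 4x + 5 example above (an illustrative Python sketch):

```python
# Total instruction count 3x^2 + 4x + 5: as x grows, the 3x^2 term
# accounts for nearly all of the total, so the lower-order terms are dropped.
def total(x):
    return 3 * x * x + 4 * x + 5

for x in (10, 1_000, 1_000_000):
    share = (3 * x * x) / total(x)
    print(x, round(share, 6))   # fraction of the total contributed by 3x^2
```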

Exam Tip

  • The most likely type of Big-O calculation question you will face in an exam is deriving a constant, linear or polynomial time complexity

  • A very challenging question may ask you to derive a logarithmic time complexity, which will require familiarity with the logarithmic algorithms from your course in order to recognise them

  • The simplest way of deriving time complexity is to count the number of loops and nested loops in an algorithm.

    function sumNumbers1(n)
        sum = 0
        for i = 1 to n
            sum = sum + i
        next i
        return sum
    endfunction
  • In the example above for sumNumbers1, a single for loop is present. As loops are O(n) and single statements are O(1), the overall Big-O notation is O(n), linear time, as the loop dominates constant time

    function sumNumbers2(n)
        sum = n * (n+1) / 2
        return sum
    endfunction
  • In the example above for sumNumbers2, a single statement is present with no loops. Statements are O(1) therefore making the overall Big-O notation O(1), constant time
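The two functions can be compared directly in Python; this sketch assumes the loop in sumNumbers1 is intended to total the numbers 1 to n, matching the closed-form used in sumNumbers2:

```python
# Two ways to compute 1 + 2 + ... + n:
# the loop is O(n); Gauss's closed-form n(n+1)/2 is O(1).
def sum_numbers_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_numbers_formula(n):
    return n * (n + 1) // 2

print(sum_numbers_loop(100), sum_numbers_formula(100))   # 5050 5050
```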

    function loop
        x = input
        for i = 1 to x
            print(x)
        next i
        a = 0
        for j = 1 to x
            a = a + 1
        next j
        print(a)
        b = 1
        for k = 1 to x
            b = b + 1
        next k
        print(b)
    endfunction
  • In the example above for function loop, there are three loops, 3 × O(n), and five statements, 5 × O(1). As the input size grows, the coefficients (3 and 5) become insignificant and are removed, leaving O(n) and O(1). As O(n) dominates O(1), the overall Big-O notation of this algorithm is O(n), linear time

    function timesTable
        x = 12
        for i = 1 to x
            for j = 1 to x
                print(i, " x ", j, " = ", i*j)
            next j
        next i
        for k = 1 to x
            print(k)
        next k
        print("Hello World!")
    endfunction
  • In the example above for function timesTable, there are two nested loops, O(n²), a single loop, O(n), and a single statement, O(1). As polynomial time dominates linear and constant time, and no coefficients are involved, the overall Big-O notation is O(n²)

Worked Example

Dexter is leading a programming team who are creating a computer program that will simulate an accident and emergency room to train hospital staff.

Two of Dexter's programmers have developed different solutions to one part of the problem. Table 3.1 shows the Big O time complexity for each solution, where n = the number of data items.

      | Solution A          | Solution B
Time  | O(n)                | O(n)
Space | O(kn) (where k > 1) | O(log n)

Table 3.1

  1. Explain, with reference to the Big O complexities of each solution, which solution you would suggest Dexter chooses.
    [4]

I would recommend solution B [1]. As both have the same time complexity we would need to look at both algorithms' space complexity [1]. A’s space does not scale well [1] when the number of items is increased whereas B’s space does scale well [1].

Remember to relate your answer to the context of the question to achieve marks.


FAQs

What is Big O notation in computer science A level? ›

The time complexity is measured using a notation called big-o notation, which shows the effectiveness of the algorithm. It shows an upper limit for the amount of time taken relative to the number of data elements given as an input.

Is Big O the worst-case? ›

Big O, also known as Big O notation, represents an algorithm's worst-case complexity. It uses algebraic terms to describe the complexity of an algorithm. Big O defines the runtime required to execute an algorithm by identifying how the performance of your algorithm will change as the input size grows.

What is time complexity in computer science A level? ›

The time complexity of an algorithm is how much time it requires to solve a particular problem. It is measured using a notation called big-O notation, which shows the effectiveness of the algorithm.

What is logarithmic time and space complexity? ›

Logarithmic Space (O(log n)): Logarithmic space complexity means that the amount of memory used by an algorithm grows logarithmically with the problem size. This often occurs in divide-and-conquer algorithms and binary search, where the problem size is halved in each step.

What is the Big O notation formula? ›

Definition of Big-O Notation:

Given two functions f(n) and g(n), we say that f(n) is O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that f(n) ≤ c·g(n) for all n ≥ n₀.

Is Big O notation difficult? ›

Big O can get very complex, and it can be hard to conceptualize. The few things I was able to learn so far are really just the tip of the iceberg. However, once you understand why and how it works, it's easier to understand some of the more complex ideas.

Which Big O notation is fastest? ›

The common algorithmic runtimes from fastest to slowest are:
  • constant: Θ(1)
  • logarithmic: Θ(log N)
  • linear: Θ(N)
  • polynomial: Θ(N^2)
  • exponential: Θ(2^N)
  • factorial: Θ(N!)

What is the disadvantage of Big O? ›

The big-O notation loses some information since it ignores constant multipliers and additive terms. If we have two algorithms, and one of them is a hundred times faster, they still have the same estimate of the running time in big-O notation.

How to prove Big O? ›

In a formal big-O proof, you first choose values for k and c, then show that 0 ≤ f(x) ≤ c·g(x) for every x ≥ k. The example from the previous section would look like: Claim: 3x² + 7x + 2 is O(x²). Proof: consider c = 4 and k = 100.
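The chosen witnesses can be spot-checked numerically; a quick Python sketch (a sanity check over a finite range, not a substitute for the proof):

```python
# Spot-check the witnesses from the claim: with c = 4 and k = 100,
# 3x^2 + 7x + 2 <= 4x^2 should hold for every x >= 100.
def f(x):
    return 3 * x * x + 7 * x + 2

holds = all(f(x) <= 4 * x * x for x in range(100, 10_001))
print(holds)   # True
```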


What is the worst-case time complexity? ›

It gives an upper bound on the resources required by the algorithm. In the case of running time, the worst-case time complexity indicates the longest running time performed by an algorithm given any input of size n, and thus guarantees that the algorithm will finish in the indicated period of time.

Is logn faster than n? ›

Typically, O(log n) is quicker than O(n) particularly when dealing with large data sets (n). The reason behind this is that the growth rate of logarithmic time complexity is significantly slower compared to linear time complexity.

What does n stand for in Big O notation? ›

To understand what Big O notation is, we can take a look at a typical example, O(n²), which is usually pronounced “Big O squared”. The letter “n” here represents the input size, and the function “g(n) = n²” inside the “O()” gives us an idea of how complex the algorithm is with respect to the input size.

What is polynomial time in big O? ›

An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, that is, T(n) = O(nᵏ) for some positive constant k.

What is big O and small o in computer science? ›

In short, they are both asymptotic notations that specify upper-bounds for functions and running times of algorithms. However, the difference is that big-O may be asymptotically tight while little-o makes sure that the upper bound isn't asymptotically tight.

What is the Big O notation symbol? ›

A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.

What are the levels of Big O notation? ›

Types of Big O Notations
  • O(1): Constant complexity.
  • O(logn): Logarithmic complexity.
  • O(n): Linear complexity.
  • O(n log n): Log-linear complexity.
  • O(nᵏ): Polynomial complexity.
  • O(kⁿ): Exponential complexity.
  • O(n!): Factorial complexity.

What is computer science Big Omega notation? ›

The difference between Big O notation and Big Ω notation is that Big O is used to describe the worst case running time for an algorithm. But, Big Ω notation, on the other hand, is used to describe the best case running time for a given algorithm.
