Brain Dump

Asymptotic Notation

Tags
comp-sci

Asymptotic notation is a way to characterise the performance of an algorithm in terms of its [see page 8, asymptotic growth]. The main observation behind this approach is that as \( n \) grows, the contribution of the lower-order terms quickly becomes irrelevant.

Asymptotic notation hides constant factors and lower-order terms, revealing the asymptotic runtime.

For example, as \( n \) gets bigger, the effect of the \( +10n \) term in \( 2n^2 + 10n \) can mostly be ignored.
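A small Python sketch of this point (the function and values are illustrative only): the share of the total contributed by the lower-order term \( 10n \) shrinks rapidly as \( n \) grows.

```python
# Illustration: the lower-order term 10n contributes an ever-smaller
# share of f(n) = 2n^2 + 10n as n grows.
for n in [10, 100, 1000, 10_000]:
    total = 2 * n**2 + 10 * n
    share = (10 * n) / total
    print(f"n={n:>6}: 10n accounts for {share:.2%} of f(n)")
```

At \( n = 10 \) the term still makes up a third of the total; by \( n = 10{,}000 \) its share is below 0.1%.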

Overview

The main idea behind asymptotic notation is that we define sets of functions \( \Theta(n), \omega(n^2) \), etc., each containing all functions with a certain runtime performance. Membership in such a set suffices to claim that a function has that performance, written \( f(n) \in \Theta(g(n)) \); however, it can equivalently be written as \( f(n) = \Theta(g(n)) \).

See the example with theta notation.

Table 1: Overview of the various asymptotic runtime notations (see page 16, [here]).

Notation | Meaning | Analogy
\( f(n) = \mathcal{O}(g(n)) \) | \( f \) grows at most as fast as \( g \) | \( f \leq g \)
\( f(n) = \Omega(g(n)) \) | \( f \) grows at least as fast as \( g \) | \( f \geq g \)
\( f(n) = \Theta(g(n)) \) | \( f \) grows as fast as \( g \) | \( f = g \)
\( f(n) = o(g(n)) \) | \( f \) grows strictly slower than \( g \) | \( f < g \)
\( f(n) = \omega(g(n)) \) | \( f \) grows strictly faster than \( g \) | \( f > g \)

Observe that the asymptotic classes nest like chained inequalities. For example \( n = \mathcal{O}(n) \), but \( n = \mathcal{O}(n^2) \) as well, because \( f(n) = n \) also grows no faster than \( f(n) = n^2 \). That is to say, \( \mathcal{O}(n) \subseteq \mathcal{O}(n^2) \).
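This containment can be checked concretely: \( n \in \mathcal{O}(n^2) \) only requires constants \( c \) and \( n_0 \) such that \( n \leq c \cdot n^2 \) for all \( n \geq n_0 \). A Python sketch (the specific constants are illustrative choices):

```python
# n ∈ O(n^2): the constants c = 1, n0 = 1 suffice.
c, n0 = 1, 1
assert all(n <= c * n**2 for n in range(n0, 10_000))

# The reverse containment fails: no single constant keeps n^2 below
# c * n forever; e.g. c = 5 breaks down once n > 5.
assert not all(n**2 <= 5 * n for n in range(1, 100))
```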

Table 2: Common runtimes (see page 17, [here]).

Notation | Name
\( \Theta(1) \) | Constant Time
\( \Theta(\log n) \) | Logarithmic Time
\( \Theta(n) \) | Linear Time
\( \Theta(n^2) \) | Quadratic Time
\( \Theta(n^3) \) | Cubic Time
\( \Theta(n^k) \) | Polynomial Time
\( \Theta(2^n) \) | Exponential Time
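To get a feel for how far apart these classes are, a small Python sketch printing a few of them side by side (the sample sizes are arbitrary; \( 2^n \) grows so fast that only its digit count is shown):

```python
import math

# How the common runtime classes scale as n grows.
for n in [10, 100, 1000]:
    digits = int(n * math.log10(2)) + 1  # number of digits of 2^n
    print(f"n={n:>4}: log2(n)={math.log2(n):5.1f}  n^2={n**2:>7}  "
          f"n^3={n**3:>10}  2^n has {digits} digits")
```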

Note: Asymptotic notation [see page 21, isn't restricted] to runtime performance. Any measure that can be represented mathematically and scales with the input can be expressed in this notation.

Composability of Notation

Any function composed of multiple sub-functions can express its runtime in terms of the asymptotic runtimes of those sub-functions. For example \( 2n^2 + 3n + 4 \) can be shown to have a runtime of \( \Theta(n^2) + \Theta(n) + \Theta(1) \), which is simply \( \Theta(n^2) \).
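One way to see this numerically: the ratio \( (2n^2 + 3n + 4) / n^2 \) settles toward the constant \( 2 \), which is exactly what membership in \( \Theta(n^2) \) captures. A Python sketch (the sample values are arbitrary):

```python
# The ratio f(n) / n^2 approaches the leading constant 2 as n grows.
f = lambda n: 2 * n**2 + 3 * n + 4
for n in [10, 100, 10_000]:
    print(f"n={n:>6}: f(n)/n^2 = {f(n) / n**2:.4f}")
```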

For two non-negative functions \( f(n) \), \( g(n) \) we can [see page 21, simplify]:

  • Summation: \( f(n) + g(n) = \Theta(\max(f(n), g(n))) \).
  • Multiplication: \( f(n) \, g(n) = \Theta(f(n) \cdot g(n)) \).
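The summation rule follows because, for non-negative functions, the sum is always sandwiched between \( \max(f, g) \) and \( 2\max(f, g) \). A Python sketch checking this over a range (the particular \( f \) and \( g \) are illustrative):

```python
# For non-negative f, g:  max(f, g) <= f + g <= 2 * max(f, g),
# hence f(n) + g(n) = Θ(max(f(n), g(n))).
f = lambda n: n**2
g = lambda n: 10 * n
for n in range(1, 1000):
    m = max(f(n), g(n))
    assert m <= f(n) + g(n) <= 2 * m
```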

You can think of the notation \( \Theta(n^2) \), for example in the expression \( \Theta(n^2) + 2n \), as a placeholder for an anonymous function (from the set \( \Theta(n^2) \) of all functions that grow quadratically).

You can [see page 22, validate] a statement such as \( 2n^2 + \Theta(n) = \Theta(n^2) \) as follows: no matter how the anonymous functions on the left of the equals sign are chosen, there must be a way to choose the anonymous function on the right to make the equation valid.
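A Python sketch of this validation, assuming (for illustration) anonymous functions of the form \( h(n) = cn \) and a cutoff \( n_0 = 100 \), both hypothetical choices: whatever \( c \) is picked on the left, the total stays within constant factors of \( n^2 \).

```python
# Validating 2n^2 + Θ(n) = Θ(n^2): for any anonymous h(n) = c*n on
# the left, the sum stays between 2*n^2 and 3*n^2 once n >= n0 = 100.
for c in [1, 5, 50]:
    for n in range(100, 1000):
        ratio = (2 * n**2 + c * n) / n**2
        assert 2 <= ratio <= 3  # bounded by constants, so Θ(n^2)
```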
