Algorithms - Runtime Analysis

Code running time is essentially the amount of time a piece of code takes to complete. We can count Primitive Operations to calculate this time, and those counts can be simplified down to a set of equations.

We simplify down to equations because the actual runtime varies based on things like hardware and the size of the input data.

Generally speaking, the hardware/software environment will only improve running time by a constant factor. However, the size of the input data can make runtime grow much larger, depending on the design and implementation of the algorithm. It turns out that the implementation of code can have more impact on speed than simply upgrading to newer hardware, which is why the study of Algorithms exists.
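As a rough sketch of how operation counting works (illustrative only; the exact totals used later on this page, such as 7n + 5 and 5n + 5, come from the Primitive Operations article and depend on the counting convention chosen):

def find_max(data):
    """Return the largest element, tallying primitive operations in comments."""
    best = data[0]           # 2 ops: one index access, one assignment
    i = 1                    # 1 op: assignment
    while i < len(data):     # 2 ops per check: len() call, comparison
        if data[i] > best:   # 2 ops per pass: index access, comparison
            best = data[i]   # 2 ops, but only on passes that find a new max
        i = i + 1            # 2 ops per pass: addition, assignment
    return best              # 1 op

print(find_max([3, 1, 4, 1, 5]))  # 5

Counting this way, each pass of the loop costs a constant number of operations, so the total is a linear equation in n: roughly 8n operations when every pass finds a new max, roughly 6n when none do, plus small constants either way.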

Algorithm Complexity

When analyzing complexity, we generally refer to the Order of Growth, aka, examining how an algorithm behaves with unbounded or absurdly large data sets. When dealing with Order of Growth, runtime equations generally simplify to whatever the leading term is, as all other terms become insignificant in comparison.

As a result, runtime equations can generally be categorized on the following graph: <TODO: graph here>

By using this, we can find an Asymptotic Bound g(n) that most closely describes the Order of Growth of a given equation f(n).
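To see why the lower-order terms stop mattering, here is a quick numeric check using a made-up polynomial (not one from this article):

# Illustrative polynomial: f(n) = n^2 + 100n + 1000.
# As n grows, the leading n^2 term accounts for nearly all of f(n).
for n in (10, 100, 1_000, 10_000, 100_000):
    f = n**2 + 100 * n + 1000
    print(f"n={n:>7}  leading-term share = {n**2 / f:.4f}")

At n = 10 the leading term is under 5% of the total, but by n = 100,000 it is about 99.9%, which is why only the leading term survives simplification.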

Fitting to Asymptotic Bounds

Generally speaking, we look at minimum runtime (known as the "Best Case"), maximum runtime (known as the "Worst Case"), and sometimes average-case runtime (illustrated by the sketch after the list below).

Specific Asymptotic Bound Terms:

  • Big O ( O(f(n)) ) - Pronounced "Big Oh", it's the Worst Case.
    • This is the Asymptotic Upper Bound of runtime. Aka, the maximum possible time to run a given algorithm.
  • Big Ω ( Ω(f(n)) ) - Pronounced "Big Omega", it's the Best Case.
    • This is the Asymptotic Lower Bound of runtime. Aka, the minimum possible time to run a given algorithm.
  • Big θ ( θ(f(n)) ) - Pronounced "Big Theta".
    • This is when we find an expression that simultaneously fits an algorithm for both Upper and Lower bound.
  • Little O ( o(f(n)) ) - An Upper Bound of runtime that technically describes the equation, but isn't the closest possible fit.
  • Little Ω ( ω(f(n)) ) - A Lower Bound of runtime that technically describes the equation, but isn't the closest possible fit.
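A classic concrete illustration of Best Case versus Worst Case (a standard textbook example, not taken from the Primitive Operations page) is linear search:

def linear_search(data, target):
    """Scan left to right until target is found."""
    for i, value in enumerate(data):
        if value == target:   # best case: target is data[0], so Ω(1)
            return i
    return -1                 # worst case: target is absent, so O(n)

The best case (target in the first slot) takes constant time, the worst case (target missing) touches all n elements, and the average case depends on where targets tend to appear in the input.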

More often than not, average runtime is difficult to determine, and it may vary based on the input data itself. Best case runtime can be good to know, but the concern is usually to prevent a block of code from taking too long to run. Thus, Worst Case runtime is often the focus of analysis.

Formally Calculating the Worst Case

Given a runtime equation f(n), we want to determine whether a proposed Asymptotic Bound g(n) is a valid Big O.

To check this, plug both into the inequality f(n) ≤ c * g(n) and simplify. If we can find a constant c > 0 and a threshold n₀ such that the inequality holds for every n ≥ n₀, then g(n) is a valid Big O bound for f(n), i.e. f(n) is O(g(n)).

For example, let's take the worst case values from Primitive Operations and show that the Big O is n.

f(n) ≤ c * g(n) where [f(n)=7n + 5] and [g(n)=n]
7n + 5 ≤ c * n
5 ≤ cn - 7n
5/n ≤ c - 7

The inequality must hold for every n at and beyond some threshold n₀. Since 5/n only shrinks as n grows, it is enough to check at n = n₀ and solve for c.
If n₀ = 5, then:

5/5 ≤ c - 7
1 ≤ c - 7
8 ≤ c

Thus, with c = 8 and n₀ = 5, the inequality 7n + 5 ≤ 8n holds for every n ≥ 5; we have found a valid c and n₀. So 7n + 5 is O(n).
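A quick numeric spot-check of this result (a sanity check over sampled values, not a proof) might look like:

# Check f(n) <= c * g(n) for c = 8 across a sampled range of n >= 5.
f = lambda n: 7 * n + 5   # worst case cost
g = lambda n: n           # proposed bound
c, n0 = 8, 5
assert all(f(n) <= c * g(n) for n in range(n0, 100_000))
print("7n + 5 <= 8n held for every sampled n >= 5")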

Formally Calculating the Best Case

Similar to the above, but we instead use the inequality [f(n) ≥ c * g(n)], again required to hold for all n ≥ n₀. Using the best case values from Primitive Operations, we can show that the Big Ω is n.

f(n) ≥ c * g(n) where [f(n)=5n + 5] and [g(n)=n]
5n + 5 ≥ c * n
5 ≥ cn - 5n
5/n ≥ c - 5

This time the inequality must keep holding as n grows, and 5/n shrinks toward 0. So a valid c must satisfy c - 5 ≤ 0, i.e. c ≤ 5.
If c = 5, then:

5n + 5 ≥ 5n

which is true for every n ≥ 1. Thus, with c = 5 and n₀ = 1, we have found a valid c and n₀ such that the inequality holds for all n ≥ n₀. So 5n + 5 is Ω(n).

(Note that a single spot value of n is not enough on its own: c = 6 satisfies 5n + 5 ≥ 6n at n = 5, but fails at n = 6.)
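The same style of spot-check (again a sanity check over sampled values, not a proof) works for the lower bound:

# Check f(n) >= c * g(n) for c = 5 across a sampled range of n >= 1.
f = lambda n: 5 * n + 5   # best case cost
g = lambda n: n           # proposed bound
c, n0 = 5, 1
assert all(f(n) >= c * g(n) for n in range(n0, 100_000))
print("5n + 5 >= 5n held for every sampled n >= 1")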

Formally Calculating the Big Theta

In the two examples above, we calculated both the Big O and the Big Ω for the same piece of code. Both turned out to equal n, so the Big θ can also be determined to be n.

The only way to calculate the Big θ is to calculate both the Big O and the Big Ω. If the two are not identical, then the algorithm does not have a Big θ.

Quickly Calculating the Asymptotic Bounds

The above examples were quick enough for a relatively simple problem, but for larger algorithms things can get complicated fast.

A quicker, less formal way to calculate the Big O, Big Ω, and Big θ is to simply take the leading term of the equation, drop all other terms, and drop any constant factors.

For example, O(7n + 5) simplifies to O(7n), which simplifies to O(n).
Similarly, Ω(5n + 5) simplifies to Ω(5n), which simplifies to Ω(n).
Thus, both the Big O and the Big Ω equal n, giving us a Big θ of n as well.
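The drop-everything-but-the-leading-term rule is mechanical enough to sketch in code. Below is a toy helper (illustrative only, and assuming the runtime equation is a polynomial in n):

def asymptotic_order(terms):
    """terms maps exponent -> coefficient, e.g. {1: 7, 0: 5} for 7n + 5."""
    lead = max(exp for exp, coeff in terms.items() if coeff != 0)
    return "1" if lead == 0 else ("n" if lead == 1 else f"n^{lead}")

print(asymptotic_order({1: 7, 0: 5}))   # 7n + 5  ->  n
print(asymptotic_order({1: 5, 0: 5}))   # 5n + 5  ->  n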

Further Examples

Suppose we have an algorithm with a Best Case cost of 3n + 7 and a Worst Case cost of 5n^2 + 2n - 12. We can quickly calculate:

Ω(3n + 7) -> Ω(3n) -> Ω(n)
O(5n^2 + 2n - 12) -> O(5n^2) -> O(n^2)

Note that in this example the two bounds differ (n versus n^2), so a Big θ does not exist.
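A standalone version of the same check (illustrative only, mirroring the helper sketched in the previous section):

def leading_power(terms):  # terms maps exponent -> coefficient
    return max(exp for exp, coeff in terms.items() if coeff != 0)

best = leading_power({1: 3, 0: 7})            # 3n + 7          -> power 1 (n)
worst = leading_power({2: 5, 1: 2, 0: -12})   # 5n^2 + 2n - 12  -> power 2 (n^2)
print("Big Theta exists" if best == worst else "no Big Theta")  # no Big Theta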