## Orders Of Growth


When dealing with limits as $x \to 0$, it is the lowest order term (i.e. the term with the smallest power of $x$) which matters the most, since higher powers of $x$ are very small when $x$ is close to 0. On the other hand, as $x \to \infty$, what matters is the *asymptotic growth* of a function. In this case, it is the highest order term which matters the most. This module deals with limits of both types and provides a more formal notion of how quickly a function grows or shrinks.

## Hierarchy of functions going to infinity

First, consider the monomial $x^n$, where $n$ is a fixed, positive integer. Looking at the graphs of these monomials, it becomes clear that as $x \to \infty$,

$$x \ll x^2 \ll x^3 \ll \cdots$$

(where $f \ll g$ means that $g$ eventually outgrows $f$).

What happens when the functions involved are not polynomials? For example, how does the growth of the exponential $e^x$ compare to a polynomial? Or the factorial $n!$ and the exponential $e^n$?

In general, one can compare the asymptotic growth of the functions $f(x)$ and $g(x)$ by considering the limit $\lim_{x \to \infty} \frac{f(x)}{g(x)}$. If this limit is $\infty$, then $f$ dominates. If the limit is 0, then $g$ dominates. And if the limit is a finite, nonzero constant, then $f$ and $g$ are considered equal (asymptotically).

**Example**

Compare exponential growth and polynomial growth.

Fix an integer $n > 0$, and compute the limit (using l'Hopital repeatedly):

$$\lim_{x \to \infty} \frac{e^x}{x^n} = \lim_{x \to \infty} \frac{e^x}{n x^{n-1}} = \cdots = \lim_{x \to \infty} \frac{e^x}{n!} = \infty$$

(note that $n$ is fixed, so $n!$ is a constant). Thus, the exponential $e^x$ dominates the monomial $x^n$ (and thus any polynomial as well).

**Example**

Compare the asymptotic behavior of $(\ln x)^m$ (for some fixed integer $m > 0$) and $x$.

Again, using L'Hopital's rule repeatedly gives

$$\lim_{x \to \infty} \frac{(\ln x)^m}{x} = \lim_{x \to \infty} \frac{m (\ln x)^{m-1}}{x} = \cdots = \lim_{x \to \infty} \frac{m!}{x} = 0,$$

since $m$ is a constant here. This shows any polynomial beats any constant power of the logarithm.

**Example**

Compare the asymptotic growth of the factorial and the exponential.

First, when $x$ is not an integer, the factorial is defined by the integral

$$x! = \int_0^\infty t^x e^{-t} \, dt.$$

This definition shares properties with the traditional definition of factorial: $0! = 1$, and $x! = x \cdot (x-1)!$, and they coincide when $x$ is an integer. It is not critical to know this integral definition, but know that $n! = n \cdot (n-1) \cdot (n-2) \cdots 2 \cdot 1$ for an integer $n$.

When $n$ is an integer, note that $e^n$ is $e$ multiplied with itself $n$ times. On the other hand, $n!$ also has $n$ factors, most of which are bigger than $e$ (at least when $n$ is bigger than, say, 5). So as $n$ increases to $n + 1$, $e^{n+1}$ only gains another factor of $e$, but $(n+1)!$ gains a factor of $n + 1$. This explains why $n!$ grows faster than $e^n$ (and similarly for any exponential function $a^n$).

Thus we have the following hierarchy of growth, from greatest to smallest:

- Factorial: $n!$.
- Exponential: $a^n$ for any $a > 1$ (usually $e^n$).
- Polynomial: $n^p$ for any $p > 0$.
- Logarithmic: $\ln n$ and other related functions (e.g. $(\ln n)^m$, $\ln(\ln n)$).

When a pair of functions are of a similar type, such as $2^n$ and $e^n$, one must compare these using the limit of the ratio of the functions, as above.
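This hierarchy can be sanity-checked numerically (a quick sketch; the function names are just for this illustration): the ratio of a faster-growing function to a slower one should keep increasing as $n$ grows.

```python
import math

def ratio_table(fast, slow, ns):
    # Ratios fast(n)/slow(n); growing values mean `fast` dominates `slow`.
    return [fast(n) / slow(n) for n in ns]

ns = [10, 20, 40]

# Factorial vs exponential: n! / e^n grows without bound.
fact_vs_exp = ratio_table(math.factorial, lambda n: math.e ** n, ns)

# Exponential vs polynomial: e^n / n^5 grows without bound.
exp_vs_poly = ratio_table(lambda n: math.e ** n, lambda n: n ** 5, ns)

# Polynomial vs logarithm: n / ln(n) grows without bound.
poly_vs_log = ratio_table(lambda n: float(n), math.log, ns)
```

Note that the constants do not matter here: any fixed exponent or base produces the same eventual ordering, just at larger $n$.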

## Hierarchy of functions going to 0

As above, first consider the monomials $x^n$. As $x \to 0$, the hierarchy for monomials is the reverse of what it was for $x \to \infty$. That is, as $x \to 0$, we find that $x \gg x^2 \gg x^3 \gg \cdots$. Intuitively, small numbers become even smaller when you raise them to higher powers.

It is important to keep track of whether a limit is taken as $x \to 0$ or as $x \to \infty$, since in the first case the lowest order terms dominate, and in the second case the highest order terms dominate.

**Example**
Compute $\lim_{x \to 0} \frac{2x + x^2}{x + 3x^2}$ and $\lim_{x \to \infty} \frac{2x + x^2}{x + 3x^2}$.

As $x \to 0$, the higher powers of $x$ go to 0 quickly, leaving the lowest order terms in the limit:

$$\lim_{x \to 0} \frac{2x + x^2}{x + 3x^2} = \lim_{x \to 0} \frac{2x}{x} = 2.$$

As $x \to \infty$, it is the highest order terms (the $x^2$ terms) which dominate, so ignoring the lower order terms leaves

$$\lim_{x \to \infty} \frac{2x + x^2}{x + 3x^2} = \lim_{x \to \infty} \frac{x^2}{3x^2} = \frac{1}{3}$$

in the limit.
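As a numerical check of this two-regime behavior, consider the sample ratio $f(x) = \frac{2x + x^2}{x + 3x^2}$ (chosen for illustration): evaluating it near 0 recovers the ratio of lowest order coefficients, and evaluating it at large $x$ recovers the ratio of highest order coefficients.

```python
def f(x):
    # Lowest order terms (2x vs x) dominate near 0;
    # highest order terms (x^2 vs 3x^2) dominate at infinity.
    return (2 * x + x ** 2) / (x + 3 * x ** 2)

near_zero = f(1e-8)  # close to the x -> 0 limit, which is 2
near_inf = f(1e8)    # close to the x -> infinity limit, which is 1/3
```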

## Big-O notation

When dealing with limits as $x \to 0$ or $x \to \infty$, it is best to have a formal notation for the approximations which result from dropping higher or lower order terms. Big-O notation, pronounced "big oh", provides this formality.

**Big-O notation ($x \to 0$)**

The function $f(x)$ is in $O(x^n)$, as $x \to 0$, if

$$|f(x)| \leq C |x^n|$$

for some constant $C$ and all $x$ sufficiently close to 0. Put another way, a function $f$ is in $O(x^n)$, for $x$ close to 0, if $f$ approaches 0 at least as fast as a constant multiple of $x^n$.

More generally, a function $f(x)$ is in $O(g(x))$, as $x \to 0$, if

$$|f(x)| \leq C |g(x)|$$

for some constant $C$ and all $x$ sufficiently close to 0.

Big-O notation, as $x \to 0$, can be thought of as a more specific way of saying *higher order terms*. Just as a Taylor series can include a different number of terms before indicating that the rest is higher order (depending on the situation), the same is true for big-O.

**Example**

Express $\sin x$ using big-O notation as $x \to 0$.

By taking the Taylor series for $\sin x$, we find that

$$\sin x = x + O(x^3)$$

as $x \to 0$. We could also say

$$\sin x = x - \frac{x^3}{3!} + O(x^5)$$

as $x \to 0$.
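A big-O claim can be sanity-checked numerically. As an illustration, using the standard Taylor fact $\sin x = x + O(x^3)$, the ratio $|\sin x - x| / |x|^3$ should stay below a single constant $C$ (in fact it approaches $1/6$) as $x \to 0$:

```python
import math

def remainder_ratio(x):
    # |sin x - x| / |x|^3 stays below a fixed constant as x -> 0,
    # which is exactly what sin x = x + O(x^3) asserts.
    return abs(math.sin(x) - x) / abs(x) ** 3

ratios = [remainder_ratio(10.0 ** (-k)) for k in range(1, 5)]
```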

The definition for big-O as $x \to \infty$ is almost identical, except the bound needs to apply for all $x$ sufficiently large.

**Big-O notation ($x \to \infty$)**

The function $f(x)$ is in $O(x^n)$, as $x \to \infty$, if

$$|f(x)| \leq C |x^n|$$

for some constant $C$ and all $x$ sufficiently large. In other words, a function $f$ is in $O(x^n)$, as $x \to \infty$, if $f$ approaches infinity no faster than a constant multiple of $x^n$.

More generally, $f(x)$ is in $O(g(x))$, as $x \to \infty$, if

$$|f(x)| \leq C |g(x)|$$

for some constant $C$ and all $x$ sufficiently large.

**Example**

The monomial $x^n$ is in $O(e^x)$ as $x \to \infty$ for any (fixed) $n$. This is a restatement of the above fact that the exponential dominates polynomials as $x \to \infty$.

**Example**

Show that $\sqrt{x^2 + x}$ is in $O(x)$ as $x \to \infty$. Hint: use the binomial series.

Recall that the binomial series

$$(1 + u)^{1/2} = 1 + \frac{1}{2} u - \frac{1}{8} u^2 + \cdots$$

requires that $|u| < 1$. So we must do a little algebra to get the square root to be of this form. Factoring out an $x$ will do the trick:

$$\sqrt{x^2 + x} = x \sqrt{1 + \frac{1}{x}} = x \left( 1 + \frac{1}{2x} - \frac{1}{8x^2} + \cdots \right) = x + \frac{1}{2} - \frac{1}{8x} + \cdots,$$

which is in $O(x)$, since all the terms involving $\frac{1}{x}$ contribute at most a constant when multiplied by $x$.
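The expansion can be checked numerically: the difference $\sqrt{x^2 + x} - x$ should approach the constant $\frac{1}{2}$ as $x$ grows, confirming that the remainder beyond the leading term $x$ is bounded.

```python
import math

def remainder(x):
    # sqrt(x^2 + x) - x approaches 1/2 as x grows,
    # matching the expansion x + 1/2 - 1/(8x) + ...
    return math.sqrt(x * x + x) - x

values = [remainder(10.0 ** k) for k in (2, 4, 6)]
```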

**Example**

Justify the following statements as $x \to 0$:

- $\sin x$ is in $O(x)$ but is not in $O(x^2)$.
- $1 - \cos x$ is in $O(x^2)$ but is not in $O(x^3)$.
- $\ln(1 + x)$ is in $O(x)$ but is not in $O(x^2)$.
- $e^x - 1 - x$ is in $O(x^2)$ but is not in $O(x^3)$.
- $\ln x$ is not in $O(x^p)$ for any $p > 0$.
- $e^{-1/x}$ is in $O(x^p)$ for all $p > 0$ (taking $x \to 0^+$).

The first four of these can be justified by looking at the Taylor series and then finding the monomial of the lowest power (remember that as $x \to 0$, the dominant term is that of the lowest power).

To see that $\ln x$ is not in $O(x^p)$ for any $p > 0$, we must show that given any constant $C$ and any $p$, there exists some arbitrarily small $x$ such that

$$|\ln x| > C |x^p|$$

(this is the negation of the definition of big oh). Solving the above inequality for $x$ (note that $|x^p| \leq 1$ when $0 < x \leq 1$, so it is enough to have $|\ln x| > C$), one finds that if

$$x < e^{-C},$$

then $|\ln x| > C \geq C |x^p|$, and so $\ln x$ is not in $O(x^p)$ for any $p$.

Finally, to see that $e^{-1/x}$ is in $O(x^p)$, it suffices to show that

$$e^{-1/x} \leq x^p$$

for all $x > 0$ sufficiently small. Taking the natural log of both sides (this preserves the inequality since log is an increasing function) gives

$$-\frac{1}{x} \leq p \ln x.$$

Rearranging this by multiplying both sides by $-x$ (recall that multiplying an inequality by a negative number reverses the inequality) gives

$$1 \geq -p x \ln x.$$

But recall from a previous module that $x \ln x \to 0$ as $x \to 0^+$. This ensures that no matter the (fixed) value of $p$, for $x$ sufficiently small we will have

$$-p x \ln x \leq 1,$$

and hence

$$e^{-1/x} \leq x^p,$$

as desired.
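Numerically, $e^{-1/x}$ (a standard example of a function vanishing faster than every power of $x$) can be compared against $x^p$: the ratio $e^{-1/x} / x^p$ shrinks as $x$ decreases, even for a deliberately large $p$.

```python
import math

def ratio(x, p):
    # exp(-1/x) / x^p tends to 0 as x -> 0+, no matter how large p is.
    return math.exp(-1.0 / x) / x ** p

p = 10  # a deliberately large power
r_big, r_mid, r_small = ratio(0.1, p), ratio(0.05, p), ratio(0.01, p)
```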

**Example**

Justify the following statements as $x \to \infty$:

- $\arctan x$ is in $O(1)$ as well as $O(x^p)$ for any $p > 0$.
- $\sqrt{x^2 + x}$ is in $O(x)$ but is not in $O(\sqrt{x})$.
- $x + \ln x$ is in $O(x)$ but is not in $O(\ln x)$.
- $e^x + x^2$ is in $O(e^x)$ but is not in $O(x^p)$ for any $p$.
- $\ln(x^2)$ is in $O(\ln x)$ as well as $O(x^p)$ for all $p > 0$.
- $\ln x$ is in $O(x^p)$ for all $p > 0$.

- Note that as $x \to \infty$, $\arctan x \to \frac{\pi}{2}$. Thus $\arctan x$ is bounded, and so it is in $O(1)$ (and every bounded function is in $O(x^p)$ for any $p > 0$).
- Using the binomial series, as in a previous example, shows that

$$\sqrt{x^2 + x} = x + \frac{1}{2} - \frac{1}{8x} + \cdots.$$

Thus, $\sqrt{x^2 + x}$ is in $O(x)$, but not in $O$ of any power smaller than $x$ (in particular, not $O(\sqrt{x})$).

- For large $x$, $\ln x$ is very small compared to $x$, and so $x + \ln x \approx x$. Therefore,

$$x + \ln x$$

is in $O(x)$ but not in $O(\ln x)$, since logarithms are smaller than polynomials.

- Similarly, $e^x + x^2 \approx e^x$ for large $x$. Hence $e^x + x^2$ is in $O(e^x)$ but cannot be bounded by any polynomial.
- A handy property of logarithms tells us that

$$\ln(x^2) = 2 \ln x,$$

which is in $O(\ln x)$ since it is itself a constant multiple of $\ln x$.

- Fix $p > 0$. It suffices to show that for large enough $x$, we have

$$\ln x \leq x^p.$$

Taking the logarithm of both sides gives

$$\ln(\ln x) \leq p \ln x.$$

We saw earlier that any positive power (or positive multiple) of $\ln x$ beats the logarithm $\ln(\ln x)$ for large values of $x$. So (since $p > 0$) this inequality holds for large $x$, and so $\ln x$ is in $O(x^p)$.
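This last fact can be illustrated numerically (a sketch, using $p = 0.1$ as a sample exponent): the ratio $\frac{\ln x}{x^p}$ decreases toward 0 as $x$ grows, even though $x^{0.1}$ grows very slowly.

```python
import math

def ratio(x, p):
    # ln(x) / x^p tends to 0 as x -> infinity, for any fixed p > 0.
    return math.log(x) / x ** p

p = 0.1  # even a tiny power of x eventually beats the logarithm
r1, r2, r3 = ratio(1e10, p), ratio(1e100, p), ratio(1e300, p)
```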

## Application: Error Analysis

When approximating a function by the first few terms of its Taylor series, there is a trade-off between convenience (the ease of the computation) and accuracy. Big-O notation can help keep track of this error.

**Example**
Consider the approximation, for $x$ close to 0,

$$\cos x \approx 1 - \frac{x^2}{2}.$$

So the error of approximating $\cos x$ by $1 - \frac{x^2}{2}$ can be bounded by $C |x|^4$, for some constant $C$, when $x$ is small. Determining a good constant $C$, in general, is tricky, and there is more on error bounds in the Taylor Remainder Theorem module.
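As a sketch of such an error bound, assume the quadratic approximation $\cos x \approx 1 - \frac{x^2}{2}$, whose error is $O(x^4)$ by Taylor's theorem. The error divided by $x^4$ then stays below a fixed constant (close to $1/24$) as $x \to 0$:

```python
import math

def quad_error(x):
    # Error of the approximation cos(x) ~ 1 - x^2/2.
    return abs(math.cos(x) - (1 - x * x / 2))

# Error divided by x^4 stays bounded (near 1/24 ~ 0.0417) as x -> 0.
ratios = [quad_error(10.0 ** (-k)) / (10.0 ** (-k)) ** 4 for k in (1, 2, 3)]
```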

## Application: Computational Complexity

In computer science, an algorithm is a sequence of steps used to solve a problem. For example, there are algorithms to sort a list of numbers and algorithms to find the prime factorization of a number. Computational complexity is a measure of how efficient an algorithm is. Basically, a computer scientist wants to know roughly how the number of computations carried out by the computer will grow as the input (e.g. the length of the list to be sorted, or the size of the number to be factored) gets larger.

### Example: Multiplication

Consider how much work is required to multiply two $n$-digit numbers using the usual grade-school method. There are two phases to working out the product: multiplication and addition.

First, multiply the first number by each of the digits of the second number. For each digit in the second number this requires $n$ basic operations (multiplication of single digits) plus perhaps some "carries", so say a total of $2n$ operations for each digit in the second number. This means that the multiplication phase requires $n \cdot 2n = 2n^2$ basic operations.

The addition phase requires repeatedly adding numbers of up to $2n$ digits together, a total of $n$ times. If each addition requires at most $2n$ operations (including the carries), and there are $n$ additions that must be made, it comes to a total of $2n^2$ operations in the addition phase.

Adding these totals up gives about $4n^2$ total operations. Thus, the total number of basic operations that must be performed in multiplying two $n$-digit numbers is in $O(n^2)$ (since the constant coefficient does not matter).
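The quadratic operation count can be sketched in code. This hypothetical `multiply_count` helper (written just for this illustration) performs grade-school multiplication of two digit strings, counting only the single-digit multiplications and ignoring carries and additions, which already exhibits the $n^2$ growth:

```python
def multiply_count(a, b):
    # Grade-school multiplication of two digit strings. Counts the
    # single-digit multiplications performed (carries and additions
    # ignored), so the count is exactly len(a) * len(b).
    ops = 0
    total = 0
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            total += int(da) * int(db) * 10 ** (i + j)
            ops += 1
    return total, ops

prod, ops = multiply_count("1234", "5678")  # n = 4 digits each
```

Doubling the number of digits quadruples the count, as expected for $O(n^2)$.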

The reason the constant coefficient does not really matter when thinking about computational complexity is that a faster computer can only improve the speed of a computation by a constant factor. The only way to significantly speed up a computation is to somehow drastically cut the number of operations required. The next example shows how important algorithmic improvements can be for computational complexity.

### Example: Sorting

In sorting algorithms, the most basic operation is the comparison. For a sorting algorithm, one wants to know how many comparisons of two numbers will be made, on average.

One common sorting algorithm, which is used by most people who are sorting items by hand, is called Insertion Sort. It turns out that the number of comparisons for Insertion Sort, on average, is in $O(n^2)$ as $n \to \infty$, where $n$ is the length of the list of numbers to be sorted.

A more sophisticated sorting algorithm, called Mergesort, uses on the order of $n \ln n$ comparisons on average. This may not seem like a significant improvement over Insertion Sort, but consider the number of comparisons used to sort a list of 1000000 integers: Insertion Sort would use on the order of $(10^6)^2 = 10^{12}$ comparisons on average, whereas Mergesort uses on the order of $10^6 \cdot \ln(10^6) \approx 10^7$ comparisons. To put that in perspective, if Mergesort took a half second to complete the computation, Insertion Sort would take over ten hours!
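The gap between the two algorithms can be observed directly by instrumenting each sort with a comparison counter (a sketch; the function names are just for this illustration). Even at $n = 2000$, Insertion Sort already makes vastly more comparisons than Mergesort:

```python
import random

def insertion_sort_comparisons(a):
    # Sort a copy by insertion; return the number of comparisons made.
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comps

def merge_sort_comparisons(a):
    # Mergesort; return (sorted list, number of comparisons made).
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_comparisons(a[:mid])
    right, cr = merge_sort_comparisons(a[mid:])
    merged, comps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps

random.seed(0)
data = [random.random() for _ in range(2000)]
insertion_comps = insertion_sort_comparisons(data)
merged, merge_comps = merge_sort_comparisons(data)
```

On a random list of length 2000, Insertion Sort performs on the order of $n^2/4 \approx 10^6$ comparisons while Mergesort needs roughly $n \log_2 n \approx 2 \times 10^4$.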

## Big O in Other Areas of Mathematics

### Stirling's formula

Stirling's formula gives an asymptotic approximation for $n!$:

$$n! \sim \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n \quad \text{as } n \to \infty.$$

In a slightly more precise form, it can be written

$$n! = \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n \left( 1 + O\left( \frac{1}{n} \right) \right).$$
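A quick numerical check of Stirling's formula: the ratio $n! / \left(\sqrt{2\pi n}\,(n/e)^n\right)$ approaches 1 as $n$ grows (it is roughly $1 + \frac{1}{12n}$).

```python
import math

def stirling(n):
    # Stirling's approximation: sqrt(2*pi*n) * (n/e)^n.
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

r10 = math.factorial(10) / stirling(10)
r100 = math.factorial(100) / stirling(100)
```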

### Prime number theorem

In number theory, one very important function, $\pi(x)$, is defined to be the number of primes less than or equal to $x$. The Prime Number Theorem says that

$$\pi(x) = \frac{x}{\ln x} \left( 1 + O\left( \frac{1}{\ln x} \right) \right),$$

so in particular $\pi(x) \sim \frac{x}{\ln x}$ as $x \to \infty$.

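The Prime Number Theorem can be illustrated numerically with a sieve (a sketch; since the error term shrinks only like $\frac{1}{\ln x}$, the ratio is still noticeably above 1 even at $x = 10^6$):

```python
import math

def prime_count(n):
    # pi(n): number of primes <= n, via the sieve of Eratosthenes.
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sum(sieve)

x = 10 ** 6
ratio = prime_count(x) / (x / math.log(x))  # pi(10^6) = 78498
```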
## EXERCISES

- Compute the following limits.

- Simplify the following asymptotic expression as

- Simplify the following asymptotic expression as

- Here are some rules for positive functions with :

Using these, show that simplifies to

- Which of the following are in as ?
