## Numerical ODEs

The previous few modules have discretized functions, derivatives, and integrals. This module shows how discrete methods can be used to approximate solutions to problems in the continuous realm. One such application was Newton's method for approximating roots of functions, seen back in Linear Approximations. This module deals with approximating solutions of continuous differential equations.

In certain situations, differential equations can be solved exactly. For example, a separable differential equation

$$\frac{dy}{dx} = f(x)g(y)$$

is solved by rearranging and integrating, as seen in Antidifferentiation. A linear first order differential equation

$$\frac{dy}{dx} + p(x)y = q(x)$$

is solved by an application of the product rule, as seen in More differential equations. However, there are many differential equations of the form

$$\frac{dy}{dx} = f(x, y)$$

which cannot be solved exactly by the above methods. This is the situation where techniques known as *numerical ordinary differential equations* can be used to approximate a solution. There are three methods covered in this module:

- Euler's method
- Midpoint method
- Runge-Kutta method

## Euler's method

*Euler's method* uses a difference equation to approximate the solution of an initial value problem. More specifically, given the differential equation $y' = f(x, y)$ and initial value $y(x_0) = y_0$, Euler's method approximates $y(b)$ for some $b > x_0$.

One way to visualize the problem is to imagine a river where the water is going in different directions at different locations. If one drops something that floats (perhaps a very small rock, or a duck) at different locations in the river, where would the object end up further down the river?

To compute the approximation, first a positive integer $n$ is chosen, and the interval from $x_0$ to $b$ is split into $n$ equal subintervals, giving the sequence of time points $x_0 < x_1 < \cdots < x_n = b$, where $x_i = x_0 + ih$. The time step is $h = \frac{b - x_0}{n}$. Then there is a corresponding sequence $y_0, y_1, \ldots, y_n$, where $y_0$ is the initial condition and each subsequent $y_{i+1}$ is an approximation of $y(x_{i+1})$, given by the update rule

$$y_{i+1} = y_i + h f(x_i, y_i).$$

This recurrence is sensible by considering the discretization of the original differential equation:

$$\frac{y_{i+1} - y_i}{h} \approx f(x_i, y_i),$$

and then rearranging.

This process uses a linearization at each point $(x_i, y_i)$ to get to the next point $(x_{i+1}, y_{i+1})$, and this is repeated to get each subsequent point.

Remember that a linearization is only good when the change in input, in this case $h$, is small. Therefore, a bigger value of $n$ (that is, a smaller step size $h$) gives a better approximation, but requires more computation.
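The update rule above translates directly into code. Here is a minimal sketch in Python (the function name `euler` and the sample right-hand side $y' = y$ are our own illustrative choices, not part of the text):

```python
def euler(f, x0, y0, b, n):
    """Approximate y(b) for y' = f(x, y), y(x0) = y0,
    using n Euler steps of size h = (b - x0) / n."""
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)  # update rule: y_{i+1} = y_i + h f(x_i, y_i)
        x = x + h            # x_{i+1} = x_i + h
    return y

# Sample run: y' = y, y(0) = 1, so the true value is y(1) = e.
print(euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000))  # ≈ 2.7169
```

Increasing `n` shrinks the step size $h$ and moves the output closer to $e \approx 2.71828$, at the cost of more iterations.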

**Example**

Let $y' = y$ with $y(0) = 1$. Use Euler's method, with $n$ left as a variable, to approximate $y(1)$. What happens as $n \to \infty$?

Note that here $h = \frac{1}{n}$ and

$$y_{i+1} = y_i + h y_i = y_i\left(1 + \frac{1}{n}\right)$$

(this is independent of $x_i$). Iterating this gives

$$y_n = \left(1 + \frac{1}{n}\right)^n.$$

As $n$ increases, the approximation gets better and better, and one finds that

$$y(1) \approx \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n.$$

Call this limit $L$ and take the natural log of both sides. Then we find that

$$\ln L = \lim_{n \to \infty} n \ln\left(1 + \frac{1}{n}\right) = \lim_{n \to \infty} \frac{\ln\left(1 + \frac{1}{n}\right)}{1/n} = 1,$$

by L'Hôpital's rule. Therefore, $L = e$. So $y(1) \approx e$, which matches the value from solving the problem by separation of variables.
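This convergence is easy to watch numerically. The loop below (our own check, not part of the text) iterates the update rule $y_{i+1} = y_i(1 + 1/n)$ for several values of $n$:

```python
import math

# For y' = y, y(0) = 1, n Euler steps of size h = 1/n give
# y_n = (1 + 1/n)^n, which approaches e as n grows.
for n in [1, 10, 100, 1000]:
    y = 1.0
    for _ in range(n):
        y = y * (1 + 1 / n)  # y_{i+1} = y_i (1 + 1/n)
    print(n, y, abs(y - math.e))  # the error shrinks as n grows
```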

**Example**

Consider the differential equation

$$y' = f(x, y).$$

Use Euler's method to estimate $y(x_2)$, where the initial conditions are $x_0$ and $y_0 = y(x_0)$, and the step size is $h$ (so that $x_2 = x_0 + 2h$, and two steps are taken).

Here, we have the update rule

$$y_{i+1} = y_i + h f(x_i, y_i).$$

So, using the initial conditions, we have

$$y_1 = y_0 + h f(x_0, y_0).$$

Then we have that

$$x_1 = x_0 + h.$$

So

$$y_2 = y_1 + h f(x_1, y_1).$$

And thus we have the estimate $y(x_2) \approx y_2$.
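To make the two steps above concrete, here is one run with specific values, which are our own hypothetical choices rather than the ones from the original example: $y' = x + y$, $y(0) = 1$, and $h = \tfrac{1}{2}$.

```python
# Two Euler steps for the sample problem y' = x + y, y(0) = 1,
# step size h = 1/2 (these particular values are illustrative).
f = lambda x, y: x + y
x, y, h = 0.0, 1.0, 0.5

y = y + h * f(x, y)  # y_1 = 1 + 0.5 * (0 + 1) = 1.5
x = x + h            # x_1 = 0.5
y = y + h * f(x, y)  # y_2 = 1.5 + 0.5 * (0.5 + 1.5) = 2.5
x = x + h            # x_2 = 1.0

print(y)  # 2.5, the Euler estimate of y(1)
```

For comparison, the exact solution of this problem is $y = 2e^x - x - 1$, so $y(1) = 2e - 2 \approx 3.44$; with only two steps the estimate is still quite rough.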

### Taylor series perspective

If we think of $y$ as a function of $x$ and expand the Taylor series of $y$ about $x_i$, we have

$$y(x_{i+1}) = y(x_i) + h\,y'(x_i) + \frac{h^2}{2}\,y''(x_i) + \cdots,$$

so Euler's method can be seen as taking the first order Taylor approximation and using it to form the recursion relation. The above equation shows that the error for a single step is in $O(h^2)$, so the error over all $n = O(1/h)$ steps is

$$n \cdot O(h^2) = O(h).$$
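The $O(h)$ claim can be checked empirically: halving $h$ (doubling $n$) should roughly halve the error. A quick experiment of our own, using $y' = y$ with true value $y(1) = e$ as the test problem:

```python
import math

def euler(f, x0, y0, b, n):
    """Euler's method for y' = f(x, y), y(x0) = y0."""
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y, x = y + h * f(x, y), x + h
    return y

# If the global error is O(h), doubling n should roughly halve the error.
prev = None
for n in [10, 20, 40, 80]:
    err = abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, n) - math.e)
    ratio = prev / err if prev is not None else None
    print(n, err, ratio)  # the ratio should be close to 2
    prev = err
```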

## Midpoint method

Another method for solving the differential equation

$$y' = f(x, y)$$

is known as the *midpoint method*. There is still an update rule similar to the one used in Euler's method, but the function $f$ is evaluated at a different point. The idea is to consider the point where Euler's method would have taken us, find the midpoint of that point and our starting point, and use that midpoint as the point at which to evaluate $f$ in the update rule.

As described, it is a little bit complicated, but with some extra notation it becomes clearer. Let $k_1 = h f(x_i, y_i)$. So $k_1$ is the quantity which would be added to $y_i$ in the update rule for Euler's method:

$$y_{i+1} = y_i + k_1.$$

Then the eponymous midpoint which is used in the update rule is

$$\left(x_i + \frac{h}{2},\; y_i + \frac{k_1}{2}\right).$$

So the update rule for the midpoint method is

$$y_{i+1} = y_i + h f\!\left(x_i + \frac{h}{2},\; y_i + \frac{k_1}{2}\right).$$

The midpoint method is a bit more complicated than Euler's method, but it has a benefit. The midpoint method is a second order approximation, and as a result the error turns out to be $O(h^3)$ for an individual step, and hence $O(h^2)$ for the full approximation process. Therefore, it is a more accurate method of approximation.
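In code, the midpoint method is a small modification of the Euler loop. A sketch (the function name `midpoint` is our own):

```python
def midpoint(f, x0, y0, b, n):
    """Midpoint method for y' = f(x, y), y(x0) = y0."""
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)                      # the Euler increment
        y = y + h * f(x + h / 2, y + k1 / 2)  # evaluate f at the midpoint
        x = x + h
    return y

# y' = y, y(0) = 1: with 10 steps the midpoint method is already
# within about 0.005 of e, versus an error of about 0.12 for Euler.
print(midpoint(lambda x, y: y, 0.0, 1.0, 1.0, 10))
```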

## Runge-Kutta method

The final method for solving the above differential equation is called the *Runge-Kutta method*. It is a fourth order approximation: its error for an individual step is $O(h^5)$, and for the whole process is $O(h^4)$. This makes it the most accurate method of the three in this module. It is difficult to describe in an intuitive manner, other than to say it is a sort of weighted average of Euler's method, the midpoint method, and other similar methods. The update rule is as follows:

$$\begin{aligned}
k_1 &= h f(x_i, y_i) \\
k_2 &= h f\!\left(x_i + \tfrac{h}{2},\; y_i + \tfrac{k_1}{2}\right) \\
k_3 &= h f\!\left(x_i + \tfrac{h}{2},\; y_i + \tfrac{k_2}{2}\right) \\
k_4 &= h f(x_i + h,\; y_i + k_3) \\
y_{i+1} &= y_i + \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}.
\end{aligned}$$

Note that $k_1$ is the increase used in Euler's method, and $k_2$ is the increase used in the midpoint method.
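The update rule translates into code as follows (a sketch; the function name `rk4` is our own):

```python
def rk4(f, x0, y0, b, n):
    """Classical fourth-order Runge-Kutta for y' = f(x, y), y(x0) = y0."""
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)                   # Euler increment
        k2 = h * f(x + h / 2, y + k1 / 2)  # midpoint increment
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return y

# y' = y, y(0) = 1: even a single step of size 1 lands close to e.
print(rk4(lambda x, y: y, 0.0, 1.0, 1.0, 1))  # 65/24 ≈ 2.7083
```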

## Comparison of methods

We already know that Euler's method is the simplest of the three, followed by the midpoint method, and then Runge-Kutta. We should expect, then, that Runge-Kutta gives the best approximation, followed by the midpoint method, followed by Euler's method.

**Example**

Use each of the three methods to approximate the solution of

$$y' = y$$

with initial conditions

$$x_0 = 0, \qquad y_0 = 1$$

(we already know that $y(1) = e$ because we have solved this differential equation several times already), using a step size of $h = 1$.

It may seem ill-advised to use a step size equal to the distance from the start time to the end time, but this is just for the sake of comparison. Note that in this example,

$$f(x, y) = y.$$

**Euler's method** gives

$$y_1 = y_0 + h f(x_0, y_0) = 1 + 1 \cdot 1 = 2,$$

which is a pretty rough estimate of $e \approx 2.718$. From this computation we have that $k_1 = h f(x_0, y_0) = 1$ for the midpoint method.

The **midpoint method** gives

$$y_1 = y_0 + h f\!\left(x_0 + \tfrac{h}{2},\; y_0 + \tfrac{k_1}{2}\right) = 1 + f\!\left(\tfrac{1}{2}, \tfrac{3}{2}\right) = 1 + \tfrac{3}{2} = \tfrac{5}{2},$$

so we see that we have gotten another term of the series $e = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \cdots$ closer to $e$.

For **Runge-Kutta**, we need to compute several different $k$'s. From the computations we did in Euler's method and the midpoint method, we have that

$$k_1 = 1, \qquad k_2 = \frac{3}{2}.$$

Computing the other values, we have

$$k_3 = h f\!\left(x_0 + \tfrac{h}{2},\; y_0 + \tfrac{k_2}{2}\right) = 1 + \tfrac{3}{4} = \tfrac{7}{4}.$$

And

$$k_4 = h f(x_0 + h,\; y_0 + k_3) = 1 + \tfrac{7}{4} = \tfrac{11}{4}.$$

Putting these together (and rearranging the fractions a little bit), we have that

$$y_1 = y_0 + \frac{k_1 + 2k_2 + 2k_3 + k_4}{6} = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} = \frac{65}{24} \approx 2.708,$$

which agrees with two more terms of the series for $e$, making it a very good approximation, despite taking only one step.
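The whole comparison can be reproduced in a few lines (our own check of the arithmetic above):

```python
import math

f = lambda x, y: y          # y' = y, y(0) = 1; the true value is y(1) = e
x0, y0, h = 0.0, 1.0, 1.0   # a single step of size 1

k1 = h * f(x0, y0)
k2 = h * f(x0 + h / 2, y0 + k1 / 2)
k3 = h * f(x0 + h / 2, y0 + k2 / 2)
k4 = h * f(x0 + h, y0 + k3)

euler_y1    = y0 + k1                               # 2
midpoint_y1 = y0 + k2                               # 5/2 = 2.5
rk4_y1      = y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6  # 65/24 ≈ 2.7083

print(euler_y1, midpoint_y1, rk4_y1, math.e)
```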

## EXERCISES

- Given with initial condition , approximate using the midpoint method with step size .