Sorry for the confusion. For dvx/dt=-k/m*x-b/m*vx; on the evolution page:

The left-hand side is dvx/dt, and the variable on the left-hand side is vx.

The variables on the right-hand side are x, vx, k, b, and m.

What I meant was that you cannot split the above equation by writing

dvx/dt=ax; // on the **evolution** page

and then defining

ax=-k/m*x-b/m*vx; // on the **fixed relation** page
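Presumably the reason is that the evolution page's ODE solver re-evaluates the right-hand side at intermediate sub-steps within each step, while a fixed-relation variable is only refreshed once per step, so the solver would integrate with a stale acceleration. A minimal Python sketch of the difference (EJS itself uses Java-like syntax; the names `deriv`, `step_rk4`, and `stale_rhs` are mine, chosen for illustration):

```python
k, b, m = 1.0, 0.2, 1.0

def deriv(state):
    """Full right-hand side, evaluated fresh at every sub-step."""
    x, vx = state
    ax = -k/m * x - b/m * vx
    return (vx, ax)

def step_rk4(state, h, rhs):
    """One fourth-order Runge-Kutta step for an (x, vx) state."""
    def nudge(s, kv, c):
        return (s[0] + c * kv[0], s[1] + c * kv[1])
    k1 = rhs(state)
    k2 = rhs(nudge(state, k1, h / 2))
    k3 = rhs(nudge(state, k2, h / 2))
    k4 = rhs(nudge(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def stale_rhs(state):
    """Mimics ax defined on the fixed relation page: ax is frozen
    at the start of the step instead of tracking the sub-steps."""
    x, vx = state
    ax = -k/m * x - b/m * vx
    return lambda s: (s[1], ax)

h = 0.1
fresh = stale = (1.0, 0.0)
for _ in range(100):
    fresh = step_rk4(fresh, h, deriv)
    stale = step_rk4(stale, h, stale_rhs(stale))
print(abs(fresh[0] - stale[0]))  # the two trajectories visibly drift apart
```

With the frozen acceleration the method silently degrades to first-order accuracy, which is exactly why the full expression has to live on the evolution page.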

The following information is copied from http://calculuslab.deltacollege.edu/ODE/7-C-3/7-C-3-h.html

### Numerical Methods for Solving Differential Equations

### The Runge-Kutta Method

### Theoretical Introduction

In the last lab you learned to use Heun's Method to generate a numerical solution to an initial value problem of the form:

*y '* = *f*(*x, y*)

*y*(*x*_{o}) = *y*_{o}

We discussed the fact that Heun's Method was an improvement over the rather simple Euler Method, and that though it uses Euler's method as a basis, it goes beyond it, attempting to compensate for the Euler Method's failure to take the curvature of the solution curve into account. Heun's Method is one of the simplest of a class of methods called **predictor-corrector algorithms**. In this lab we will address one of the most powerful predictor-corrector algorithms of all—one which is so accurate, that most computer packages designed to find numerical solutions for differential equations will use it by default—the **fourth order Runge-Kutta Method**. (For simplicity of language we will refer to the method as simply the *Runge-Kutta Method* in this lab, but you should be aware that Runge-Kutta methods are actually a general class of algorithms, the fourth order method being the most popular.)
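As a reminder of what a predictor-corrector step looks like, Heun's Method can be sketched in a few lines of Python (a generic illustration, not code from this lab):

```python
def heun_step(f, x, y, h):
    """One Heun step: Euler predicts the next y, the trapezoid rule corrects it."""
    y_pred = y + h * f(x, y)                          # predictor (Euler)
    y_corr = y + h / 2 * (f(x, y) + f(x + h, y_pred)) # corrector (average slope)
    return x + h, y_corr

# One step of y' = y from (0, 1) with h = 0.1:
x1, y1 = heun_step(lambda x, y: y, 0.0, 1.0, 0.1)
print(y1)  # 1.105, versus the exact e**0.1 = 1.10517...
```

The Runge-Kutta method below pushes this predict-and-correct idea much further.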

The Runge-Kutta algorithm may be very crudely described as "Heun's Method on steroids." It takes to extremes the idea of correcting the predicted value of the next solution point in the numerical solution. (It should be noted here that the actual, formal derivation of the Runge-Kutta Method will **not** be covered in this course. The calculations involved are complicated, and rightly belong in a more advanced course in differential equations, or numerical methods.) Without further ado, using the same notation as in the previous two labs, here is a summary of the method:

*x*_{n+1} = *x*_{n} + *h*

*y*_{n+1} = *y*_{n} + (1/6)(*k*_{1} + 2*k*_{2} + 2*k*_{3} + *k*_{4})

where

*k*_{1} = *h* *f*(*x*_{n}, *y*_{n})

*k*_{2} = *h* *f*(*x*_{n} + *h*/2, *y*_{n} + *k*_{1}/2)

*k*_{3} = *h* *f*(*x*_{n} + *h*/2, *y*_{n} + *k*_{2}/2)

*k*_{4} = *h* *f*(*x*_{n} + *h*, *y*_{n} + *k*_{3})
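Translated directly into code, one step of the method reads as follows (a Python sketch; the function names are mine):

```python
import math

def rk4_step(f, x, y, h):
    """One step of the fourth-order Runge-Kutta method."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return x + h, y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Solve y' = y, y(0) = 1 out to x = 1; the exact answer is e.
f = lambda x, y: y
x, y = 0.0, 1.0
for _ in range(10):          # ten steps of size h = 0.1
    x, y = rk4_step(f, x, y, 0.1)
print(y, math.e)             # agree to about six decimal places
```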

Even though we do not plan on deriving these formulas formally, it is valuable to understand the geometric reasoning that supports them. Let's briefly discuss the components in the algorithm above.

First we note that, just as with the previous two methods, the Runge-Kutta method iterates the *x*-values by simply adding a fixed step-size of *h* at each iteration.

The *y*-iteration formula is far more interesting. It is a weighted average of four values—*k*_{1}, *k*_{2}, *k*_{3}, and *k*_{4}. Visualize distributing the factor of 1/6 from the front of the sum. Doing this we see that *k*_{1} and *k*_{4} are given a weight of 1/6 in the weighted average, whereas *k*_{2} and *k*_{3} are weighted 1/3, or twice as heavily as *k*_{1} and *k*_{4}. (As usual with a weighted average, the sum of the weights 1/6, 1/3, 1/3 and 1/6 is 1.) So what are these *k*_{i} values that are being used in the weighted average?

*k*_{1} you may recognize, as we've used this same quantity on both of the previous labs. This quantity, *h* *f*(*x*_{n}, *y*_{n}), is simply Euler's prediction for what we've previously called Δ*y*—the vertical jump from the current point to the next Euler-predicted point along the numerical solution.

*k*_{2} we have never seen before. Notice the *x*-value at which it is evaluating the function *f*. *x*_{n} + *h*/2 lies halfway across the prediction interval. What about the *y*-value that is coupled with it? *y*_{n} + *k*_{1}/2 is the current *y*-value plus half of the Euler-predicted Δ*y* that we just discussed as being the meaning of *k*_{1}. So this too is a halfway value, this time vertically halfway up from the current point to the Euler-predicted next point. To summarize, then, the function *f* is being evaluated at a point that lies halfway between the current point and the Euler-predicted next point. Recalling that the function *f* gives us the *slope* of the solution curve, we can see that evaluating it at the halfway point just described, i.e. *f*(*x*_{n} + *h*/2, *y*_{n} + *k*_{1}/2), gives us an estimate of the slope of the solution curve at this halfway point. Multiplying this slope by *h*, just as with the Euler Method before, produces a prediction of the *y*-jump made by the actual solution across the whole width of the interval, only this time the predicted jump is not based on the slope of the solution at the left end of the interval, but on the estimated slope halfway to the Euler-predicted next point. Whew! Maybe that could use a second reading for it to sink in!

*k*_{3} has a formula which is quite similar to that of *k*_{2}, except that where *k*_{1} used to be, there is now a *k*_{2}. Essentially, the *f*-value here is yet another estimate of the slope of the solution at the "midpoint" of the prediction interval. This time, however, the *y*-value of the midpoint is not based on Euler's prediction, but on the *y*-jump predicted already with *k*_{2}. Once again, this slope-estimate is multiplied by *h*, giving us yet another estimate of the *y*-jump made by the actual solution across the whole width of the interval.

*k*_{4} evaluates *f* at *x*_{n} + *h*, which is at the extreme right of the prediction interval. The *y*-value coupled with this, *y*_{n} + *k*_{3}, is an estimate of the *y*-value at the right end of the interval, based on the *y*-jump just predicted by *k*_{3}. The *f*-value thus found is once again multiplied by *h*, just as with the three previous *k*_{i}, giving us a final estimate of the *y*-jump made by the actual solution across the whole width of the interval.
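To get a concrete feel for these four quantities, here they are computed for *y '* = *y* at the point (0, 1) with *h* = 0.1 (a throwaway Python check, not part of the lab itself):

```python
h = 0.1
f = lambda x, y: y               # for y' = y the exact jump is e**h - 1
x0, y0 = 0.0, 1.0

k1 = h * f(x0, y0)                # Euler's jump:                  0.1
k2 = h * f(x0 + h/2, y0 + k1/2)   # midpoint slope, Euler-based y: 0.105
k3 = h * f(x0 + h/2, y0 + k2/2)   # midpoint slope, k2-based y:    0.10525
k4 = h * f(x0 + h, y0 + k3)       # right-end slope, k3-based y:   0.110525
jump = (k1 + 2*k2 + 2*k3 + k4) / 6
print(jump)  # 0.10517083..., very close to the exact e**0.1 - 1 = 0.10517091...
```

Each *k*_{i} is a plausible estimate of the whole jump on its own; the weighted average is dramatically better than any single one.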

In summary, then, each of the *k*_{i} gives us an estimate of the size of the *y*-jump made by the actual solution across the whole width of the interval. The first one uses Euler's Method, the next two use estimates of the slope of the solution at the midpoint, and the last one uses an estimate of the slope at the right end-point. Each *k*_{i} uses the earlier *k*_{i} as a basis for its prediction of the *y*-jump.

This means that the Runge-Kutta formula for *y*_{n+1}, namely:

*y*_{n+1} = *y*_{n} + (1/6)(*k*_{1} + 2*k*_{2} + 2*k*_{3} + *k*_{4})

is simply the *y*-value of the current point plus a weighted average of four different *y*-jump estimates for the interval, with the estimates based on the slope at the midpoint being weighted twice as heavily as those using the slope at the end-points.
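The payoff of this particular weighting is fourth-order accuracy: halving the step size should shrink the global error by roughly 2^4 = 16. A quick Python check on *y '* = *y*, *y*(0) = 1 over [0, 1] (my own test harness, not from the lab):

```python
import math

def rk4_solve(f, x0, y0, x_end, n):
    """Integrate y' = f(x, y) from x0 to x_end in n Runge-Kutta steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        x, y = x + h, y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

f = lambda x, y: y
errors = [abs(rk4_solve(f, 0.0, 1.0, 1.0, n) - math.e) for n in (10, 20, 40)]
print(errors[0] / errors[1], errors[1] / errors[2])  # both ratios are roughly 16
```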

As we have just seen, the Runge-Kutta algorithm is a little hard to follow even when one only considers it from a geometric point of view. In reality the formula was not originally derived in this fashion, but with a purely analytical approach. After all, among other things, our geometric "explanation" doesn't even account for the weights that were used. If you're feeling ambitious, a little research through a decent mathematics library should yield a detailed analysis of the derivation of the method.