
Floating in C++ - part 7 - Runge-Kutta

The introduction to this article series can be found here.

In the last article, I mentioned that there are many algorithms for solving ODEs, but showed only the implementations of explicit and implicit Euler. The reason behind my decision was that those algorithms, and many others as well, are part of a larger family of algorithms: the Runge-Kutta methods 🗄️.

Again, Wikipedia does a great job of describing how such a system is set up. TL;DR: the Runge-Kutta methods approximate the Cauchy problem through a set of equations. By putting those equations together we obtain the Runge-Kutta matrix, and thanks to that, approximating the Cauchy problem is equivalent to solving a linear system.
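In formulas: for the Cauchy problem y' = f(t, y), y(t₀) = y₀, one step of size h of an s-stage Runge-Kutta method computes (with the nodes a, the coefficients b, and the weights c named as in the code below)

$$k_i = f\Bigl(t_n + a_i h,\; y_n + h \sum_{j=1}^{s} b_{ij} k_j\Bigr), \qquad i = 1, \dots, s$$

$$y_{n+1} = y_n + h \sum_{i=1}^{s} c_i k_i$$

For an explicit method b_ij = 0 whenever j ≥ i, so each k_i depends only on the stages already computed.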

The equations of the Runge-Kutta methods are uniquely identified by a set of coefficients, weights, and nodes. Those are commonly grouped together in a so-called Butcher tableau.
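As a minimal sketch (the actual layout in my library may differ), a Butcher tableau for an s-stage method could be stored as

#include <array>
#include <cstddef>

// hypothetical storage for the Butcher tableau of an s-stage method
template <std::size_t s>
struct butcher_tableau {
	std::array<double, s> a; // nodes
	std::array<std::array<double, s>, s> b; // Runge-Kutta matrix
	std::array<double, s> c; // weights
};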

I’ve collected some tableaux found in textbooks and online. A nice list can also be found here. The most famous tableaux are surely Forward Euler, Backward Euler, and the midpoint method, since they are very simple to explain and to implement by hand.
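For example, the explicit midpoint method is a two-stage method; with the hypothetical structure above, its tableau would be

// explicit midpoint method:
// y_{n+1} = y_n + h*f(t_n + h/2, y_n + (h/2)*f(t_n, y_n))
const butcher_tableau<2> midpoint = {
	{0.0, 0.5}, // nodes
	{{{0.0, 0.0},
	  {0.5, 0.0}}}, // matrix, strictly lower triangular: the method is explicit
	{0.0, 1.0} // weights
};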

Explicit Methods

The Runge-Kutta methods can be split into three main families: explicit, semi-implicit, and implicit. Then there are adaptive and non-adaptive methods… in fact, one can put together different approximation techniques, prove that they converge to the solution, and rediscover a new family of approximation methods.

The easiest family of methods to implement is the explicit one. Since the Runge-Kutta matrix of an explicit method is strictly lower triangular, the system can be solved by simple forward substitution, and there is no need for a particularly robust matrix library. I did not implement the semi-implicit and implicit methods, as I believe that a third-party library with a robust linear solver is required to get good results.

If we are only interested in one-dimensional systems, the code will look like

std::array<double, N> K = {};
for(std::size_t i = 0; i != N; ++i){
	double kb = 0;
	for(std::size_t j = 0; j != i; ++j){ // explicit method: B is strictly lower triangular, so only j < i contributes
		kb += K[j] * B[{i,j}];
	}
	double t_i = std::fma(a[i], h, t);
	K[i] = f(t_i, y + h*kb);
}

double kc = 0;
for(std::size_t i = 0; i != N; ++i){
	kc += K[i] * c[i];
}
y += h*kc;
t += h;
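For example, putting the snippet above in a loop and instantiating it with the midpoint tableau for the problem y' = y, y(0) = 1, would look roughly like this (a self-contained sketch: the driver loop is hypothetical, and plain arrays replace my matrix type):

#include <array>
#include <cmath>
#include <cstddef>
#include <cstdio>

int main(){
	constexpr std::size_t N = 2; // explicit midpoint method
	const std::array<double, N> a = {0.0, 0.5}; // nodes
	const double B[N][N] = {{0.0, 0.0}, {0.5, 0.0}}; // matrix
	const std::array<double, N> c = {0.0, 1.0}; // weights
	const auto f = [](double /*t*/, double y){ return y; }; // y' = y

	double t = 0.0;
	double y = 1.0; // y(0) = 1
	const double h = 0.1;
	while(t < 1.0 - h/2){ // integrate up to t = 1
		std::array<double, N> K = {};
		for(std::size_t i = 0; i != N; ++i){
			double kb = 0;
			for(std::size_t j = 0; j != i; ++j){
				kb += K[j] * B[i][j];
			}
			K[i] = f(std::fma(a[i], h, t), y + h*kb);
		}
		double kc = 0;
		for(std::size_t i = 0; i != N; ++i){
			kc += K[i] * c[i];
		}
		y += h*kc;
		t += h;
	}
	std::printf("y(1) ~= %f, exact e = %f\n", y, std::exp(1.0)); // ~2.714 vs ~2.718
}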

For a multidimensional system, the code will look similar to this

calcnum::matrix<yN, N> K; // the stages k_i, stored column by column
for(std::size_t i = 0; i != N; ++i){
	double t_i = std::fma(a[i], h, t);
	auto Ki = f(t_i, y + h*K*Bt[i]); // y_n + h*sum_j(b_ij*k_j)
	set_col(K, i, Ki);
}
y += h*K*c; // y_{n+1} = y_n + h*sum_i(c_i*k_i)
t += h;

The code looks extremely compact and similar to the equations that can be found in most textbooks. That is thanks to language features like operator overloading (which lets us provide custom multiplication and sum operators), auto, function objects, and function overloading. If we strip those features from C++ (i.e. we nearly go back to C), we would need to write something like

calcnum::matrix<yN, N> K;
for(std::size_t i = 0; i != N; ++i){
	double t_i = std::fma(get_element(a, i), h, t);
	calcnum::vector<yN> hKB_i = mult_mat_by_scalar(h, mult_mat_by_vec(K, get_row_vec(Bt, i), yN, N), yN);
	calcnum::vector<yN> y_i = sum_vec(y, hKB_i, yN);
	calcnum::vector<yN> Ki = apply_fun(f, t_i, y_i, yN);
	set_col(K, i, Ki, yN);
}
y = sum_vec(y, mult_mat_by_scalar(h, mult_mat_by_vec(K, c, yN, N), yN), yN);
t += h;

In fact, in C it’s a common convention to pass pointers to matrices and vectors instead of returning them by value, so the code would resemble the mathematical formulas even less. And it’s always confusing to remember which parameter stands for the number of rows and which for the number of columns. The type system of C++ makes it much easier to avoid this kind of error; unfortunately, dynamic languages like Octave, which provide a very good syntax, cannot take advantage of this information. The sizes of matrices are only used and checked when the interpreter runs the relevant expression.
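To illustrate the point (a minimal sketch, not how calcnum is implemented): once the sizes are part of the type, a dimension mismatch becomes a compile-time error instead of a runtime one.

#include <cstddef>

// hypothetical fixed-size matrix: the dimensions are template parameters
template <std::size_t Rows, std::size_t Cols>
struct mat {
	double data[Rows][Cols] = {};
};

// multiplication is defined only when the inner dimensions agree
template <std::size_t R, std::size_t I, std::size_t C>
mat<R, C> operator*(const mat<R, I>& lhs, const mat<I, C>& rhs){
	mat<R, C> res;
	for(std::size_t r = 0; r != R; ++r){
		for(std::size_t c = 0; c != C; ++c){
			for(std::size_t i = 0; i != I; ++i){
				res.data[r][c] += lhs.data[r][i] * rhs.data[i][c];
			}
		}
	}
	return res;
}

int main(){
	mat<2, 3> m23;
	mat<3, 4> m34;
	auto ok = m23 * m34; // fine: the result is a mat<2, 4>
	// auto ko = m34 * m23; // does not compile: inner dimensions do not match
	(void)ok;
}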

There seems to be a linter for Matlab 🗄️, which I’ve never tried. From what can be gathered from the examples, it does not seem able to verify the correctness of the code, but it provides suggestions on how to improve it.

I guess that a linter that checks matrix and vector sizes should be possible; even if it might not cover every use case, it could avoid many common errors.

Going back to C++ and numeric calculus: my implementation can be found on GitHub. I left both the one-dimensional and the multidimensional explicit Runge-Kutta iterators.

Adaptive Methods

The Runge-Kutta methods can also be categorized by other criteria, not only as explicit, semi-implicit, or implicit.

An important set of methods are the adaptive ones. Adaptive methods are iterative methods that analyze how well they are approximating the problem and decide to take smaller or larger steps.

This is important in the case of stiff equations: unless the step size is very small, the approximated solution will be completely different from the exact one. Since we would like our algorithms to finish, using a minuscule step is not a viable solution, but a step size that is too big would not be helpful either.

A first naive idea is to calculate the solution with two different step sizes, for example h and 0.5*h, and compare the results. If the solutions are similar, then the equation is probably not stiff at that point, and we can use a larger step. If the solutions are not similar, their difference is an estimate of the local error, and we should probably use a smaller step.

The implementation can be found, like all other algorithms, on GitHub.

Fehlberg Adaptive Methods

One of the most expensive parts of the Runge-Kutta method is calculating the K matrix, and, without further assumptions, this matrix is different for every step. If we take a closer look at the equations or at our implementations, we may notice that it does not depend on the weights at all. This means that we could reuse the same matrix for two Butcher tableaux that differ only in the weights. And that’s the idea behind the Runge-Kutta-Fehlberg method.

Compared to the previous adaptive method, the algorithm performs better because we need roughly a third of the matrix computations: the step-doubling approach computes three K matrices per trial (one step of size h plus two steps of size h/2), while here a single one is enough.

Why is solving the same problem with two different sets of weights a good idea? Since both methods give us an approximation of the solution, their difference gives us information about the actual error. By using that information, it is possible to have better control of the total error by resizing the steps.

Since the two methods have a different order of accuracy, when the difference between their results is under a certain threshold, we can use a larger step. If not, we should probably use a smaller step.

while(t < tmax){
	auto err = std::numeric_limits<double>::infinity();
	for(int i = 0; i != nitmax; ++i){
		const auto hmin2 = std::min(hmin, tmax-t);
		h = (i == nitmax-1) ? hmin2 : calcnum::fclamp(h*q, hmin2, hmax);

		cauchy_probN<1> cpN_sub = {
		    cpN.f,
		    t,
		    y
		};

		calcnum::rk_exp_iter<rk_size> rk(cpN_sub, fehl_b.a, Bt, fehl_b.c1, h);
		++rk;

		const auto hKc_diff = h * rk->K * cdiff; // cdiff: difference of the two weight vectors
		err = calcnum::norm_inf(hKc_diff); // estimated local error
		q = std::pow( toll/err , 1/d(fehl_b.order+1) ); // factor for resizing the step

		if(err <= toll){ // set y, t and exit inner loop
			y = rk->y + hKc_diff;
			t = rk->t;
			break;
		}
	}
}

As shown in the example, at every step a solution of the Runge-Kutta system is approximated. Through the matrix K and the difference of the weights, it is possible to approximate the local error between the two methods.

Compared to the first adaptive method

while(t < tmax){
	auto err = std::numeric_limits<double>::infinity();
	for(int i = 0; i != nitmax; ++i){
		const auto hmin2 = std::min(hmin, tmax-t); // adapt hmin to avoid stepping past tmax
		REQUIRE(hmin2 > 0);
		h = (i == nitmax-1) ? hmin2 : calcnum::fclamp(h*q, hmin2, hmax);
		cauchy_probN<1> cpN_sub = {
		    cpN.f,
		    t,
		    y
		};

		calcnum::rk_exp_iter<rk_size> rk(cpN_sub, fehl_b.a, Bt, fehl_b.c1, h);
		calcnum::rk_exp_iter<rk_size> rk2(cpN_sub, fehl_b.a, Bt, fehl_b.c1, h/2);
		++rk;
		++rk2; ++rk2;

		err = calcnum::norm_inf(rk->y - rk2->y)/(std::pow(2.0,p)-1); // step-doubling error estimate
		q = std::pow(toll/(err*std::pow(2,p)), 1/d(p+1)); // factor for resizing the step

		if(err <= toll){ // set y, t and exit inner loop
			y = rk2->y;
			t = rk->t;
			break;
		}
	}
}

We are doing only one-third of the work. It might not look like a big difference, but at every increment of `rk` and `rk2` we are solving a matrix system of size rk_size x rk_size, and we solve those systems at least(!) once per step just to see whether we need to change the step size.


Do you want to share your opinion? Is there an error, or some parts that are not clear enough?

You can contact me anytime.