Floating in C++ - part 3 - Integration

The introduction to this article series can be found here, whereas here you can find all posts.

A common task in numerical analysis is to approximate the value of an integral. In general, integrals are difficult to calculate exactly unless the function has a simple form, such as a polynomial, which is why a lot of different techniques exist.

Most of those rely on the concept we already used when interpolating functions:

Replace a complex function with a polynomial of lower degree.

The Newton-Cotes formulas are a group of algorithms that use polynomial interpolation at equidistant nodes. The Gaussian quadrature rule uses Chebyshev nodes and can give us better results.

Newton-Cotes

The implementation of a Newton-Cotes formula, for example the trapezoidal rule, is trivial, since all the formulas are explicit and do not need any memory allocation:

double integrate_trapezoidal(std::function<double(double)> f, closed_interval interv){
	const double h = length(interv);
	// the integral of the straight line through (a, f(a)) and (b, f(b))
	return h*(f(interv.a) + f(interv.b))/2;
}

What we are doing in the piece of code above is approximating the function on the whole interval with a linear function (a polynomial of degree 1) and calculating its integral. A linear function may not approximate our original function very well, so we can split the interval into multiple subintervals and apply the trapezoidal rule on every subinterval. This is what we have done before when approximating functions with polynomials.

double integrate_trapezoidal(std::function<double(double)> f, closed_interval interv, std::size_t subintervals = 1){
	assert(subintervals > 0);
	const double h = length(interv)/fd(subintervals); // fd converts std::size_t to double
	// the endpoints are weighted 1/2, the interior nodes 1
	double result = (f(interv.a) + f(interv.b))/2;
	for(std::size_t i = 1; i != subintervals; ++i){
		double x_i = std::fma(fd(i), h, interv.a);
		result += f(x_i);
	}
	return h*result;
}

The usage of this function is also very simple:

double fun_to_integrate(double x){
	return x - std::sin(x);
}

int main(){
	const closed_interval close_int_to_integrate{0,10};
	auto val1 = integrate_trapezoidal(fun_to_integrate, close_int_to_integrate);
	const std::size_t num_of_subintervals = 10;
	auto val2 = integrate_trapezoidal(fun_to_integrate, close_int_to_integrate, num_of_subintervals);
}
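To see that the composite rule converges, we can compare it with the exact value of the integral; ∫ of x − sin(x) over [0,10] is 50 + cos(10) − 1. Below is a self-contained sketch with the closed_interval helper inlined (so the helper names from this series are not used here):

```cpp
#include <cmath>
#include <cstddef>
#include <functional>

// self-contained composite trapezoidal rule, interval helpers inlined
double trapezoidal(std::function<double(double)> f, double a, double b, std::size_t n){
	const double h = (b - a)/static_cast<double>(n);
	double result = (f(a) + f(b))/2; // endpoints weighted 1/2
	for(std::size_t i = 1; i != n; ++i){
		result += f(std::fma(static_cast<double>(i), h, a)); // interior nodes
	}
	return h*result;
}
```

With 10000 subintervals the error is of the order of 1e-6 for this function, consistent with the O(h²) error of the trapezoidal rule.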

There are two types of Newton-Cotes formulae: open formulae and closed ones. The difference between the two is the set of points where the function is evaluated: closed formulae include the endpoints of the interval, while open formulae evaluate the function only at interior points.

Given the degree of the formula, and whether we want to work on an open or a closed interval, we can generate an appropriate algorithm.

We can even implement a single function, that will work for any degree and for both open and closed formulae.

template<class intervaltype>
double integrate_newton_cotes(std::function<double(double)> f, const intervaltype& interv, std::size_t degree, std::size_t subintervals = 1){
	assert(degree > 0);
	assert(subintervals > 0);
	double par_sum = 0;

	equidistant_nodes_closed ext_nodes(closed_interval{interv.a, interv.b}, subintervals+1);
	for(auto it = std::begin(ext_nodes); it != std::end(ext_nodes)-1; ++it){
		intervaltype interv2 = {*it, *(it+1)};

		auto nodes = make_node_gen(interv2, degree);
		std::vector<double> data_x;
		data_x.reserve(degree);
		std::copy(std::begin(nodes), std::end(nodes), std::back_inserter(data_x));
		for(std::size_t i = 0; i != data_x.size(); ++i){
			auto weight = integrate(lagr_base_poly(std::begin(data_x), std::end(data_x), i), interv2);
			par_sum += f(data_x[i]) * weight;
		}
	}
	return par_sum;
}

The template is necessary so that the same function definition works with both open and closed intervals. Based on the interval type, we can create an appropriate "node generator" with the function make_node_gen. The "node generator" will then produce the nodes we need on the interval where we want to interpolate the function, for example

class equidistant_nodes_closed{
	closed_interval interv;
	std::size_t n;
	double h;
public:
	equidistant_nodes_closed(const closed_interval& interval, std::size_t num_nodes) : interv(interval), n(num_nodes){
		h = length(interv)/(fd(n)-1);
	}
	double operator[](std::size_t i) const {
		assert(i<n);
		return std::fma(i, h, interv.a);
	}
	std::size_t size() const noexcept {
		return n;
	}
	// definition of iterator
	iterator begin() const {
		return iterator(0, *this);
	}
	iterator end() const {
		return iterator(n, *this);
	}
};
inline equidistant_nodes_closed make_node_gen(const closed_interval& interval, std::size_t num_nodes){
	return equidistant_nodes_closed(interval, num_nodes);
}
inline equidistant_nodes_open make_node_gen(const open_interval& interval, std::size_t num_nodes){
	return equidistant_nodes_open(interval, num_nodes);
}

For simplicity, we used equidistant_nodes_closed for iterating over every subinterval. On every subinterval we then created a subinterval of the same type as the one passed to the function and generated the points where we want to interpolate the function. At this point, we can reuse the lagr_base_poly function we defined in the post on interpolating functions to generate the interpolation polynomials. Integrating each of them over the subinterval gives us the weights. At last, we put it all together: multiply the function evaluated at each point by the corresponding weight and sum the resulting values.
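As a sanity check of this construction, here is a self-contained illustration (not the series' lagr_base_poly, which works differently): the i-th Lagrange basis polynomial on the given nodes is built as a plain coefficient vector and integrated exactly, which yields the i-th weight. On the three equidistant nodes of [0,1] this must reproduce Simpson's weights 1/6, 4/6, 1/6:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// build the i-th Lagrange basis polynomial on the nodes as a coefficient
// vector: product over j != i of (x - nodes[j]) / (nodes[i] - nodes[j])
std::vector<double> lagrange_basis_coeffs(const std::vector<double>& nodes, std::size_t i){
	std::vector<double> coeffs{1.0}; // the constant polynomial "1"
	for(std::size_t j = 0; j != nodes.size(); ++j){
		if(j == i) continue;
		const double denom = nodes[i] - nodes[j];
		std::vector<double> next(coeffs.size()+1, 0.0);
		for(std::size_t k = 0; k != coeffs.size(); ++k){
			next[k+1] += coeffs[k]/denom;           // contribution of x * coeff
			next[k]   -= coeffs[k]*nodes[j]/denom;  // contribution of -nodes[j] * coeff
		}
		coeffs = next;
	}
	return coeffs;
}

// integrate a polynomial given by its coefficients exactly over [a,b]
double integrate_poly(const std::vector<double>& coeffs, double a, double b){
	double result = 0;
	for(std::size_t k = 0; k != coeffs.size(); ++k){
		result += coeffs[k]*(std::pow(b, k+1) - std::pow(a, k+1))/static_cast<double>(k+1);
	}
	return result;
}
```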

As always, the whole source code is available on GitHub.

Possible optimizations

As with other algorithms, there are different optimization possibilities. For example, for degree ≤ N, with N defined at compile time, we could create a lookup table where the weights are saved as constants, in order to avoid calculating them every time. This would probably result in a faster and more precise algorithm: if the weights are fixed, we do not need to compute them on every call, and they can be written down with full accuracy, whereas calculating them at runtime generally gives us a worse approximation. If the degree is above the threshold N, we can fall back to the generic formula provided above.
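A possible sketch of such a lookup table: the weights for degrees 1 to 3 are the classic trapezoidal, Simpson and Simpson 3/8 values (the function name and table layout here are made up for illustration):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>

// compile-time weight table for closed Newton-Cotes formulas up to degree 3,
// normalized so that the integral over [a,b] is (b-a) * sum_i w[i] * f(x_i)
constexpr std::size_t max_tabulated_degree = 3;
constexpr std::array<std::array<double, max_tabulated_degree+1>, max_tabulated_degree> newton_cotes_weights{{
	{{1.0/2, 1.0/2, 0.0,   0.0  }}, // degree 1: trapezoidal rule
	{{1.0/6, 4.0/6, 1.0/6, 0.0  }}, // degree 2: Simpson's rule
	{{1.0/8, 3.0/8, 3.0/8, 1.0/8}}, // degree 3: Simpson's 3/8 rule
}};

double integrate_newton_cotes_tabulated(std::function<double(double)> f, double a, double b, std::size_t degree){
	assert(degree >= 1 && degree <= max_tabulated_degree);
	const auto& w = newton_cotes_weights[degree - 1];
	const double h = (b - a)/static_cast<double>(degree);
	double sum = 0;
	for(std::size_t i = 0; i <= degree; ++i){
		sum += w[i]*f(std::fma(static_cast<double>(i), h, a)); // equidistant nodes
	}
	return (b - a)*sum;
}
```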

Usage of std::fma

Attentive readers may have noticed that sometimes I’ve used the function std::fma.

The result of std::fma(x,y,z) is equivalent to (x*y)+z, but more precise: the calculation is done as if with infinite precision and rounded only once at the end.

So why not use it every time we need to compute a sum and a multiplication? Unfortunately, there are use cases where using the multiply–accumulate operation may cause an incorrect result.

If you consider the following 2×2 matrix

| a  a |
| a  a |

For every value of a that represents a finite number, you want its determinant to be exactly zero.

The determinant is simply calculated by

a*a - a*a

and the result is exactly zero if a*a is equal to itself, which is true for all finite floating point numbers (we are therefore supposing that a*a will not overflow).

If we use the multiply–accumulate operation, for example std::fma(a, a, -(a*a)), the value of the determinant would be

a*a + c

with c being a rounded representation of -a*a, and therefore possibly different from the exact value! This imprecision comes from the fact that the multiply–accumulate operation is not symmetric to the operation we are performing: the product inside the fma is exact, while c has already been rounded.
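We can observe this directly: for a = 1 + ε (with ε the machine epsilon), the square a*a is not exactly representable, so the fused version of the determinant is non-zero, while the naive version is exactly zero (assuming the compiler does not itself contract the expression):

```cpp
#include <cmath>
#include <limits>

// determinant of [[a, a], [a, a]], computed the naive way: both products
// are rounded identically, so the difference is exactly zero
// (assuming the compiler does not contract this into an fma itself)
double det_naive(double a){
	return a*a - a*a;
}

// the same determinant with a fused multiply-add: the product inside the
// fma is exact, but -(a*a) was already rounded, so they can differ
double det_fma(double a){
	return std::fma(a, a, -(a*a));
}
```

For a = 1 + ε the fused version returns exactly 2⁻¹⁰⁴: the rounding error of a*a.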

Nevertheless, there are use cases where this higher precision certainly won't do any harm, for example when all addends have the same sign. Therefore, for calculating where the next interval begins, we can safely replace hn = a + h*n with hn = std::fma(h, n, a).

Does the difference matter in practice? For my use cases probably not. For others, it might.

Unfortunately, from this discussion, it looks like the compiler is allowed to rearrange operations in order to produce more accurate results. Therefore a multiplication and an addition might be fused together even if this changes the result of our program. A solution to this particular problem is to write #pragma STDC FP_CONTRACT OFF, as described on cppreference, or to use other compiler-specific switches, such as -ffp-contract=off on GCC and Clang. If in doubt, you can use the online Compiler Explorer to verify what kind of optimisations the compiler is doing for you.
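For example (note that support for this pragma varies between compilers; GCC, for instance, has historically ignored it in favour of the -ffp-contract flag):

```cpp
#include <cmath>

// request that separate multiplies and adds are not fused into an fma
// (compiler support varies; -ffp-contract=off is an alternative on GCC/Clang)
#pragma STDC FP_CONTRACT OFF

double det(double a){
	// with contraction disabled this stays exactly zero for every
	// finite a whose square does not overflow
	return a*a - a*a;
}
```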