# Floating in C++ - part 4 - Testing

The introduction to this article series can be found here, and here you can find all posts.

As I’ve written, for most functions, I’ve tried to ensure that the implementation is correct through multiple tests.

Unfortunately, testing with floating point can be tricky. First of all, because of rounding issues, it is usually not possible to compare floating-point values directly. There are some cases where a direct comparison makes sense, but most of the time what we want to know is whether a value is approximately equal to another. What exactly does "approximately" mean? It actually depends on the value itself. The difference between `149756293874562387562345` and `149756293874562387562346` is exactly the same as the difference between `0` and `1`. While in the first case the difference is probably negligible, in the second case such a difference is probably too big to be ignored.
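This suggests comparing by relative difference, with an absolute fallback for values near zero. A helper like the `approx_equal` used later in this post might look like the following sketch (the default tolerances are my own choice, not part of any standard API):

```cpp
#include <algorithm>
#include <cmath>

// Compare two doubles by relative difference, falling back to an
// absolute tolerance for values near zero.
bool approx_equal(double a, double b,
                  double rel_tol = 1e-9, double abs_tol = 1e-12) {
    const double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= std::max(abs_tol, rel_tol * scale);
}
```

With this definition, two huge values differing only in the last digits compare equal, while `0` and `1` do not.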

## Comparing Values

The CATCH test framework gives us some utility classes to ease testing with floating-point values. Instead of writing

```cpp
auto v1 = 1.2;
auto v2 = 4.9;
REQUIRE(approx_equal(v1, v2));
```

we can write

```cpp
auto v1 = 1.2;
auto v2 = 4.9;
REQUIRE(v1 == Approx(v2));
```

It seems like a minor difference, but if the test case fails, the second version gives us a lot more information than the first version:

```
REQUIRE( approx_equal(v1, v2) )
with expansion:
  false
```

versus

```
REQUIRE( v1 == Approx(v2) )
with expansion:
  1.2 == Approx( 4.9 )
```

This makes writing and evaluating tests a lot easier and reduces the need for debugging and manually printing additional information.

## Calculating the Rate of Convergence of an Algorithm

When validating an algorithm, it makes little sense to compare only the final result.

For example, when implementing the Newton algorithm, I wrote some test cases to ensure that I had implemented it correctly. It is easy to make some minor errors and still get a good approximation of the searched root. Such an error could lie there forever if we didn’t verify other properties of the algorithm.

For example, I also wanted to test whether the rate of convergence was right as well. With such a check, I am able to rule out a wider range of possible programming errors than by looking only at the final result.
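As a concrete setup for the discussion below, here is a minimal sketch of a Newton iteration that records the absolute error of each iterate (the function name `newton_errors` and its interface are my own invention, not the code of this series):

```cpp
#include <cmath>
#include <vector>

// Newton iteration for f with derivative df, starting at x0.
// Returns the absolute errors |x_n - root| of the iterates,
// assuming the exact root is known (as it is in a test case).
template <class F, class DF>
std::vector<double> newton_errors(F f, DF df, double x0,
                                  double root, int max_iter = 20) {
    std::vector<double> errors;
    double x = x0;
    for (int i = 0; i < max_iter; ++i) {
        x -= f(x) / df(x);               // Newton step
        errors.push_back(std::fabs(x - root));
        if (errors.back() == 0.0) break; // no measurable error left
    }
    return errors;
}
```

For `x*x*x - 3*x - 2` with starting point `7` and root `2`, the first iterate is 43/9, so the first recorded error is 25/9 ≈ 2.7778, which matches the error sequence shown later in this post.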

Unfortunately, it is not possible to verify the convergence exactly; here is why.

### Calculating the Convergence Rate for Newton and Similar Algorithms

The convergence rate, under some conditions, is given by

$\epsilon_{n+1} = \frac{1}{2} \cdot \frac{f''(\xi_n)}{f'(x_n)} \cdot \epsilon_n^2$

with $\xi_n$ between $x_n$ and $a$ ($a$ is the searched root), or by

$\epsilon_{n+1} \le M \cdot \epsilon_n^2, \quad M = \frac{1}{2} \cdot \sup_{x \in I} \left| \frac{f''(x)}{f'(x)} \right|$

In general, we have convergence of order $\alpha$ if we have

$\epsilon_{n+1} = C_n \cdot \epsilon_n^{\alpha}$

Now we would like to calculate the $\alpha$ value, to see if we have implemented our method correctly.

It holds

$\log(\epsilon_{n+1}) = \log(C_n) + \alpha \log(\epsilon_n)$

and therefore

$\frac{\log(\epsilon_{n+1}) - \log(C_n)}{\log(\epsilon_n)} = \alpha$

The formula seems good enough for calculating the convergence, but unfortunately we do not know the value of $C_n$ at every iteration, and this disturbs our calculation. If we had infinite precision, this would not be a problem: when converging, $\epsilon_n$ gets smaller and smaller, so $\log(\epsilon_{n+1})$ grows in magnitude, and as a consequence the term $\log(C_n)$ becomes less and less relevant as we get closer to the solution.
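Neglecting the $\log(C_n)$ term turns the formula into the estimate $\alpha_n \approx \log(\epsilon_{n+1}) / \log(\epsilon_n)$, which is straightforward to compute from the error sequence (a sketch; the function name is mine):

```cpp
#include <cmath>
#include <vector>

// Estimate the order of convergence from a sequence of absolute
// errors, neglecting the unknown log(C_n) term:
//   alpha_n ~= log(e_{n+1}) / log(e_n)
std::vector<double> convergence_rates(const std::vector<double>& errors) {
    std::vector<double> rates;
    for (std::size_t n = 0; n + 1 < errors.size(); ++n)
        rates.push_back(std::log(errors[n + 1]) / std::log(errors[n]));
    return rates;
}
```

Applied to the absolute errors listed below, this reproduces the quoted convergence rates, including the `inf` caused by a final error of exactly zero.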

Unfortunately, we do not have infinite precision at our disposal, so let’s look at some results:

Given the formula `x*x*x - 3*x - 2` with starting point `7` and the exact solution in `2`, we obtain the following convergence rates: `0.3021593135833694`, `-2.097253583835919`, `3.276447563870071`, `2.244519263375285`, `2.086949578597226`, `2.040810993837212`, `inf`

We can see that the first two values are completely wrong, while from the third value on the rates converge to 2. In the last iteration we got an `inf` as result; let’s see why by looking at the absolute errors:

`2.777777777777778`, `1.361655773420479`, `0.5233916718252152`, `0.119881116678628`, `0.008555361768362246`, `4.838221596425996e-5`, `1.560483742224505e-9`, `0`

During the last iteration, our approximation is so precise that we are not able to detect any error. We therefore cannot calculate the rate of convergence for as many iterations as we want. With quadratic convergence, we might have only a couple of iterations for determining whether the convergence rate is correct or not.

The first thing to do would be to strip the `inf` and `nan` values from the convergence array. It is a safe operation, since we can’t obtain any information from those values.
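Stripping the non-finite values can be done with the erase–remove idiom and `std::isfinite`, which rejects both `inf` and `nan`:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Drop inf and nan entries from the convergence array; they carry
// no usable information about the order of convergence.
void strip_non_finite(std::vector<double>& rates) {
    rates.erase(std::remove_if(rates.begin(), rates.end(),
                               [](double r) { return !std::isfinite(r); }),
                rates.end());
}
```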

The first values are completely wrong, but we cannot simply ignore them: we do not know yet whether the method is implemented correctly, and since we do not know $C_n$ at every iteration, it could be that those values are perfectly fine. Comparing those values individually therefore makes little sense. It is also difficult to argue that we should just skip the first values, particularly since we might have only three or four iterations in total.

I decided to calculate the mean and standard deviation, to detect statistically whether the convergence might have the value of 2. That seems obvious in this example, but we want to automate the procedure for our test cases.

The mean is `1.3089388549045406` and the standard deviation is `1.7560889645768585`. So the value of 2 is clearly not an outlier, since it holds that $1.3089388549045406 - 1.5 \cdot 1.7560889645768585 < 2 < 1.3089388549045406 + 1.5 \cdot 1.7560889645768585$

Even if we now have a method that tells us that the implementation looks correct, since the convergence could be 2, it is not really satisfying: the mean is very distant from the actual rate of convergence. We could weight the values and give the last ones a bigger weight, but with other functions it might be that the last iterations converge faster, and we could obtain a worse result. I preferred to keep things simple, so I continued to use the mean and standard deviation.
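The statistical check can be sketched as follows; the population standard deviation and the 1.5σ band reproduce the numbers above (the names `mean_stddev` and `rate_plausible` are mine):

```cpp
#include <cmath>
#include <vector>

struct Stats { double mean; double stddev; };

// Population mean and standard deviation of the finite rates.
Stats mean_stddev(const std::vector<double>& v) {
    double mean = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    double var = 0.0;
    for (double x : v) var += (x - mean) * (x - mean);
    var /= v.size();                     // population variance
    return {mean, std::sqrt(var)};
}

// Is `expected` within k standard deviations of the mean?
bool rate_plausible(const std::vector<double>& rates,
                    double expected, double k = 1.5) {
    const Stats s = mean_stddev(rates);
    return std::fabs(expected - s.mean) < k * s.stddev;
}
```

For the six finite rates of the first example this yields exactly the mean and standard deviation quoted above, and `rate_plausible(rates, 2.0)` holds.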

Let’s look at another test case, to verify if our method for analyzing the rate of convergence is good enough.

Given the formula `std::exp(x) - x*x - std::sin(x) - 1`, with starting point at `1.2`, and the exact solution in `1.27970133100099630500239591776735167562639703793577…​`

Our convergence array is `1.88371507408849`, `1.93447306972867`, `1.966070460676042`, `inf`

It actually looks good: all values are very close to and converging toward the value of 2, except for the last one.

Let’s look at the mean (`1.9280862014977345`) and standard deviation (`0.03392340872559621`): unfortunately, even if the convergence looked better than in the first case, our test would say that the rate of convergence is unlikely to be 2.

In our first test case, since during the first iteration we did not have a good convergence, the value of the standard deviation became very big. As a side effect, the value of 2 was not an outlier, even if the mean was very distant.

In the latter case, the mean is very near to the value of 2 from the first iteration and monotonically increasing, but the standard deviation is very small, actually too small.

I therefore wondered whether there was another method for calculating the convergence. I could not think of any method that would give us a precise result, but something popped into my mind:

From

$\log(\epsilon_{n+1}) = \log(C_n) + \alpha \log(\epsilon_n)$

It also holds

$\log(\epsilon_{n+1}) - \log(\epsilon_n) = \log(C_n) - \log(C_{n-1}) + \alpha \left( \log(\epsilon_n) - \log(\epsilon_{n-1}) \right)$

and therefore, by defining $D_n = \log(C_n) - \log(C_{n-1})$, we have

$\frac{\log(\epsilon_{n+1}) - \log(\epsilon_n) - D_n}{\log(\epsilon_n) - \log(\epsilon_{n-1})} = \alpha$

The formula resembles the first one, but now there is a cancellation effect: if we are lucky, $D_n$ is small already for the first iterations, and it surely converges to 0 when $\epsilon_n$ converges to 0 too.
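Neglecting $D_n$, the computation becomes a ratio of successive log-error differences (again only a sketch, with a name of my choosing):

```cpp
#include <cmath>
#include <vector>

// Estimate the order of convergence from successive differences of
// log-errors, neglecting the D_n term:
//   alpha_n ~= (log e_{n+1} - log e_n) / (log e_n - log e_{n-1})
std::vector<double> convergence_rates_diff(const std::vector<double>& e) {
    std::vector<double> rates;
    for (std::size_t n = 1; n + 1 < e.size(); ++n)
        rates.push_back((std::log(e[n + 1]) - std::log(e[n])) /
                        (std::log(e[n]) - std::log(e[n - 1])));
    return rates;
}
```

Applied to the absolute errors of the first example, it reproduces the rates listed below.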

I therefore have no proof that this method works better in all cases; but even so, calculating the mean and standard deviation will not give a satisfying result in all cases either.

Let’s calculate the convergence of the previous functions with the new formula:

Given the first formula (`x*x*x - 3*x - 2`), we obtain the following convergence rates: `1.341085487134746`, `1.541458483301888`, `1.791212841179266`, `1.960338694045138`, `1.998363224714784`, `inf`

We can see that the convergence rate is again approaching the value of 2. The standard deviation is `0.25122205670960196`, and the mean is `1.7264917460751643`. This is good enough, and the values are also nicer than with the other method. Notice that we have one value fewer than before, since we need to calculate differences between successive errors.

For `std::exp(x) - x*x - std::sin(x) - 1` we have only two values: `1.991910129623269`, `1.999883517435983`

The mean is `1.995896823529626` and the standard deviation is `0.003986693906357419`.

Even if I have no proof that the second method for calculating the rate of convergence is better than the first one, on those test cases it provided better results. This does not prove that the second method is always better, but it is a starting point for automating this type of validation. The disadvantage of the second approach is that we have one fewer value to analyze; on the bright side, we have the property that $D_n$ tends to 0 when converging, whereas for $C_n$ this is in general not true.

There are surely test cases where we would get the wrong result, but for now I have nothing better for testing the convergence. Wikipedia provides a nice example: it takes six iterations to reach a point where the convergence appears to be quadratic, and we might not be able to iterate that many times. If we do not have enough precision, even by inspecting all errors manually and applying common sense, we would only see a linear method and think that our implementation is wrong.

### Calculating Convergence for Integration Methods

When calculating the convergence for Newton, the error at a given step depended on the error of the step before. This is not true for all approximation methods: for example, when integrating with the trapezoidal rule, we have the following error formula

$e = -\frac{1}{12} \cdot h^2 \cdot |f'(a) - f'(b)| + O(h^3)$

and we call the convergence quadratic.

In general, we have convergence of order $\alpha$ if

$e = h^{\alpha} \cdot C$

and for calculating the convergence, we would use

$e_n = h_n^{\alpha} \cdot C + O\left(h_n^{\alpha+1}\right)$

$\log(e_n) = \alpha \cdot \log(h_n) + \log(C)$

$\frac{\log(e_n) - \log(C)}{\log(h_n)} = \alpha$

or, similar to before

$\log(e_n) - \log(e_{n-1}) = \alpha \cdot \left( \log(h_n) - \log(h_{n-1}) \right)$

and therefore

$\frac{\log(e_n) - \log(e_{n-1})}{\log(h_n) - \log(h_{n-1})} = \alpha$

In this case, with the second formula, the value of $C$ is completely gone, even if the formulas hold only up to $O\left(h_n^{\alpha+1}\right)$ terms. As before, with different data we might have scenarios where an algorithm fails the convergence test with the first formula but not with the second one.

Notice also that, supposing that $h_n = \frac{b-a}{N}$ and $N = 2^n$ (and normally we are able to decide the interval size), it holds that

$\log(h_n) - \log(h_{n-1}) = \log\left(\frac{h_n}{h_{n-1}}\right) = c$

which is (apart from rounding issues) a constant value.

We can then, through the mean and standard deviation, verify as we did earlier if the rate of convergence of an implemented method seems to be the correct one.
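As an illustration (a toy setup of my own, not code from this series), the sketch below integrates $f(x) = x^3$ over $[0, 1]$ with the composite trapezoidal rule, halving $h$ each step, and estimates $\alpha$ from successive log-error differences; for this particular $f$ the error is exactly $h^2/4$, so the estimates come out at 2 up to rounding:

```cpp
#include <cmath>
#include <vector>

// Composite trapezoidal rule with N subintervals on [a, b].
template <class F>
double trapezoid(F f, double a, double b, int N) {
    const double h = (b - a) / N;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < N; ++i) sum += f(a + i * h);
    return h * sum;
}

// Estimated orders of convergence for f(x) = x^3 on [0, 1]
// (exact integral 1/4), with N = 2, 4, 8, ... subintervals.
std::vector<double> trapezoid_orders(int doublings) {
    auto f = [](double x) { return x * x * x; };
    std::vector<double> log_err;
    for (int n = 1; n <= doublings; ++n)
        log_err.push_back(
            std::log(std::fabs(trapezoid(f, 0.0, 1.0, 1 << n) - 0.25)));
    // h is halved each step, so log(h_n) - log(h_{n-1}) = -log(2).
    std::vector<double> orders;
    for (std::size_t n = 1; n < log_err.size(); ++n)
        orders.push_back((log_err[n] - log_err[n - 1]) / -std::log(2.0));
    return orders;
}
```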

## Other Techniques for Verifying the Correctness of a Numeric Algorithm

There are surely other techniques that I did not test yet.

With the method of least squares, fitting $\alpha$ as the slope of $\log(e_n)$ against $\log(h_n)$, it is probably possible to obtain some useful information if the standard deviation of the residuals is small.

For some groups of algorithms, it might even be possible to create a test suite that is sufficient for guaranteeing that the implementation (or at least the convergence or other properties) is correct.