## Interval arithmetic

In a recent post I wrote about the relations between mathematics and simulations. In doing so I forgot about one fascinating theme in this area, that of computer-assisted proofs. Here I am thinking of the technique known as interval arithmetic. I became interested in this subject years ago and went as far as to order a book on it for the institute library, although I never applied the technique myself. I was reminded of all this because I wanted to say something about the Lorenz system and strange attractors at the end of the course on dynamical systems I am giving at the moment. I then remembered a well-known result of Warwick Tucker on the existence of the Lorenz attractor which made use of interval arithmetic.

The basic idea of the technique is simple enough. Say I want to calculate the value of a function at a certain point on the computer. A conventional calculation gives an object $y$ which is a rational number, together with a certain idea of how accurate that number is. It does not, however, give any rigorous inequality for the number, due to possible errors in the calculation, in particular rounding error. Doing the same calculation by interval arithmetic gives an interval $[y_1,y_2]$, and it constitutes a proof that the desired value of the function is contained in the interval defined by the rational numbers $y_1$ and $y_2$. All intermediate steps in the calculation are exact. This kind of approach can be used to prove that a certain function has a zero, in other words to give an existence proof. It can also be used to prove inequalities satisfied by the point where the zero lies. The technique can be implemented in solvers for differential equations and thus used to prove results about dynamical systems. For me a computer-assisted proof of this type has the same philosophical status as a proof done by hand by a mathematician.
There might be a mistake in the computer programme and the input data given to the programme might have been incorrect but this is on the same level as the mistakes a mathematician makes in a manual calculation. This type of proof has a very different status from the result of a numerical simulation done by a probably reliable but not strictly controlled programme.
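To make the idea concrete, here is a minimal sketch of interval arithmetic in Python, written under my own assumptions rather than following any particular library: the `Interval` class and its `_outward` helper are illustrative names, and the widening uses `math.nextafter` to push each endpoint one representable floating-point number outward after every operation, so that the computed interval provably encloses the exact result. As an example of the existence-proof idea, evaluating $f(x) = x^2 - 2$ on the point intervals $[1,1]$ and $[2,2]$ gives enclosures that lie entirely below and entirely above zero, which proves that $f$ has a zero between 1 and 2.

```python
import math

class Interval:
    """Closed interval [lo, hi]; every arithmetic result is widened
    outward by one ulp so it provably encloses the exact value.
    (Illustrative sketch, not a production interval library.)"""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    @staticmethod
    def _outward(lo, hi):
        # nextafter moves each endpoint one representable float outward,
        # absorbing any rounding error of the preceding operation.
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return Interval._outward(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval._outward(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval._outward(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def f(x):
    # Enclosure of f(x) = x*x - 2 evaluated in interval arithmetic.
    return x * x - Interval(2.0, 2.0)

# f([1,1]) lies entirely below 0 and f([2,2]) entirely above 0,
# so f has a zero in (1, 2): a rigorous existence proof.
left, right = f(Interval(1.0, 1.0)), f(Interval(2.0, 2.0))
print(left.hi < 0 and right.lo > 0)
```

A real implementation would set the processor's rounding mode downward or upward for each endpoint instead of widening by one ulp, which gives tighter intervals, but the one-ulp widening above is the simplest way to get rigorous enclosures in plain Python.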

Why is this technique not more popular? I can think of several reasons. The first is a lack of interest in rigorous proofs among many potential users. The second is that in practice the intervals obtained may be too large to solve the problems of interest. The third is that the calculations may be too slow. If the second or third reason is the main problem then it should be possible to improve the situation by using better algorithms and more computing power.