mpfl - an experiment in stochastic lazy floating-point arithmetic
What approaches are available when we need to do reliable floating-point
arithmetic? I'm not thinking here of problems like "compute 100 million
decimal digits of pi". What use are all those digits? And anyway, in such a
calculation it is impossible in principle to guarantee that all digits are
correct. Rather, I'm thinking of problems of the type "determine rigorously
whether x<y or not", where x and y are computed numbers. Such a
calculation can in principle be solved by a computation which terminates in
finite time, assuming x and y are not equal. Problems of this type occur in
computational number theory and computational geometry. Related problems
include computing the floor of a computed real, and the continued fraction of
an irrational (which needs the floor function).
Here is a typical approach, as traditionally used in computer algebra systems, for getting results which are correct with (usually) high probability:
 Set a precision level for the whole computation. Record
the result. Repeat the whole computation at a higher precision. If the
result doesn't change, it's probably correct.
It's clear that the computational complexity will increase as we move from
demanding a result that is probably correct, to one that is guaranteed
correct. The main question in this field is: can we write a general-purpose
software package which gives guaranteed results with an acceptably short
computation time? Here are some approaches:
 Error analysis: perform (by hand) a complete analysis of the propagation
of roundoff error through the computation. Set the precision level of each
step so that the final error is acceptably small. This method was used by
some entries in the
Many Digits friendly
competition. But this is hardly feasible for a general-purpose software
package.
 Exact real arithmetic: there have been many attempts at this - see the
links at xrc.
Essentially, these methods use lazy evaluation where the whole directed
acyclic graph (DAG) representing the computation is stored, and intermediate
results are re-evaluated as needed. Each step is performed with a bounded
error, so the final error is bounded. The main problems in practice concern
the storage required for the DAG, and the fact that the error bounds are
typically too pessimistic. In many cases, the minimum possible "graininess"
of 1 bit (the smallest possible increase in precision per step) is larger than
is actually needed to satisfy error bounds. In large calculations, this can
result in unacceptable inefficiencies. Also, transcendental functions are
difficult to make efficient in these methods.
Examples of such systems include iRRAM and RealLib.
Experience shows that some of these approaches perform better on some types
of problems, but no one method is best overall.
mpfl - the idea
The present method is an experimental stochastic variation of exact
real arithmetic. The novelty is a way to avoid the 1-bit graininess mentioned
above. It is assumed that arbitrary-precision floating-point arithmetic with
correct rounding is available (such as provided by mpfr). The ingredients are:
 The complete DAG is stored, as in my xrc.
 The intermediate value at each node is an interval with floatingpoint
endpoints. These always strictly bound the correct result, but the intervals
can be refined automatically as necessary.
 When evaluation is initiated, the increase in precision as we move away
from the output node is probabilistic, controlled by a parameter alpha. The
optimum value (i.e. that minimizing computation time) for alpha will depend on
the problem, but the final results are independent of alpha. In essence, if
alpha<1, then 1 bit is added with probability alpha. If
alpha>1, then floor(alpha) bits are added. This allows tuning - for
example, for the logistic map problem, we know that alpha=1 should be optimal,
because the Lyapunov exponent is such that one bit is lost on average per iteration.
 The results of evaluationinitiating functions like mpfl_cmp,
mpfl_floor are guaranteed correct (subject to there being no bugs!).
 mpfl_out_str tries to print the requested number of digits correctly, but
does not guarantee correctness.
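The precision rule above can be written down in a few lines. The following C sketch is my own illustration (mpfl's source is not yet available, so the name bits_to_add and the rand()-based coin flip are assumptions, not its actual code):

```c
#include <stdlib.h>

/* Number of extra bits of precision to add per step away from the output
   node: floor(alpha) bits deterministically when alpha >= 1, otherwise a
   single bit with probability alpha. Illustrative sketch, not mpfl's code. */
static unsigned long bits_to_add(double alpha) {
    if (alpha >= 1.0)
        return (unsigned long)alpha;  /* floor(alpha) bits */
    return ((double)rand() / RAND_MAX < alpha) ? 1 : 0;
}
```

A node at depth d from the output then carries roughly d*alpha extra bits on average, so alpha sets the expected growth rate of precision with depth - including non-integer rates like half a bit per step, which the 1-bit graininess of conventional lazy schemes cannot express.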
The stochastic component is of course a heuristic. Why might this be
reasonable? Why not compute error bounds exactly, as iRRAM does? One
answer is already given above - error bounds are always pessimistic. Another
answer is that computing error bounds itself takes time. One might compare
the way packet protocols are implemented - at the lowest level, packets are
just transmitted, regardless of whether they might suffer collisions or
otherwise get lost. So at this level the system is stochastic, and
transmission is not reliable. It is made reliable by being wrapped in
another layer (e.g. TCP), which detects when packets are lost and resends them
if necessary. My mpfl software has a similar two-level design - the lower
level is not reliable, but the upper layer (the only layer with which the user
interacts) is reliable. (More accurately, the lower layer actually is
reliable, but not necessarily precise enough. The upper layer ensures
sufficient precision.)
mpfl - download
not yet available
mpfl - installation
 Make sure you have
GMP,
mpfr, and
mpfi
installed.
 Edit the top few lines of Makefile and tests/Makefile.
 make
 make check
mpfl - available functions
const char* mpfl_get_version(void);
void mpfl_set_alpha(const double);
double mpfl_get_alpha(void);
mpfl_t mpfl_const_pi(void);
mpfl_t mpfl_set_d(const double);
mpfl_t mpfl_abs(const mpfl_t);
mpfl_t mpfl_neg(const mpfl_t);
mpfl_t mpfl_sqrt(const mpfl_t);
mpfl_t mpfl_exp(const mpfl_t);
mpfl_t mpfl_expm1(const mpfl_t);
mpfl_t mpfl_log(const mpfl_t);
mpfl_t mpfl_log1p(const mpfl_t);
mpfl_t mpfl_sin(const mpfl_t);
mpfl_t mpfl_cos(const mpfl_t);
mpfl_t mpfl_tan(const mpfl_t);
mpfl_t mpfl_atan(const mpfl_t);
mpfl_t mpfl_add(const mpfl_t, const mpfl_t);
mpfl_t mpfl_sub(const mpfl_t, const mpfl_t);
mpfl_t mpfl_mul(const mpfl_t, const mpfl_t);
mpfl_t mpfl_div(const mpfl_t, const mpfl_t);
mpfl_t mpfl_d_mul(const double, const mpfl_t);
mpfl_t mpfl_d_sub(const double, const mpfl_t);
mpfl_t mpfl_add_d(const mpfl_t, const double);
mpfl_t mpfl_sub_d(const mpfl_t, const double);
mpfl_t mpfl_mul_d(const mpfl_t, const double);
mpfl_t mpfl_div_d(const mpfl_t, const double);
int mpfl_cmp_d(mpfl_t x, const double d);
int mpfl_cmp(mpfl_t x, mpfl_t y);
double mpfl_floor(mpz_t y, mpfl_t x);
size_t mpfl_out_str(FILE *stream, int base, size_t digits, mpfl_t x);
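To see why an evaluation-initiating function like mpfl_cmp can return a guaranteed answer, consider the refinement loop it must implement in some form. The sketch below is a toy model, not mpfl's code: the struct, the names, and the halving refine() stand in for re-evaluating the DAG at higher precision.

```c
/* Toy model of exact comparison by interval refinement. A real
   implementation re-evaluates the DAG at higher precision; here refine()
   just halves the radius of an interval around a known value. Loops
   forever if x == y, matching the caveat that termination requires the
   operands to differ. */
typedef struct { double value, radius; } toy_node;

static void refine(toy_node *n) { n->radius *= 0.5; }

static int cmp_exact(toy_node *x, toy_node *y) {
    for (;;) {
        if (x->value + x->radius < y->value - y->radius) return -1; /* certainly x < y */
        if (y->value + y->radius < x->value - x->radius) return  1; /* certainly x > y */
        refine(x); refine(y); /* intervals overlap: undecidable at this precision */
    }
}
```

With a = {1.0, 10.0} and b = {1.001, 10.0}, cmp_exact(&a, &b) returns -1 once the radii drop below 0.0005; with equal values it never returns, which is why problems of this type are only solvable in finite time when the operands differ.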
The design is such that it is easy to add new functions:
 Add a new opcode to "enum op", and a prototype, in mpfl.h
 Add a new _CASE to the switch in eval in mpfl.c using the appropriate cpp macro
 Add a new _OP function to mpfl.c using the appropriate cpp macro
At the moment there is no way to free mpfl_t objects. This will be added
in a later version, using reference counting.
mpfl - example program
Coding is similar to xrc, but differs from mpfr in that variables do not
have to be initialized, and functions return a pointer to their output rather
than returning the result in an argument. This example is the classic problem:
determine whether exp(pi*sqrt(163)) is integral or not. Here is how the code
might look with different approaches.
 double: (result inconclusive)
#include <stdio.h>
#include <math.h>
int main() {
double x=exp(M_PI*sqrt(163));
printf("x-floor(x)=%g\n",x-floor(x));
return 0;
}
 mpfr:
#include <mpfr.h>
int main() {
mpfr_t x,pi,f;
mpfr_set_default_prec(500);
mpfr_init_set_ui(x,163);
mpfr_init(pi);
mpfr_init(f);
mpfr_const_pi(pi);
mpfr_sqrt(x,x);
mpfr_mul(x,pi,x);
mpfr_exp(x,x);
mpfr_floor(f,x);
mpfr_sub(x,x,f);
printf("x-floor(x)>0?: %d\n",mpfr_cmp_ui(x,0));
return 0;
}
 xrc:
#include <xr.h>
int main() {
xr_t x,z;
x=xr_exp(xr_mul(xr_pi(),xr_sqrt(xr_init(163,1))));
z=xr_near_int(x);
printf("cmp(x,nearint(x))=%d\n",xr_cmp(x,z));
return 0;
}
 mpfl:
#include <mpfl.h>
int main() {
mpfl_t x;
x=mpfl_exp(mpfl_mul(mpfl_const_pi(),mpfl_sqrt(mpfl_set_d(163))));
printf("cmp(floor(x),x)=%d\n",mpfl_cmp(mpfl_floor(x),x));
return 0;
}

This page was last modified 2006 Jan 20 (Friday) 12:26