Bulirsch–Stoer algorithm
In numerical analysis, the Bulirsch–Stoer algorithm is a method for the numerical solution of ordinary differential equations which combines three powerful ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson-type applications, and the modified midpoint method, to obtain numerical solutions to ordinary differential equations (ODEs) with high accuracy and comparatively little computational effort. It is named after Roland Bulirsch and Josef Stoer. It is sometimes called the Gragg–Bulirsch–Stoer (GBS) algorithm because of the importance of a result about the error function of the modified midpoint method, due to William B. Gragg.
Underlying ideas

The idea of Richardson extrapolation is to consider a numerical calculation whose accuracy depends on the used stepsize h as an (unknown) analytic function of the stepsize h, performing the numerical calculation with various values of h, fitting a (chosen) analytic function to the resulting points, and then evaluating the fitting function for h = 0, thus trying to approximate the result of the calculation with infinitely fine steps.
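As an illustration of this idea (a sketch for this article, not code from the original papers), the following applies Richardson-style extrapolation to the composite trapezoidal rule, whose error is a series in even powers of h: the calculation is run at several step sizes and a polynomial in h² through the results is evaluated at h = 0. The function names and parameters are chosen here only for illustration.

```python
# Illustrative sketch (not the Bulirsch-Stoer production code): run a
# calculation at several step sizes h and extrapolate the results to h = 0
# with Neville's polynomial scheme, here applied to the trapezoidal rule,
# whose error expansion contains only even powers of h.
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (step h = (b - a)/n)."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def richardson(f, a, b, levels=5):
    """Extrapolate trapezoid values to zero step size, working in h**2."""
    ns = [2 ** k for k in range(1, levels + 1)]      # 2, 4, 8, ... subintervals
    xs = [((b - a) / n) ** 2 for n in ns]            # the fitting variable is h**2
    T = [trapezoid(f, a, b, n) for n in ns]
    for m in range(1, levels):                       # Neville's tableau at x = 0
        for i in range(levels - m):
            T[i] = (xs[i + m] * T[i] - xs[i] * T[i + 1]) / (xs[i + m] - xs[i])
    return T[0]

print(richardson(math.sin, 0.0, math.pi))            # ~2.0, the exact integral
```

In the Bulirsch–Stoer ODE solver the quantity being extrapolated is not a quadrature value but the result of crossing one macro-step with the modified midpoint method described below.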
Bulirsch and Stoer recognized that using rational functions as fitting functions for Richardson extrapolation in numerical integration is superior to using polynomial functions because rational functions are able to approximate functions with poles rather well (compared to polynomial functions), given that there are enough higher-power terms in the denominator to account for nearby poles. While a polynomial interpolation or extrapolation only yields good results if the nearest pole is rather far outside a circle around the known data points in the complex plane, rational function interpolation or extrapolation can have remarkable accuracy even in the presence of nearby poles.
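As a sketch of how a rational function can take the place of the polynomial in the extrapolation step, the code below evaluates a diagonal rational interpolant at h = 0 on the same kind of trapezoidal-rule data as in the previous example. The recurrence is one common form of the rational interpolation tableau associated with Stoer and Bulirsch; the exact recurrence, start values, and safeguards used by any particular implementation may differ, so treat this as illustrative rather than canonical.

```python
# Illustrative sketch of rational-function extrapolation to zero step size.
# The recurrence is one standard form of the Stoer-Bulirsch rational
# interpolation tableau; a production code would also guard against
# near-zero denominators.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def rational_extrapolate(xs, ys, x=0.0):
    """Evaluate at x the diagonal rational interpolant through (xs[i], ys[i])."""
    n = len(xs)
    rows = [list(ys)]                                  # rows[m][i]: points i..i+m
    for m in range(1, n):
        prev = rows[m - 1]
        prev2 = rows[m - 2] if m >= 2 else [0.0] * n   # "empty" entries count as 0
        row = []
        for i in range(n - m):
            diff = prev[i + 1] - prev[i]
            den = ((x - xs[i]) / (x - xs[i + m])) * \
                  (1.0 - diff / (prev[i + 1] - prev2[i + 1])) - 1.0
            row.append(prev[i + 1] + diff / den)
        rows.append(row)
    return rows[n - 1][0]

ns = [2, 4, 8, 16]
xs = [(math.pi / n) ** 2 for n in ns]                  # again the variable is h**2
ys = [trapezoid(math.sin, 0.0, math.pi, n) for n in ns]
print(rational_extrapolate(xs, ys))                    # ~2.0 once more
```

On this smooth, pole-free test data both the polynomial and the rational tableau behave similarly; the advantage claimed for the rational form shows up when the quantity being extrapolated has poles close to the data points.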
The modified midpoint method by itself is a second-order method, and therefore generally inferior to fourth-order methods like the fourth-order Runge–Kutta method. However, it has the advantage of requiring only one derivative evaluation per substep (asymptotically for a large number of substeps), and, in addition, as discovered by Gragg, the error of a modified midpoint step of size H, consisting of n substeps of size h = H/n each, and expressed as a power series in h, contains only even powers of h. This makes the modified midpoint method extremely useful to the Bulirsch–Stoer method, as the accuracy increases two orders at a time when the results of separate attempts to cross the interval H with increasing numbers of substeps are combined.
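A minimal sketch of such a modified midpoint macro-step follows (the function name and the test problem are chosen for illustration, not taken from a particular implementation): one Euler substep starts the recursion, leapfrog-style midpoint substeps cross the interval, and a final smoothing step combines the last two values with one extra derivative evaluation.

```python
# Sketch of a modified midpoint step for y' = f(t, y): cross a macro-step of
# size H with n substeps of size h = H/n, finishing with the smoothing step
# whose error expansion contains only even powers of h (Gragg's result).
import math

def modified_midpoint(f, t, y, H, n):
    h = H / n
    z_prev = y
    z = y + h * f(t, y)                          # starting Euler substep
    for m in range(1, n):                        # leapfrog midpoint substeps
        z_prev, z = z, z_prev + 2.0 * h * f(t + m * h, z)
    return 0.5 * (z + z_prev + h * f(t + H, z))  # final smoothing

# Crossing H = 1 for y' = -y, y(0) = 1 with more and more substeps: these are
# the values a Bulirsch-Stoer driver would feed into the extrapolation tableau.
for n in (2, 4, 8, 16):
    approx = modified_midpoint(lambda t, y: -y, 0.0, 1.0, 1.0, n)
    print(n, approx, abs(approx - math.exp(-1.0)))
```

Because the errors of these estimates contain only even powers of h, extrapolating them to h = 0 with a tableau like those sketched above gains two orders of accuracy per added column.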
Some authors, in their discussion of the method, note that rational extrapolation in this case is nearly never an improvement over polynomial extrapolation. Furthermore, the modified midpoint method is a modification of the regular midpoint method that makes it more stable, but because of the extrapolation this does not really matter.
External links
- ODEX.F, implementation of the Bulirsch–Stoer algorithm by Ernst Hairer and Gerhard Wanner (for other routines and license conditions, see their Fortran and Matlab Codes page).