
Algorithms

julia
using RationalFunctionApproximation, CairoMakie

There are three algorithms for rational approximation offered in this version of the package.

AAA

The most robust method is the continuum variant of the AAA algorithm (see also the arXiv version). Briefly, the AAA algorithm maintains a set of interpolation nodes and a set of test points on the boundary of the domain. The node set is grown iteratively: at each step, the best current approximation in a least-squares sense over the test points is found, and then the test point with the largest error is greedily promoted to the node set.
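
To make the greedy iteration concrete, here is a minimal sketch of the classic discrete AAA loop on a fixed set of test points. It is an illustration only, not the package's implementation: the continuum variant also refines the test set adaptively, and the name aaa_sketch and all parameter choices below are made up for this example.

julia
using LinearAlgebra

# Sketch of the discrete AAA greedy loop (illustration only, not the package code).
function aaa_sketch(f, Z; tol=1e-12, max_degree=50)
    F = f.(Z)                                   # samples at the test points
    zk = eltype(Z)[]; fk = eltype(F)[]          # interpolation nodes and values
    w = eltype(F)[]                             # barycentric weights
    R = fill(sum(F) / length(F), length(F))     # degree-0 starting "approximation"
    for _ in 0:max_degree
        j = argmax(abs.(F .- R))                # worst-case test point...
        push!(zk, Z[j]); push!(fk, F[j])        # ...becomes a new node
        mask = map(z -> z ∉ zk, Z)              # remaining test points
        # least-squares step: weights are the smallest right singular vector of the Loewner matrix
        A = [(F[i] - fk[k]) / (Z[i] - zk[k]) for i in findall(mask), k in eachindex(zk)]
        w = svd(A).V[:, end]
        # evaluate the barycentric interpolant at all test points
        N = [sum(w[k] * fk[k] / (z - zk[k]) for k in eachindex(zk)) for z in Z]
        D = [sum(w[k] / (z - zk[k]) for k in eachindex(zk)) for z in Z]
        R = ifelse.(mask, N ./ D, F)            # interpolation is exact at the nodes
        maximum(abs.(F .- R)) < tol && break
    end
    return zk, fk, w
end

zk, fk, w = aaa_sketch(x -> cos(11x), range(-1, 1, length=1000));
length(zk) - 1    # degree reached by the sketch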

The convergenceplot function shows the errors of the approximants found during the AAA iteration.

julia
f = x -> cos(11x)
r = approximate(f, unit_interval)
convergenceplot(r)

(The plots in this documentation are made using CairoMakie, but the same functions are made available for Plots.) In the plot above, the markers show the estimated max-norm error of the AAA rational interpolant over the domain as a function of the iteration counter. Each iteration adds one degree to both the numerator and denominator of the rational approximation. The gold halo indicates the final approximation chosen by the algorithm. The red dots indicate that a pole of the rational interpolant lies in the approximation domain. We can verify this fact by using rewind to recover the approximation from iteration 7:

julia
julia> r7 = rewind(r, 7)
Barycentric{Float64, Float64} rational interpolant of type (6, 6) on the domain: Segment(-1.0,1.0)

julia> poles(r7)
6-element Vector{ComplexF64}:
  -0.7778562899438917 - 0.31165321202052915im
  -0.7778562899438916 + 0.3116532120205291im
 -0.28825452504364324 - 0.3250620500101392im
 -0.28825452504364324 + 0.32506205001013927im
  0.19372898617347015 - 0.3095975156373683im
  0.19372898617347015 + 0.30959751563736826im

In most cases, the poles move out of the approximation domain as the iteration proceeds to better accuracy. However, it is possible for the iteration to stagnate if the original function has a singularity very close to the domain.

julia
f = x -> tanh(3000*(x - 1//4))
r = approximate(f, unit_interval)
convergenceplot(r)

This effect is thought to be mainly due to roundoff and the conditioning of the problem. If we use more accurate floating-point arithmetic, we can see that AAA convergence continues steadily past the previous plateau. Below, we switch to Double64 arithmetic; because f was defined above with an exact rational constant, it can be evaluated to full accuracy in the wider precision:

julia
using DoubleFloats, ComplexRegions
r = approximate(f, Segment{Double64}(-1, 1))
convergenceplot(r)

In the extreme case of a function with a singularity on the domain itself, convergence is much slower, and the iteration may stop before reaching the desired accuracy:

julia
f = x -> abs(x - 1/8)
r = approximate(f, unit_interval)
convergenceplot(r)

In such a case, we might get improvement by increasing the number of allowed consecutive failures via the stagnation keyword argument:

julia
r = approximate(f, unit_interval, stagnation=50)
convergenceplot(r)

Prescribed poles

While finding a rational interpolant is a nonlinear problem, we can compute a linear variant if we prescribe the poles of the rational function. Specifically, given the poles $\zeta_1, \ldots, \zeta_n$, we can find a polynomial $p$ and residues $w_1, \ldots, w_n$ such that

$$f(z) \approx p(z) + \sum_{k=1}^{n} \frac{w_k}{z - \zeta_k}.$$

When posed on a discrete set of test points, this is a linear least-squares problem. In order to represent the polynomial stably, an Arnoldi iteration is used to find a well-conditioned basis for the test points.

There is no iteration on the degree of the polynomial or rational parts of the approximant. Instead, the discretization of the boundary of the domain is refined iteratively until either the max-norm error is below a specified threshold or has stopped improving.
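
To make the least-squares formulation concrete, here is a minimal sketch on a fixed grid. It is a simplification, not the package's implementation: the grid size and polynomial degree are arbitrary choices here, and the polynomial part uses plain scaled monomials where the package builds the well-conditioned Arnoldi basis and refines the grid adaptively as described above.

julia
f = x -> tanh(x)
ζ = 1im * π * [-1/2, 1/2, -3/2, 3/2]            # prescribed poles of tanh nearest the segment
z = range(-2, 2, length=500)                     # fixed test points on the segment
d = 10                                           # degree of the polynomial part
V = [(x / 2)^j for x in z, j in 0:d]             # scaled monomials (stand-in for the Arnoldi basis)
C = [1 / (x - p) for x in z, p in ζ]             # partial-fraction columns for the poles
c = [V C] \ ComplexF64.(f.(z))                   # one linear least-squares solve
w = c[end-length(ζ)+1:end]                       # residues attached to the prescribed poles
r = x -> sum(c[j+1] * (x / 2)^j for j in 0:d) + sum(w[k] / (x - ζ[k]) for k in eachindex(ζ))
maximum(abs, r.(z) .- f.(z))                     # max error on the test grid

With the poles supplied in advance, everything reduces to a single backslash solve, which is what makes the prescribed-pole variant a linear problem.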

julia
f = x -> tanh(x)
ζ = 1im * π * [-1/2, 1/2, -3/2, 3/2]
r = approximate(f, Segment(-2, 2), ζ)
PartialFractions{ComplexF64} rational function of type (6, 4) on the domain: Segment(-2.0,2.0)
julia
max_err(r) = println("Max error: ", maximum(abs, check(r, quiet=true)[2]))
max_err(r);
Max error: 1.9844602136220857e-5

To get greater accuracy, we can increase the degree of the polynomial part.

julia
r = approximate(f, Segment(-2, 2), ζ; degree=20)
max_err(r);
Max error: 3.663810827409843e-15

Note that the residues (in the exact function, all equal to one) may be accurate at the poles closest to the domain, but much less so elsewhere.

julia
Pair.(residues(r)...)
4-element Vector{Pair{ComplexF64, ComplexF64}}:
 -0.0 - 1.5707963267948966im => 1.00000000487331 - 6.921820591153586e-11im
  0.0 + 1.5707963267948966im => 1.0000000048733098 + 6.921820146476337e-11im
   -0.0 - 4.71238898038469im => 0.009726848784147339 - 3.138033013139132e-11im
    0.0 + 4.71238898038469im => 0.009726848784147363 + 3.1380330237020535e-11im

Suppose now we approximate |x| using AAA. We can extract the poles of the result.

julia
r = approximate(abs, unit_interval, tol=1e-9)
ζ = poles(r)
62-element Vector{ComplexF64}:
  -0.007325828853075613 - 0.8178910649817696im
  -0.007325828853075613 + 0.8178910649817696im
  -0.007120295429202942 - 1.5238123329577775im
  -0.007120295429202942 + 1.5238123329577775im
 -0.0070778535442707164 + 0.4980534858719184im
  -0.007077853544270716 - 0.49805348587191844im
  -0.006884693016289629 + 4.842062576110964im
  -0.006884693016289628 - 4.842062576110964im
  -0.006130827968583123 - 0.3165630300721328im
  -0.006130827968583122 + 0.3165630300721328im
                         ⋮
   2.6368887699438566e-7 + 2.7280239186739025e-5im
   7.231318892586103e-7 - 5.443145747278154e-5im
   7.231318892586104e-7 + 5.443145747278154e-5im
   1.305640172279198e-6 - 0.00010464150704880827im
   1.305640172279198e-6 + 0.00010464150704880827im
  1.4474403412899487e-6 - 0.000343331960531137im
  1.4474403412899487e-6 + 0.000343331960531137im
  1.8029577150828667e-6 - 0.000192799294594875im
  1.8029577150828667e-6 + 0.00019279929459487498im

To what extent might these poles be suitable for a different function that has the same singularity?

julia
s = approximate(x -> exp(abs(x)), unit_interval, ζ; degree=20)
max_err(s);
Max error: 3.146196729250663e-9

Thiele continued fractions (experimental)

The AAA algorithm is remarkably robust, but a potential downside is that constructing an approximant of degree n requires an SVD that takes O(n³) time. Thus, for an iteration that proceeds up to degree n, the total work is O(n⁴) (the sum of O(k³) over k = 1, …, n). In many cases, the value of n is small enough not to cause concern, but there is some motivation to find a more efficient algorithm.

One possibility is to use a continued fraction representation of the rational approximant. The Thiele representation, dating back to the 1800s, uses inverse divided differences to represent a rational interpolant, requiring just O(n) time to add a node to an approximation of degree n. Divided differences are notoriously unstable, but work by Salazar in 2024 (or the arXiv version) indicates that a greedy adaptive approach can reduce or eliminate the instability.
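
To illustrate the mechanics of the Thiele representation (without the greedy node selection that the package performs), here is a minimal sketch on a fixed set of nodes; the function names and the node choice are purely illustrative.

julia
# Inverse divided differences: a[k] becomes the k-th continued-fraction coefficient.
function thiele_coeffs(x, y)
    a = float.(y)
    for k in 1:length(a)-1, i in k+1:length(a)
        a[i] = (x[i] - x[k]) / (a[i] - a[k])
    end
    return a
end

# Evaluate r(t) = a[1] + (t - x[1]) / (a[2] + (t - x[2]) / (a[3] + ...)) from the inside out.
function thiele_eval(t, x, a)
    v = a[end]
    for k in length(a)-1:-1:1
        v = a[k] + (t - x[k]) / v
    end
    return v
end

x = range(-1, 1, length=9)            # fixed nodes; the package chooses nodes adaptively instead
a = thiele_coeffs(x, exp.(x))
thiele_eval(0.3, x, a) - exp(0.3)     # interpolation error at a point between the nodes

Appending one more node requires computing only one new chain of inverse divided differences, which is the source of the O(n) update cost; the greedy adaptive approach cited above chooses each new node based on the current error rather than using a fixed grid.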

To try greedy Thiele, use method=Thiele as an argument to approximate.

julia
f = x -> cos(41x - 5) * exp(-10x^2)
r = approximate(f, unit_interval; method=Thiele)
convergenceplot(r)

The x-axis of the convergence plot shows the degree of the denominator polynomial. Because the Thiele method alternates between interpolants of type (n,n) and (n+1,n), there are two dots above for each degree.

The primary appeal of the greedy Thiele method is that adding a node to an interpolant of degree n takes O(n) time, compared to O(n³) for the AAA method. Some experiments suggest that the Thiele method is faster in practice, though a systematic comparison is still needed.

julia
f = x -> abs(x-0.5)
@elapsed r = approximate(f, unit_interval; method=Barycentric, max_iter=150, allowed=true)
max_err(r);
Warning: Stopping at estimated error 1.174e-7 after 77 iterations
@ RationalFunctionApproximation ~/work/RationalFunctionApproximation.jl/RationalFunctionApproximation.jl/src/barycentric.jl:296
Max error: 8.83510453896521e-8
julia
@elapsed r = approximate(f, unit_interval; method=Thiele, max_iter=300, allowed=true)
max_err(r);
Warning: Stopping at estimated error 1.955e-6 after 123 iterations
@ RationalFunctionApproximation ~/work/RationalFunctionApproximation.jl/RationalFunctionApproximation.jl/src/thiele.jl:170
Max error: 1.9999599967235326e-6