Numerical Analysis

Numerical Integration (Quadrature)

Quadrature methods approximate \(I = \int_a^b f(x)\,dx\) when an antiderivative is unavailable or expensive. The choice of method depends on the smoothness of \(f\), the required accuracy, and whether evaluations are costly.

Newton–Cotes Rules

Newton–Cotes methods approximate \(f\) by a polynomial interpolant on equally-spaced nodes and integrate the polynomial exactly.

Trapezoidal Rule

Partition \([a,b]\) into \(n\) panels of width \(h = (b-a)/n\) and approximate \(f\) by a piecewise linear interpolant:

\[ T_n = h\left[\frac{f(a)+f(b)}{2} + \sum_{i=1}^{n-1} f(a+ih)\right]. \]

Error bound ( \(O(h^2)\) globally):

\[ |I - T_n| \leq \frac{(b-a)^3}{12\,n^2}\max_{x\in[a,b]}|f''(x)|. \]

Doubling \(n\) reduces the error by a factor of 4.
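A minimal sketch of the composite rule in C++; `trapz` is a hypothetical helper, not the implementation of `num::trapz`:

```cpp
#include <cmath>
#include <functional>

// Composite trapezoidal rule on n equal panels of width h = (b-a)/n.
double trapz(const std::function<double(double)>& f, double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));   // endpoint terms carry weight 1/2
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h);            // interior nodes carry weight 1
    return h * sum;
}
```

For example, `trapz(f, 0.0, pi, 64)` versus `trapz(f, 0.0, pi, 128)` for \(f = \sin\) shows the error shrinking by roughly a factor of 4, consistent with the \(O(h^2)\) bound.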

Simpson's 1/3 Rule

Approximate \(f\) by a piecewise quadratic on pairs of panels ( \(n\) must be even):

\[ S_n = \frac{h}{3}\left[f(a)+f(b) + 4\sum_{\substack{i=1 \\ i\ \text{odd}}}^{n-1} f(a+ih) + 2\sum_{\substack{i=2 \\ i\ \text{even}}}^{n-2} f(a+ih)\right]. \]

Error bound ( \(O(h^4)\) globally):

\[ |I - S_n| \leq \frac{(b-a)^5}{180\,n^4}\max_{x\in[a,b]}|f^{(4)}(x)|. \]

Simpson's rule is exact for polynomials of degree \(\leq 3\) (one order higher than expected from a degree-2 interpolant, due to the symmetric placement of nodes).
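The weight pattern \(1, 4, 2, 4, \ldots, 4, 1\) can be sketched as follows; `simpson` is a hypothetical helper, not the implementation of `num::simpson`:

```cpp
#include <cmath>
#include <functional>
#include <stdexcept>

// Composite Simpson's 1/3 rule; n must be even so panels pair up.
double simpson(const std::function<double(double)>& f, double a, double b, int n) {
    if (n % 2 != 0) throw std::invalid_argument("n must be even");
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        sum += (i % 2 ? 4.0 : 2.0) * f(a + i * h);  // interior weights 4,2,4,...
    return h / 3.0 * sum;
}
```

Even the coarsest case `simpson(f, 0.0, 1.0, 2)` reproduces \(\int_0^1 x^3\,dx = 1/4\) exactly, illustrating the degree-3 exactness noted above.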

Gauss–Legendre Quadrature

Gauss–Legendre optimizes both nodes and weights to maximize the polynomial degree integrated exactly. With \(p\) points it is exact for polynomials up to degree \(2p - 1\):

\[ \int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{p} w_i f(x_i), \]

where \(x_i\) are the roots of the \(p\)-th Legendre polynomial \(P_p(x)\) and the weights \(w_i = \frac{2}{(1-x_i^2)\,[P_p'(x_i)]^2}\) are the corresponding Christoffel numbers.

For a general interval \([a,b]\), apply the change of variables \(x = \tfrac{a+b}{2} + \tfrac{b-a}{2}\,t\):

\[ \int_a^b f(x)\,dx = \frac{b-a}{2}\sum_{i=1}^{p} w_i\, f\!\left(\frac{a+b}{2}+\frac{b-a}{2}\,x_i\right). \]

With \(p\) evaluations, Gauss–Legendre integrates polynomials of degree \(2p-1\) exactly, whereas a trapezoidal rule with the same number of points is exact only for degree 1 and its error decays as \(O(h^2) = O(1/p^2)\).

\(p\) | Exact to degree | Nodes \(x_i\)
1 | 1 | \(0\)
2 | 3 | \(\pm 1/\sqrt{3}\)
3 | 5 | \(0,\ \pm\sqrt{3/5}\)
4 | 7 | \(\pm\sqrt{\tfrac{3}{7} \mp \tfrac{2}{7}\sqrt{6/5}}\)
5 | 9 | \(0,\ \pm\tfrac{1}{3}\sqrt{5 \mp 2\sqrt{10/7}}\)
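The mapped rule on \([a,b]\) can be sketched with the tabulated 3-point nodes and weights; `gauss3` is a hypothetical helper, not the implementation of `num::gauss_legendre`:

```cpp
#include <array>
#include <cmath>
#include <functional>

// 3-point Gauss–Legendre on [a,b]; exact for polynomials up to degree 5.
double gauss3(const std::function<double(double)>& f, double a, double b) {
    const std::array<double, 3> x = {-std::sqrt(3.0 / 5.0), 0.0, std::sqrt(3.0 / 5.0)};
    const std::array<double, 3> w = {5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0};
    double mid = 0.5 * (a + b), half = 0.5 * (b - a);
    double sum = 0.0;
    for (int i = 0; i < 3; ++i)
        sum += w[i] * f(mid + half * x[i]);  // affine map from [-1,1] to [a,b]
    return half * sum;
}
```

With only three evaluations, `gauss3` reproduces \(\int_0^1 x^5\,dx = 1/6\) exactly, as the degree-\(2p-1\) guarantee predicts.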

Adaptive Simpson

Fixed-panel methods concentrate evaluations uniformly regardless of local behavior. Adaptive methods refine only where the error estimate is large.

Given an interval \([a,b]\) with midpoint \(m = (a+b)/2\), the error estimator is

\[ \text{error} \approx \frac{\delta}{15}, \qquad \delta = S(a,m) + S(m,b) - S(a,b), \]

where \(S(\cdot,\cdot)\) denotes the single-panel Simpson estimate. The factor \(1/15\) comes from Richardson extrapolation applied to the \(O(h^4)\) error of Simpson's rule: combining the coarse and fine estimates eliminates the leading error term, and the remainder is \(1/15\) of the discrepancy \(\delta\).

If \(|\delta| > 15\varepsilon\) the interval is bisected and the process recurses with tolerance \(\varepsilon/2\) on each half; otherwise the corrected estimate \(S(a,m) + S(m,b) + \delta/15\) is accepted. Evaluations automatically concentrate near singularities and regions of rapid variation.
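The recursion can be sketched as follows; `adaptive_simpson` and its helpers are hypothetical names, not the signature of `num::adaptive_simpson`:

```cpp
#include <cmath>
#include <functional>

namespace {
// Simpson estimate on a single panel [a,b].
double s1(const std::function<double(double)>& f, double a, double b) {
    double m = 0.5 * (a + b);
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b));
}

double adapt(const std::function<double(double)>& f, double a, double b,
             double whole, double eps, int depth) {
    double m = 0.5 * (a + b);
    double left = s1(f, a, m), right = s1(f, m, b);
    double delta = left + right - whole;           // 15x the estimated error
    if (depth <= 0 || std::fabs(delta) <= 15.0 * eps)
        return left + right + delta / 15.0;        // Richardson-corrected accept
    return adapt(f, a, m, left, eps / 2.0, depth - 1)   // recurse on each half
         + adapt(f, m, b, right, eps / 2.0, depth - 1); // with half the tolerance
}
} // namespace

// Adaptive Simpson to absolute tolerance eps, with a recursion-depth cap.
double adaptive_simpson(const std::function<double(double)>& f,
                        double a, double b, double eps) {
    return adapt(f, a, b, s1(f, a, b), eps, 60);
}
```

On \(\int_0^1 \sqrt{x}\,dx\), where \(f'\) is singular at 0, the refinement clusters near the left endpoint while the smooth right half stays coarse.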

Romberg Integration

Romberg applies Richardson extrapolation to the trapezoidal rule. The Euler–Maclaurin expansion shows that the trapezoidal error has an asymptotic expansion in even powers of \(h\):

\[ T(h) = I + c_2 h^2 + c_4 h^4 + c_6 h^6 + \cdots \]

Combining estimates at \(h\) and \(h/2\) eliminates the \(c_2 h^2\) term. The general Richardson recurrence builds a triangular table:

\[ R_{i,j} = \frac{4^j R_{i,j-1} - R_{i-1,j-1}}{4^j - 1}. \]

With \(R_{i,0}\) the trapezoidal estimate on \(2^i\) panels, the \(j=1\) column reproduces Simpson's rule. The diagonal entry \(R_{k,k}\) has error \(O(h^{2k+2})\), is exact for polynomials of degree up to \(2k+1\), and converges faster than any fixed power of \(h\) for smooth \(f\) (super-algebraic convergence). For analytic \(f\), convergence is exponential.
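The triangular table can be sketched as follows; `romberg` is a hypothetical helper, not the implementation of `num::romberg`. Each trapezoid refinement reuses the previous sum, so only the new midpoints are evaluated:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Romberg integration: trapezoidal estimates on 1, 2, 4, ..., 2^k panels,
// refined column by column with the Richardson recurrence. Returns R[k][k].
double romberg(const std::function<double(double)>& f, double a, double b, int k) {
    std::vector<std::vector<double>> R(k + 1);
    double h = b - a;
    R[0] = {0.5 * h * (f(a) + f(b))};              // R[0][0]: one-panel trapezoid
    for (int i = 1; i <= k; ++i) {
        h *= 0.5;
        double sum = 0.0;                           // only the odd (new) nodes
        for (int j = 1; j <= (1 << (i - 1)); ++j)
            sum += f(a + (2 * j - 1) * h);
        R[i].resize(i + 1);
        R[i][0] = 0.5 * R[i - 1][0] + h * sum;      // trapezoid at halved h
        double p4 = 1.0;
        for (int j = 1; j <= i; ++j) {              // Richardson columns
            p4 *= 4.0;
            R[i][j] = (p4 * R[i][j - 1] - R[i - 1][j - 1]) / (p4 - 1.0);
        }
    }
    return R[k][k];
}
```

As a sanity check, \(R_{1,1}\) coincides with Simpson's rule, so `romberg` with `k = 1` integrates \(x^3\) exactly.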

Method Comparison

Method | Error order | Evaluations | Best for
Trapezoidal | \(O(h^2)\) | \(n+1\) | Baseline; periodic \(f\)
Simpson | \(O(h^4)\) | \(n+1\) | Smooth \(f\), low cost
Gauss–Legendre (\(p\) pts) | exact to degree \(2p-1\) | \(p\) | Smooth \(f\), few evaluations
Adaptive Simpson | error-controlled | automatic | Irregular or oscillatory \(f\)
Romberg | super-algebraic | \(O(2^k)\) | Very smooth \(f\), high accuracy

API: num::trapz, num::simpson, num::gauss_legendre, num::adaptive_simpson, num::romberg


Root Finding

Given a continuous function \(f : \mathbb{R} \to \mathbb{R}\), find \(x^*\) such that \(f(x^*) = 0\). Methods differ in convergence speed, the information required (bracket, derivative), and robustness guarantees.

Bisection

The Intermediate Value Theorem guarantees a root in \([a,b]\) when \(f(a)f(b) < 0\). Bisection exploits this by repeatedly halving the bracket.

Error bound: after \(n\) iterations,

\[ |x_n - x^*| \leq \frac{b-a}{2^n}. \]

To reach tolerance \(\varepsilon\), \(n \geq \log_2\!\left(\tfrac{b-a}{\varepsilon}\right)\) iterations are required. The method is derivative-free and convergence is guaranteed, but the linear rate makes bisection slow for tight tolerances.
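A minimal sketch of the bracket-halving loop; `bisection` is a hypothetical helper, not the implementation of `num::bisection`:

```cpp
#include <cmath>
#include <functional>
#include <stdexcept>

// Bisection on a sign-change bracket [a,b], to absolute tolerance eps.
double bisection(const std::function<double(double)>& f,
                 double a, double b, double eps) {
    double fa = f(a);
    if (fa * f(b) > 0.0)
        throw std::invalid_argument("f(a) and f(b) must differ in sign");
    while (b - a > eps) {
        double m = 0.5 * (a + b), fm = f(m);
        if (fa * fm <= 0.0) { b = m; }            // root in left half
        else                { a = m; fa = fm; }   // root in right half
    }
    return 0.5 * (a + b);
}
```

For example, `bisection(cos, 0.0, 2.0, 1e-10)` converges to \(\pi/2\) in about 34 halvings, matching the \(\log_2\) iteration count above.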

API: num::bisection

Newton–Raphson

A first-order Taylor expansion of \(f\) around \(x_k\) and setting the linearization to zero gives

\[ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}. \]

Quadratic convergence near the root: if \(e_k = x_k - x^*\),

\[ |e_{k+1}| \approx \frac{|f''(x^*)|}{2|f'(x^*)|}\,|e_k|^2. \]

The number of correct decimal digits roughly doubles each iteration. The method can diverge from a poor starting guess or when \(f'(x_k) \approx 0\).
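The iteration can be sketched as follows; `newton` is a hypothetical helper (not the signature of `num::newton`) that takes the derivative explicitly:

```cpp
#include <cmath>
#include <functional>

// Newton–Raphson: iterate x <- x - f(x)/f'(x) until the step is below eps.
double newton(const std::function<double(double)>& f,
              const std::function<double(double)>& df,
              double x0, double eps, int max_iter = 50) {
    double x = x0;
    for (int k = 0; k < max_iter; ++k) {
        double step = f(x) / df(x);   // unguarded: blows up if df(x) is near 0
        x -= step;
        if (std::fabs(step) < eps) break;
    }
    return x;
}
```

Finding \(\sqrt{2}\) as the root of \(f(x) = x^2 - 2\) from \(x_0 = 1\) reaches machine precision in a handful of iterations, reflecting the digit-doubling behavior.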

API: num::newton

Secant Method

The secant method approximates \(f'\) by a finite difference of the two most recent iterates:

\[ x_{k+1} = x_k - f(x_k)\cdot\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}. \]

Superlinear convergence with order \(\phi = (1+\sqrt{5})/2 \approx 1.618\) (the golden ratio):

\[ |e_{k+1}| \approx C\,|e_k|^\phi. \]

The method requires no derivative but needs two initial points and can stagnate if \(f(x_k) \approx f(x_{k-1})\).
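A minimal sketch of the update, including the stagnation guard mentioned above; `secant` is a hypothetical helper, not the implementation of `num::secant`:

```cpp
#include <cmath>
#include <functional>

// Secant method from two starting points x0, x1.
double secant(const std::function<double(double)>& f,
              double x0, double x1, double eps, int max_iter = 50) {
    double f0 = f(x0), f1 = f(x1);
    for (int k = 0; k < max_iter && std::fabs(x1 - x0) > eps; ++k) {
        if (f1 == f0) break;   // flat chord: secant step undefined, stop
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
        x0 = x1; f0 = f1;      // keep only the two most recent iterates
        x1 = x2; f1 = f(x1);
    }
    return x1;
}
```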

API: num::secant

Brent's Method

Brent's method combines three strategies in a single algorithm:

  • Bisection — always guaranteed to contract the bracket.
  • Secant step — superlinear acceleration.
  • Inverse quadratic interpolation — fit a quadratic to \(x\) as a function of \(y\) through the three points \((f(x_i), x_i)\) and evaluate it at \(y = 0\).

At each step Brent attempts the faster interpolation method; if the proposed step falls outside the current bracket or is too large, it falls back to bisection. The bracket \([b,c]\) always contains the root and \(b\) is the current best estimate.

Convergence: superlinear in practice with a guaranteed worst-case linear rate from the bisection fallback. No derivative is required. This is the recommended general-purpose method when a bracket is available, and is the basis of scipy.optimize.brentq.
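The interplay of fast steps and the bisection safeguard can be sketched in simplified form. This is not Brent's full algorithm or `num::brent`: it omits inverse quadratic interpolation and Brent's exact acceptance tests, keeping only a secant step with a bracket check and a forced bisection whenever the bracket fails to halve:

```cpp
#include <cmath>
#include <functional>

// Simplified bracketed hybrid in the spirit of Brent's method.
// Assumes f(a) * f(b) < 0 on entry.
double hybrid_root(const std::function<double(double)>& f,
                   double a, double b, double eps) {
    double fa = f(a), fb = f(b);
    bool force_bisect = false;
    for (int k = 0; k < 200 && std::fabs(b - a) > eps; ++k) {
        double old_width = std::fabs(b - a);
        double m = 0.5 * (a + b);
        double x = m;                                       // default: bisect
        if (!force_bisect && fb != fa) {
            double s = b - fb * (b - a) / (fb - fa);        // secant proposal
            if (s > std::min(a, b) && s < std::max(a, b))   // inside bracket?
                x = s;
        }
        double fx = f(x);
        if (fx == 0.0) return x;
        if (fa * fx < 0.0) { b = x; fb = fx; }
        else               { a = x; fa = fx; }
        // Safeguard: if the bracket did not at least halve, force a bisection
        // next time, giving the guaranteed worst-case linear rate.
        force_bisect = (std::fabs(b - a) > 0.5 * old_width);
    }
    return 0.5 * (a + b);
}
```

The safeguard is the essential design point: fast steps are free to run while they shrink the bracket, but progress is never worse than one bisection every other iteration.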

API: num::brent

Method Comparison

Method | Convergence | Needs bracket | Needs \(f'\) | Use when
Bisection | linear | yes | no | Robustness required, speed not critical
Newton | quadratic | no | yes | Good initial guess, \(f'\) available
Secant | superlinear (\(\phi \approx 1.618\)) | no | no | \(f'\) unavailable, two initial points known
Brent | superlinear (guaranteed linear) | yes | no | General purpose