Help on module integrals:

NAME
    integrals - Implementation of several algorithms for numerical integration problems.

FILE
    /home/lilian/teach/mec-cs101-integrals/integrals.py

DESCRIPTION
    - I was mainly interested in techniques to numerically compute 1D *definite* integrals, of the form :math:`\int_{x=a}^{x=b} f(x) \mathrm{d}x` (for :math:`f` continuous).
    - Some functions can be used to plot a graphical representation of the technique (Riemann sums, trapezoidal rule, 1D Monte-Carlo, etc.).
    - There are also a few 2D techniques (Monte-Carlo, but also one based on `Fubini's Theorem `_), for integrals of the form :math:`\displaystyle \iint_D f(x, y) \mathrm{d}x \mathrm{d}y = \int_{x=a}^{x=b}\left(\int_{y = g(x)}^{y = h(x)} f(x, y) \mathrm{d}y \right) \mathrm{d}x.`
    - Similarly, I experimented with these two techniques on a generalized :math:`k`-dimensional integral, inspired by `this very good Scipy package (scikit-monaco) `_ and `scipy.integrate `_.

    - *Date:* Saturday 18 June 2016, 18:59:23.
    - *Author:* `Lilian Besson `_ for the `CS101 course `_ at `Mahindra Ecole Centrale `_, 2015.
    - *Licence:* `MIT Licence `_, © Lilian Besson.

    Examples
    --------
    Importing the module:

    >>> from integrals import *

    Let us start with a simple example, :math:`x \mapsto x^2`, on :math:`[x_{\min}, x_{\max}] = [0, 3]`.

    >>> k = 2; xmin = 0; xmax = 3
    >>> f = lambda x: x**k

    We can compute its integral formally: :math:`\int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x = \left[ F(x) \right]_{x_{\min}}^{x_{\max}} = F(x_{\max}) - F(x_{\min}) = \frac{3^3}{3} - \frac{0^3}{3} = 27/3 = 9`

    >>> F = lambda x: x**(k+1) / float(k+1)
    >>> F(xmax) - F(xmin)
    9.0

    Riemann sums
    ~~~~~~~~~~~~
    The left Riemann sum, with 8 rectangles, gives:

    >>> riemann_left(f, xmin, xmax, n=8)  # doctest: +ELLIPSIS
    7.382...

    .. image:: riemannleft2.png
       :scale: 80%
       :align: center

    >>> riemann_right(f, xmin, xmax, n=8)  # doctest: +ELLIPSIS
    10.757...

    .. image:: riemannright2.png
       :scale: 80%
       :align: center

    >>> riemann_center(f, xmin, xmax, n=8)  # doctest: +ELLIPSIS
    8.964...

    .. image:: riemanncenter2.png
       :scale: 80%
       :align: center

    Trapezoidal rule
    ~~~~~~~~~~~~~~~~

    >>> trapez(f, xmin, xmax, n=3)
    9.5

    .. image:: trapezoides2.png
       :scale: 80%
       :align: center

    More examples
    ~~~~~~~~~~~~~
    See below: at least one example is included for each integration method. Currently, there are **228 doctests**, corresponding to about **50 examples of numerically computed integrals** (and `they all pass, i.e. each test does exactly what I expected it to do <../doctest/output.txt>`_).

    ----------------------------------------------------------------------------------

    Small things that still need to be done
    ---------------------------------------
    .. todo:: Conclude this main module: more Gaussian quadratures?

    .. todo:: Make a general :func:`nd_integral` function, letting the user choose the integration method to use for 1D (same as :func:`nd_quad`, but not restricted to using :func:`gaussian_quad`).

    .. todo:: Include more examples in the `tests script `_?

    References
    ----------
    - The reference book for **MA102**: "Calculus", *Volume II*, by **T. Apostol**,
    - `Numerical Integration (on Wikipedia) `_,
    - `scipy.integrate tutorial `_,
    - `sympy.integrals documentation `_ (and the `numerical integrals part `_).

    .. seealso:: I also wrote a complete solution for the other project I was in charge of, `about Matrices and Linear Algebra `_.
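    Closed forms such as :math:`F(x_{\max}) - F(x_{\min})` above can also be double-checked symbolically. Here is a small illustrative sketch of such a check (not part of this module), assuming SymPy is installed::

        import sympy

        x = sympy.Symbol('x')
        # Symbolic check of the example above: the integral of x**2 on [0, 3] is 9.
        print(sympy.integrate(x**2, (x, 0, 3)))  # -> 9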
----------------------------------------------------------------------------------

List of functions
-----------------

FUNCTIONS
    boole(f, xmin, xmax, n=1000)
        Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using the composite Boole's rule.

        .. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \tfrac{2h}{45} \sum_{i=0}^{n-1}\bigg(7f(x_{4i}) + 32f(x_{4i+1}) + 12f(x_{4i+2}) + 32f(x_{4i+3}) + 7f(x_{4i+4})\bigg),

        where :math:`x_i = x_{\min} + i h` for :math:`i=0, 1, \dots, 4n-1, 4n` and :math:`h = \frac{x_{\max} - x_{\min}}{4n}`.

        .. admonition:: Example 1 for :func:`boole`:

            A first example on a trigonometric function with *nice* bounds:

            >>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
            0.0
            >>> boole_int = boole(math.cos, 0, math.pi, n=10)
            >>> boole_int; abs(round(boole_int, 0))  # doctest: +ELLIPSIS
            1.612...e-16
            0.0
            >>> round(100*abs(exact_int - boole_int), 0)  # % Error
            0.0

        - Reference: `Boole's Rule (on MathWorld) `_ and `Boole's Rule (on Wikipedia) `_,
        - The function :math:`f` is evaluated :math:`4n` times,
        - This method is exact up to order 4 (i.e. integrals of polynomials of degree :math:`\leq 4` are computed exactly).

        .. admonition:: Example 2 for :func:`boole`:

            A second easy example, on a degree 3 polynomial:

            >>> f = lambda x: (x**3) + (x**2) + (7*x) + 4
            >>> F = lambda x: (x**4)/4.0 + (x**3)/3.0 + (7 * x**2)/2.0 + (4*x)
            >>> a, b = -4, 6
            >>> exact_int2 = F(b) - F(a); round(exact_int2, 0)
            463.0
            >>> boole_int2 = boole(f, a, b, n=10)  # n = 10 is good enough!
            >>> boole_int2; abs(round(boole_int2, 6))  # Weird!
            463.33333333333337
            463.333333
            >>> round(100*abs(exact_int2 - boole_int2), 0)  # % Error
            0.0

        .. admonition:: Example 3 for :func:`boole`:

            On the same harder example (also used for :func:`simpson` and :func:`romberg` below):

            >>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
            >>> a, b = 1993, 2015
            >>> boole(f, a, b, n=1)  # doctest: +ELLIPSIS
            373463.255...
            >>> boole(f, a, b, n=2)  # doctest: +ELLIPSIS
            374343.342...
            >>> boole(f, a, b, n=100)  # Really close to the exact value.  # doctest: +ELLIPSIS
            374133.193...
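        The composite formula above translates almost line by line into code. The following is only an illustrative sketch (the name ``boole_sketch`` is mine, not the module's), kept close to the formula with :math:`h = \frac{x_{\max} - x_{\min}}{4n}`::

            import math

            def boole_sketch(f, xmin, xmax, n=1000):
                """Composite Boole's rule with n panels of 4 sub-intervals each."""
                h = (xmax - xmin) / (4.0 * n)
                total = 0.0
                for i in range(n):
                    x0 = xmin + 4 * i * h  # left end of the i-th panel
                    total += (7 * f(x0) + 32 * f(x0 + h) + 12 * f(x0 + 2 * h)
                              + 32 * f(x0 + 3 * h) + 7 * f(x0 + 4 * h))
                return (2.0 * h / 45.0) * total

            print(boole_sketch(math.cos, 0, math.pi, n=10))  # close to 0, as above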
    dbl_quad(f, a, b, g, h, n=10)
        Compute the double integral of :math:`f(x, y)`, where :math:`y` ranges between :math:`g(x)` and :math:`h(x)` and :math:`x` ranges between :math:`a` and :math:`b`, by using two interlaced Gaussian quadratures.

        - Based on `Fubini's Theorem `_, for integrals of the form :math:`\displaystyle \iint_D f(x, y) \mathrm{d}x \mathrm{d}y = \int_{x = a}^{x = b}\left(\int_{y = g(x)}^{y = h(x)} f(x, y) \mathrm{d}y \right) \mathrm{d}x.`

        .. admonition:: Example 1 for :func:`dbl_quad`:

            For example, :math:`\int_{x = 0}^{x = 1}\bigg(\int_{y = 0}^{y = 1} \left( x^2 + y^2 \right) \mathrm{d}y \bigg) \mathrm{d}x = 2 \int_{0}^{1} x^2 \mathrm{d}x = 2 \frac{1^3}{3} = 2/3`:

            >>> f = lambda x, y: x**2 + y**2
            >>> a, b = 0, 1
            >>> g = lambda x: a
            >>> h = lambda x: b
            >>> dbl_quad(f, a, b, g, h, n=1)
            0.5
            >>> dbl_quad(f, a, b, g, h, n=2)  # exact from n=2 points
            0.66666666666666674
            >>> dbl_quad(f, a, b, g, h, n=40)  # more points do not bring more digits
            0.66666666666666574

        .. admonition:: Example 2 for :func:`dbl_quad`:

            A second example could be :math:`\int_{x = 0}^{x = \pi/2}\bigg(\int_{y = 0}^{y = \pi/2} \left( \cos(x) y^8 \right) \mathrm{d}y \bigg) \mathrm{d}x`.

            >>> f = lambda x, y: math.cos(x) * y**8
            >>> a, b = 0, math.pi/2.0
            >>> g = lambda x: a
            >>> h = lambda x: b

            This integral can be computed mathematically: :math:`\int_{x = 0}^{x = \pi/2}\bigg(\int_{y = 0}^{y = \pi/2} \left( \cos(x) y^8 \right) \mathrm{d}y \bigg) \mathrm{d}x = \frac{(\pi/2)^9}{9} \int_{0}^{\pi/2} \cos(x) \mathrm{d}x = \frac{(\pi/2)^9}{9} \approx 6.468988`

            >>> int2d_exact = (b**9) / 9.0; int2d_exact  # doctest: +ELLIPSIS
            6.4689...

            Let us see how efficient the double Gaussian quadrature method is:

            >>> dbl_quad(f, a, b, g, h, n=1)  # doctest: +ELLIPSIS
            0.2526...
            >>> dbl_quad(f, a, b, g, h, n=2)  # still not very precise for n=2 points  # doctest: +ELLIPSIS
            4.3509...

            With :math:`n = 40`, we have :math:`n^2 = 40^2 = 1600` points:

            >>> int2d_approx = dbl_quad(f, a, b, g, h, n=40); int2d_approx  # first 13 digits are perfect  # doctest: +ELLIPSIS
            6.4689...
            >>> 100 * abs(int2d_exact - int2d_approx) / int2d_exact  # Relative % error, 1e-12 is VERY SMALL  # doctest: +ELLIPSIS
            6.59...e-13
            >>> 100 * abs(int2d_exact - int2d_approx)  # Absolute % error, 1e-12 is really good!  # doctest: +ELLIPSIS
            4.263...e-12

            We see again that all these methods are *limited to a precision of 12 to 14 digits*, because we use Python `float` numbers (*IEEE-754* floating point arithmetic).

        .. admonition:: Example 3 for :func:`dbl_quad`:

            Three examples are given here, with *moving* bounds: :math:`g(x)` or :math:`h(x)` really depend on :math:`x`.

            The first one is :math:`\displaystyle \iint_{(x, y) \in D_1} \cos(y^2) \;\; \mathrm{d}(x,y) = \int_0^1 \int_{x}^{1} \cos(y^2) \;\; \mathrm{d}y \mathrm{d}x = \frac{\sin(1)}{2} \approx 0.4207354924039`:

            >>> a1, b1 = 0, 1
            >>> g1 = lambda x: x; h1 = lambda x: 1
            >>> f1 = lambda x, y: math.cos(y**2)
            >>> exact_dbl_int1 = math.sin(1.0) / 2.0; exact_dbl_int1  # doctest: +ELLIPSIS
            0.4207...
            >>> dbl_int1 = dbl_quad(f1, a1, b1, g1, h1, n=4)
            >>> dbl_int1  # doctest: +ELLIPSIS
            0.4207...
            >>> 100 * abs(dbl_int1 - exact_dbl_int1)  # Not perfect yet, n is too small  # doctest: +ELLIPSIS
            3.933...e-05
            >>> dbl_int1 = dbl_quad(f1, a1, b1, g1, h1, n=100)
            >>> dbl_int1  # doctest: +ELLIPSIS
            0.4207...
            >>> 100 * abs(dbl_int1 - exact_dbl_int1)  # Almost perfect computation (13 digits are correct)  # doctest: +ELLIPSIS
            0.0

            .. note:: Solved with `SymPy `_ integrals module: `this program on bitbucket.org/lbesson/python-demos `_ (written in January 2015) uses SymPy to *symbolically* compute this double integral.

            The second one computes the area between the curves :math:`y = x^2` and :math:`y = \sqrt{x}`, for :math:`x \in [0, 1]`: :math:`\displaystyle \text{Area}(D_2) = \iint_{(x, y) \in D_2} 1 \;\; \mathrm{d}(x,y) = \int_0^1 \int_{x^2}^{\sqrt{x}} 1 \;\; \mathrm{d}y \mathrm{d}x = \frac{2}{3} - \frac{1}{3} = \frac{1}{3}`:

            >>> a2, b2 = 0, 1
            >>> g2 = lambda x: x**2; h2 = lambda x: x**0.5
            >>> f2 = lambda x, y: 1
            >>> exact_dbl_int2 = 1.0 / 3.0
            >>> dbl_int2 = dbl_quad(f2, a2, b2, g2, h2, n=100)
            >>> dbl_int2  # doctest: +ELLIPSIS
            0.3333...
            >>> 100 * abs(dbl_int2 - exact_dbl_int2)  # About 1e-05 %, very good!  # doctest: +ELLIPSIS
            1.01432...e-05

            The last one is :math:`\displaystyle \iint_{(x, y) \in D_3} \frac{\sin(y)}{y} \;\; \mathrm{d}(x,y) = \int_0^1 \int_{x}^{1} \frac{\sin(y)}{y} \;\; \mathrm{d}y \mathrm{d}x = 1 - \cos(1) \approx 0.45969769413186`:

            >>> a3, b3 = 0, 1
            >>> g3 = lambda x: x; h3 = lambda x: 1
            >>> f3 = lambda x, y: math.sin(y) / y
            >>> exact_dbl_int3 = 1 - math.cos(1.0); exact_dbl_int3
            0.45969769413186023
            >>> dbl_int3 = dbl_quad(f3, a3, b3, g3, h3, n=100)
            >>> dbl_int3
            0.4596976941318604
            >>> 100 * abs(dbl_int3 - exact_dbl_int3)  # Almost perfect computation (14 digits are correct)  # doctest: +ELLIPSIS
            1.665...e-14

            .. note:: All these examples come from the **MA102** `quiz given on January the 29th, 2015 `_.
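        One way to implement such interlaced quadratures is to nest two Gauss-Legendre rules, mapping the nodes from :math:`[-1, 1]` to :math:`[a, b]` for the outer integral and to :math:`[g(x), h(x)]` for the inner one. A minimal sketch of this idea (my own illustration, using NumPy's ``leggauss`` instead of the module's cached ``p_roots``)::

            import numpy as np

            def dbl_quad_sketch(f, a, b, g, h, n=10):
                """Nested n-point Gauss-Legendre rules, following Fubini's Theorem."""
                t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
                total = 0.0
                for ti, wi in zip(t, w):
                    x = 0.5 * (b - a) * ti + 0.5 * (b + a)  # outer node, mapped to [a, b]
                    ya, yb = g(x), h(x)
                    inner = sum(wj * f(x, 0.5 * (yb - ya) * tj + 0.5 * (yb + ya))
                                for tj, wj in zip(t, w))
                    total += wi * 0.5 * (yb - ya) * inner
                return 0.5 * (b - a) * total

        On the first example above (:math:`f(x, y) = x^2 + y^2` on :math:`[0, 1]^2`), this sketch is also exact from :math:`n = 2` points, as expected for a rule that integrates cubics exactly.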
    gaussian_quad(f, xmin, xmax, n=10)
        Integrate f between :math:`x_{\min}` and :math:`x_{\max}`, using a Gaussian quadrature.

        - The weights and nodes (roots of the Legendre polynomials) are computed by SciPy (:func:`scipy.special.orthogonal.p_roots`).
        - The complexity of my part is :math:`O(n)`, but I do not know how efficient the `p_roots` function is.
        - I added a cache layer to the `p_roots` function (see :func:`xw_gauss_legendre`).

        .. admonition:: Example for :func:`gaussian_quad`:

            Same example as previously:

            >>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
            >>> a, b = 1993, 2015
            >>> gaussian_quad(f, a, b, n=1)  # doctest: +ELLIPSIS
            279755.057...
            >>> gaussian_quad(f, a, b, n=3)  # doctest: +ELLIPSIS
            343420.473...
            >>> gaussian_quad(f, a, b, n=100)  # Quite accurate result, see above.  # doctest: +ELLIPSIS
            374133.206...
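        The only subtlety here is the affine change of variables from :math:`[-1, 1]` (where the Gauss-Legendre nodes live) to :math:`[x_{\min}, x_{\max}]`. A sketch of this approach, without the cache layer (the name ``gaussian_quad_sketch`` is mine; ``p_roots`` is the SciPy function mentioned above)::

            from scipy.special import p_roots  # Gauss-Legendre nodes and weights

            def gaussian_quad_sketch(f, xmin, xmax, n=10):
                """n-point Gauss-Legendre quadrature on [xmin, xmax]."""
                t, w = p_roots(n)  # nodes t_i and weights w_i, on [-1, 1]
                half = 0.5 * (xmax - xmin)
                mid = 0.5 * (xmax + xmin)
                return half * sum(wi * f(half * ti + mid) for ti, wi in zip(t, w))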
    montecarlo(f, xmin, xmax, n=10000, ymin=None, ymax=None)
        Compute an approximation of the integral of :math:`f(x)` for :math:`x` from :math:`x_{\min}` to :math:`x_{\max}`.

        - Each point :math:`(x, y)` is taken in the rectangle :math:`[x_{\min}, x_{\max}] \times [y_{\min}, y_{\max}]`.
        - :math:`n` is the number of random points to pick (it should be large, at least 1000 or so).
        - What is returned is :math:`\text{area} \approx (\text{Area rectangle}) \times (\text{Estimated ratio})`, plus the area of the strip below :math:`y_{\min}`:

        .. math:: \text{area} \approx (x_{\max} - x_{\min}) \times y_{\min} + (x_{\max} - x_{\min}) \times (y_{\max} - y_{\min}) \times \frac{\text{Nb points below the curve}}{\text{Nb points}}.

        .. warning:: The values :math:`y_{\min}` and :math:`y_{\max}` should satisfy :math:`y_{\min} \leq \mathrm{\min}(\{ f(x): x_{\min} \leq x \leq x_{\max} \})` and :math:`\mathrm{\max}(\{ f(x): x_{\min} \leq x \leq x_{\max} \}) \leq y_{\max}`.

        .. admonition:: Example 1 for :func:`montecarlo`:

            For example, we are interested in :math:`\int_1^6 x \mathrm{d} x = \frac{6^2}{2} - \frac{1^2}{2} = 17.5`.

            >>> random.seed(1)  # same values all the time
            >>> xmin, xmax = 1, 6
            >>> f = lambda x: x  # simple example
            >>> intf = (xmax**2 / 2.0) - (xmin**2 / 2.0); intf
            17.5
            >>> ymin, ymax = xmin, xmax

            Let us start by taking 100 points:

            >>> n = 100
            >>> intf_apporx = montecarlo(f, xmin, xmax, n, ymin, ymax); intf_apporx
            18.0
            >>> 100 * abs(intf - intf_apporx) / abs(intf)  # Relative % error, 2.8%  # doctest: +ELLIPSIS
            2.857...

            The more points we take, the better the approximation will be:

            >>> n = 100000
            >>> intf_apporx = montecarlo(f, xmin, xmax, n, ymin, ymax); intf_apporx  # doctest: +ELLIPSIS
            17.444...
            >>> 100 * abs(intf - intf_apporx) / abs(intf)  # Relative % error, 0.32%  # doctest: +ELLIPSIS
            0.318...

        .. admonition:: Example 2 for :func:`montecarlo`:

            We can also let the function compute `ymin` and `ymax` by itself:

            >>> n = 100000
            >>> intf_apporx = montecarlo(f, xmin, xmax, n); intf_apporx  # doctest: +ELLIPSIS
            17.485...
            >>> 100 * abs(intf - intf_apporx) / abs(intf)  # Relative % error, 0.08% is really good!  # doctest: +ELLIPSIS
            0.0844...

        .. image:: montecarlo.png
           :scale: 95%
           :align: center
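        A hit-or-miss estimator like the one described above fits in a few lines. This is my own simplified illustration, not the module's code (in particular it will not reproduce the seeded doctest values, which depend on the module's exact sampling order); it assumes :math:`y_{\min} \leq f(x)` on the whole interval, so that the strip below :math:`y_{\min}` can simply be added back::

            import random

            def montecarlo_sketch(f, xmin, xmax, n=10000, ymin=0.0, ymax=1.0):
                """Hit-or-miss Monte-Carlo estimate of the integral of f."""
                below = 0
                for _ in range(n):
                    x = random.uniform(xmin, xmax)
                    y = random.uniform(ymin, ymax)
                    if y <= f(x):
                        below += 1
                rectangle = (xmax - xmin) * (ymax - ymin)
                # Strip below ymin, plus the estimated area inside the rectangle.
                return (xmax - xmin) * ymin + rectangle * below / float(n)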
    nd_montecarlo(f, Xmin, Xmax, n=10000, ymin=None, ymax=None)
        Compute an approximation of the k-dimensional integral of :math:`f(\overrightarrow{x})`, on a hypercube (k-dimensional box) :math:`D = [\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`.

        - The function f has to accept an *iterable* of size k (list, tuple, numpy array, etc.).
        - Each point :math:`\overrightarrow{x}` is taken in the hypercube :math:`[\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`.
        - :math:`n` is the number of random points to pick (it should be large, at least 1000 or so).
        - What is returned is :math:`\text{area} \approx (\text{Volume hypercube}) \times (\text{Estimated ratio})`, i.e. :math:`\displaystyle \text{area} \approx \prod_{i=1}^{k} \bigg( \text{Xmax}_i - \text{Xmin}_i \bigg) \times \frac{\text{Nb points below the curve}}{\text{Nb points}}`.
        - See `this trick `_, which explains how to integrate a function on a more complicated domain: the basic idea is to include the knowledge of the domain (inequalities, equalities) in the function f itself.

        .. admonition:: Example for :func:`nd_montecarlo`:

            For example, we can compute the volume of a 3D sphere of radius R, :math:`V_R = \frac{4}{3} \pi R^3`, by integrating the function :math:`f : X \mapsto 1` on the cube :math:`[-R, R]^3`.

            >>> R = 1
            >>> f = lambda X: 1

            For :math:`R = 1`, :math:`V_R = V_1 \approx 4.18879`:

            >>> V_3 = (4.0/3.0) * math.pi * (R**3); V_3  # doctest: +ELLIPSIS
            4.18879...

            As previously, the trick is to multiply :math:`f(X)` by 1 if :math:`X` is inside the sphere, and by 0 otherwise:

            >>> isInside = lambda X: 1 if (sum(x**2 for x in X) <= R**2) else 0
            >>> F = lambda X: f(X) * isInside(X)

            Then we integrate on :math:`[0, R]^3` to get :math:`1/8` times the volume:

            >>> Xmin = [0, 0, 0]; Xmax = [R, R, R]
            >>> random.seed(1)  # same values all the time
            >>> (2**3) * nd_montecarlo(F, Xmin, Xmax, n=10)  # doctest: +ELLIPSIS
            3.2159...
            >>> (2**3) * nd_montecarlo(F, Xmin, Xmax, n=100)  # doctest: +ELLIPSIS
            3.9395...

            The more points we consider, the better the approximation will be (as usual):

            >>> V_approx1000 = (2**3) * nd_montecarlo(F, Xmin, Xmax, n=1000); V_approx1000  # doctest: +ELLIPSIS
            4.19687...
            >>> 100 * abs(V_3 - V_approx1000) / abs(V_3)  # Relative % error, 0.19% is already very good!  # doctest: +ELLIPSIS
            0.193...
            >>> V_approx10000 = (2**3) * nd_montecarlo(F, Xmin, Xmax, n=10000); V_approx10000  # doctest: +ELLIPSIS
            4.25637...
            >>> 100 * abs(V_3 - V_approx10000) / abs(V_3)  # Relative % error, 1.6% is less accurate. Why?  # doctest: +ELLIPSIS
            1.613...

        .. todo:: Compare this *n-dim* Monte-Carlo (:func:`nd_montecarlo`) with the *n-dim* Gaussian quadrature (:func:`nd_quad`), on the three examples in 2D, but also on more "crazy" examples in higher dimension. My guess is that, for the same number of points (:math:`n^k`), Gaussian quadrature is slower but more accurate, and that for the same computation time, Monte-Carlo gives a better result.
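        The k-dimensional version only changes how points are sampled and how the enclosing volume is computed. A sketch under simplifying assumptions (:math:`f \geq 0` and :math:`y_{\min} = 0`; the name and the reduced signature are mine, not the module's)::

            import random

            def nd_montecarlo_sketch(f, Xmin, Xmax, n=10000, ymax=1.0):
                """Hit-or-miss Monte-Carlo on the hypercube prod_i [Xmin_i, Xmax_i]."""
                volume = 1.0
                for lo, hi in zip(Xmin, Xmax):
                    volume *= (hi - lo)
                below = 0
                for _ in range(n):
                    X = [random.uniform(lo, hi) for lo, hi in zip(Xmin, Xmax)]
                    if random.uniform(0.0, ymax) <= f(X):
                        below += 1
                return volume * ymax * below / float(n)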
    nd_quad(f, Xmin, Xmax, n=10)
        Compute the k-dimensional integral of :math:`f(\overrightarrow{x})`, on a hypercube (k-dimensional box) :math:`D = [\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`, by using k interlaced Gaussian quadratures.

        - Based on the generalized `Fubini's Theorem `_, for integrals of the form :math:`\displaystyle \int_D f(\overrightarrow{x}) \mathrm{d}\overrightarrow{x} = \int_{x_1=\text{Xmin}_1}^{x_1=\text{Xmax}_1} \int_{x_2=\text{Xmin}_2}^{x_2=\text{Xmax}_2} \dots \int_{x_k=\text{Xmin}_k}^{x_k=\text{Xmax}_k} f(x_1, x_2, \dots, x_k) \mathrm{d}x_k \dots \mathrm{d}x_2 \mathrm{d}x_1`.
        - The function f has to accept an *iterable* of size k (list, tuple, numpy array, etc.).
        - Right now, we are taking about :math:`O(n^k)` points, so do not use too large a value for n. A sketch of the underlying recursion is given after the examples below.
        - See `this trick `_, which explains how to integrate a function on a more complicated domain: the basic idea is to include the knowledge of the domain (inequalities, equalities) in the function f itself.

        .. admonition:: Example 1 for :func:`nd_quad`: volume of a 3D sphere.

            For example, we can compute the volume of a 3D sphere of radius R, :math:`V_R = \frac{4}{3} \pi R^3`, by integrating the function :math:`f : X \mapsto 1` on the cube :math:`[-R, R]^3`.

            >>> R = 1
            >>> f = lambda X: 1

            For :math:`R = 1`, :math:`V_R = V_1 \approx 4.18879`:

            >>> V_3 = (4.0/3.0) * math.pi * (R**3); V_3  # doctest: +ELLIPSIS
            4.18879...

            The trick is to multiply :math:`f(X)` by 1 if :math:`X` is inside the sphere, and by 0 otherwise:

            >>> isInside = lambda X: 1 if (sum(x**2 for x in X) <= R**2) else 0
            >>> F = lambda X: f(X) * isInside(X)

            Then we integrate on :math:`[0, R]^3` to get :math:`1/8` times the volume (remember that the smaller the integration domain, the more efficient the method will be):

            >>> Xmin = [0, 0, 0]; Xmax = [R, R, R]
            >>> (2**3) * nd_quad(F, Xmin, Xmax, n=2)
            4.0
            >>> (2**3) * nd_quad(F, Xmin, Xmax, n=4)
            4.0
            >>> (2**3) * nd_quad(F, Xmin, Xmax, n=8)
            4.3182389695603307

            The more points we consider, the better the approximation will be (as usual):

            >>> V_approx10 = (2**3) * nd_quad(F, Xmin, Xmax, n=10); V_approx10  # doctest: +ELLIPSIS
            4.12358...
            >>> 100 * abs(V_3 - V_approx10) / abs(V_3)  # Relative % error, 1.5% is OK  # doctest: +ELLIPSIS
            1.55...
            >>> V_approx40 = (2**3) * nd_quad(F, Xmin, Xmax, n=40); V_approx40  # doctest: +ELLIPSIS
            4.18170...
            >>> 100 * abs(V_3 - V_approx40) / abs(V_3)  # Relative % error, 0.16% is good  # doctest: +ELLIPSIS
            0.16...

        .. admonition:: Example 2 for :func:`nd_quad`: volume of a n-ball.

            We can also try to compute the `volume of a higher dimensional ball `_: :math:`\displaystyle V_{k, R} = \frac{\pi^{k/2}}{\Gamma(k/2 + 1)} R^k`.

            >>> from math import gamma, pi
            >>> V = lambda k, R: (pi**(k/2.0))/(gamma( 1 + k/2.0 )) * (R**k)

            A ball of radius :math:`R = 1` in dimension :math:`k = 5` will have a 5-dim volume of :math:`\frac{8\pi^2}{15} R^{5} \approx 5.263789013914325`:

            >>> k = 5; R = 1
            >>> V_5 = V(k, R); V_5  # Exact value!  # doctest: +ELLIPSIS
            5.26378...

            Similarly, the integration domain can be reduced to :math:`[0, 1] \times \dots \times [0, 1]`.

            >>> Xmin = [0]*k; Xmax = [1]*k
            >>> isInside = lambda X: 1 if (sum(x**2 for x in X) <= R**2) else 0
            >>> F = lambda X: 1.0 * isInside(X)
            >>> V_approx5_3 = (2**k) * nd_quad(F, Xmin, Xmax, n=3)  # 3**5 = 243 points, so really not accurate
            >>> V_approx5_3  # doctest: +ELLIPSIS
            4.2634...
            >>> 100 * abs(V_5 - V_approx5_3) / abs(V_5)  # n=3 gives an error of 19%, that's not too bad!  # doctest: +ELLIPSIS
            19.0049...

            Exactly as before, we can try to take more points:

            >>> V_approx5_10 = (2**k) * nd_quad(F, Xmin, Xmax, n=10)  # 10**5 = 100000 points!
            >>> V_approx5_10  # doctest: +ELLIPSIS
            5.25263...
            >>> 100 * abs(V_5 - V_approx5_10) / abs(V_5)  # Pretty good!  # doctest: +ELLIPSIS
            0.211...
            >>> V_approx5_15 = (2**k) * nd_quad(F, Xmin, Xmax, n=15)  # 15**5 = 759375 points!
            >>> V_approx5_15  # doctest: +ELLIPSIS
            5.24665...
            >>> 100 * abs(V_5 - V_approx5_15) / abs(V_5)  # 0.32%, that's great!  # doctest: +ELLIPSIS
            0.325...

            The Gaussian quadrature is more efficient with an *even* number of points:

            >>> V_approx5_16 = (2**k) * nd_quad(F, Xmin, Xmax, n=16)  # 16**5 = 1048576 points!
            >>> V_approx5_16  # doctest: +ELLIPSIS
            5.263061...
            >>> 100 * abs(V_5 - V_approx5_16) / abs(V_5)  # 0.01%, that's great!  # doctest: +ELLIPSIS
            0.013...
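        Interlacing :math:`k` Gaussian quadratures can be written as a recursion on the coordinates: fix the first coordinate at each of its :math:`n` nodes, and integrate the remaining :math:`k - 1` coordinates recursively, which is where the :math:`O(n^k)` evaluations come from. An illustrative sketch (the names are mine, not the module's)::

            from scipy.special import p_roots

            def nd_quad_sketch(f, Xmin, Xmax, n=10):
                """k-dim integral by nesting 1D Gauss-Legendre quadratures."""
                t, w = p_roots(n)  # shared nodes/weights on [-1, 1]

                def rec(point, lows, highs):
                    if not lows:  # every coordinate is fixed: evaluate f
                        return f(point)
                    a, b = lows[0], highs[0]
                    s = sum(wi * rec(point + [0.5 * (b - a) * ti + 0.5 * (b + a)],
                                     lows[1:], highs[1:])
                            for ti, wi in zip(t, w))
                    return 0.5 * (b - a) * s

                return rec([], list(Xmin), list(Xmax))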
    nd_yminmax(f, Xmin, Xmax, n=10000, threshold=0.005)
        *Experimental* guess of the values :math:`y_{\min}, y_{\max}` for f, by randomly picking :math:`n` points in the hypercube :math:`[\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`.

        - The function f has to accept an *iterable* of size k (list, tuple, numpy array, etc.).
        - `threshold` slightly increases the size of the window, to be cautious. The default is 0.5%.
        - Note: there are more efficient and trustworthy methods, but this one is simple.

        .. warning:: Not sure if the `threshold` is mathematically a good idea...

        .. admonition:: Example for :func:`nd_yminmax`:

            On an easy function, just to check that it works:

            >>> random.seed(1)  # same values all the time
            >>> ymin_exact, ymax_exact = 0, 1
            >>> Xmin = [0, 0]; Xmax = [1, 1]
            >>> F = lambda X: 1 if (sum(x**2 for x in X) <= 1) else 0
            >>> ymin, ymax = nd_yminmax(F, Xmin, Xmax, 100)
            >>> ymin, ymax
            (0.0, 1.005)
            >>> 100 * abs(ymin - ymin_exact)  # Absolute % error < 0.5%
            0.0
            >>> 100 * abs(ymax - ymax_exact)  # Absolute % error < 0.5%
            0.49999999999998934

    plot_montecarlo(f, xmin, xmax, n=1000, ymin=None, ymax=None, namef='$f(x)$', figname=None)
        Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and :math:`n` random points.

        - Each point :math:`(x, y)` is taken in the rectangle :math:`[x_{\min}, x_{\max}] \times [y_{\min}, y_{\max}]`.
        - **Warning:** :math:`y_{\min}` and :math:`y_{\max}` should satisfy :math:`y_{\min} \leq \mathrm{\min}(\{ f(x): x_{\min} \leq x \leq x_{\max} \})` and :math:`\mathrm{\max}(\{ f(x): x_{\min} \leq x \leq x_{\max} \}) \leq y_{\max}`.

        .. admonition:: Example 1 for :func:`plot_montecarlo`:

            A first example: :math:`\int_0^1 x^3 \mathrm{d}x = \frac{1}{4} = 0.25`.

            .. image:: montecarlo2.png
               :scale: 70%
               :align: center

        .. admonition:: Example 2 for :func:`plot_montecarlo`:

            Another example, on a less usual function (:math:`f(x) = \frac{1}{1+\mathrm{sinh}(2x) \log(x)^2}`), with :math:`1500` points, and then :math:`10000` points.

            .. image:: montecarlo3.png
               :scale: 95%
               :align: center

            .. image:: montecarlo4.png
               :scale: 95%
               :align: center

    plot_riemann_center(f, xmin, xmax, namef='$f(x)$', n=10, figname=None)
        Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n Riemann center rectangles. Example:

        .. image:: riemanncenter.png
           :scale: 65%
           :align: center

    plot_riemann_left(f, xmin, xmax, namef='$f(x)$', n=10, figname=None)
        Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n Riemann left rectangles. Example:

        .. image:: riemannleft.png
           :scale: 95%
           :align: center

    plot_riemann_right(f, xmin, xmax, namef='$f(x)$', n=10, figname=None)
        Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n Riemann right rectangles. Example:

        .. image:: riemannright.png
           :scale: 95%
           :align: center

    plot_trapez(f, xmin, xmax, namef='$f(x)$', n=10, figname=None)
        Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n trapeziums. Example:

        .. image:: trapezoides.png
           :scale: 95%
           :align: center

    random_point(Xmin, Xmax, k)
        Pick a random point in the k-dimensional hypercube :math:`[\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`.

        For example, a random point in :math:`[0, 1] \times [0, 2] \times [0, 3] \times [0, 4]` can be:

        >>> random.seed(1)  # same values all the time
        >>> random_point([0, 0, 0, 0], [1, 2, 3, 4], 4)  # doctest: +ELLIPSIS
        [0.134..., 1.694..., 2.291..., 1.020...]

    riemann_center(f, xmin, xmax, n=1000)
        Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n Riemann center rectangles:

        .. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=0}^{n-1} \left( h \times f\left(x_{\min} + \left(i + \tfrac{1}{2}\right) h\right) \right).

        .. admonition:: Example for :func:`riemann_center`:

            A first example on a trigonometric function with *nice* bounds:

            >>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
            0.0
            >>> center_int = riemann_center(math.cos, 0, math.pi, n=15); center_int  # doctest: +ELLIPSIS
            2.918...e-16
            >>> round(100*abs(exact_int - center_int), 0)  # % Error
            0.0

    riemann_left(f, xmin, xmax, n=1000)
        Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n Riemann left rectangles:

        .. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=0}^{n-1} \left( h \times f(x_{\min} + i h) \right).

        For this method and the next ones, we take :math:`n` points, and :math:`h` is defined as :math:`h = \frac{x_{\max} - x_{\min}}{n}`, the horizontal size of the rectangles (or trapeziums).

        .. admonition:: Example for :func:`riemann_left`:

            A first example on a trigonometric function with *nice* bounds:

            >>> exact_int = math.sin(math.pi) - math.sin(0)
            >>> exact_int  # doctest: +ELLIPSIS
            1.22...e-16
            >>> round(exact_int, 0)
            0.0
            >>> left_int = riemann_left(math.cos, 0, math.pi, n=15); left_int  # doctest: +ELLIPSIS
            0.2094...
            >>> round(100*abs(exact_int - left_int), 0)  # Relative % error of 21%, VERY BAD!
            21.0

    riemann_right(f, xmin, xmax, n=1000)
        Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n Riemann right rectangles:

        .. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=1}^{n} \left( h \times f(x_{\min} + i h) \right).

        .. admonition:: Example for :func:`riemann_right`:

            A first example on a trigonometric function with *nice* bounds:

            >>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
            0.0
            >>> right_int = riemann_right(math.cos, 0, math.pi, n=15); right_int  # doctest: +ELLIPSIS
            -0.2094...
            >>> round(100*abs(exact_int - right_int), 0)  # % Error
            21.0

            The more rectangles we compute, the more accurate the approximation will be:

            >>> right_int = riemann_right(math.cos, 0, math.pi, n=2000); right_int  # doctest: +ELLIPSIS
            -0.00157...
            >>> 100*abs(exact_int - right_int)  # Error is about 0.16 %  # doctest: +ELLIPSIS
            0.15...
            >>> round(100*abs(exact_int - right_int), 0)  # % Error
            0.0
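        The three Riemann sums above differ only in where each rectangle touches the curve, so a single sketch covers them all (an illustration of the formulas, not the module's code): ``offset=0`` gives the left rule, ``offset=0.5`` the center rule, and ``offset=1`` the right rule::

            def riemann_sketch(f, xmin, xmax, n=1000, offset=0.0):
                """Riemann sum with n rectangles of width h = (xmax - xmin) / n."""
                h = (xmax - xmin) / float(n)
                return h * sum(f(xmin + (i + offset) * h) for i in range(n))

        For instance, ``riemann_sketch(lambda x: x**2, 0, 3, n=8)`` gives ``7.3828125``, matching the ``riemann_left`` doctest of the introduction.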
    romberg(f, xmin, xmax, n=8, m=None, verb=False)
        (Inductively) compute the :math:`R(n, m)` value by using **dynamic programming**, to approximate :math:`\int_{x_{\min}}^{x_{\max}} f(x) \mathrm{d}x`. This solution is **way more efficient** than the recursive one!

        - The integrand :math:`f` must be of class :math:`\mathcal{C}^{2n+2}` for the method to be correct.
        - Note: a memoization trick is too hard to set up here, as the value :math:`R(n, m)` depends on f, a and b.
        - Time complexity is :math:`O(n m)` and memory complexity is also :math:`O(n m)` (using a dictionary to store all the values :math:`R(i, j)` for all the indices :math:`0 \leq i \leq n, 0 \leq j \leq m, j \leq i`).

        .. admonition:: Example 1 for :func:`romberg`:

            Let us start with a first example from `the Wikipedia page for Romberg's method `_: :math:`\mathrm{erf}(1) = \frac{2}{\sqrt{\pi}} \int_{0}^{1} \exp(-x^2) \mathrm{d}x \approx 0.842700792949715`:

            >>> f = lambda x: (2.0 / math.sqrt(math.pi)) * math.exp(-x**2)
            >>> erf1 = romberg(f, 0, 1, 5, 5); erf1  # doctest: +ELLIPSIS
            0.84270...
            >>> exact_erf1 = 0.842700792949715  # From Wikipedia
            >>> 100 * abs(erf1 - exact_erf1)  # Absolute % error, 2e-11 is almost perfect!  # doctest: +ELLIPSIS
            2.070...e-11

            We can check that :math:`\int_{0}^{\pi} \sin(x) \mathrm{d}x = 2`, with `n = m = 5`:

            >>> area = romberg(math.sin, 0, math.pi, 5, 5); area
            2.0000000000013207
            >>> 100 * abs(area - 2.0)  # Absolute % error, 1e-10 is already very good!  # doctest: +ELLIPSIS
            1.320...e-10

            We also check that :func:`romberg` works for functions that are not always positive (alternating between positive and negative values): :math:`\int_{0}^{1001 \pi} \sin(x) \mathrm{d}x = \int_{1000 \pi}^{1001 \pi} \sin(x) \mathrm{d}x = \int_{0}^{\pi} \sin(x) \mathrm{d}x = 2`:

            >>> area2 = romberg(math.sin, 0, 1001*math.pi, 5, 5); area2  # doctest: +ELLIPSIS
            -148.929...
            >>> 100 * abs(area2 - 2.0)  # Really bad here!  # doctest: +ELLIPSIS
            15092.968...
            >>> area3 = romberg(math.sin, 0, 1001*math.pi, 15, 15); area3  # doctest: +ELLIPSIS
            1.999...
            >>> 100 * abs(area3 - 2.0)  # Should be better: yes indeed, an absolute error of 3e-09 % is quite good!  # doctest: +ELLIPSIS
            3.145...e-09

        .. admonition:: Example 2 for :func:`romberg`:

            Now, I would like to consider more examples; they will all be computed with `n = m = 5`:

            >>> n = m = 5
            >>> a = 0; b = 1

            First, we can compute an approximation of :math:`\frac{\pi}{4}` by integrating the function :math:`f_1(x) = \sqrt{1-x^2}` on :math:`[0, 1]`:

            >>> f1 = lambda x: (1-x**2)**0.5
            >>> int_f1 = romberg(f1, a, b, n, m); int_f1  # doctest: +ELLIPSIS
            0.784...
            >>> 100 * abs(int_f1 - math.pi / 4.0)  # 0.05%, that's really good!  # doctest: +ELLIPSIS
            0.053...

            For :math:`f_2(x) = x^2`, :math:`\int_{0}^{1} f_2(x) \mathrm{d}x = \frac{1}{3}`:

            >>> f2 = lambda x: x**2
            >>> int_f2 = romberg(f2, a, b, n, m); int_f2
            0.3333333333333333
            >>> 100 * abs(int_f2 - 1.0/3.0)
            0.0

            For :math:`f_3(x) = \sin(x)`, :math:`\int_{0}^{\pi} f_3(x) \mathrm{d}x = 2`:

            >>> f3 = math.sin; b = math.pi
            >>> int_f3 = romberg(f3, a, b, n, m); int_f3
            2.0000000000013207
            >>> 100 * abs(int_f3 - 2.0)  # 1e-10 % error, that's almost perfect!  # doctest: +ELLIPSIS
            1.320...e-10

        .. admonition:: Example 3 for :func:`romberg`:

            For :math:`f_4(x) = \exp(x)`, it is easy to compute any integral: :math:`\int_{-4}^{19} f_4(x) \mathrm{d}x = \int_{-4}^{19} \exp(x) \mathrm{d}x = \exp(19) - \exp(-4)`:

            >>> f4 = math.exp
            >>> n, m = 5, 5
            >>> a, b = -4, 19
            >>> int_f4 = romberg(f4, a, b, n, m); int_f4  # doctest: +ELLIPSIS
            178495315.533...
            >>> exact_f4 = f4(b) - f4(a); exact_f4  # doctest: +ELLIPSIS
            178482300.944...
            >>> 100 * abs(int_f4 - exact_f4)  # Not so good a result! n=m=5 is not enough  # doctest: +ELLIPSIS
            1301458.822...

            As we can see, the result is not satisfactory with `n = m = 5`, for a function :math:`f` that takes *"big"* and *"small"* values on the integration interval :math:`[a, b]`. Now what happens if we increase the value of `n` (and keep `m = n`)?

            >>> n, m = 10, 10  # More points!
            >>> int_f4_2 = romberg(f4, a, b, n, m); int_f4_2  # doctest: +ELLIPSIS
            178482300.944...
            >>> exact_f4_2 = f4(b) - f4(a)
            >>> 100 * abs(int_f4_2 - exact_f4_2)  # A smaller error!  # doctest: +ELLIPSIS
            5.960...e-06

            Another example on a *"big"* integral and a *"big"* interval (the numerical value of the integral is bigger):

            >>> a, b = -1000, 20; n, m = 10, 10  # More points!
            >>> int_f4_3 = romberg(f4, a, b, n, m); int_f4_3  # doctest: +ELLIPSIS
            485483299.278...
            >>> exact_f4_3 = f4(b) - f4(a); exact_f4_3  # doctest: +ELLIPSIS
            485165195.409...
            >>> 100 * abs(int_f4_3 - exact_f4_3)  # It is not accurate for big intervals  # doctest: +ELLIPSIS
            31810386.832...

            Let us compare this not-so-accurate result (obtained with `n = m = 10`) with the one given by `n = m = 20`:

            >>> a, b = -1000, 20; n, m = 20, 20  # More points! n=m=20 is really big!
            >>> int_f4_4 = romberg(f4, a, b, n, m); int_f4_4  # doctest: +ELLIPSIS
            485165195.409...
            >>> 100 * abs(int_f4_4 - exact_f4_3)
            0.0

        .. admonition:: Example 4 for :func:`romberg`:

            On the same example as above (for :func:`simpson` and :func:`boole`), to compare the two implementations of Romberg's method:

            >>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
            >>> a, b = 1993, 2015
            >>> romberg(f, a, b, n=0)  # really not accurate!  # doctest: +ELLIPSIS
            477173.613...
            >>> romberg(f, a, b, n=1)  # doctest: +ELLIPSIS
            345561.243...
            >>> romberg(f, a, b, n=2)  # doctest: +ELLIPSIS
            373463.255...
            >>> romberg(f, a, b, n=3)  # doctest: +ELLIPSIS
            374357.311...
            >>> romberg(f, a, b, n=5)  # doctest: +ELLIPSIS
            374134.549...

            At some point, increasing the value of :math:`n` does not change the result anymore (due to the limited precision of `float` computations):

            >>> romberg(f, a, b, n=8)  # Almost the exact value.  # doctest: +ELLIPSIS
            374133.192...
            >>> romberg(f, a, b, n=10)  # More precise!  # doctest: +ELLIPSIS
            374133.193...
            >>> romberg(f, a, b, n=14)  # Again more precise!  # doctest: +ELLIPSIS
            374133.193...
            >>> romberg(f, a, b, n=20)  # Exact value!  # doctest: +ELLIPSIS
            374133.193...
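        The dynamic-programming formulation fills a triangular table: the first column contains trapezoidal estimates with :math:`2^i` intervals, and each following column applies one step of Richardson extrapolation. A sketch of this standard scheme (the names are mine, and the module's `verb` option is omitted)::

            def romberg_sketch(f, xmin, xmax, n=8, m=None):
                """Romberg's method: trapezoids in column 0, then extrapolation."""
                if m is None:
                    m = n
                m = min(m, n)
                R = {(0, 0): 0.5 * (xmax - xmin) * (f(xmin) + f(xmax))}
                h = xmax - xmin
                for i in range(1, n + 1):
                    h *= 0.5  # now h = (xmax - xmin) / 2**i
                    mids = sum(f(xmin + (2 * k - 1) * h)
                               for k in range(1, 2**(i - 1) + 1))
                    R[(i, 0)] = 0.5 * R[(i - 1, 0)] + h * mids
                for j in range(1, m + 1):  # Richardson extrapolation columns
                    for i in range(j, n + 1):
                        R[(i, j)] = (R[(i, j - 1)]
                                     + (R[(i, j - 1)] - R[(i - 1, j - 1)]) / (4.0**j - 1))
                return R[(n, m)]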
    romberg_rec(f, xmin, xmax, n=8, m=None)
        Compute the :math:`R(n, m)` value **recursively**, to approximate :math:`\int_{x_{\min}}^{x_{\max}} f(x) \mathrm{d}x`.

        - The integrand :math:`f` must be of class :math:`\mathcal{C}^{2n+2}` for the method to be correct.
        - Time complexity is :math:`O(2^{n m})` and memory complexity is also :math:`O(2^{n m})`.
        - **Warning**: the time complexity increases very quickly with respect to :math:`n` here, so be cautious.

        .. admonition:: Example for :func:`romberg_rec`:

            On the same *hard* example as above:

            >>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
            >>> a, b = 1993, 2015
            >>> romberg_rec(f, a, b, n=0)  # really not accurate!  # doctest: +ELLIPSIS
            477173.613...
            >>> romberg_rec(f, a, b, n=1)  # already pretty good!  # doctest: +ELLIPSIS
            345561.243...
            >>> romberg_rec(f, a, b, n=2)  # doctest: +ELLIPSIS
            373463.255...
            >>> romberg_rec(f, a, b, n=3)  # doctest: +ELLIPSIS
            374357.311...
            >>> romberg_rec(f, a, b, n=8)  # Almost the exact value.  # doctest: +ELLIPSIS
            374133.192...

            We should not go further (:math:`4^n` is increasing quickly!). With a bigger order, this recursive implementation will fail, because of Python's recursion limit (1000 by default; Python does not optimize tail calls)!
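        For comparison, the recursive definition of :math:`R(n, m)` can be written directly: every call with :math:`m > 0` spawns two more calls, which is exactly why it blows up so quickly. An illustrative sketch (the name is mine; no memoization, by design)::

            def romberg_rec_sketch(f, xmin, xmax, n=8, m=None):
                """Naive recursive R(n, m); exponential cost, for illustration only."""
                if m is None:
                    m = n
                if m == 0:
                    # R(n, 0) is the trapezoidal rule with 2**n intervals.
                    num = 2**n
                    h = (xmax - xmin) / float(num)
                    return h * (0.5 * f(xmin) + 0.5 * f(xmax)
                                + sum(f(xmin + i * h) for i in range(1, num)))
                r1 = romberg_rec_sketch(f, xmin, xmax, n, m - 1)
                r2 = romberg_rec_sketch(f, xmin, xmax, n - 1, m - 1)
                return r1 + (r1 - r2) / (4.0**m - 1)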
    simpson(f, xmin, xmax, n=1000)
        Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using the composite Simpson's rule.

        .. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \tfrac{h}{3} \bigg(f(x_0) + 2\sum_{j=1}^{n/2-1} f(x_{2j}) + 4\sum_{j=1}^{n/2} f(x_{2j-1}) + f(x_n)\bigg),

        where :math:`x_j = x_{\min} + j h` for :math:`j=0, 1, \dots, n-1, n` and :math:`h = \frac{x_{\max} - x_{\min}}{n}`.

        .. admonition:: Example 1 for :func:`simpson`:

            A first example on a trigonometric function with *nice* bounds:

            >>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
            0.0
            >>> simpson_int = simpson(math.cos, 0, math.pi, n=10)
            >>> simpson_int; abs(round(simpson_int, 0))  # doctest: +ELLIPSIS
            9.300...e-17
            0.0
            >>> round(100*abs(exact_int - simpson_int), 0)  # % Error
            0.0

        - References: `Simpson's Rule (on MathWorld) `_ and `Simpson's Rule (on Wikipedia) `_,
        - The function :math:`f` is evaluated :math:`n` times,
        - This method is exact up to order 3 (i.e. integrals of polynomials of degree :math:`\leq 3` are computed exactly):

        >>> f = lambda x: (x**2) + (7*x) + (4)
        >>> F = lambda x: ((x**3)/3.0) + ((7 * x**2)/2.0) + (4*x)
        >>> a, b = -1, 12
        >>> exact_int2 = F(b) - F(a); round(exact_int2, 0)
        1129.0
        >>> simpson_int2 = simpson(f, a, b, n=2)
        >>> simpson_int2; abs(round(simpson_int2, 0))
        1128.8333333333333
        1129.0
        >>> round(100*abs(exact_int2 - simpson_int2), 0)  # % Error
        0.0

        .. admonition:: Example 2 for :func:`simpson`:

            One more example (from Wikipedia), to show that this method is exact up to order 3:

            >>> round(simpson(lambda x:x**3, 0.0, 10.0, 2), 7)
            2500.0
            >>> round(simpson(lambda x:x**3, 0.0, 10.0, 10000), 7)
            2500.0

            But not at order 4:

            >>> round(simpson(lambda x:x**4, 0.0, 10.0, 2), 7)
            20833.3333333
            >>> round(simpson(lambda x:x**4, 0.0, 10.0, 100000), 7)
            20000.0

        .. admonition:: Example 3 for :func:`simpson`:

            A random example: :math:`\displaystyle \int_{1993}^{2015} \left( \frac{1+12x}{1+\cos^2(x)} \right) \mathrm{d}x`:

            >>> f = lambda x: (12*x+1) / (1+math.cos(x)**2)
            >>> a, b = 1993, 2015
            >>> simpson(f, a, b, n=2)  # doctest: +ELLIPSIS
            345561.243...
            >>> simpson(f, a, b, n=8)  # doctest: +ELLIPSIS
            374179.344...
            >>> simpson(f, a, b, n=100)  # doctest: +ELLIPSIS
            374133.138...

            The value seems to be 374133.2, `as confirmed by Wolfram|Alpha `_. The same example is also used for other functions (see :func:`boole` and :func:`romberg` above).

    trapez(f, xmin, xmax, n=1000)
        Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n trapeziums:

        .. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=0}^{n-1} \left( h \times \frac{f(x_{\min} + i h) + f(x_{\min} + (i+1) h)}{2} \right).

        .. admonition:: Example for :func:`trapez`:

            A first example on a trigonometric function with *nice* bounds:

            >>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
            0.0
            >>> trapez_int = trapez(math.cos, 0, math.pi, n=15); trapez_int  # doctest: +ELLIPSIS
            2.281...e-16
            >>> round(100*abs(exact_int - trapez_int), 0)  # % Error
            0.0
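        Both rules fit in a few lines each; these sketches mirror the formulas above (the names are mine, and ``n`` must be even for Simpson)::

            def trapez_sketch(f, xmin, xmax, n=1000):
                """Composite trapezoidal rule with n trapeziums."""
                h = (xmax - xmin) / float(n)
                return h * sum(0.5 * (f(xmin + i * h) + f(xmin + (i + 1) * h))
                               for i in range(n))

            def simpson_sketch(f, xmin, xmax, n=1000):
                """Composite Simpson's rule; n must be even."""
                h = (xmax - xmin) / float(n)
                x = lambda j: xmin + j * h
                return (h / 3.0) * (f(x(0)) + f(x(n))
                                    + 2 * sum(f(x(2 * j)) for j in range(1, n // 2))
                                    + 4 * sum(f(x(2 * j - 1)) for j in range(1, n // 2 + 1)))

        For instance, ``trapez_sketch(lambda x: x**2, 0, 3, n=3)`` gives ``9.5`` and ``simpson_sketch(lambda x: x**3, 0.0, 10.0, 2)`` gives ``2500.0``, matching the doctests above.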
    xw_gauss_legendre(n)
        *Experimental* caching of the `xi, wi` values returned by `p_roots`, to be more efficient for higher order Gaussian quadratures.

        - Higher order quadratures call :func:`scipy.special.orthogonal.p_roots` several times with the same parameter `n`, so it is easy to be more efficient by caching the values `xi, wi` returned by this call.

    yminmax(f, xmin, xmax, n=10000, threshold=0.005)
        *Experimental* guess of the values :math:`y_{\min}, y_{\max}` for f, by randomly picking :math:`n` points.

        - `threshold` slightly increases the size of the window, to be cautious. The default is 0.5%.
        - Note: there are more efficient and trustworthy methods, but this one is simple.

        .. warning:: Not sure if the `threshold` is mathematically a good idea...

        .. admonition:: Example for :func:`yminmax`:

            Just to try, on an easy function (a degree 2 polynomial):

            >>> random.seed(1)  # same values all the time
            >>> ymin_exact, ymax_exact = -0.25, 12
            >>> ymin, ymax = yminmax(lambda x: x**2 + x, -2, 3, 200)
            >>> ymin, ymax  # doctest: +ELLIPSIS
            (-0.251..., 12.059...)
            >>> 100 * abs(ymin - ymin_exact) / abs(ymin_exact)  # Relative % error < 0.5%  # doctest: +ELLIPSIS
            0.480...
            >>> 100 * abs(ymax - ymax_exact) / abs(ymax_exact)  # Relative % error < 0.5%  # doctest: +ELLIPSIS
            0.499...

DATA
    absolute_import = _Feature((2, 5, 0, 'alpha', 1), (3, 0, 0, 'alpha', 0...
    division = _Feature((2, 2, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0), 8192...
    print_function = _Feature((2, 6, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0)...