#! /usr/bin/env python
# -*- coding: utf-8 -*-
r"""
Implementation of several algorithms for numerical integration problems.
- I was mainly interested in techniques to numerically compute 1D *definite* integrals, of the form :math:`\int_{x=a}^{x=b} f(x) \mathrm{d}x` (for :math:`f` continuous).
- Some functions can be used to plot a graphical representation of the technique (Riemann sums, trapezoidal rule, 1D Monte-Carlo, etc.).
- There are also a few 2D techniques (Monte-Carlo, but also one based on `Fubini's Theorem <https://en.wikipedia.org/wiki/Fubini%27s_Theorem>`_), for integrals of the form :math:`\displaystyle \iint_D f(x, y) \mathrm{d}x \mathrm{d}y = \int_{x=a}^{x=b}\left(\int_{y = g(x)}^{y = h(x)} f(x, y) \mathrm{d}y \right) \mathrm{d}x.`
- Similarly, I experimented with these two techniques on a generalized :math:`k`-dimensional integral, inspired by `this very good SciPy package (scikit-monaco) <https://scikit-monaco.readthedocs.io/>`_ and `scipy.integrate <http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#integration-scipy-integrate>`_.

- *Date:* Saturday 18 June 2016, 18:59:23.
- *Author:* `Lilian Besson <https://bitbucket.org/lbesson/>`_ for the `CS101 course <http://perso.crans.org/besson/cs101/>`_ at `Mahindra Ecole Centrale <http://www.mahindraecolecentrale.edu.in/>`_, 2015.
- *Licence:* `MIT Licence <http://lbesson.mit-license.org>`_, © Lilian Besson.

Examples
--------
Importing the module:
>>> from integrals import *
Let us start with a simple example :math:`x \mapsto x^2`, on :math:`[x_{\min}, x_{\max}] = [0, 3]`.
>>> k = 2; xmin = 0; xmax = 3
>>> f = lambda x: x**k
We can formally compute its integral: :math:`\int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x = \left[ F(x) \right]_{x_{\min}}^{x_{\max}} = F(x_{\max}) - F(x_{\min}) = \frac{3^3}{3} - \frac{0^3}{3} = 27/3 = 9`.
>>> F = lambda x: x**(k+1) / float(k+1)
>>> F(xmax) - F(xmin)
9.0
Riemann sums
~~~~~~~~~~~~

The left Riemann sum, with 8 rectangles, gives:

>>> riemann_left(f, xmin, xmax, n=8)  # doctest: +ELLIPSIS
7.382...

.. image:: riemannleft2.png
   :scale: 80%
   :align: center

The right Riemann sum gives:

>>> riemann_right(f, xmin, xmax, n=8)  # doctest: +ELLIPSIS
10.757...

.. image:: riemannright2.png
   :scale: 80%
   :align: center

The center Riemann sum gives:

>>> riemann_center(f, xmin, xmax, n=8)  # doctest: +ELLIPSIS
8.964...

.. image:: riemanncenter2.png
   :scale: 80%
   :align: center

Trapezoidal rule
~~~~~~~~~~~~~~~~

>>> trapez(f, xmin, xmax, n=3)
9.5

.. image:: trapezoides2.png
   :scale: 80%
   :align: center

More examples
~~~~~~~~~~~~~
See below, at least one example is included for each integration method. Currently, there are **228 doctests**, corresponding to about **50 examples of numerically computed integrals** (and `they all pass, i.e. each test does exactly what I expected it to do <../doctest/output.txt>`_).
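These doctests can also be checked from a shell, with the standard :mod:`doctest` command-line runner (assuming the module file is named ``integrals.py``)::

    $ python -m doctest -v integrals.py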
----------------------------------------------------------------------------------
Small things that still need to be done
---------------------------------------

.. todo:: Conclude this main module: more Gaussian quadratures?

.. todo:: Make a general :func:`nd_integral` function, letting the user choose the integration method to use for 1D (same as :func:`nd_quad` but not restricted to use :func:`gaussian_quad`).

.. todo:: Include more examples in the `tests script <tests.html>`_?

References
----------
- The reference book for **MA102**: "Calculus", *Volume II*, by **T. Apostol**,
- `Numerical Integration (on Wikipedia) <https://en.wikipedia.org/wiki/Numerical_integration>`_,
- `scipy.integrate tutorial <http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#integration-scipy-integrate>`_,
- `sympy.integrals documentation <http://docs.sympy.org/dev/modules/integrals/integrals.html>`_ (and the `numerical integrals part <http://docs.sympy.org/dev/modules/integrals/integrals.html#numeric-integrals>`_).
.. seealso::
I also wrote a complete solution for the other project I was in charge of, `about Matrices and Linear Algebra <http://mec-cs101-matrices.readthedocs.io/en/latest/matrix.html>`_.
----------------------------------------------------------------------------------
List of functions
-----------------
"""
# ========================================================================
# Basic 1D method, and plotting some of the basic 1D methods
# %% Riemann left rectangles rule
r""" Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n Riemann left rectangles:
.. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=0}^{n-1} \left( h \times f(x_{\min} + i h) \right).
For this method and the next ones, we take :math:`n` points, and :math:`h` is defined as :math:`h = \frac{x_{\max} - x_{\min}}{n}`, the horizontal size of the rectangles (or trapeziums).
.. admonition:: Example for :func:`riemann_left`:
A first example on a trigonometric function with *nice* bounds:

>>> exact_int = math.sin(math.pi) - math.sin(0)
>>> exact_int  # doctest: +ELLIPSIS
1.22...e-16
>>> round(exact_int, 0)
0.0
>>> left_int = riemann_left(math.cos, 0, math.pi, n=15); left_int  # doctest: +ELLIPSIS
0.2094...
>>> round(100*abs(exact_int - left_int), 0)  # Relative % error of 21%, VERY BAD!
21.0
"""
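# A minimal sketch of such a left Riemann sum, consistent with the formula
# and the doctests above (the signature and the default value of n are
# assumptions, not necessarily this module's exact code):
def riemann_left(f, xmin, xmax, n=1000):
    """ Left Riemann sum of f on [xmin, xmax], with n rectangles (sketch). """
    h = (xmax - xmin) / float(n)  # width of each rectangle
    xi = [xmin + i * h for i in range(0, n)]  # left end-points
    return h * math.fsum(f(x) for x in xi)  # h * (sum of the heights)
# riemann_center and riemann_right only change the sample points:
# f(x + h / 2.0) for the center rule, f(x + h) for the right rule.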
r""" Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n Riemann left rectangles.
Example:
.. image:: riemannleft.png
   :scale: 95%
   :align: center
"""
plt.figure()
xvalues = np.linspace(xmin, xmax, 1000)
yvalues = [f(x) for x in xvalues]
plt.plot(xvalues, yvalues, 'r-')  # plots the function as a red line
# Now plot the rectangles
h = (xmax - xmin) / float(n)
xi = [xmin + i * h for i in range(0, n)]
yi = [f(x) for x in xi]  # left rectangles!
for x, y in zip(xi, yi):
    plt.plot([x, x, x + h, x + h, x], [0, y, y, 0, 0], 'b-')
# Finally, computing the area
area = h * math.fsum(yi)
plt.title("Riemann left rectangles for {} with {} rectangles. Area is {:.4g}.".format(namef, n, area))
plt.xlim(xmin, xmax)
if figname:
    plt.savefig(figname)  # Default was "riemannleft.png"
return area
# %% Riemann center rectangles rule
r""" Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n Riemann center rectangles:
.. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=0}^{n-1} \left( h \times f\left(x_{\min} + \left(i + \tfrac{1}{2}\right) h\right) \right).
.. admonition:: Example for :func:`riemann_center`:
A first example on a trigonometric function with *nice* bounds:

>>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
0.0
>>> center_int = riemann_center(math.cos, 0, math.pi, n=15); center_int  # doctest: +ELLIPSIS
2.918...e-16
>>> round(100*abs(exact_int - center_int), 0)  # % Error
0.0
"""
r""" Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n Riemann center rectangles.
Example:
.. image:: riemanncenter.png
   :scale: 65%
   :align: center
"""
plt.figure()
xvalues = np.linspace(xmin, xmax, 1000)
yvalues = [f(x) for x in xvalues]
plt.plot(xvalues, yvalues, 'r-')  # plots the function as a red line
# Now plot the rectangles
h = (xmax - xmin) / float(n)
xi = [xmin + i * h for i in range(0, n)]
yi = [f(x + h / 2.0) for x in xi]  # center rectangles!
for x, y in zip(xi, yi):
    plt.plot([x, x, x + h, x + h, x], [0, y, y, 0, 0], 'b-')
# Finally, computing the area
area = h * math.fsum(yi)
plt.title("Riemann center rectangles for {} with {} rectangles. Area is {:.4g}.".format(namef, n, area))
plt.xlim(xmin, xmax)
if figname:
    plt.savefig(figname)  # Default was "riemanncenter.png"
return area
# %% Riemann right rectangles rule
r""" Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n Riemann right rectangles:
.. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=1}^{n} \left( h \times f(x_{\min} + i h) \right).
.. admonition:: Example for :func:`riemann_right`:
A first example on a trigonometric function with *nice* bounds:

>>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
0.0
>>> right_int = riemann_right(math.cos, 0, math.pi, n=15); right_int  # doctest: +ELLIPSIS
-0.2094...
>>> round(100*abs(exact_int - right_int), 0)  # % Error
21.0
The more rectangles we compute, the more accurate the approximation will be:
>>> right_int = riemann_right(math.cos, 0, math.pi, n=2000); right_int  # doctest: +ELLIPSIS
-0.00157...
>>> 100*abs(exact_int - right_int)  # Error is less than 0.15 %  # doctest: +ELLIPSIS
0.15...
>>> round(100*abs(exact_int - right_int), 0)  # % Error
0.0
"""
r""" Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n Riemann right rectangles.
Example:
.. image:: riemannright.png
   :scale: 95%
   :align: center
"""
plt.figure()
xvalues = np.linspace(xmin, xmax, 1000)
yvalues = [f(x) for x in xvalues]
plt.plot(xvalues, yvalues, 'r-')  # plots the function as a red line
# Now plot the rectangles
h = (xmax - xmin) / float(n)
xi = [xmin + i * h for i in range(0, n)]
yi = [f(x + h) for x in xi]  # right rectangles!
for x, y in zip(xi, yi):
    plt.plot([x, x, x + h, x + h, x], [0, y, y, 0, 0], 'b-')
# Finally, computing the area
area = h * math.fsum(yi)
plt.title("Riemann right rectangles for {} with {} rectangles. Area is {:.4g}.".format(namef, n, area))
plt.xlim(xmin, xmax)
if figname:
    plt.savefig(figname)  # Default was "riemannright.png"
return area
# %% Trapezoidal rule
r""" Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using n trapeziums:
.. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \sum_{i=0}^{n-1} \left( h \times \frac{f(x_{\min} + i h) + f(x_{\min} + (i+1) h)}{2} \right).
.. admonition:: Example for :func:`trapez`:
A first example on a trigonometric function with *nice* bounds:

>>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
0.0
>>> trapez_int = trapez(math.cos, 0, math.pi, n=15); trapez_int  # doctest: +ELLIPSIS
2.281...e-16
>>> round(100*abs(exact_int - trapez_int), 0)  # % Error
0.0
"""
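# A minimal sketch of such a trapezoidal rule, consistent with the formula
# above and with the area computed in the plotting function below (the
# signature and the default value of n are assumptions):
def trapez(f, xmin, xmax, n=1000):
    """ Trapezoidal rule for f on [xmin, xmax], with n trapeziums (sketch). """
    h = (xmax - xmin) / float(n)  # width of each trapezium
    xi = [xmin + i * h for i in range(0, n)]  # left end-points
    # each trapezium has area h * (f(x) + f(x + h)) / 2
    return 0.5 * h * math.fsum(f(x) + f(x + h) for x in xi)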
r""" Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and n trapeziums.
Example:
.. image:: trapezoides.png
   :scale: 95%
   :align: center
"""
plt.figure()
xvalues = np.linspace(xmin, xmax, 1000)
yvalues = [f(x) for x in xvalues]
plt.plot(xvalues, yvalues, 'r-')  # plots the function as a red line
# Now plot the trapeziums
h = (xmax - xmin) / float(n)
xi = [xmin + i * h for i in range(0, n)]
# Trapeziums!
for x in xi:
    plt.plot([x, x, x + h, x + h, x], [0, f(x), f(x + h), 0, 0], 'b-')  # trapezium
    plt.plot([x, x, x + h, x + h, x], [0, 0.5 * (f(x) + f(x + h)), 0.5 * (f(x) + f(x + h)), 0, 0], 'g:')  # equivalent rectangle
# Finally, computing the area
area = 0.5 * h * math.fsum(f(x) + f(x + h) for x in xi)
plt.title("Trapezoidal rule for {} with {} trapeziums. Area is {:.4g}.".format(namef, n, area))
plt.xlim(xmin, xmax)
if figname:
    plt.savefig(figname)  # Default was "trapezoides.png"
return area
# ========================================================================
# Random (Monte-Carlo) 1D method

# WARNING: To pick a real number in an interval [a, b], we must use uniform(a, b), not randrange(a, b)!
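# Illustrative quick check of the difference:
# random.uniform(0, 1) returns a float anywhere in [0, 1],
# while random.randrange(0, 1) always returns the integer 0.
assert isinstance(random.uniform(0, 1), float)
assert random.randrange(0, 1) == 0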
r""" *Experimental* guess of the values :math:`y_{\min}, y_{\max}` for f, by randomly picking :math:`n` points.
- `threshold` is here to increase a little bit the size of the window, to be cautious. Default is 0.5%.
- Note: there are more efficient and trustworthy methods, but this one is simple.
.. warning:: Not sure if the `threshold` is mathematically a good idea...
.. admonition:: Example for :func:`yminmax`:
Just to try, on an easy function (degree 2 polynomial):
>>> random.seed(1)  # same values all the time
>>> ymin_exact, ymax_exact = -0.25, 12
>>> ymin, ymax = yminmax(lambda x: x**2 + x, -2, 3, 200)
>>> ymin, ymax  # doctest: +ELLIPSIS
(-0.251..., 12.059...)
>>> 100 * abs(ymin - ymin_exact) / abs(ymin_exact)  # Relative % error < 0.5%  # doctest: +ELLIPSIS
0.480...
>>> 100 * abs(ymax - ymax_exact) / abs(ymax_exact)  # Relative % error < 0.5%  # doctest: +ELLIPSIS
0.499...
"""
ymin, ymax = f(xmin), f(xmax)  # initial guess
for _ in range(0, 2 * n):
    x = random.uniform(xmin, xmax)
    fx = f(x)
    if fx < ymin:
        ymin = fx
    if ymax < fx:
        ymax = fx
# Now we just increase a little bit the size of the window [ymin, ymax]
if ymin < 0:
    ymin *= (1 + threshold)  # ymin becomes bigger!
else:
    ymin *= (1 - threshold)  # ymin becomes smaller!
if ymax < 0:
    ymax *= (1 - threshold)  # ymax becomes smaller!
else:
    ymax *= (1 + threshold)  # ymax becomes bigger!
# Improve the ymin, ymax?
# Not really in fact, we take this into account in the Monte-Carlo method.
# ymin = min(0, ymin)
# ymax = max(0, ymax)
return (ymin, ymax)
r""" Compute an approximation of the integral of :math:`f(x)` for :math:`x` from :math:`x_{\min}` to :math:`x_{\max}`.
- Each point :math:`(x, y)` is taken in the rectangle :math:`[x_{\min}, x_{\max}] \times [y_{\min}, y_{\max}]`.
- :math:`n` is the number of random points to pick (it should be big, like 1000 at least).
- What is returned is :math:`\text{area} \approx (\text{Area of the rectangle}) \times (\text{Estimated ratio})`:
.. math::
\text{area} \approx (x_{\max} - x_{\min}) \times (y_{\max} - y_{\min}) \times \frac{\text{Nb points below the curve}}{\text{Nb points}}.
.. warning:: The values :math:`y_{\min}` and :math:`y_{\max}` should satisfy :math:`y_{\min} \leq \min(\{ f(x) : x_{\min} \leq x \leq x_{\max} \})` and :math:`\max(\{ f(x) : x_{\min} \leq x \leq x_{\max} \}) \leq y_{\max}`.
.. admonition:: Example 1 for :func:`montecarlo`:
For example, we are interested in :math:`\int_1^6 x \mathrm{d} x = \frac{6^2}{2} - \frac{1^2}{2} = 17.5`.

>>> random.seed(1)  # same values all the time
>>> xmin, xmax = 1, 6
>>> f = lambda x: x  # simple example
>>> intf = (xmax**2 / 2.0) - (xmin**2 / 2.0); intf
17.5
>>> ymin, ymax = xmin, xmax
Let us start by taking 100 points:
>>> n = 100
>>> intf_approx = montecarlo(f, xmin, xmax, n, ymin, ymax); intf_approx
18.0
>>> 100 * abs(intf - intf_approx) / abs(intf)  # Relative % error, 2.8%  # doctest: +ELLIPSIS
2.857...
The more points we take, the better the approximation will be:
>>> n = 100000
>>> intf_approx = montecarlo(f, xmin, xmax, n, ymin, ymax); intf_approx  # doctest: +ELLIPSIS
17.444...
>>> 100 * abs(intf - intf_approx) / abs(intf)  # Relative % error, 0.32%  # doctest: +ELLIPSIS
0.318...
.. admonition:: Example 2 for :func:`montecarlo`:
We can also let the function compute `ymin` and `ymax` by itself:
>>> n = 100000
>>> intf_approx = montecarlo(f, xmin, xmax, n); intf_approx  # doctest: +ELLIPSIS
17.485...
>>> 100 * abs(intf - intf_approx) / abs(intf)  # Relative % error, 0.08% is really good!  # doctest: +ELLIPSIS
0.0844...
.. image:: montecarlo.png
   :scale: 95%
   :align: center
"""
if ymin is None and ymax is None:
    ymin, ymax = yminmax(f, xmin, xmax, n)
elif ymin is None:
    ymin, _ = yminmax(f, xmin, xmax, n)
elif ymax is None:
    _, ymax = yminmax(f, xmin, xmax, n)
# Here we are cautious about the arguments
if (not isinstance(n, int)) or (n <= 0):
    raise ValueError("montecarlo() the argument n has to be a positive integer.")
if not (xmax >= xmin and ymax >= ymin):
    raise ValueError("montecarlo() invalid arguments xmax < xmin or ymax < ymin.")
# This will count the number of points below the curve
nb_below = 0
for _ in range(0, n):
    x = random.uniform(xmin, xmax)
    fx = f(x)
    if not ymin <= fx <= ymax:
        raise ValueError("montecarlo() ymin and ymax are not correct: for x = {}, f(x) = {} is NOT in the interval [ymin, ymax] = [{}, {}].".format(x, fx, ymin, ymax))
    y = random.uniform(ymin, ymax)
    # Here we need to be cautious, if f(x) is positive or not!
    if 0 <= y <= fx:
        nb_below += 1
    elif fx <= y <= 0:
        nb_below -= 1
observed_probability = float(nb_below) / float(n)
area_rectangle = (xmax - xmin) * (ymax - ymin)
area = area_rectangle * observed_probability
# We take into account the possible error ymin > 0, or ymax < 0
if ymin > 0:
    # Rectangle between the x-axis and the lower-limit of the box
    area += ymin * (xmax - xmin)
if ymax < 0:
    # Rectangle between the x-axis and the upper-limit of the box
    area += ymax * (xmax - xmin)
return area
# End of the function montecarlo
r""" Plot the function f from :math:`x_{\min}` to :math:`x_{\max}`, and :math:`n` random points.
- Each point :math:`(x, y)` is taken in the rectangle :math:`[x_{\min}, x_{\max}] \times [y_{\min}, y_{\max}]`.
- **Warning:** :math:`y_{\min}` and :math:`y_{\max}` should satisfy :math:`y_{\min} \leq \min(\{ f(x) : x_{\min} \leq x \leq x_{\max} \})` and :math:`\max(\{ f(x) : x_{\min} \leq x \leq x_{\max} \}) \leq y_{\max}`.
.. admonition:: Example 1 for :func:`plot_montecarlo`:
A first example: :math:`\int_0^1 x^3 \mathrm{d}x = \frac{1}{4} = 0.25`.
.. image:: montecarlo2.png
   :scale: 70%
   :align: center
.. admonition:: Example 2 for :func:`plot_montecarlo`:
Another example on a less usual function (:math:`f(x) = \frac{1}{1+\sinh(2x) \log(x)^2}`), with :math:`1500` points, and then :math:`10000` points.
.. image:: montecarlo3.png
   :scale: 95%
   :align: center

.. image:: montecarlo4.png
   :scale: 95%
   :align: center
"""
if ymin is None and ymax is None:
    ymin, ymax = yminmax(f, xmin, xmax, n)
elif ymin is None:
    ymin, _ = yminmax(f, xmin, xmax, n)
elif ymax is None:
    _, ymax = yminmax(f, xmin, xmax, n)
# Here we are cautious about the arguments
if (not isinstance(n, int)) or (n <= 0):
    raise ValueError("plot_montecarlo() the argument n has to be a positive integer.")
if not (xmax >= xmin and ymax >= ymin):
    raise ValueError("plot_montecarlo() invalid arguments xmax < xmin or ymax < ymin.")
plt.figure()
xvalues = np.linspace(xmin, xmax, 1000)
yvalues = [f(x) for x in xvalues]
plt.plot(xvalues, yvalues, 'r-')  # plots the function as a red line

# This will count the number of points below the curve
nb_below = 0

for _ in range(0, n):
    x = random.uniform(xmin, xmax)
    if not ymin <= f(x) <= ymax:
        raise ValueError("plot_montecarlo() ymin and ymax are not correct: for x = {}, f(x) = {} is NOT in the interval [ymin, ymax] = [{}, {}].".format(x, f(x), ymin, ymax))
    y = random.uniform(ymin, ymax)
    # Here we need to be cautious, if f(x) is positive or not!
    if 0 <= y <= f(x):
        nb_below += 1
        plt.plot([x], [y], 'go')  # green circle if the point is > 0 and below
    elif f(x) <= y <= 0:
        nb_below -= 1
        plt.plot([x], [y], 'g*')  # green star if the point is < 0 and above
    else:
        plt.plot([x], [y], 'k+')  # black plus otherwise
    # XXX could be better to create three lists, then plot them

observed_probability = float(nb_below) / float(n)
area_rectangle = (xmax - xmin) * (ymax - ymin)
area = area_rectangle * observed_probability
# We take into account the possible error ymin > 0, or ymax < 0
if ymin > 0:
    # Rectangle between the x-axis and the lower-limit of the box
    area += ymin * (xmax - xmin)
if ymax < 0:
    # Rectangle between the x-axis and the upper-limit of the box
    area += ymax * (xmax - xmin)
plt.title("Monte-Carlo method for {}.\nWith {} points, area is {:.4g}.".format(namef, n, area))
plt.xlim(xmin, xmax)
# Improve the ymin, ymax
ymin = min(0, ymin)
ymax = max(0, ymax)
plt.ylim(ymin, ymax)
if figname:
    plt.savefig(figname)  # Default was "montecarlo.png"
return area
# End of the function plot_montecarlo
# ========================================================================
# Simpson's, Boole's rules
r""" Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using composite Simpson's rule.
.. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \tfrac{h}{3} \bigg(f(x_0)+2\sum_{j=1}^{n/2-1}f(x_{2j})+ 4\sum_{j=1}^{n/2}f(x_{2j-1})+f(x_n)\bigg),
where :math:`x_j = x_{\min} + j h` for :math:`j=0, 1, \dots, n-1, n` and :math:`h = \frac{x_{\max} - x_{\min}}{n}`.
.. admonition:: Example 1 for :func:`simpson`:
A first example on a trigonometric function with *nice* bounds:

>>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
0.0
>>> simpson_int = simpson(math.cos, 0, math.pi, n=10)
>>> simpson_int; abs(round(simpson_int, 0))  # doctest: +ELLIPSIS
9.300...e-17
0.0
>>> round(100*abs(exact_int - simpson_int), 0)  # % Error
0.0
- References are `Simpson's Rule (on MathWorld) <http://mathworld.wolfram.com/SimpsonsRule.html>`_ and `Simpson's Rule (on Wikipedia) <https://en.wikipedia.org/wiki/Simpson%27s_rule#Composite_Simpson.27s_rule>`_,
- The function :math:`f` is evaluated :math:`n` times,
- This method is exact up to order 3 (i.e. the integrals of polynomials of degree :math:`\leq 3` are computed exactly):

>>> f = lambda x: (x**2) + (7*x) + (4)
>>> F = lambda x: ((x**3)/3.0) + ((7 * x**2)/2.0) + (4*x)
>>> a, b = -1, 12
>>> exact_int2 = F(b) - F(a); round(exact_int2, 0)
1129.0
>>> simpson_int2 = simpson(f, a, b, n=2)
>>> simpson_int2; abs(round(simpson_int2, 0))
1128.8333333333333
1129.0
>>> round(100*abs(exact_int2 - simpson_int2), 0)  # % Error
0.0
.. admonition:: Example 2 for :func:`simpson`:
One more example (coming from Wikipedia), to show that this method is exact up to order 3:

>>> round(simpson(lambda x: x**3, 0.0, 10.0, 2), 7)
2500.0
>>> round(simpson(lambda x: x**3, 0.0, 10.0, 10000), 7)
2500.0

But not at order 4:

>>> round(simpson(lambda x: x**4, 0.0, 10.0, 2), 7)
20833.3333333
>>> round(simpson(lambda x: x**4, 0.0, 10.0, 100000), 7)
20000.0
.. admonition:: Example 3 for :func:`simpson`:
A random example: :math:`\displaystyle \int_{1993}^{2015} \left( \frac{1+12x}{1+\cos^2(x)} \right) \mathrm{d}x`:
>>> f = lambda x: (12*x+1) / (1+math.cos(x)**2)
>>> a, b = 1993, 2015
>>> simpson(f, a, b, n=2)  # doctest: +ELLIPSIS
345561.243...
>>> simpson(f, a, b, n=8)  # doctest: +ELLIPSIS
374179.344...
>>> simpson(f, a, b, n=100)  # doctest: +ELLIPSIS
374133.138...

The value seems to be 374133.2, `as confirmed by Wolfram|Alpha <http://www.wolframalpha.com/input/?i=integrate%28%2812*x%2B1%29%2F%281%2Bcos%28x%29**2%29%2C+x%2C+1993%2C+2015%29>`_. The same example will also be used for other functions, see below.
"""
assert n > 0
if n % 2 != 0:
    n += 1  # if n is odd, n+1 is even!
h = (xmax - xmin) / float(n)
s = f(xmin) + f(xmax)
# for j in range(1, n / 2, 1):  # j is even
#     s += 2 * f(xmin + h * (2*j))
for j in range(2, n - 1, 2):
    s += 2 * f(xmin + j * h)
# for i in range(1, 1 + n / 2, 1):  # i is odd
#     s += 4 * f(xmin + h * (2*i - 1))
for i in range(1, n, 2):
    s += 4 * f(xmin + i * h)
return s * h / 3.0
r""" Compute an approximation of the integral of the function f from :math:`x_{\min}` to :math:`x_{\max}`, by using composite Boole's rule.
.. math:: \displaystyle \int_{x=x_{\min}}^{x=x_{\max}} f(x) \mathrm{d}x \approx \tfrac{2h}{45} \sum_{j=1}^{n} \bigg(7f(x_{4j-4}) + 32f(x_{4j-3}) + 12f(x_{4j-2}) + 32f(x_{4j-1}) + 7f(x_{4j})\bigg),

where :math:`x_i = x_{\min} + i h` for :math:`i=0, 1, \dots, 4n-1, 4n` and :math:`h = \frac{x_{\max} - x_{\min}}{4n}`.
.. admonition:: Example 1 for :func:`boole`:
A first example on a trigonometric function with *nice* bounds:

>>> exact_int = math.sin(math.pi) - math.sin(0); round(exact_int, 0)
0.0
>>> boole_int = boole(math.cos, 0, math.pi, n=10)
>>> boole_int; abs(round(boole_int, 0))  # doctest: +ELLIPSIS
1.612...e-16
0.0
>>> round(100*abs(exact_int - boole_int), 0)  # % Error
0.0
- References are `Boole's Rule (on MathWorld) <http://mathworld.wolfram.com/BoolesRule.html>`_ and `Boole's Rule (on Wikipedia) <https://en.wikipedia.org/wiki/Boole's_rule>`_,
- The function :math:`f` is evaluated :math:`4 n` times,
- This method is exact up to order 4 (i.e. the integrals of polynomials of degree :math:`\leq 4` are computed exactly).
.. admonition:: Example 2 for :func:`boole`:
A second easy example on a degree 3 polynomial:
>>> f = lambda x: (x**3) + (x**2) + (7*x) + 4
>>> F = lambda x: (x**4)/4.0 + (x**3)/3.0 + (7 * x**2)/2.0 + (4*x)
>>> a, b = -4, 6
>>> exact_int2 = F(b) - F(a); round(exact_int2, 0)
463.0
>>> boole_int2 = boole(f, a, b, n=10)  # n = 10 is good enough!
>>> boole_int2; abs(round(boole_int2, 6))  # Weird!
463.33333333333337
463.333333
>>> round(100*abs(exact_int2 - boole_int2), 0)  # % Error
0.0
.. admonition:: Example 3 for :func:`boole`:
On the same harder example:
>>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
>>> a, b = 1993, 2015
>>> boole(f, a, b, n=1)  # doctest: +ELLIPSIS
373463.255...
>>> boole(f, a, b, n=2)  # doctest: +ELLIPSIS
374343.342...
>>> boole(f, a, b, n=100)  # Really close to the exact value.  # doctest: +ELLIPSIS
374133.193...
"""
assert n > 0
N = 4 * n  # To divide the interval into a number of subintervals that is a multiple of 4
h = (xmax - xmin) / float(N)  # Bug fixed: float(N) instead of float(n)!
return 2 * (h / 45.0) * math.fsum(
    (7 * f(xmin + (i - 4) * h)) + (32 * f(xmin + (i - 3) * h)) + (12 * f(xmin + (i - 2) * h)) + (32 * f(xmin + (i - 1) * h)) + (7 * f(xmin + i * h))
    for i in range(4, N + 1, 4)
)
# ========================================================================
# Romberg's method, recursive or not-recursive, with chosen values for (n, m)
r""" Compute the :math:`R(n, m)` value **recursively**, to approximate :math:`\int_{x_{\min}}^{x_{\max}} f(x) \mathrm{d}x`.
- The integrand :math:`f` must be of class :math:`\mathcal{C}^{2n+2}` for the method to be correct.
- Time complexity is :math:`O(2^{n m})` and memory complexity is also :math:`O(2^{n m})`.
- **Warning**: the time complexity is increasing very quickly with respect to :math:`n` here, be cautious.
.. admonition:: Example for :func:`romberg_rec`:
On the same *hard* example as above:
>>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
>>> a, b = 1993, 2015
>>> romberg_rec(f, a, b, n=0)  # really not accurate!  # doctest: +ELLIPSIS
477173.613...
>>> romberg_rec(f, a, b, n=1)  # already pretty good!  # doctest: +ELLIPSIS
345561.243...
>>> romberg_rec(f, a, b, n=2)  # doctest: +ELLIPSIS
373463.255...
>>> romberg_rec(f, a, b, n=3)  # doctest: +ELLIPSIS
374357.311...
>>> romberg_rec(f, a, b, n=8)  # Almost the exact value.  # doctest: +ELLIPSIS
374133.192...

We should not go further (:math:`4^n` is increasing quickly!). With a bigger order, this recursive implementation will fail (because of the recursion limit, 1000 by default in Python 3)!
"""
if m is None:  # "not m" was considering 0 as None
    m = n
assert n >= m
if n == 0 and m == 0:
    return ((xmax - xmin) / 2.0) * (f(xmin) + f(xmax))
elif m == 0:
    h = (xmax - xmin) / float(2**n)
    N = (2**(n - 1)) + 1
    term = math.fsum(f(xmin + ((2 * k) - 1) * h) for k in range(1, N))
    return (term * h) + (0.5) * romberg_rec(f, xmin, xmax, n - 1, 0)
else:
    return (1.0 / ((4**m) - 1)) * ((4**m) * romberg_rec(f, xmin, xmax, n, m - 1) - romberg_rec(f, xmin, xmax, n - 1, m - 1))
r""" (Inductively) compute the :math:`R(n, m)` value by using **dynamic programming**, to approximate :math:`\int_{x_{\min}}^{x_{\max}} f(x) \mathrm{d}x`. This solution is **way more efficient** that the recursive one!
- The integrand :math:`f` must be of class :math:`\mathcal{C}^{2n+2}` for the method to be correct.
- Note: a memoization trick is too hard to set up here, as the value :math:`R(n, m)` depends on f, a and b.
- Time complexity is :math:`O(n m)` and memory complexity is also :math:`O(n m)` (using a dictionary to store all the values :math:`R(i, j)` for all the indices :math:`0 \leq i \leq n, 0 \leq j \leq m, j \leq i`).
.. admonition:: Example 1 for :func:`romberg`:
Let us start with a first example from `the Wikipedia page for Romberg's method <https://en.wikipedia.org/wiki/Romberg%27s_method#Example>`_: :math:`\frac{2}{\sqrt{\pi}} \int_{0}^{1} \exp(-x^2) \mathrm{d}x = \mathrm{erf}(1) \approx 0.842700792949715`:

>>> f = lambda x: (2.0 / math.sqrt(math.pi)) * math.exp(-x**2)
>>> erf1 = romberg(f, 0, 1, 5, 5); erf1  # doctest: +ELLIPSIS
0.84270...
>>> exact_erf1 = 0.842700792949715  # From Wikipedia
>>> 100 * abs(erf1 - exact_erf1)  # Absolute % error, 2e-11 is almost perfect!  # doctest: +ELLIPSIS
2.070...e-11
We can check that :math:`\int_{0}^{\pi} \sin(x) \mathrm{d}x = 2`, with `n = m = 5`:
>>> area = romberg(math.sin, 0, math.pi, 5, 5); area
2.0000000000013207
>>> 100 * abs(area - 2.0)  # Absolute % error, 1e-10 is already very good!  # doctest: +ELLIPSIS
1.320...e-10
We check that :func:`romberg` is also working for functions that are not always positive (alternating between being positive and being negative): :math:`\int_{0}^{1001 \pi} \sin(x) \mathrm{d}x = \int_{1000 \pi}^{1001 \pi} \sin(x) \mathrm{d}x = \int_{0}^{\pi} \sin(x) \mathrm{d}x = 2`:
>>> area2 = romberg(math.sin, 0, 1001*math.pi, 5, 5); area2  # doctest: +ELLIPSIS
-148.929...
>>> 100 * abs(area2 - 2.0)  # Really bad here!  # doctest: +ELLIPSIS
15092.968...
>>> area3 = romberg(math.sin, 0, 1001*math.pi, 15, 15); area3  # doctest: +ELLIPSIS
1.999...
>>> 100 * abs(area3 - 2.0)  # Should be better: yes indeed, an absolute error of 3e-09 % is quite good!  # doctest: +ELLIPSIS
3.145...e-09
.. admonition:: Example 2 for :func:`romberg`:
Now, I would like to consider more examples, they will all be computed with `n = m = 5`:
>>> n = m = 5
>>> a = 0; b = 1
First, we can compute an approximation of :math:`\frac{\pi}{4}` by integrating the function :math:`f_1(x) = \sqrt{1-x^2}` on :math:`[0, 1]`:
>>> f1 = lambda x: (1-x**2)**0.5
>>> int_f1 = romberg(f1, a, b, n, m); int_f1  # doctest: +ELLIPSIS
0.784...
>>> 100 * abs(int_f1 - math.pi / 4.0)  # 0.05%, that's really good!  # doctest: +ELLIPSIS
0.053...
For :math:`f_2(x) = x^2`, :math:`\int_{0}^{1} f_2(x) \mathrm{d}x = \frac{1}{3}`:
>>> f2 = lambda x: x**2
>>> int_f2 = romberg(f2, a, b, n, m); int_f2
0.3333333333333333
>>> 100 * abs(int_f2 - 1.0/3.0)
0.0
For :math:`f_3(x) = \sin(x)`, :math:`\int_{0}^{\pi} f_3(x) \mathrm{d}x = 2`:
>>> f3 = math.sin; b = math.pi
>>> int_f3 = romberg(f3, a, b, n, m); int_f3
2.0000000000013207
>>> 100 * abs(int_f3 - 2.0)  # 10e-10 % error, that's almost perfect!  # doctest: +ELLIPSIS
1.320...e-10
.. admonition:: Example 3 for :func:`romberg`:
For :math:`f_4(x) = \exp(x)`, it is easy to compute any integrals: :math:`\int_{-4}^{19} f_4(x) \mathrm{d}x = \int_{-4}^{19} \exp(x) \mathrm{d}x = \exp(19) - \exp(-4)`:
>>> f4 = math.exp
>>> n, m = 5, 5
>>> a, b = -4, 19
>>> int_f4 = romberg(f4, a, b, n, m); int_f4  # doctest: +ELLIPSIS
178495315.533...
>>> exact_f4 = f4(b) - f4(a); exact_f4  # doctest: +ELLIPSIS
178482300.944...
>>> 100 * abs(int_f4 - exact_f4)  # Not so good result! n=m=5 is not enough  # doctest: +ELLIPSIS
1301458.822...
As we can see, the result is not satisfactory with `n = m = 5` and for a function :math:`f` that can take *"big"* and *"small"* values on the integration interval (:math:`[a, b]`).
Now what happens if we increase the value of `n` (and keep `m = n`)?
>>> n, m = 10, 10  # More points!
>>> int_f4_2 = romberg(f4, a, b, n, m); int_f4_2  # doctest: +ELLIPSIS
178482300.944...
>>> exact_f4_2 = f4(b) - f4(a)
>>> 100 * abs(int_f4_2 - exact_f4_2)  # A smaller error!  # doctest: +ELLIPSIS
5.960...e-06
Another example on a *"big"* integral and a *"big"* interval (the numerical value of the integral is bigger):
>>> a, b = -1000, 20; n, m = 10, 10  # More points!
>>> int_f4_3 = romberg(f4, a, b, n, m); int_f4_3  # doctest: +ELLIPSIS
485483299.278...
>>> exact_f4_3 = f4(b) - f4(a); exact_f4_3  # doctest: +ELLIPSIS
485165195.409...
>>> 100 * abs(int_f4_3 - exact_f4_3)  # It is not accurate for big intervals  # doctest: +ELLIPSIS
31810386.832...
Let us compare this not-so-accurate result (obtained with `n = m = 10`) with the one given by `n = m = 20`:

>>> a, b = -1000, 20; n, m = 20, 20  # More points! n=m=20 is really big!
>>> int_f4_4 = romberg(f4, a, b, n, m); int_f4_4  # doctest: +ELLIPSIS
485165195.409...
>>> 100 * abs(int_f4_4 - exact_f4_3)
0.0
.. admonition:: Example 4 for :func:`romberg`:
On the same example as above (for :func:`simpson` and :func:`boole`), to compare the two implementations of Romberg's method:

>>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
>>> a, b = 1993, 2015
>>> romberg(f, a, b, n=0)  # really not accurate!  # doctest: +ELLIPSIS
477173.613...
>>> romberg(f, a, b, n=1)  # doctest: +ELLIPSIS
345561.243...
>>> romberg(f, a, b, n=2)  # doctest: +ELLIPSIS
373463.255...
>>> romberg(f, a, b, n=3)  # doctest: +ELLIPSIS
374357.311...
>>> romberg(f, a, b, n=5)  # doctest: +ELLIPSIS
374134.549...
Past a certain point, increasing the value of :math:`n` does not change the result anymore (due to the limited precision of `float` computations):
>>> romberg(f, a, b, n=8)  # Almost the exact value.  # doctest: +ELLIPSIS
374133.192...
>>> romberg(f, a, b, n=10)  # More precise!  # doctest: +ELLIPSIS
374133.193...
>>> romberg(f, a, b, n=14)  # Again more precise!  # doctest: +ELLIPSIS
374133.193...
>>> romberg(f, a, b, n=20)  # Exact value!  # doctest: +ELLIPSIS
374133.193...
"""
assert xmin <= xmax
if m is None:
    m = n
    if verb:
        print("romberg() m was None, now it is equal to n : n = {} = m = {}.".format(n, m))
assert n >= m >= 0
# First value:
r = {(0, 0): 0.5 * (xmax - xmin) * (f(xmax) + f(xmin))}
# One side of the triangle:
for i in range(1, n + 1):
    h_i = (xmax - xmin) / float(2**i)
    xsamples = [xmin + ((2 * k - 1) * h_i) for k in range(1, 1 + 2**(i - 1))]
    if verb:
        print("For i = {}, h_i is {}, and so we have {} samples.".format(i, h_i, len(xsamples)))
    r[(i, 0)] = (0.5 * r[(i - 1, 0)]) + h_i * math.fsum(f(x) for x in xsamples)
# All the other values:
for j in range(1, m + 1):
    for i in range(j, n + 1):
        try:
            r[(i, j)] = (((4**j) * r[(i, j - 1)]) - r[(i - 1, j - 1)]) / float((4**j) - 1)
            if verb:
                print("R({}, {}) = {}".format(i, j, r[(i, j)]))
        except KeyError:
            raise ValueError("romberg() with n = {}, m = {} and i = {}, j = {} was an error.".format(n, m, i, j))
# XXX Computing the error? No, it was not satisfactory.
# h_n = (xmax - xmin) / float(2**n)
# error = h_n ** (2*m+2)
# return r[(n, m)], error
return r[(n, m)]
# ========================================================================
# Other techniques, like Gaussian quadrature

# CACHE = True
# if CACHE:
#     xw_gauss_legendre = p_roots
# else:
""" *Experimental* caching of the `xi, wi` values returned by `p_roots`, to be more efficient for higher order Gaussian quadrature.
- Higher order quadratures call the function :func:`scipy.special.orthogonal.p_roots` several times with the same parameter `n`, so it is easy to be more efficient, by caching the values `xi, wi` generated by this call.
"""
if n in _cache_xw_gauss_legendre:
    # Not recomputing: just reading the values in the _cache_xw_gauss_legendre dictionary
    return _cache_xw_gauss_legendre[n]
else:
    xi, wi = p_roots(n)[:2]
    _cache_xw_gauss_legendre[n] = (xi, wi)
    return xi, wi
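# Note: the same memoization could also be written with functools.lru_cache
# (available in Python 3), instead of the hand-made dictionary above
# (alternative sketch, not what this module uses):
# from functools import lru_cache
#
# @lru_cache(maxsize=None)
# def xw_gauss_legendre(n):
#     return p_roots(n)[:2]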
r""" Integrates between :math:`x_{\min}` and :math:`x_{\max}`, using Gaussian quadrature.
- The weigts and roots of the Gauss-Legendre polynomials are computed by SciPy (:func:`scipy.special.orthogonal.p_roots`). - Complexity of my part is :math:`O(n)`, but I don't know how efficient is the `p_roots` function. - I added a cache layer to the `p_roots` function (see :func:`xw_gauss_legendre`).
.. admonition:: Example for :func:`gaussian_quad`:
Same example as previously:
>>> f = lambda x: (12*x+1)/(1+math.cos(x)**2)
>>> a, b = 1993, 2015
>>> gaussian_quad(f, a, b, n=1)  # doctest: +ELLIPSIS
279755.057...
>>> gaussian_quad(f, a, b, n=3)  # doctest: +ELLIPSIS
343420.473...
>>> gaussian_quad(f, a, b, n=100)  # Quite accurate result, see above.  # doctest: +ELLIPSIS
374133.206...
"""
assert n > 0
xi, wi = xw_gauss_legendre(n)
sums = 0
k = (xmax - xmin) / 2.0
for l in range(n):
    sums += wi[l] * f((((xmax - xmin) / 2.0) * xi[l]) + ((xmin + xmax) / 2.0))
return k * sums
r""" Double integrates :math:`f(x, y)`, when :math:`y` is moving between :math:`g(x)` and :math:`h(x)` and when :math:`x` is moving between :math:`a` and :math:`b`, by using two interlaced Gaussian quadratures.
- Based on `Fubini's Theorem <https://en.wikipedia.org/wiki/Fubini%27s_Theorem>`_, for integrals of the forms :math:`\displaystyle \iint_D f(x, y) \mathrm{d}x \mathrm{d}y = \int_{x = a}^{x = b}\left(\int_{y = g(x)}^{y = h(x)} f(x) \mathrm{d}y \right) \mathrm{d}x.`
.. admonition:: Example 1 for :func:`dbl_quad`:
For example, :math:`\int_{x = 0}^{x = 1}\bigg(\int_{y = 0}^{y = 1} \left( x^2 + y^2 \right) \mathrm{d}y \bigg) \mathrm{d}x = 2 \int_{0}^{1} x^2 \mathrm{d}x = 2 \frac{1^3}{3} = 2/3`:
>>> f = lambda x, y: x**2 + y**2
>>> a, b = 0, 1
>>> g = lambda x: a
>>> h = lambda x: b
>>> dbl_quad(f, a, b, g, h, n=1)
0.5
>>> dbl_quad(f, a, b, g, h, n=2)  # exact from n=2 points
0.66666666666666674
>>> dbl_quad(f, a, b, g, h, n=40)  # more points do not bring more digits
0.66666666666666574
.. admonition:: Example 2 for :func:`dbl_quad`:
A second example could be: :math:`\int_{x = 0}^{x = \pi/2}\bigg(\int_{y = 0}^{y = \pi/2} \left( \cos(x) y^8 \right) \mathrm{d}y \bigg) \mathrm{d}x`.
>>> f = lambda x, y: math.cos(x) * y**8
>>> a, b = 0, math.pi/2.0
>>> g = lambda x: a
>>> h = lambda x: b
This integral can be computed mathematically :math:`\int_{x = 0}^{x = \pi/2}\bigg(\int_{y = 0}^{y = \pi/2} \left( \cos(x) y^8 \right) \mathrm{d}y \bigg) \mathrm{d}x = \frac{(\pi/2)^9}{9} \int_{0}^{\pi/2} \cos(x) \mathrm{d}x = \frac{(\pi/2)^9}{9} \approx 6.468988`
>>> int2d_exact = (b**9) / 9.0; int2d_exact  # doctest: +ELLIPSIS
6.4689...

Let us see how efficient the double Gaussian quadrature method is:

>>> dbl_quad(f, a, b, g, h, n=1)  # doctest: +ELLIPSIS
0.2526...
>>> dbl_quad(f, a, b, g, h, n=2)  # still not very precise for n=2 points  # doctest: +ELLIPSIS
4.3509...
With :math:`n = 40`, we have :math:`n^2 = 40^2 = 1600` points:
>>> int2d_approx = dbl_quad(f, a, b, g, h, n=40); int2d_approx  # 13 first digits are perfect  # doctest: +ELLIPSIS
6.4689...
>>> 100 * abs(int2d_exact - int2d_approx) / int2d_exact  # Relative % error, 1e-12 is VERY SMALL  # doctest: +ELLIPSIS
6.59...e-13
>>> 100 * abs(int2d_exact - int2d_approx)  # Absolute % error, 1e-12 is really good!  # doctest: +ELLIPSIS
4.263...e-12
We see again that all these methods are *limited to a precision of 12 to 14 digits*, because we use Python `float` numbers (*IEEE-754* floating point arithmetic).
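For instance, the machine precision can be checked directly: CPython uses *IEEE-754* double precision floats, with 53 bits of mantissa, i.e. about 15 to 16 significant decimal digits:

>>> import sys
>>> sys.float_info.mant_dig
53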
.. admonition:: Example 3 for :func:`dbl_quad`:
Three examples are given here, with *moving* bounds: :math:`g(x)` or :math:`h(x)` *really* depend on :math:`x`.
The first one is :math:`\displaystyle \iint_{(x, y) \in D_1} \cos(y^2) \;\; \mathrm{d}(x,y) = \int_0^1 \int_{x}^{1} \cos(y^2) \;\; \mathrm{d}y \mathrm{d}x = \frac{\sin(1)}{2} \approx 0.4207354924039`:
>>> a1, b1 = 0, 1
>>> g1 = lambda x: x; h1 = lambda x: 1
>>> f1 = lambda x, y: math.cos(y**2)
>>> exact_dbl_int1 = math.sin(1.0) / 2.0; exact_dbl_int1  # doctest: +ELLIPSIS
0.4207...
>>> dbl_int1 = dbl_quad(f1, a1, b1, g1, h1, n=4)
>>> dbl_int1  # doctest: +ELLIPSIS
0.4207...
>>> 100 * abs(dbl_int1 - exact_dbl_int1)  # Not perfect yet, n is too small  # doctest: +ELLIPSIS
3.933...e-05
>>> dbl_int1 = dbl_quad(f1, a1, b1, g1, h1, n=100)
>>> dbl_int1  # doctest: +ELLIPSIS
0.4207...
>>> 100 * abs(dbl_int1 - exact_dbl_int1)  # Almost perfect computation (13 digits are correct)  # doctest: +ELLIPSIS
0.0

.. note:: Solved with the `SymPy <http://docs.sympy.org/latest/modules/integrals/integrals.html>`_ integrals module:
`This program on bitbucket.org/lbesson/python-demos <https://bitbucket.org/lbesson/python-demos/src/master/DemoSympy.py>`_ (written in January 2015), uses SymPy to *symbolically* compute this double integral.
The second one is computing the area between the curves :math:`y = x^2` and :math:`y = \sqrt{x}`, for :math:`x \in [0, 1]`: :math:`\displaystyle \text{Area}(D_2) = \iint_{(x, y) \in D_2} 1 \;\; \mathrm{d}(x,y) = \int_0^1 \int_{x^2}^{\sqrt{x}} 1 \;\; \mathrm{d}y \mathrm{d}x = \frac{2}{3} - \frac{1}{3} = \frac{1}{3}`:
>>> a2, b2 = 0, 1
>>> g2 = lambda x: x**2; h2 = lambda x: x**0.5
>>> f2 = lambda x, y: 1
>>> exact_dbl_int2 = 1.0 / 3.0
>>> dbl_int2 = dbl_quad(f2, a2, b2, g2, h2, n=100)
>>> dbl_int2  # doctest: +ELLIPSIS
0.3333...
>>> 100 * abs(dbl_int2 - exact_dbl_int2)  # 0.0001% is very good!  # doctest: +ELLIPSIS
1.01432...e-05
The last one is :math:`\displaystyle \iint_{(x, y) \in D_3} \frac{\sin(y)}{y} \;\; \mathrm{d}(x,y) = \int_0^1 \int_{x}^{1} \frac{\sin(y)}{y} \;\; \mathrm{d}y \mathrm{d}x = 1 - \cos(1) \approx 0.45969769413186`:
>>> a3, b3 = 0, 1
>>> g3 = lambda x: x; h3 = lambda x: 1
>>> f3 = lambda x, y: math.sin(y) / y
>>> exact_dbl_int3 = 1 - math.cos(1.0); exact_dbl_int3
0.45969769413186023
>>> dbl_int3 = dbl_quad(f3, a3, b3, g3, h3, n=100)
>>> dbl_int3
0.4596976941318604
>>> 100 * abs(dbl_int3 - exact_dbl_int3)  # Almost perfect computation (14 digits are correct)  # doctest: +ELLIPSIS
1.665...e-14

.. note:: All these examples are coming from the **MA102** `quiz given on January the 29th, 2015 <http://perso.crans.org/besson/ma102/quiz/29-01/>`_.
"""
def F(x):
    """ F(x) = y -> f(x, y) """
    def fx(y):
        """ fx = y -> f(x, y) """
        return f(x, y)
    # This integration is regarding y
    return gaussian_quad(fx, g(x), h(x), n)
# This integration is regarding x
return gaussian_quad(F, a, b, n)
r""" k-dimensional integral of :math:`f(\overrightarrow{x})`, on a hypercube (k-dimensional square) :math:`D = [\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`, by using k interlaced Gaussian quadratures.
- Based on the generalized `Fubini's Theorem <https://en.wikipedia.org/wiki/Fubini%27s_Theorem>`_, for integrals of the form :math:`\displaystyle \int_D f(\overrightarrow{x}) \mathrm{d}\overrightarrow{x} = \int_{x_1=\text{Xmin}_1}^{x_1=\text{Xmax}_1} \int_{x_2=\text{Xmin}_2}^{x_2=\text{Xmax}_2} \dots \int_{x_k=\text{Xmin}_k}^{x_k=\text{Xmax}_k} f(x_1, x_2, \dots, x_k) \mathrm{d}x_k \dots \mathrm{d}x_2 \mathrm{d}x_1`.
- The function f has to accept an *iterable* of size k (list, tuple, numpy array?).
- Right now, we are taking about :math:`O(n^k)` points, so do not take too big a value for n.
- See `this trick <https://scikit-monaco.readthedocs.io/en/latest/tutorials/getting_started.html#complex-integration-volumes>`_ which explains how to integrate a function on a more complicated domain. The basic concept is to include the knowledge of the domain (inequalities, equalities) in the function f itself.
.. admonition:: Example 1 for :func:`nd_quad`:
First example, volume of a 3D sphere:
For example, we can compute the volume of a 3D sphere of radius R: :math:`V_R = \frac{4}{3} \pi R^3`, by integrating the function :math:`f : X \mapsto 1` on the cube :math:`[-R, R]^3`.
>>> R = 1
>>> f = lambda X: 1
For :math:`R = 1`, :math:`V_R = V_1 \approx 4.18879`:
>>> V_3 = (4.0/3.0) * math.pi * (R**3); V_3  # doctest: +ELLIPSIS
4.18879...
The trick is to multiply :math:`f(X)` by 1 if :math:`X` is inside the sphere, or by 0 otherwise:
>>> isInside = lambda X: 1 if (sum(x**2 for x in X) <= R**2) else 0
>>> F = lambda X: f(X) * isInside(X)
Then we integrate on :math:`[0, R]^3` to get :math:`1/8` times the volume (remember that the smaller the integration domain, the more efficient the method will be):
>>> Xmin = [0, 0, 0]; Xmax = [R, R, R]
>>> (2**3) * nd_quad(F, Xmin, Xmax, n=2)
4.0
>>> (2**3) * nd_quad(F, Xmin, Xmax, n=4)
4.0
>>> (2**3) * nd_quad(F, Xmin, Xmax, n=8)
4.3182389695603307
The more points we consider, the better the approximation will be (as usual):
>>> V_approx10 = (2**3) * nd_quad(F, Xmin, Xmax, n=10); V_approx10  # doctest: +ELLIPSIS
4.12358...
>>> 100 * abs(V_3 - V_approx10) / abs(V_3)  # Relative % error, 1.5% is OK  # doctest: +ELLIPSIS
1.55...
>>> V_approx40 = (2**3) * nd_quad(F, Xmin, Xmax, n=40); V_approx40  # doctest: +ELLIPSIS
4.18170...
>>> 100 * abs(V_3 - V_approx40) / abs(V_3)  # Relative % error, 0.16% is good  # doctest: +ELLIPSIS
0.16...
.. admonition:: Example 2 for :func:`nd_quad`:
Second example, volume of an n-ball:
We can also try to compute the `volume of a higher dimensional ball <https://en.wikipedia.org/wiki/Volume_of_an_n-ball#The_volume>`_: :math:`\displaystyle V_{k, R} = \frac{\pi^{k/2}}{\Gamma(k/2 + 1)} R^k`.
>>> from math import gamma, pi
>>> V = lambda k, R: (pi**(k/2.0)) / (gamma(1 + k/2.0)) * (R**k)
A ball of radius :math:`R = 1` in dimension :math:`k = 5` will have a 5-dim volume of :math:`\frac{8\pi^2}{15} R^{5} \approx 5.263789013914325`:
>>> k = 5; R = 1
>>> V_5 = V(k, R); V_5  # Exact value!  # doctest: +ELLIPSIS
5.26378...
Similarly, the integration domain can be :math:`[0, 1] \times \dots \times [0, 1]`.
>>> Xmin = [0]*k; Xmax = [1]*k
>>> isInside = lambda X: 1 if (sum(x**2 for x in X) <= R**2) else 0
>>> F = lambda X: 1.0 * isInside(X)
>>> V_approx5_3 = (2**k) * nd_quad(F, Xmin, Xmax, n=3)  # 3**5 = 243 points, so really not accurate
>>> V_approx5_3  # doctest: +ELLIPSIS
4.2634...
>>> 100 * abs(V_5 - V_approx5_3) / abs(V_5)  # n=3 gives an error of 19%, that's not too bad!  # doctest: +ELLIPSIS
19.0049...
Exactly as before, we can try to take more points:
>>> V_approx5_10 = (2**k) * nd_quad(F, Xmin, Xmax, n=10)  # 10**5 = 100000 points!
>>> V_approx5_10  # doctest: +ELLIPSIS
5.25263...
>>> 100 * abs(V_5 - V_approx5_10) / abs(V_5)  # Pretty good!  # doctest: +ELLIPSIS
0.211...
>>> V_approx5_15 = (2**k) * nd_quad(F, Xmin, Xmax, n=15)  # 15**5 = 759375 points!
>>> V_approx5_15  # doctest: +ELLIPSIS
5.24665...
>>> 100 * abs(V_5 - V_approx5_15) / abs(V_5)  # 0.32%, that's great!  # doctest: +ELLIPSIS
0.325...
The Gaussian quadrature is more efficient with an *even* number of points:
>>> V_approx5_16 = (2**k) * nd_quad(F, Xmin, Xmax, n=16)  # 16**5 = 1048576 points!
>>> V_approx5_16  # doctest: +ELLIPSIS
5.263061...
>>> 100 * abs(V_5 - V_approx5_16) / abs(V_5)  # 0.01%, that's great!  # doctest: +ELLIPSIS
0.013...
"""
if len(Xmin) != len(Xmax):
    raise ValueError("nd_quad() Xmin and Xmax have different sizes.")
assert len(Xmin) > 0
if len(Xmin) == 1:
    def f_1(x_1):
        """ f_1(x_1) = f([x_1]) """
        return f([x_1])
else:
    def f_1(x_1):
        """ f_1(x_1) = nd_quad(f_2_k, Xmin[1:], Xmax[1:], n) """
        def f_2_k(X):
            """ f_2_k(X) = f([x_1] + X) """
            return f([x_1] + X)
        # This integration is recursively computed using nd_quad on (x_2, ..., x_k)
        return nd_quad(f_2_k, Xmin[1:], Xmax[1:], n)
# This integration is regarding x_1
return gaussian_quad(f_1, Xmin[0], Xmax[0], n)
# ========================================================================
# Random Monte-Carlo k-dim

r""" Pick a random point in the k-dimensional hypercube :math:`[\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`.

For example, a random point taken in :math:`[0, 1] \times [0, 2] \times [0, 3] \times [0, 4]` can be:

>>> random.seed(1)  # same values all the time
>>> random_point([0, 0, 0, 0], [1, 2, 3, 4], 4)  # doctest: +ELLIPSIS
[0.134..., 1.694..., 2.291..., 1.020...]
"""
return [random.uniform(Xmin[i], Xmax[i]) for i in range(k)]
r""" *Experimental* guess of the values :math:`y_{\min}, y_{\max}` for f, by randomly picking :math:`n` points in the hypercube :math:`[\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`.
- The function f has to accept an *iterable* of size k (list, tuple, numpy array?).
- `threshold` is here to increase a little bit the size of the window, to be cautious. Default is 0.5%.
- Note: there are more efficient and trustworthy methods, but this one is simple.
.. warning:: Not sure if the `threshold` is mathematically a good idea...
.. admonition:: Example for :func:`nd_yminmax`:
On an easy function, just to see if it works:

>>> random.seed(1)  # same values all the time
>>> ymin_exact, ymax_exact = 0, 1
>>> Xmin = [0, 0]; Xmax = [1, 1]
>>> F = lambda X: 1 if (sum(x**2 for x in X) <= 1) else 0
>>> ymin, ymax = nd_yminmax(F, Xmin, Xmax, 100)
>>> ymin, ymax
(0.0, 1.005)
>>> 100 * abs(ymin - ymin_exact)  # Absolute % error < 0.5%
0.0
>>> 100 * abs(ymax - ymax_exact)  # Absolute % error < 0.5%
0.49999999999998934
"""
assert n > 0, "nd_yminmax() n has to be a positive integer."
k = len(Xmin)
ymin, ymax = f(Xmin), f(Xmax)  # initial guess!
for _ in range(0, 2 * n):
    X = random_point(Xmin, Xmax, k)
    fX = f(X)
    if fX < ymin:
        ymin = fX
    if ymax < fX:
        ymax = fX
# Now we just increase a little bit the size of the window [ymin, ymax]
if ymin < 0:
    ymin *= (1 + threshold)  # ymin becomes bigger!
else:
    ymin *= (1 - threshold)  # ymin becomes smaller!
if ymax < 0:
    ymax *= (1 - threshold)  # ymax becomes smaller!
else:
    ymax *= (1 + threshold)  # ymax becomes bigger!
return (ymin, ymax)
r""" Compute an approximation of the k-dimensional integral of :math:`f(\overrightarrow{x})`, on a hypercube (k-dimensional square) :math:`D = [\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`
- The function f has to accept an *iterable* of size k (list, tuple, numpy array?). - Each point :math:`\overrightarrow{x}` is taken in the hypercube :math:`[\text{Xmin}_1, \text{Xmax}_1] \times \dots \times [\text{Xmin}_k, \text{Xmax}_k]`. - :math:`n` is the number of random points to pick (it should be big, like 1000 at least). - What is returned is :math:`\text{area} \approx (\text{Volume hypercube}) \times (\text{Estimated ratio})`, ie :math:`\displaystyle \text{area} \approx \prod_{i=1}^{k} \bigg( \text{Xmax}_k - \text{Xmin}_k \bigg) \times \frac{\text{Nb points below the curve}}{\text{Nb points}}`. - See `this trick <https://scikit-monaco.readthedocs.io/en/latest/tutorials/getting_started.html#complex-integration-volumes>`_ which explains how to integrate a function on a more complicated domain. The basic concept is to include the knowledge of the domain (inequalities, equalities) in the function f itself.
.. admonition:: Example for :func:`nd_montecarlo`:
For example, we can compute the volume of a 3D sphere of radius R: :math:`V_R = \frac{4}{3} \pi R^3`, by integrating the function :math:`f : X \mapsto 1` on the cube :math:`[-R, R]^3`.
>>> R = 1
>>> f = lambda X: 1
For :math:`R = 1`, :math:`V_R = V_1 \approx 4.18879`:
>>> V_3 = (4.0/3.0) * math.pi * (R**3); V_3  # doctest: +ELLIPSIS
4.18879...
As previously, the trick is to multiply :math:`f(X)` by 1 if :math:`X` is inside the sphere, or by 0 otherwise:
>>> isInside = lambda X: 1 if (sum(x**2 for x in X) <= R**2) else 0
>>> F = lambda X: f(X) * isInside(X)
Then we integrate on :math:`[0, R]^3` to get :math:`1/8` times the volume:
>>> Xmin = [0, 0, 0]; Xmax = [R, R, R]
>>> random.seed(1)  # same values all the time
>>> (2**3) * nd_montecarlo(F, Xmin, Xmax, n=10)  # doctest: +ELLIPSIS
3.2159...
>>> (2**3) * nd_montecarlo(F, Xmin, Xmax, n=100)  # doctest: +ELLIPSIS
3.9395...
The more points we consider, the better the approximation will be (as usual):
>>> V_approx1000 = (2**3) * nd_montecarlo(F, Xmin, Xmax, n=1000); V_approx1000  # doctest: +ELLIPSIS
4.19687...
>>> 100 * abs(V_3 - V_approx1000) / abs(V_3)  # Relative % error, 0.19% is already very good!  # doctest: +ELLIPSIS
0.193...
>>> V_approx10000 = (2**3) * nd_montecarlo(F, Xmin, Xmax, n=10000); V_approx10000  # doctest: +ELLIPSIS
4.25637...
>>> 100 * abs(V_3 - V_approx10000) / abs(V_3)  # Relative % error, 1.6% is less accurate. Why?  # doctest: +ELLIPSIS
1.613...
.. todo:: Compare this *n-dim* Monte-Carlo (:func:`nd_montecarlo`) with the *n-dim* Gaussian quadrature (:func:`nd_quad`).
On the three examples in 2D, but also on more "crazy" examples in higher dimension. My guess is that, for the same number of points (:math:`n^k`), Gaussian quadrature is slower but more accurate. And for the same computation time, Monte-Carlo gives a better result.
"""
# Here we are cautious about the arguments
if (not isinstance(n, int)) or (n <= 0):
    raise ValueError("nd_montecarlo() the argument n has to be a positive integer.")
if len(Xmin) != len(Xmax):
    raise ValueError("nd_montecarlo() Xmin and Xmax have different sizes.")
assert len(Xmin) > 0
k = len(Xmin)
for i in range(k):
    if not Xmax[i] >= Xmin[i]:
        raise ValueError("nd_montecarlo() invalid arguments: Xmax[{}] < Xmin[{}].".format(i, i))
if ymin is None and ymax is None:
    ymin, ymax = nd_yminmax(f, Xmin, Xmax, n)
elif ymin is None:
    ymin, _ = nd_yminmax(f, Xmin, Xmax, n)
elif ymax is None:
    _, ymax = nd_yminmax(f, Xmin, Xmax, n)
# This will count the number of points below the curve
nb_below = 0

for _ in range(0, n):
    X = random_point(Xmin, Xmax, k)
    fX = f(X)
    if not ymin <= fX <= ymax:
        raise ValueError("nd_montecarlo() ymin and ymax are not correct: for X = {}, f(X) = {} is NOT in the interval [ymin, ymax] = [{}, {}].".format(X, fX, ymin, ymax))
    y = random.uniform(ymin, ymax)
    # Here we need to be cautious, if f(X) is positive or not!
    if 0 <= y <= fX:
        nb_below += 1
    elif fX <= y <= 0:
        nb_below -= 1

volume_hypercube = _reduce(lambda u, v: u * v, [b - a for a, b in zip(Xmin, Xmax)])
observed_probability = float(nb_below) / float(n)
volume = volume_hypercube * (ymax - ymin) * observed_probability
# FIXME We take into account the possible error ymin > 0, or ymax < 0
# if ymin > 0:
#     # Hypercube between the x1-...-xk hyperplane and the lower-limit of the box
#     volume += ymin * volume_hypercube
# if ymax < 0:
#     # Hypercube between the x1-...-xk hyperplane and the upper-limit of the box
#     volume += ymax * volume_hypercube
return volume
# End of the function nd_montecarlo
# ========================================================================
# We are done
print("You can run the file 'tests.py' to see examples of use of this module 'integrals.py'.")
print("Testing every doctest in the module 'integrals'...")
# Each function or method I wrote includes a doctest:
import doctest
doctest.testmod(verbose=True)
# doctest.testmod()
print("\nMore details about doctest can be found in the Python documentation:\nhttps://docs.python.org/2/library/doctest.html")
# End of integrals.py