Numerical Integration over a Matrix of Functions, SymPy and SciPy
From my SymPy output I have the matrix shown below, which I must integrate in 2D. Currently I am doing it element-wise as shown below. This works, but it gets too slow (with both sympy.mpmath.quad and scipy.integrate.dblquad) for my real case, in which A and its entries are much bigger (see the edit below):

from sympy import Matrix, sin, cos
import sympy
import scipy
sympy.var( 'x, t' )
A = Matrix([[(sin(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*cos(t)*x)*cos(3-0.1*x)*cos(t)],
            [(cos(2-0.1*x)*sin(t)*x+sin(2-0.1*x)*cos(t)*x)*sin(3-0.1*x)*cos(t)],
            [(cos(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*sin(t)*x)*sin(3-0.1*x)*sin(t)]])

# integration intervals
x1,x2,t1,t2 = (30, 75, 0, 2*scipy.pi)

# element-wise integration
from sympy.utilities import lambdify
from sympy.mpmath import quad
from scipy.integrate import dblquad
A_int1 = scipy.zeros( A.shape, dtype=float )
A_int2 = scipy.zeros( A.shape, dtype=float )
for (i,j), expr in scipy.ndenumerate(A):
    tmp = lambdify( (x,t), expr, 'math' )
    A_int1[i,j] = quad( tmp, (x1, x2), (t1, t2) )
    # or (in scipy)
    A_int2[i,j] = dblquad( tmp, t1, t2, lambda x:x1, lambda x:x2 )[0]

I was considering doing it in one shot, as below, but I'm not sure if this is the way to go:

A_eval = lambdify( (x,t), A, 'math' )
A_int1 = quad( A_eval, (x1, x2), (t1, t2) )
# or (in scipy)
A_int2 = scipy.integrate.dblquad( A_eval, t1, t2, lambda x: x1, lambda x: x2 )[0]
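For what it's worth, the one-shot idea can be made to work if the whole matrix is lambdified with the 'numpy' backend and evaluated on a quadrature grid in one vectorized call. Below is a minimal sketch (not anyone's posted code): it hand-rolls a tensor-product Gauss-Legendre rule with numpy.polynomial.legendre.leggauss instead of calling quad/dblquad, so all three entries are integrated at once:

```python
import numpy as np
from sympy import Matrix, sin, cos, symbols, lambdify

x, t = symbols('x t')
A = Matrix([[(sin(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*cos(t)*x)*cos(3-0.1*x)*cos(t)],
            [(cos(2-0.1*x)*sin(t)*x+sin(2-0.1*x)*cos(t)*x)*sin(3-0.1*x)*cos(t)],
            [(cos(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*sin(t)*x)*sin(3-0.1*x)*sin(t)]])

x1, x2, t1, t2 = 30.0, 75.0, 0.0, 2*np.pi

# one vectorized function for the whole matrix
A_eval = lambdify((x, t), A, 'numpy')

# tensor-product Gauss-Legendre rule mapped from [-1, 1]^2 to [x1, x2] x [t1, t2]
n = 20
xi, wi = np.polynomial.legendre.leggauss(n)
xs = 0.5*(x2 - x1)*xi + 0.5*(x2 + x1)
ts = 0.5*(t2 - t1)*xi + 0.5*(t2 + t1)
X, T = np.meshgrid(xs, ts, indexing='ij')
W = np.outer(wi, wi) * 0.25*(x2 - x1)*(t2 - t1)   # weights times Jacobian

vals = np.asarray(A_eval(X, T))        # shape (3, 1, n, n): each entry on the grid
A_int = (vals * W).sum(axis=(-2, -1))  # shape (3, 1): all entries in one shot
print(A_int)
```

This only works out of the box because every entry of A depends on both x and t (so the evaluated entries all have the same grid shape and stack cleanly); a constant entry would need special handling.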

EDIT: The real case has been made available in this link. Just unzip and run shadmehri_2012.py (named after the author of the paper this example was taken from: Shadmehri et al. 2012). I've started a bounty of 50 for whoever can do the following:

  • make it reasonably faster than the code in the question
  • make it run without a memory error even with m=15 and n=15 terms in the code (I managed up to m=7 and n=7 on 32-bit)

The current timings are summarized below (measured with m=3 and n=3). They show that the numerical integration is the bottleneck.

build trial functions = 0%
evaluating differential equations = 2%
lambdifying k1 = 22%
integrating k1 = 74%
lambdifying and integrating k2 = 2%
extracting eigenvalues = 0%


Related questions: about lambdify

Carlettacarley answered 30/4, 2013 at 8:10 Comment(8)
And profiling your code with line_profiler silas.sewell.org/blog/2009/05/28/… would have helped you.Rayner
The problem is that A is obtained after applying an automatic differentiation over another matrix, using the procedure described here, explaining why it comes from SymPy. I fully agree with you to use directly the numeric modules like numpy when applicable.Carlettacarley
Oh, my bad... And looking closely at the code it seems my issue nb. 1 is also incorrect, as you are not redoing the lambdify call each time. So in the hope of actually being helpful unlike in the previous comment: have you actually tried to do the integration symbolically with numpy. If it is only sin, cos and polynomials it should be easy. Especially if you can permit yourself to rewrite it as exponentials expr.rewrite(exp).expand() (there are complex numbers popping out, but after evaluation it is real).Rayner
Thank you for the suggestion, I will try the symbolic integration with sympy (I guess it is sympy what you meant, isn't it?) before going further, this would really save time...Carlettacarley
Indeed, I mean sympy. I must still be sleepy :) ...Rayner
Two more tips if you go that route: 1. Call trigsimp (or just simplify) on the expression first. 2. Use the git master of sympy. There have been a lot of improvements to trigsimp and integration since the last official release.Slit
actually @Rayner puts off the slowest integration algorithms to the very end.Slit
I have removed a wrong comment pointed out by @asmeurer. To correct myself: the symbolic integration routine tries various integration algorithms starting with fast-but-not-general and ending with slow-but-more-general. If you know that the less general ones do not work you can skip them.Rayner

I think you can avoid the lambdification time by switching to numerical evaluation at a different stage of the calculation.

Namely, your calculation seems to be diagonal in the sense that k1 and k2 are both of the form k = g^T X g where X is some 5x5 matrix (with differential ops inside, but that doesn't matter), and g is 5xM, with M large. Therefore k[i,j] = g.T[i,:] * X * g[:,j].
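That structural claim is easy to check numerically with random data; a throwaway sketch (arbitrary sizes, nothing from the actual problem):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 12                               # number of trial functions (arbitrary here)
X = rng.standard_normal((5, 5))      # stand-in for the 5x5 operator block
g = rng.standard_normal((5, M))      # stand-in for the 5xM trial-function matrix

k = g.T @ X @ g                      # full M x M matrix in one product
# each entry couples only one column of g on the left with one on the right:
print(np.isclose(k[3, 7], g[:, 3] @ X @ g[:, 7]))   # True
```

So any single entry k[i,j] can be computed from just two columns of g, without ever forming the full symbolic product.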

So you can just replace

for j in xrange(1,n+1):
    for i in xrange(1,m+1):
        g1 += [uu(i,j,x,t),          0,          0,          0,          0]
        g2 += [          0,vv(i,j,x,t),          0,          0,          0]
        g3 += [          0,          0,ww(i,j,x,t),          0,          0]
        g4 += [          0,          0,          0,bx(i,j,x,t),          0]
        g5 += [          0,          0,          0,          0,bt(i,j,x,t)]
g = Matrix( [g1, g2, g3, g4, g5] )

with

i1 = Symbol('i1')
j1 = Symbol('j1')
g1 = [uu(i1,j1,x,t),          0,          0,          0,          0]
g2 = [          0,vv(i1,j1,x,t),          0,          0,          0]
g3 = [          0,          0,ww(i1,j1,x,t),          0,          0]
g4 = [          0,          0,          0,bx(i1,j1,x,t),          0]
g5 = [          0,          0,          0,          0,bt(i1,j1,x,t)]
g_right = Matrix( [g1, g2, g3, g4, g5] )

i2 = Symbol('i2')
j2 = Symbol('j2')
g1 = [uu(i2,j2,x,t),          0,          0,          0,          0]
g2 = [          0,vv(i2,j2,x,t),          0,          0,          0]
g3 = [          0,          0,ww(i2,j2,x,t),          0,          0]
g4 = [          0,          0,          0,bx(i2,j2,x,t),          0]
g5 = [          0,          0,          0,          0,bt(i2,j2,x,t)]
g_left = Matrix( [g1, g2, g3, g4, g5] )

and

tmp = evaluateExpr( B*g )
k1 = r*tmp.transpose() * F * tmp
k2 = r*g.transpose()*evaluateExpr(Bc*g)
k2 = evaluateExpr( k2 )

by

tmp_right = evaluateExpr( B*g_right )
tmp_left = evaluateExpr( B*g_left )
k1 = r*tmp_left.transpose() * F * tmp_right
k2 = r*g_left.transpose()*evaluateExpr(Bc*g_right)
k2 = evaluateExpr( k2 )

I didn't test this (it's late here), but you get the idea.

Now, instead of one huge symbolic matrix that makes everything slow, you have two matrix indices for the trial-function components, plus free symbols i1, j1 and i2, j2 that play the role of the loop indices; you substitute integers for them at the end.

Since the matrix to lambdify is only 5x5, and needs to be lambdified only once outside all loops, the lambdification and simplification overhead is gone. Moreover, the problem fits easily into memory even for large m, n.
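The substitute-integers-at-the-end step might look like the following toy sketch (k_block is a made-up stand-in, not the real k1; only the mechanics of lambdifying once and then looping over index pairs are the point):

```python
import numpy as np
from sympy import symbols, sin, cos, lambdify

x, t, i1, j1, i2, j2 = symbols('x t i1 j1 i2 j2')

# made-up stand-in for one scalar entry of the small symbolic block
k_block = sin(i1*x)*cos(j1*t) * sin(i2*x)*cos(j2*t)
k_func = lambdify((i1, j1, i2, j2, x, t), k_block, 'numpy')  # lambdified once

m = n = 3
idx = [(i, j) for i in range(1, m+1) for j in range(1, n+1)]
K = np.empty((m*n, m*n))
for a, (ia, ja) in enumerate(idx):
    for b, (ib, jb) in enumerate(idx):
        # here one would integrate k_func over (x, t); a point
        # evaluation stands in for the quadrature to keep the sketch short
        K[a, b] = k_func(ia, ja, ib, jb, 0.5, 0.25)
```

The expensive lambdify call happens once, outside both loops, no matter how large m and n get.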

The integration is not any faster, but since the expressions are very small, you can easily e.g. dump them to Fortran or do something else smart.

Aisne answered 2/5, 2013 at 23:46 Comment(6)
thank you very much for the answer. Unfortunately the problem is not diagonalizable... when doing B*g the result is a quite dense matrix. Also, I need all the extension of g since the final matrix for eigenvalue analysis is a cXc matrix, with c=5*m*n, and I need like this to calculate the Ritz coefficients...Carlettacarley
Yes, but keeping i1,j1,i2,j2 as symbolic indices, you do not need to construct the full symbolic matrix before lambdification, which is the point of my answer. You can evaluate the final cxc matrix by substituting integers to lambdified expression.Aisne
why did you use a g_left and g_right if they are the same?Carlettacarley
They are not the same: if you use the same i,j in them, you can only generate the diagonal blocks of the final g^T X g matrix.Aisne
it would be nice if you could detail even more your answer. I still did not get the point about how to use i1,j1,i2,j2. I've updated the original code here in case you want to take a look...Carlettacarley
maybe I got your point...calling B^T*F*B=A at the end we have [g_left11, g_left21, g_left12, g_left22]^T*A*[g_right11, g_right21, g_right12, g_right22], is that what you mean?Carlettacarley

quadpy (a project of mine) does vectorized numerical integration. This

from numpy import sin, cos, pi
import quadpy


def f(X):
    x, t = X
    return [
        [(sin(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*cos(t)*x)*cos(3-0.1*x)*cos(t)],
        [(cos(2-0.1*x)*sin(t)*x+sin(2-0.1*x)*cos(t)*x)*sin(3-0.1*x)*cos(t)],
        [(cos(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*sin(t)*x)*sin(3-0.1*x)*sin(t)]
        ]


x1 = 30
x2 = 75
t1 = 0
t2 = 2*pi

sol = quadpy.quadrilateral.integrate(
        f,
        [[x1, t1], [x2, t1], [x2, t2], [x1, t2]],
        quadpy.quadrilateral.Product(quadpy.line_segment.GaussLegendre(5))
        )

print(sol)

gives

[[ 1456.3701526 ]
 [ 2620.60490653]
 [ 5034.5831071 ]]

Timings:

q = quadpy.quadrilateral.Product(quadpy.line_segment.GaussLegendre(5))
%timeit quadpy.quadrilateral.integrate(f, [[x1, t1], [x2, t1], [x2, t2], [x1, t2]], q)
1000 loops, best of 3: 219 µs per loop

This leads to a dramatic speedup in your downloadable example:

import numpy
array2mat = [{'ImmutableMatrix': numpy.array}, 'numpy']
k1_lambda = lambdify( (x,t), k1, modules=array2mat)
print 'Finished lambdifying k1:', time.clock()
import quadpy
sol = quadpy.quadrilateral.integrate(
    lambda X: k1_lambda(X[0], X[1]),
    [[x1, t1], [x2, t1], [x2, t2], [x1, t2]],
    quadpy.quadrilateral.Product(quadpy.line_segment.GaussLegendre(5))
    )

Output:

Start: 0.040001
Finished trial functions: 0.379929
Finished evaluating differential equations: 2.669536
Finished lambdifying k1: 29.961808
Finished integrating k1: 30.106988
Finished lambdifying and integrating k2: 34.229007
Finished calculating eigenvalues and eigenvectors: 34.229924

Note that quadpy doesn't do adaptive quadrature, so choose your scheme wisely.
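One way to choose the scheme wisely: re-run with increasing order and watch the value settle. On a rectangle, a product Gauss-Legendre rule like the one above can be reproduced with plain NumPy for such a check; a sketch for the first matrix entry only (not quadpy code):

```python
import numpy as np

def gauss2d(f, x1, x2, t1, t2, n):
    """n-point tensor-product Gauss-Legendre rule on [x1, x2] x [t1, t2]."""
    xi, wi = np.polynomial.legendre.leggauss(n)
    xs = 0.5*(x2 - x1)*xi + 0.5*(x2 + x1)
    ts = 0.5*(t2 - t1)*xi + 0.5*(t2 + t1)
    X, T = np.meshgrid(xs, ts, indexing='ij')
    W = np.outer(wi, wi) * 0.25*(x2 - x1)*(t2 - t1)  # weights times Jacobian
    return (W * f(X, T)).sum()

# first entry of the matrix from the question
f = lambda x, t: (np.sin(2-0.1*x)*np.sin(t)*x
                  + np.cos(2-0.1*x)*np.cos(t)*x) * np.cos(3-0.1*x)*np.cos(t)

for n in (3, 5, 10, 20):
    print(n, gauss2d(f, 30.0, 75.0, 0.0, 2*np.pi, n))
```

Running this shows the value still moving noticeably between n=5 and n=10 before it settles, i.e. a 5-point rule is on the low side for this integrand.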

Falter answered 22/3, 2017 at 11:5 Comment(0)
