Delta operator in sympy - math

Is it possible to make a delta operator like this in SymPy? I'm not really sure how to code it. It should be really easy if there is an existing method.

I don't know if SymPy exposes something that could be useful to you. If not, we can create something raw.
Note: the following approach requires a bit of knowledge of object-oriented programming and of the way SymPy treats things. This is a 5-minute attempt and it is not meant to be used in production (in fact, no tests have been written for this code). There are many things that may not work as expected. But, for your case, it might work :)
One possible way is to define a "gradient" class, like this:
class Grad(Expr):
    def __mul__(self, other):
        # multiplying Grad(x, ...) by an expression differentiates that expression
        return other.diff(*self.args)

    def _latex(self, printer):
        # create a LaTeX representation to be visualized in a Jupyter Notebook
        return r"\frac{\partial}{%s}" % " ".join([r"\partial %s" % latex(t) for t in self.args])
We can create a gradient of something with respect to x by writing gx = Grad(x). Once gx is multiplied with some other thing, it returns the partial derivative of that thing with respect to x.
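For example (a quick sketch, assuming the Grad class above has already been defined in a session where from sympy import * was executed; the expressions are just illustrative):
x, y = symbols("x y")
gx = Grad(x)
print(gx * (x**2 + y))    # -> 2*x, the partial derivative with respect to x
gxy = Grad(x, y)
print(gxy * (x**2 * y))   # -> 2*x, differentiated with respect to x and then y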
Then you would define your symbols/functions and matrices like this:
from sympy import *
init_printing()
var("x, y")
N1, N2, N3 = [s(x, y) for s in symbols("N1:4", cls=Function)]
A = Matrix(3, 2, [Grad(x), 0, 0, Grad(y), Grad(x), Grad(y)])
B = Matrix(2, 6, [N1, 0, N2, 0, N3, 0, 0, N1, 0, N2, 0, N3])
display(A, B)
Finally, you multiply the matrices together to obtain the symbolic results:
A * B
Eventually, you might want to create a function:
def delta_operator(x, y, N1, N2, N3):
    A = Matrix(3, 2, [Grad(x), 0, 0, Grad(y), Grad(x), Grad(y)])
    B = Matrix(2, 6, [N1, 0, N2, 0, N3, 0, 0, N1, 0, N2, 0, N3])
    return A * B
So, whenever you have to apply that operator, you just execute delta_operator(x, y, N1, N2, N3) to obtain a result similar to above.

Related

Can anyone write a program that changes sum(1 to n) to n*(n+1)/2 automatically?

With the recursive sum:
let rec sum a = if a = 0 then 0 else a + sum (a - 1)
If the compiler applies tail-recursion optimization, it might turn "sum" into an iteration using a variable (when I use "ocamlc -dlambda", the recursion is still there; when I use "ocamlc -dinstr" I get the generated instructions, which I can't read yet).
But the book "Design Concepts in Programming Languages", page 287, says it can change the function to this (the key line): n*(n+1)/2
"You should convince yourself that the least fixed point of this function is the computation csum that returns a summation procedure that returns n*(n+1)/2 if its argument is a nonnegative integer."
I can't understand this; the program is not Gauss! I don't think it can change the recursive sum into n*(n+1)/2 automatically; only a human can do that, right?
So what does the book mean here? Does anyone know? Thanks!
I believe your book is merely making a small point about equivalence of pure functions. Nevertheless, optimising away a loop that only contains affine operations is relatively easy.
Equivalence of pure functions
I haven't read that book, but from the paragraph you quote, I think the book merely makes a point about pure functions. Since sum is a pure function, i.e. a function without side-effect, then in a sense,
let rec sum n =
  if n = 0 then 0
  else n + sum (n - 1)
is equivalent to
let sum n =
  n * (n + 1) / 2
But of course "equivalent" here ignores the time and space complexity, and unless the compiler has some sort of hardcoding for common functions to optimise, I'd be extremely surprised if it optimised sum like that.
Also note that the two above functions are only equivalent so far as they are only called on a nonnegative argument. The recursive version will loop infinitely (and provoke a stack overflow) if n is negative; the direct formula version will always return a result, although that result will be nonsensical if n is negative.
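As a quick cross-check of that equivalence (a Python sketch rather than OCaml; the function names are just illustrative):
# both definitions agree for every nonnegative n (and only there)
def sum_rec(n):
    return 0 if n == 0 else n + sum_rec(n - 1)

def sum_closed(n):
    return n * (n + 1) // 2

assert all(sum_rec(k) == sum_closed(k) for k in range(500))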
Optimising loops that only contain affine operations
Nevertheless, writing a compiler that would perform such optimisations is not complete science fiction. At the end of this answer you will find links to two blog posts you might be interested in. In this answer I will summarise how the method described in those blog posts can be applied to your problem.
First let's rewrite function sum as a loop in pseudo-code:
function sum(n):
    s := 0
    i := 1
    repeat n:
        s += i
        i += 1
    return s
This kind of rewriting is similar to what happens when sum is transformed into a tail-recursive function.
Now if you consider the vector v = [s, i, 1] (treated as a row vector that gets multiplied on the right by a matrix), then the affine operations s += i and i += 1 can be described as multiplying v by a matrix:
s += i
    [[ 1, 0, 0 ],   # matrix Msi
     [ 1, 1, 0 ],
     [ 0, 0, 1 ]]
i += 1
    [[ 1, 0, 0 ],   # matrix Mi1
     [ 0, 1, 0 ],
     [ 0, 1, 1 ]]
s += i, i += 1
    [[ 1, 0, 0 ],   # M = Msi * Mi1
     [ 1, 1, 0 ],
     [ 0, 1, 1 ]]
This affine operation is wrapped in a "repeat n" loop. So we have to multiply v by this matrix M, n times. But matrix multiplication is associative; so instead of doing n multiplications by matrix M, we can raise matrix M to its nth power, and then multiply v by the resulting matrix M**n.
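This is the classic exponentiation-by-squaring trick. A minimal Python sketch (mat_mul and mat_pow are illustrative helpers, not something from the blog posts; v is treated as a row vector):
def mat_mul(A, B):
    # plain list-of-lists matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, n):
    # raise M to the nth power in O(log n) multiplications
    size = len(M)
    result = [[int(i == j) for j in range(size)] for i in range(size)]  # identity
    while n > 0:
        if n & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        n >>= 1
    return result

M = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
n = 10
s, i, _ = mat_mul([[0, 1, 1]], mat_pow(M, n))[0]   # v = [s, i, 1] with s = 0, i = 1
print(s, n * (n + 1) // 2)                         # both print 55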
As it turns out:
[[ 1, 0, 0 ],                   [[ 1,            0, 0 ],
 [ 1, 1, 0 ],    to the nth  =   [ n,            1, 0 ],
 [ 0, 1, 1 ]]                    [ n*(n - 1)/2,  n, 1 ]]
which represents the affine operation:
s = s + n * i + n * (n - 1) / 2
i = i + n
Starting from s, i = 0, 1, this gives us s = n * (n+1) / 2 as expected.
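To double-check the claimed closed form for M**n, here is a short symbolic verification by induction (a SymPy sketch; M_power is just an illustrative helper):
from sympy import Matrix, symbols, factor, eye, zeros, S

n = symbols("n")
M = Matrix([[1, 0, 0],
            [1, 1, 0],
            [0, 1, 1]])

def M_power(k):
    # the closed form claimed above for M**k
    return Matrix([[1,           0, 0],
                   [k,           1, 0],
                   [k*(k - 1)/2, k, 1]])

assert M_power(S(0)) == eye(3)                                 # base case
assert (M_power(n) * M - M_power(n + 1)).expand() == zeros(3)  # induction step

v = Matrix([[0, 1, 1]]) * M_power(n)   # [s, i, 1] with s = 0, i = 1
print(factor(v[0]))                    # -> n*(n + 1)/2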
More reading:
Using the Quick Raise of Matrices to a Power to Write a Very Fast Interpreter of a Simple Programming Language;
Automatic Algorithms Optimization via Fast Matrix Exponentiation.

Find transformation matrix between two matrices

I tried to find the transformation matrix between two matrices, so I can save it and later apply it to an object so that it maintains its position and rotation relative to another object, using this suggested solution:
3d (THREE.js) : difference matrix
I used this code:
var aInv1 = new THREE.Matrix4().getInverse(firstObject.matrix.clone());
var aMat2 = new THREE.Matrix4().copy(secondObject.matrix.clone());
var aTrans = new THREE.Matrix4().multiplyMatrices(aMat2, aInv1);
The values of the matrices elements are:
firstObject.matrix.elements = [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, -358.483421667927, 1
]
secondObject.matrix.elements = [
0.5137532240102918, -0.844167465362402, 0.15309773101731067, 0,
0.8579380075617532, 0.5055071032079361, -0.091678480502733, 0,
-1.3877787807814457e-17, 0.1784484772418605, 0.983949257314655, 0,
94.64320536824728, 6.92473686011361, -372.0695450875709, 1
]
I would expect the translation part of the transformation matrix (the variable aTrans) to be 94.64320536824728, 6.92473686011361, 13.58, because those are the differences in the positions, but I get 94.64320536824728, 70.89555757320696, -19.340048577797802, 1.
aTrans.elements = [
0.5137532240102918, -0.844167465362402, 0.15309773101731067, 0,
0.8579380075617532, 0.5055071032079361, -0.091678480502733, 0,
-1.3877787807814457e-17, 0.1784484772418605, 0.983949257314655, 0,
94.64320536824728, 70.89555757320696, -19.340048577797802, 1
]
I would appreciate any educated explanation for this difference, or another way to solve this problem.
You cannot simply add the translations because the second matrix might affect the effective translation of the first.
Let's consider a simple example - suppose both matrices only contain a rotation and translation:
M = R + T
R corresponds to the top-left 3x3 sub-matrix, and T is the first 3 elements of the last column. Multiplication rule with an arbitrary 3D point p:
M * p = R * p + T
Two of these give:
M2 * M1 * p = R2 * R1 * p + R2 * T1 + T2
The last column of M2 * M1 is R2 * T1 + T2 instead of simply T1 + T2; i.e. the effective translation that M1 contributes to p is R2 * T1, not simply T1.
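A quick numeric illustration of that point in Python/NumPy (a sketch; the rotation and translations below are made up rather than taken from the question):
import numpy as np

def compose(R, T):
    # build a 4x4 column-vector-convention matrix from a 3x3 rotation R and a translation T
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

a = np.radians(30.0)                              # a made-up rotation about the x axis
R1, T1 = np.eye(3), np.array([0.0, 0.0, -358.48])
R2 = np.array([[1, 0,          0         ],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a),  np.cos(a)]])
T2 = np.array([94.64, 6.92, -372.07])

M = compose(R2, T2) @ compose(R1, T1)   # M2 * M1

print(M[:3, 3])        # translation of the product ...
print(R2 @ T1 + T2)    # ... equals R2 * T1 + T2 ...
print(T1 + T2)         # ... and differs from the naive T1 + T2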

Prolog recursive program not returning values

I'm still new to Prolog, and I've encountered an error I have no idea how to fix.
I've written a simple exponentiation program that looks like this:
exp(b, 0, R) :- R is 1.         % non-recursive case: exponent is 0
exp(0, e, R) :- R is 0.         % non-recursive case: base is 0
exp(Base, Exponent, Result) :-  % recurse if base and exponent are non-negative
    Base >= 0,
    Exponent >= 0,
    E1 is Exponent-1,
    exp(Base, E1, R1),
    Result is Base*R1.
This compiles fine, but when I run it and give it a query like, say, exp(2, 4, X), I'm met with the following output:
?- exp(2, 4, X).
false.
Is there something I've done wrong? Or is it a matter of formatting the result in some way I'm unaware of?
You are confusing variables with atoms. It works as expected if you simply change the two non-recursive clauses to:
exp(_, 0, 1).
exp(0, _, 0).
In fact, I recommend changing the whole program to use CLP(FD) constraints throughout:
:- use_module(library(clpfd)).

exp(_, 0, 1).
exp(0, _, 0).
exp(Base, Exponent, Result) :-
    Base #>= 0,
    Exponent #>= 0,
    E1 #= Exponent-1,
    exp(Base, E1, R1),
    Result #= Base*R1.
Now for example the following at least yields a solution:
?- exp(2, X, 16).
X = 4
whereas we previously had:
?- exp(2, X, 16).
>=/2: Arguments are not sufficiently instantiated
Note also the most general query:
?- exp(X, Y, Z).
Y = 0,
Z = 1 ;
X = Z, Z = 0 ;
X = Z,
Y = 1,
Z in 0..sup ;
X = Z, Z = 0,
Y in 0..sup,
_G801+1#=Y,
_G801 in -1..sup .

Mathematica: integrate symbolic vector function

I wrote a program that defines two piecewise functions "gradino[x_]" and "gradino1[x_]", where x is a vector of m components.
I'm not able to write these functions explicitly using the x_i; I need to keep x as a vector.
I need to measure the distance between these two functions by doing:
Integrate[Abs[gradino[x]-gradino1[x]],{x[[1]],0,100},{x[[2]],0,100},{x[[3]],0,100}...{x[[m]],0,100}]
but it's not working.
Any idea how to do this? Remember that I can't simply express gradino[x1_,x2_ etc...].
Re: "it's not working": posting the actual error message is usually a good idea.
In this case, "Part specification x[[1]] is longer than depth of object." tells you exactly what the problem is. If x is not already defined as a list, you cannot use list elements as integration variables.
f[y_] := y[[1]] y[[2]];
Integrate[ f[x] , {x[[1]], 0, 1}, {x[[2]], 0, 1}]
(* error Part specification x[[1]] is longer than depth of object. *)
If you first define x as a list, then it works:
x = Array[z, 2];
Integrate[ f[x] , {x[[1]], 0, 1}, {x[[2]], 0, 1}]
(*1/4*)
Note you cannot do this with NIntegrate:
NIntegrate[ f[x] , {x[[1]], 0, 1}, {x[[2]], 0, 1}]
(*error Tag Part in x[[1]] is Protected *)
you need to use the explicit elements:
NIntegrate[ f[x] , {z[1], 0, 1}, {z[2], 0, 1}]
(* 0.25 *)
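For comparison, the same idea in SymPy (a Python sketch, not Mathematica): define x as a concrete list of symbols first, and then its elements can be used as integration variables:
from sympy import symbols, integrate

x = symbols("z1:3")                               # x = (z1, z2), analogous to x = Array[z, 2]
f = x[0] * x[1]
print(integrate(f, (x[0], 0, 1), (x[1], 0, 1)))   # -> 1/4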
According to the model above, with
x = Array[z, 2];
why is the following OK:
f[y_] := NIntegrate[y[[1]] y[[2]] t, {t, 0, 1}];
NIntegrate[f[x], {z[1], 0, 1}, {z[2], 0, 1}]
but the following is not:
f[y_] := NIntegrate[y[[1]] y[[2]] Exp[t], {t, 0, 1}];
NIntegrate[f[x], {z[1], 0, 1}, {z[2], 0, 1}]
The only difference is changing t in the inner integration into Exp[t].

Problem with Euler angles from YZX Rotation Matrix

I've gotten stuck getting my Euler angles out of my rotation matrix.
My conventions are:
Left-handed (x right, z back, y up)
YZX
Left-handed angle rotation
My rotation matrix is built up from Euler angles like (from my code):
var xRotationMatrix = $M([
    [1, 0,  0,   0],
    [0, cx, -sx, 0],
    [0, sx, cx,  0],
    [0, 0,  0,   1]
]);
var yRotationMatrix = $M([
    [ cy, 0, sy, 0],
    [ 0,  1, 0,  0],
    [-sy, 0, cy, 0],
    [ 0,  0, 0,  1]
]);
var zRotationMatrix = $M([
    [cz, -sz, 0, 0],
    [sz,  cz, 0, 0],
    [ 0,  0,  1, 0],
    [ 0,  0,  0, 1]
]);
Which results in a final rotation matrix as:
R(YZX) = |  cy.cz,  -cy.sz.cx + sy.sx,   cy.sz.sx + sy.cx,  0 |
         |  sz,      cz.cx,             -cz.sx,             0 |
         | -sy.cz,   sy.sz.cx + cy.sx,  -sy.sz.sx + cy.cx,  0 |
         |  0,       0,                  0,                 1 |
I'm calculating my euler angles back from this matrix using this code:
this.anglesFromMatrix = function(m) {
    var y = 0, x = 0, z = 0;
    if (m.e(2, 1) > 0.999) {
        y = Math.atan2(m.e(1, 3), m.e(3, 3));
        z = Math.PI / 2;
        x = 0;
    } else if (m.e(2, 1) < -0.999) {
        y = Math.atan2(m.e(1, 3), m.e(3, 3));
        z = -Math.PI / 2;
        x = 0;
    } else {
        y = Math.atan2(-m.e(3, 1), -m.e(1, 1));
        x = Math.atan2(-m.e(2, 3), m.e(2, 2));
        z = Math.asin(m.e(2, 1));
    }
    return {theta: this.deg(x), phi: this.deg(y), psi: this.deg(z)};
};
I've done the maths backwards and forwards a few times, but I can't see what's wrong. Any help would be hugely appreciated.
Your matrix and Euler angles aren't consistent. It looks like you should be using
y = Math.atan2(-m.e(3, 1), m.e(1, 1));
instead of
y = Math.atan2(-m.e(3, 1), -m.e(1, 1));
for the general case (the else branch).
I said "looks like" because -- what language is this? I'm assuming you have the indexing correct for this language. Are you sure about atan2? There is no single convention for atan2. In some programming languages the sine term is the first argument, in others, the cosine term is the first argument.
The last and most important branch of the anglesFromMatrix function has a small sign error but otherwise works correctly. Use
y = Math.atan2(-m.e(3, 1), m.e(1, 1))
since of the two entries m.e(1, 1) = cy.cz and m.e(3, 1) = -sy.cz, only m.e(3, 1) should be negated. I haven't checked the other branches for errors.
Beware that since sz = m.e(2, 1) has two solutions, the angles (x, y, z) used to construct the matrix m might not be the same as the angles (rx, ry, rz) returned by anglesFromMatrix(m). Instead we can test that the matrix rm constructed from (rx, ry, rz) does indeed equal m.
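Here is a round-trip check of the corrected general case (a Python/NumPy sketch rather than the original JavaScript/Sylvester code; the angle values are arbitrary, and only the 3x3 rotation part is built):
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def angles_from_matrix(m):
    # general case only; m is 0-indexed here, so m[1, 0] is m.e(2, 1) in the question
    z = np.arcsin(m[1, 0])               # m21 = sz
    y = np.arctan2(-m[2, 0], m[0, 0])    # m31 = -sy.cz, m11 = cy.cz (corrected sign)
    x = np.arctan2(-m[1, 2], m[1, 1])    # m23 = -cz.sx, m22 = cz.cx
    return x, y, z

x, y, z = 0.3, -1.1, 0.7                 # arbitrary test angles in radians
R = ry(y) @ rz(z) @ rx(x)                # same Y*Z*X composition as the matrix above
rx_, ry_, rz_ = angles_from_matrix(R)
R2 = ry(ry_) @ rz(rz_) @ rx(rx_)

print(np.allclose(R, R2))                        # True: the rebuilt matrix matches
print(np.allclose([x, y, z], [rx_, ry_, rz_]))   # True here, but not guaranteed in general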
I worked on this problem extensively to come up with the correct angles for a given matrix. The problem in the math comes from the inability to determine a unique value for the sine, since -sin(x) = sin(-x), and this affects the other values of the matrix. The solution I came up with produces two equally valid solutions out of eight possible ones. I used a standard Z.Y.X matrix form, but it should be adaptable to any matrix. Start by finding the three angles from: X = atan(m32, m33), Y = -asin(m31), Z = atan(m21, m11). Then create the angles X' = -sign(X)*PI + X, Y' = sign(Y)*PI - Y, Z' = -sign(Z)*PI + Z. Using these angles, create eight sets of angles: XYZ, X'YZ, XYZ', X'YZ', X'Y'Z', XY'Z', X'Y'Z, XY'Z.
Use these sets to create the eight corresponding matrices. Then compute the sum of the differences between the unknown matrix and each of these matrices, i.e. the sum over all elements of the unknown matrix minus the corresponding elements of the test matrix. After doing this, two of the sums will be zero, and those matrices represent the solution angles to the original matrix. This works for all possible angle combinations, including zeros. As zeros are introduced, more of the eight test matrices become valid; at (0, 0, 0) they all become identity matrices!
Hope this helps, it worked very well for my application.
Bruce
update
After finding problems with Y = -90 or 90 degrees in the solution above, I came up with this solution, which seems to reproduce the matrix at all values!
X = if(or(m31=1,m31=-1),0,atan(m33+1e-24,m32))
Y = -asin(m31)
Z = if(or(m31=1,m31=-1),-atan2(m22,m12),atan2(m11+1e-24,m21))
I went the long way around to find this solution, but it was very enlightening :o)
Hope this helps!
Bruce
