Controlling the 'levels' of differentiation in SageMath 9.1

Sage seems to want to evaluate derivatives as far as possible, applying the product and chain rules. A simple example is:
var('theta')
f = function('f')(theta)
g = function('g')(theta)
h = f*g
diff(h,theta)
which would display
g(theta)*diff(f(theta), theta) + f(theta)*diff(g(theta), theta)
My question is, is there a way to control just how far Sage will take derivatives? In the example above for instance, how would I get Sage to display instead:
diff(f(theta)*g(theta))
I'm working through some pretty intensive derivations in fluid mechanics, and being able to keep Sage from evaluating derivatives all the way, as discussed above, would really help. Thanks in advance; I'd appreciate any help on this.

This would be called "holding" the derivative.
Adding this possibility to Sage has already been considered.
Progress on this is tracked at:
Sage Trac ticket 24861
and the ticket even links to a branch with code implementing this.
Although progress on this is stalled, and the branch has not been merged,
you could use the code from the branch.
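In the meantime, a possible workaround (a minimal sketch, not the ticket's implementation; the placeholder name h is just for illustration) is to hide the product behind a formal function, so Sage has nothing to expand until you differentiate the actual product:
var('theta')
f = function('f')(theta)
g = function('g')(theta)
h = function('h')(theta)       # formal placeholder standing in for f(theta)*g(theta)
held = diff(h, theta)          # stays unevaluated: diff(h(theta), theta)
expanded = diff(f*g, theta)    # g(theta)*diff(f(theta), theta) + f(theta)*diff(g(theta), theta)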

Related

Heaviside theta integration with Mathematica

I am trying to integrate a Heaviside theta function with two signs inside and Mathematica won't give me a solution. Is there any way of improving the approach before just acknowledging that Mathematica cannot integrate it?
Some things you do really worry me.
You use X as a variable and as the name of a function. I've changed those to X and Xf.
You use ω as a variable and as the name of a function. I've changed those to ω and ωf.
You use = and not := in your function definitions. I've changed that.
With those changes I have
Xf[s1_,s2_,α_]:=(1+s2(-2+α)-α+s1(-1+2s2+α))/((-1+2 s1)(-1+2s2));
ωf[s1_,s2_,α_]:=((1-s2+s1(-3+4s2))(1+s2(-2+α)-α+s1(-1+2s2+α)))/((-1+2s1)(-1+s1+s2)(-1+2s2));
λ1[s1_,s2_,α_,X_,ω_]:=1/2(s1-2s2-3s1 X+3s2 X-α+s1 α+s2 α+ω-s1 ω-s2 ω-
1/2Sqrt[(4(α-ω+s2(2-3X-α-ω)+s1(-1+3X-α+ω))^2+8(2-5X+4X^2-2α+2X α+
s1(-2+4s2(1-2X)^2+7X-8X^2+2α-2X α-ω)+ω-s2(4-11X+8X^2-2α+2X α+ω)))]);
λ2[s1_,s2_,α_,X_,ω_]:=1/2(s1-2s2-3s1 X+3s2 X-α+s1 α+s2 α+ω-s1 ω-s2 ω+
1/2Sqrt[(4(α-ω+s2(2-3X-α-ω)+s1(-1+3X-α+ω))^2+8(2-5X+4X^2-2α+2X α+
s1(-2+4s2(1-2X)^2+7X-8X^2+2α-2X α-ω)+ω-s2(4-11X+8X^2-2α+2X α+ω)))]);
λ1simp[s1_,s2_,α_]:=λ1[s1,s2,α,Xf[s1,s2,α],ωf[s1,s2,α]];
λ2simp[s1_,s2_,α_]:=λ2[s1,s2,α,Xf[s1,s2,α],ωf[s1,s2,α]];
fint[s1_,s2_]:=HeavisideTheta[Sign[-λ1simp[s1,s2,α]]*Sign[-λ2simp[s1,s2,α]]];
Please check that very carefully to see if I've made any mistakes.
Now I want to look at your integrand and see what Mathematica sees.
Simplify[fint[s1,s2],1/2<=s1<=1&&1/2<=s2<=s1]
And it responds with
HeavisideTheta[ComplexInfinity]        2 s1 == 1 || 2 s2 == 1 || s1 + s2 == 1
HeavisideTheta[Sign[...]*Sign[...]]    True
so it looks like your integrand is blowing up at the boundary.
I check that with
Simplify[fint[1/2,s2]]
or
Simplify[fint[s1,1/2]]
and it responds with 1/0 and Indeterminate warnings, and HeavisideTheta[Indeterminate]
When it isn't at the boundary, for example
Simplify[fint[3/4,3/4]]
it returns
HeavisideTheta[Sign[5-4*Sqrt[7-28*α+25*α^2]]*Sign[5+4*Sqrt[7-28*α+25*α^2]]]
and that probably says that α is a free variable and we aren't able to determine the value of the Sign without more information.
So I think this is a strong hint about where I would begin looking for why your integration is not simply completing.
If you are curious what that integrand looks like then try
α=1/4;
Plot3D[fint[s1,s2],{s1,1/2,1},{s2,1/2,s1}]

CRAN package submission: "Error: C stack usage is too close to the limit"

Right upfront: this is an issue I encountered when submitting an R package to CRAN. So I
don't have control of the stack size (as the issue occurred on one of CRAN's platforms), and I
can't provide a reproducible example (as I don't know the exact configurations on CRAN).
Problem
When trying to submit the cSEM.DGP package to CRAN the automatic pretest (for Debian x86_64-pc-linux-gnu; not for Windows!) failed with the NOTE: C stack usage 7975520 is too close to the limit.
I know this is caused by a function with three arguments whose body is about 800 lines long. The function body consists of additions and multiplications of these arguments. It is the function varzeta6(), which you can find here (from row 647 onwards).
How can I address this?
Things I can't do:
provide a reproducible example (at least I would not know how)
change the stack size
Things I am thinking of:
try to break the function into smaller pieces. But I don't know how best to do that.
somehow precompile the function (to be honest, I am just guessing) so CRAN doesn't complain?
Let me know your ideas!
Details / Background
The reason why varzeta6() (and varzeta4() / varzeta5(), and even more so varzeta7()) are so long and R-inefficient is that they are essentially copy-pasted from Mathematica (after simplifying the Mathematica code as well as possible and adapting it to be valid R code). Hence, the code is by no means R-optimized (which @MauritsEvers rightly pointed out).
Why do we need Mathematica? Because what we need is the general form for the model-implied construct correlation matrix of a recursive structural equation model with up to 8 constructs, as a function of the parameters of the model equations. In addition there are constraints.
To get a feel for the problem, let's take a system of two equations that can be solved recursively:
Y2 = beta1*Y1 + zeta1
Y3 = beta2*Y1 + beta3*Y2 + zeta2
What we are interested in is the covariances: E(Y1*Y2), E(Y1*Y3), and E(Y2*Y3) as a function of beta1, beta2, beta3 under the constraint that
E(Y1) = E(Y2) = E(Y3) = 0,
E(Y1^2) = E(Y2^2) = E(Y3^2) = 1
E(Yi*zeta_j) = 0 (with i = 1, 2, 3 and j = 1, 2)
For such a simple model, this is rather trivial:
E(Y1*Y2) = E(Y1*(beta1*Y1 + zeta1)) = beta1*E(Y1^2) + E(Y1*zeta1) = beta1
E(Y1*Y3) = E(Y1*(beta2*Y1 + beta3*(beta1*Y1 + zeta1) + zeta2)) = beta2 + beta3*beta1
E(Y2*Y3) = ...
But you see how quickly this gets messy when you add Y4, Y5, until Y8.
In general the model-implied construct correlation matrix can be written as (the expression actually looks more complicated because we also allow for up to 5 exogenous constructs. This is why varzeta1() already looks complicated. But ignore this for now.):
V(Y) = (I - B)^-1 V(zeta)(I - B)'^-1
where I is the identity matrix and B a lower triangular matrix of model parameters (the betas). V(zeta) is a diagonal matrix. The functions varzeta1(), varzeta2(), ..., varzeta7() compute the main diagonal elements. Since we constrain Var(Yi) to always be 1, the variances of the zetas follow. Take for example the equation Var(Y2) = beta1^2*Var(Y1) + Var(zeta1) --> Var(zeta1) = 1 - beta1^2. This looks simple here, but it becomes extremely complicated when we take the variance of, say, the 6th equation in such a chain of recursive equations, because Var(zeta6) depends on all previous covariances between Y1, ..., Y5, which are themselves dependent on their respective previous covariances.
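To make the (I - B)^-1 sandwich concrete, here is a minimal symbolic sketch of the two-equation example above, written in Python/SymPy purely for illustration (it is not the package code, and the names are mine):
from sympy import symbols, Matrix, eye, simplify

beta1, beta2, beta3 = symbols('beta1 beta2 beta3')
B = Matrix([[0,     0,     0],
            [beta1, 0,     0],
            [beta2, beta3, 0]])                        # lower triangular path coefficients

# Var(zeta) follows from the unit-variance constraints Var(Yi) = 1
v1 = 1                                                 # Var(Y1); Y1 is exogenous
v2 = 1 - beta1**2                                      # Var(zeta1)
v3 = 1 - (beta2**2 + beta3**2 + 2*beta1*beta2*beta3)   # Var(zeta2), using E(Y1*Y2) = beta1
Vzeta = Matrix.diag(v1, v2, v3)

inv = (eye(3) - B).inv()
VY = simplify(inv * Vzeta * inv.T)                     # V(Y) = (I - B)^-1 V(zeta) (I - B)'^-1
print(VY[0, 1])                                        # beta1               = E(Y1*Y2)
print(VY[0, 2])                                        # beta1*beta3 + beta2 = E(Y1*Y3)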
OK, I don't know if that makes things any clearer. Here are the main points:
The code for varzeta1(), ..., varzeta7() is copy-pasted from Mathematica and hence not R-optimized.
Mathematica is required because, as far as I know, R cannot handle symbolic calculations.
I could R-optimize "by hand" (which is extremely tedious).
I think the structure of the varzetaX() functions must be taken as given. The question therefore is: can I somehow use these functions anyway?
One conceivable approach is to try to convince the CRAN maintainers that there's no easy way for you to fix the problem. This is a NOTE, not a WARNING; the CRAN repository policy says
In principle, packages must pass R CMD check without warnings or significant notes to be admitted to the main CRAN package area. If there are warnings or notes you cannot eliminate (for example because you believe them to be spurious) send an explanatory note as part of your covering email, or as a comment on the submission form
So, you could take a chance that your well-reasoned explanation (in the comments field on the submission form) will convince the CRAN maintainers. In the long run it would be best to find a way to simplify the computations, but it might not be necessary to do it before submission to CRAN.
This is a bit too long as a comment, but hopefully this will give you some ideas for optimising the code for the varzeta* functions; or at the very least, it might give you some food for thought.
There are a few things that confuse me:
All varzeta* functions have arguments beta, gamma and phi, which seem to be matrices. However, in varzeta1 you don't use beta, yet beta is the first function argument.
I struggle to link the details you give at the bottom of your post with the code for the varzeta* functions. You don't explain where the gamma and phi matrices come from, nor what they denote. Furthermore, seeing that the betas are the model's parameter estimates, I don't understand why beta should be a matrix.
As I mentioned in my earlier comment, I would be very surprised if these expressions cannot be simplified. R can do a lot of matrix operations quite comfortably, there shouldn't really be a need to pre-calculate individual terms.
For example, you can use crossprod and tcrossprod to calculate cross products, and %*% implements matrix multiplication.
Secondly, a lot of mathematical operations in R are vectorised. I already mentioned that you can simplify
1 - gamma[1,1]^2 - gamma[1,2]^2 - gamma[1,3]^2 - gamma[1,4]^2 - gamma[1,5]^2
as
1 - sum(gamma[1, ]^2)
since the ^ operator is vectorised.
Perhaps more fundamentally, this seems somewhat of an XY problem to me where it might help to take a step back. Not knowing the full details of what you're trying to model (as I said, I can't link the details you give to the cSEM.DGP code), I would start by exploring how to solve the recursive SEM in R. I don't really see the need for Mathematica here. As I said earlier, matrix operations are very standard in R; analytically solving a set of recursive equations is also possible in R. Since you seem to come from the Mathematica realm, it might be good to discuss this with a local R coding expert.
If you must use those scary varzeta* functions (and I really doubt that), an option may be to rewrite them in C++ and then compile them with Rcpp to turn them into R functions. Perhaps that will avoid the C stack usage limit?

Wreath Product Of Groups In Sagemath

Can anyone help me with taking Wreath Products of Groups in Sagemath?
I haven't been able to find an online reference, and it doesn't appear to be built in as far as I can tell.
As far as I know, you would have to use GAP to compute them within Sage (and then can manipulate them from within Sage as well). See e.g. this discussion from 2012. This question has information about it, here is the documentation, and here it is within Sage:
F = AbelianGroup(3,[2]*3)
G = PermutationGroup([[(1,2,3),(4,5)],[(3,4)]])
Gp = gap.StandardWreathProduct(gap(F),gap(G))
print(Gp)
However, if you try to get this back into Sage, you will get a NotImplementedError because Sage doesn't understand what GAP returns in this wacky case (which I hope even is legitimate). Presumably if a recognized group is returned then one could eventually get it back to Sage for further processing. In this case, you might be better off doing some GAP computations and then putting them back in Sage after doing all your group stuff (which isn't always the case).
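If you do need the result back in Sage, one conceivable route (a sketch only, and only feasible when the wreath product is small enough for GAP to build a permutation representation; the groups below are small stand-ins, not the ones above) is to go through IsomorphismPermGroup first:
F = CyclicPermutationGroup(2)
G = CyclicPermutationGroup(3)
W = gap.WreathProduct(gap(F), gap(G))                  # C2 wr C3, already a GAP permutation group
iso = gap.IsomorphismPermGroup(W)                      # trivial here, but needed for other representations
W_sage = PermutationGroup(gap_group=gap.Image(iso))    # hand the GAP permutation group back to Sage
W_sage.order()                                         # 24 = 2^3 * 3
Whether this works depends on GAP actually returning a permutation group that Sage recognizes, so treat it as a starting point rather than a guarantee.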

What happens to the Schweizer T-norm when p goes to zero?

I am reading Jang's book Neuro-Fuzzy and Soft Computing, and in the 2nd chapter the author talks about the Schweizer and Sklar T-norm, which is presented by this equation:
Tss(a, b, p) = (max{0, a^-p + b^-p - 1})^(-1/p)
It's a handy T-norm. In the exercises (#20, page 45) it asks what would happen to Tss(a, b, p) if p -> 0.
In fact it asks to show that the whole expression is going to be just ab in the end.
I tried different things and at last I used Ln, but I got this: -1/p Ln(a^-p + b^-p), and I have no idea where to go from here!
Can anybody suggest anything? Thanks for your help.
P.S.: is there any simple way of expanding Ln(x+y) in general?
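A sketch of where that limit comes from, assuming the book's definition keeps a "- 1" inside the bracket (i.e. Tss(a, b, p) = (a^-p + b^-p - 1)^(-1/p) away from the max-with-0 branch): take logs, Ln(Tss) = -Ln(a^-p + b^-p - 1)/p, which is 0/0 as p -> 0, so L'Hopital gives (a^-p*Ln(a) + b^-p*Ln(b))/(a^-p + b^-p - 1) -> Ln(a) + Ln(b) = Ln(ab), hence Tss -> ab. A quick numerical check in Python (the values of a and b are arbitrary):
a, b = 0.3, 0.7
for p in [1.0, 0.1, 0.01, 0.001]:
    T = (a**-p + b**-p - 1) ** (-1 / p)
    print(p, T)                  # approaches a*b = 0.21 as p shrinks
print("a*b =", a * b)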

Mathematica not evaluating any trigonometric integrals

When I input any integral, say of sin[x] or ln[x], all I get back is the input. And from what I understand that usually means that Mathematica can't evaluate the integral... What am I missing here?
I'm new to stackoverflow so I can't post screenshots.
Read the documentation:
Sin[x] manual
Log[x] manual
Note also that Mathematica's built-in functions start with a capital letter, so it is Sin[x] and Log[x] rather than sin[x] and ln[x], and that by default the Log function is base e, therefore it is your Ln.
In addition, if you have further questions, Mathematica/Wolfram has its own subsite where you should ask related questions: https://mathematica.stackexchange.com/
As of version 9 you can query Wolfram Alpha directly, and it can even try to interpret human language (and fix syntax mistakes like this). To query Wolfram Alpha, start the line with =.
