Using forward and reverse mode automatic differentiation to compute higher-order derivatives

I'm currently trying to wrap my head around automatic differentiation. I've gotten to the point where I have
successfully implemented both forward and reverse mode autodiff. Now to my question:
Given a function f, I want to compute the second derivative. How do I
compose forward and reverse mode to achieve this? My current understanding is that you first run
one pass of forward mode and then run reverse mode on top of this pass, but this doesn't yield the
expected results.
Help would be very appreciated. Have a nice day :)
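For what it's worth, both orderings can compute second derivatives, but the composition most implementations recommend is the other way around ("forward-over-reverse"): take the gradient with reverse mode first, then differentiate that gradient function with forward mode. Each forward pass over the gradient yields one Hessian-vector product H(x)v = d/dε ∇f(x + εv)|ε=0. A minimal sketch of this composition using JAX, whose combinators implement exactly this (the function f below is just an arbitrary example):

import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) * x**2)

grad_f = jax.grad(f)           # reverse mode: x -> gradient of f at x
hess_f = jax.jacfwd(grad_f)    # forward mode over the gradient: full Hessian

x = jnp.array([0.5, 1.0, 2.0])
print(hess_f(x))               # 3x3 symmetric Hessian

# A single Hessian-vector product needs only one forward (dual-number) pass
# through the reverse-mode gradient, without materialising the full Hessian:
def hvp(func, x, v):
    return jax.jvp(jax.grad(func), (x,), (v,))[1]

print(hvp(f, x, jnp.ones(3)))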

Related

Heaviside theta integration with Mathematica

I am trying to integrate a Heaviside theta function with two signs inside and Mathematica won't give me a solution. Is there any way of improving the approach before just acknowledging that Mathematica cannot integrate it?
Some things you do really worry me.
You use X as a variable and as the name of a function. I've changed those to X and Xf.
You use ω as a variable and as the name of a function. I've changed those to ω and ωf.
You use = and not := in your function definitions. I've changed that.
With those changes I have
Xf[s1_,s2_,α_]:=(1+s2(-2+α)-α+s1(-1+2s2+α))/((-1+2 s1)(-1+2s2));
ωf[s1_,s2_,α_]:=((1-s2+s1(-3+4s2))(1+s2(-2+α)-α+s1(-1+2s2+α)))/((-1+2s1)(-1+s1+s2)(-1+2s2));
λ1[s1_,s2_,α_,X_,ω_]:=1/2(s1-2s2-3s1 X+3s2 X-α+s1 α+s2 α+ω-s1 ω-s2 ω-
1/2Sqrt[(4(α-ω+s2(2-3X-α-ω)+s1(-1+3X-α+ω))^2+8(2-5X+4X^2-2α+2X α+
s1(-2+4s2(1-2X)^2+7X-8X^2+2α-2X α-ω)+ω-s2(4-11X+8X^2-2α+2X α+ω)))]);
λ2[s1_,s2_,α_,X_,ω_]:=1/2(s1-2s2-3s1 X+3s2 X-α+s1 α+s2 α+ω-s1 ω-s2 ω+
1/2Sqrt[(4(α-ω+s2(2-3X-α-ω)+s1(-1+3X-α+ω))^2+8(2-5X+4X^2-2α+2X α+
s1(-2+4s2(1-2X)^2+7X-8X^2+2α-2X α-ω)+ω-s2(4-11X+8X^2-2α+2X α+ω)))]);
λ1simp[s1_,s2_,α_]:=λ1[s1,s2,α,Xf[s1,s2,α],ωf[s1,s2,α]];
λ2simp[s1_,s2_,α_]:=λ2[s1,s2,α,Xf[s1,s2,α],ωf[s1,s2,α]];
fint[s1_,s2_]:=HeavisideTheta[Sign[-λ1simp[s1,s2,α]]*Sign[-λ2simp[s1,s2,α]]];
Please check that very carefully to see if I've made any mistakes.
Now I want to look at your integrand and see what Mathematica sees.
Simplify[fint[s1,s2],1/2<=s1<=1&&1/2<=s2<=s1]
And it responds with a piecewise result along the lines of
HeavisideTheta[ComplexInfinity]        2s1==1 || 2s2==1 || s1+s2==1
HeavisideTheta[Sign[...]*Sign[...]]    True
so it looks like your integrand is blowing up at the boundary.
I check that with
Simplify[fint[1/2,s2]]
or
Simplify[fint[s1,1/2]]
and it responds with 1/0 and Indeterminate warnings and returns HeavisideTheta[Indeterminate]
When it isn't at the boundary, for example
Simplify[fint[3/4,3/4]]
it returns
HeavisideTheta[Sign[5-4*Sqrt[7-28*α+25*α^2]]*Sign[5+4*Sqrt[7-28*α+25*α^2]]]
and that probably says that α is a free variable and we aren't able to determine the value of the Sign without more information.
So I think this is a strong hint as to where I would begin looking for why your integration is not simply completing.
If you are curious what that integrand looks like then try
α=1/4;
Plot3D[fint[s1,s2],{s1,1/2,1},{s2,1/2,s1}]

Using results from ODEProblem while it is running

I’m currently studying the documentation of DifferentialEquations.jl and trying to port my older computational neuroscience code to use it instead of my own, less elegant and less performant, ODE solvers. While doing this, I stumbled upon the following question: is it possible to access and use the results returned from the solver as soon as the current step is returned (instead of waiting for the problem to finish)?
I’m looking for a way to, e.g., plot the voltage levels of a simulated neuron in real time, which seems like a simple enough task, and one that’s probably trivial to do using existing Julia packages, but I can’t figure out how. Does it have anything to do with callbacks? Thanks in advance.
Plots.jl doesn't seem to be animating for me right now, but I'll show you the steps anyways. Yes, you can use a DiscreteCallback for this. If you make condition(u,t,integrator)=true then the affect! is called every step, and you could do that.
But, I think using the integrator interface is perfect for this case. Let me show you an example of this. Take the 2D problem from the tutorial:
using DifferentialEquations
using Plots
A = [1. 0 0 -5
     4 -2 4 -3
    -4 0 0 1
     5 -2 2 3]                   # coefficient matrix of the linear ODE u' = A*u
u0 = rand(4,2)                   # random 4x2 matrix initial condition
tspan = (0.0,1.0)
f(u,p,t) = A*u
prob = ODEProblem(f,u0,tspan)
Now instead of using solve, use init to get an integrator out.
integrator = init(prob,Tsit5())
The integrator interface is defined in full at its documentation page, but the basic usage is that you can step using step!. If you put that in a loop and keep stepping then that's essentially what solve does. But it also has the iterator interface, so if you do something like for integ in integrator then inside of the for loop integ will be the current state of the integrator, with values integ.u at time point integ.t. It also has all sorts of things like a plot recipe for intermediate interpolation integ(t) (this is true even when dense=false because it's free and doesn't require extra saving allocations, so feel free to use it).
So, you can do
p = plot(integrator,markersize=0,legend=false,xlims=tspan)
anim = @animate for integ in integrator
    plot!(p,integ,lw=3)
end
plot(p)
gif(anim, "test.gif", fps = 2)
and Plots.jl will give you an animated gif that adds the current interval at each step. In the final plot, each step is colored differently because each came from a separate plot! call, so you can see how the solution continued. Of course, you can do anything inside of that loop, or if you want more control you can manually step!(integrator) as necessary.

Wreath Product Of Groups In Sagemath

Can anyone help me with taking Wreath Products of Groups in Sagemath?
I haven't been able to find an online reference, and it doesn't appear to be built in as far as I can tell.
As far as I know, you would have to use GAP to compute them within Sage (and then can manipulate them from within Sage as well). See e.g. this discussion from 2012. This question has information about it, here is the documentation, and here it is within Sage:
F = AbelianGroup(3, [2]*3)                        # (Z/2Z)^3
G = PermutationGroup([[(1,2,3),(4,5)], [(3,4)]])
Gp = gap.StandardWreathProduct(gap(F), gap(G))
print(Gp)
However, if you try to get this back into Sage, you will get a NotImplementedError, because Sage doesn't understand what GAP returns in this wacky case (which I hope is even legitimate). Presumably, if a recognized group were returned, one could eventually get it back into Sage for further processing. In this case (though not always), you might be better off doing the computations in GAP and then putting the results back into Sage after doing all your group stuff.
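A minimal sketch of that round trip, assuming you only need plain invariants (not the group itself) back on the Sage side; the Size method call here is dispatched through the GAP interface to GAP's Size function:

# Stay on the GAP side for the group theory, then pull back scalar values,
# which Sage does understand, unlike the wreath product group itself.
order = Gp.Size()       # dispatched to GAP's Size(...), stays a GAP element
print(order.sage())     # plain values like this convert back to Sage cleanly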

Collision detection in paper.js

I'm implementing a simple game in Paper.js for educational purposes. The game features some bacteria, whose bodies are Path.RoundedRectangles. I'm trying to write a function colliding(roundedRect1, roundedRect2) using Paper's PathItem.intersects(item) method, but it returns true every time!
Before I scrap this tactic and write my own collision detection, I'm wondering if anybody has successfully used Paper's built-in intersects for this. Thanks!
The code you use, path.intersects(otherPath), does return true or false.
You can take a look at a simple example showing that intersects works here:
Simple Paper Sketch showing the intersect function.

Strange behavior when implementing Back propagation in DBN

Currently I'm trying to implement a Deep Belief Network, but I've run into a very strange problem. My source code can be found here: https://github.com/mistree/GoDeep/blob/master/GoDeep/
I first implemented the RBM using CD, and it works perfectly (by using Golang's concurrency features it's quite fast). Then I started to implement a normal feed-forward network with back propagation, and that's where the strange thing happens. It seems very unstable: when I run it on an XOR-gate test it sometimes fails, and only when I set the hidden layer to 10 or more nodes does it never fail. Below is how I calculate it:
Step 1 : calculate all the activation with bias
Step 2 : calculate the output error
Step 3 : back propagate the error to each node
Step 4 : calculate the delta weight and bias for each node with momentum
I run Step 1 to Step 4 over a full batch and sum up these delta weights and biases
Step 5 : apply the averaged delta weights and biases (see the sketch after this list)
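For reference, here are those five steps as a minimal full-batch sketch for the XOR case, in numpy rather than the poster's Go (the layer size, learning rate, and momentum are illustrative assumptions, not values from the linked repository):

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 10                                    # matches the "10 or more" observation
W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
lr, momentum = 0.5, 0.9                          # assumed hyperparameters
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Step 1: activations with bias
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Step 2: output error (squared error, times the sigmoid derivative)
    d_out = (out - y) * out * (1 - out)
    # Step 3: back propagate the error to the hidden layer
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Step 4: delta weights and biases with momentum, summed over the
    # full batch and averaged
    vW2 = momentum * vW2 - lr * (h.T @ d_out) / len(X)
    vb2 = momentum * vb2 - lr * d_out.mean(axis=0)
    vW1 = momentum * vW1 - lr * (X.T @ d_h) / len(X)
    vb1 = momentum * vb1 - lr * d_h.mean(axis=0)
    # Step 5: apply the averaged delta weights and biases
    W1 += vW1; b1 += vb1; W2 += vW2; b2 += vb2

print(np.round(out.ravel(), 3))                  # should approach [0, 1, 1, 0]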
I followed the tutorial here http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
And normally it works if I give it more hidden layer nodes. My test code is here https://github.com/mistree/GoDeep/blob/master/Test.go
So I thought it should work, and started to implement the DBN by combining the RBM and the normal NN. However, the results then became really bad. It can't even learn an XOR gate in 1000 iterations, and sometimes it goes totally wrong. To debug this, after the pre-training of the DBN I do a reconstruction. Most times the reconstruction looks good, but the back propagation fails even when the pre-training result is perfect.
I really don't know what's wrong with the back propagation. I must have misunderstood the algorithm or made some big mistake in the implementation.
If possible, please run the test code and you'll see how weird it is. The code itself is quite readable. Any hint would be a great help. Thanks in advance.
I remember Hinton saying you can't train RBMs on an XOR; something about the vector space doesn't allow a two-layer network to work. Deeper networks are less linear, which allows it to work.
