The problem I'm trying to solve is this PDE.
I wrote the bulk of the solver as follows (mostly based on the explicit Euler method):
for n in 2:N-1
    wl[n] = w[n] + Δx*v[n]
    ϕl[n] = ϕ[n] + Δx*ρ[n]
    ρl[n] = ρ[n] - (Δt/Δx)*(dif_nt(ρ, n)*v[n]) + Δt*w[n]*ρ[n]
    vl[n] = v[n] - (Δt/Δx)*(v[n]*w[n]) + (μ*Δt/(ρ[n]*Δx))*dif_nt(w, n) +
            (c₀^2*Δt/(ρ[n]*Δx))*dif_nt(ρ, n) + (Δt/τ)*(V(ρ[n]) - v[n])
end
Here dif_nt is simply the forward difference u[n+1] - u[n] of a variable; nt stands for no-time.
function dif_nt(v, n)
    return v[n+1] - v[n]
end
How could I find a good approximation to (ρl[n], vl[n], ϕl[n], wl[n]), i.e., the next-step values in time?
What you are looking for has long been solved in MATLAB, where it's called pdepe. If you're just looking for an answer, I'd suggest switching to this more robust package if you can.
Unfortunately, the Julia ecosystem is not as developed as the MATLAB ecosystem when it comes to PDEs. The survey found in this repo might be of use to you. There are some implementations of finite-difference techniques in this repo.
Right upfront: this is an issue I encountered when submitting an R package to CRAN, so I
- don't have control of the stack size (the issue occurred on one of CRAN's platforms), and
- can't provide a reproducible example (I don't know the exact configuration used on CRAN).
Problem
When trying to submit the cSEM.DGP package to CRAN, the automatic pretest (for Debian x86_64-pc-linux-gnu; not for Windows!) failed with the NOTE: C stack usage 7975520 is too close to the limit.
I know this is caused by a function with three arguments whose body is about 800 lines long. The function body consists of additions and multiplications of these arguments. It is the function varzeta6(), which you can find here (from line 647 onwards).
How can I address this?
Things I can't do:
- provide a reproducible example (at least I would not know how)
- change the stack size
Things I am thinking of:
- breaking the function into smaller pieces, but I don't know how best to do that (one possibility is sketched right after this list)
- somehow precompiling the function (to be honest, I am just guessing) so CRAN doesn't complain
Let me know your ideas!
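For idea 1, the kind of split I have in mind looks like the toy example below: accumulate the long arithmetic expression in several intermediate statements so that no single parsed expression is huge (varzeta_toy and its body are made up; the real varzeta6() is vastly longer).
varzeta_toy <- function(beta, gamma, phi) {
  # one flat expression, as Mathematica emits it
  1 - gamma[1, 1]^2 - gamma[1, 2]^2 - 2 * gamma[1, 1] * gamma[1, 2] * phi[1, 2]
}
varzeta_toy_split <- function(beta, gamma, phi) {
  # same value, accumulated piece by piece in separate statements
  out <- 1
  out <- out - gamma[1, 1]^2 - gamma[1, 2]^2
  out <- out - 2 * gamma[1, 1] * gamma[1, 2] * phi[1, 2]
  out
}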
Details / Background
The reason why varzeta6() (and varzeta4() / varzeta5(), and even more so varzeta7()) is so long and R-inefficient is that it is essentially copy-pasted from Mathematica (after simplifying the Mathematica code as much as possible and adapting it to be valid R code). Hence, the code is by no means R-optimized (which @MauritsEvers rightly pointed out).
Why do we need Mathematica? Because what we need is the general form of the model-implied construct correlation matrix of a recursive structural equation model with up to 8 constructs, as a function of the parameters of the model equations. In addition, there are constraints.
To get a feel for the problem, let's take a system of two equations that can be solved recursively:
Y2 = beta1*Y1 + zeta1
Y3 = beta2*Y1 + beta3*Y2 + zeta2
What we are interested in is the covariances: E(Y1*Y2), E(Y1*Y3), and E(Y2*Y3) as a function of beta1, beta2, beta3 under the constraint that
E(Y1) = E(Y2) = E(Y3) = 0,
E(Y1^2) = E(Y2^2) = E(Y3^2) = 1
E(Yi*zeta_j) = 0 (with i = 1, 2, 3 and j = 1, 2)
For such a simple model, this is rather trivial:
E(Y1*Y2) = E(Y1*(beta1*Y1 + zeta1)) = beta1*E(Y1^2) + E(Y1*zeta1) = beta1
E(Y1*Y3) = E(Y1*(beta2*Y1 + beta3*(beta1*Y1 + zeta1) + zeta2)) = beta2 + beta3*beta1
E(Y2*Y3) = ...
But you see how quickly this gets messy when you add Y4 and Y5, all the way up to Y8.
In general, the model-implied construct correlation matrix can be written as follows (the actual expression looks more complicated because we also allow for up to 5 exogenous constructs, which is why varzeta1() already looks complicated; ignore this for now):
V(Y) = (I - B)^-1 V(zeta) (I - B)'^-1
where I is the identity matrix and B a lower triangular matrix of model parameters (the betas); V(zeta) is a diagonal matrix. The functions varzeta1(), varzeta2(), ..., varzeta7() compute the main diagonal elements. Since we constrain Var(Yi) to always be 1, the variances of the zetas follow. Take for example the equation Var(Y2) = beta1^2*Var(Y1) + Var(zeta1) --> Var(zeta1) = 1 - beta1^2. This looks simple here, but it becomes extremely complicated when we take the variance of, say, the 6th equation in such a chain of recursive equations, because Var(zeta6) depends on all the previous covariances between Y1, ..., Y5, which are themselves dependent on their respective previous covariances.
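As a sanity check, the recursion can be run numerically in R for the two-equation example above (a minimal sketch; the beta values are arbitrary illustrative numbers):
beta1 <- 0.3; beta2 <- 0.2; beta3 <- 0.4
# B is lower triangular; row i holds the coefficients of equation i
B <- matrix(c(0,     0,     0,
              beta1, 0,     0,
              beta2, beta3, 0), nrow = 3, byrow = TRUE)
IBinv <- solve(diag(3) - B)
# enforce Var(Y_i) = 1 equation by equation; since the diagonal of IBinv is 1,
# Var(zeta_i) enters Var(Y_i) with coefficient 1 and can be solved for directly
Vzeta <- diag(3)
for (i in 2:3) {
  VY <- IBinv %*% Vzeta %*% t(IBinv)
  Vzeta[i, i] <- Vzeta[i, i] - (VY[i, i] - 1)
}
IBinv %*% Vzeta %*% t(IBinv)   # model-implied correlation matrix V(Y)
Vzeta[2, 2]                    # equals 1 - beta1^2, as derived above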
OK, I don't know if that makes things any clearer. Here are the main points:
- The code for varzeta1(), ..., varzeta7() is copy-pasted from Mathematica and hence not R-optimized.
- Mathematica is required because, as far as I know, R cannot handle symbolic calculations.
- I could R-optimize "by hand" (which is extremely tedious).
- I think the structure of the varzetaX() functions must be taken as given. The question therefore is: can I somehow use these functions anyway?
One conceivable approach is to try to convince the CRAN maintainers that there's no easy way for you to fix the problem. This is a NOTE, not a WARNING; the CRAN repository policy says:
In principle, packages must pass R CMD check without warnings or significant notes to be admitted to the main CRAN package area. If there are warnings or notes you cannot eliminate (for example because you believe them to be spurious) send an explanatory note as part of your covering email, or as a comment on the submission form
So, you could take a chance that your well-reasoned explanation (in the comments field on the submission form) will convince the CRAN maintainers. In the long run it would be best to find a way to simplify the computations, but it might not be necessary to do it before submission to CRAN.
This is a bit too long for a comment, but hopefully it gives you some ideas for optimising the code of the varzeta* functions; or at the very least, it might give you some food for thought.
There are a few things that confuse me:
All varzeta* functions have arguments beta, gamma and phi, which seem to be matrices. However, in varzeta1 you don't use beta, yet beta is the first function argument.
I struggle to link the details you give at the bottom of your post with the code for the varzeta* functions. You don't explain where the gamma and phi matrices come from, nor what they denote. Furthermore, seeing that the betas are the model's parameter estimates, I don't understand why beta should be a matrix.
As I mentioned in my earlier comment, I would be very surprised if these expressions cannot be simplified. R can do a lot of matrix operations quite comfortably, there shouldn't really be a need to pre-calculate individual terms.
For example, you can use crossprod and tcrossprod to calculate cross products, and %*% implements matrix multiplication.
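For instance (a toy illustration):
A <- matrix(1:6, nrow = 2)
crossprod(A)    # same result as t(A) %*% A
tcrossprod(A)   # same result as A %*% t(A)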
Secondly, a lot of mathematical operations in R are vectorised. I already mentioned that you can simplify
1 - gamma[1,1]^2 - gamma[1,2]^2 - gamma[1,3]^2 - gamma[1,4]^2 - gamma[1,5]^2
as
1 - sum(gamma[1, ]^2)
since the ^ operator is vectorised.
Perhaps more fundamentally, this seems somewhat of an XY problem to me where it might help to take a step back. Not knowing the full details of what you're trying to model (as I said, I can't link the details you give to the cSEM.DGP code), I would start by exploring how to solve the recursive SEM in R. I don't really see the need for Mathematica here. As I said earlier, matrix operations are very standard in R; analytically solving a set of recursive equations is also possible in R. Since you seem to come from the Mathematica realm, it might be good to discuss this with a local R coding expert.
If you must use those scary varzeta* functions (and I really doubt that), an option may be to rewrite them in C++ and then compile them with Rcpp to turn them into R functions. Perhaps that will avoid the C stack usage limit?
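A minimal sketch of that route, with a toy body standing in for the real 800-line expression (varzeta_toy is made up):
library(Rcpp)
cppFunction('
double varzeta_toy(NumericMatrix beta, NumericMatrix gamma, NumericMatrix phi) {
  // the long, flat Mathematica expression would go here (note: 0-based indices)
  return 1.0 - gamma(0, 0) * gamma(0, 0) - gamma(0, 1) * gamma(0, 1);
}')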
All I know is that the answer will surely fit in a single integer.
The equation is of the form:
Ax^5 + Bx^3 + Cx^2 = D
I tried to brute-force the value of x, but was getting TLE (time limit exceeded). Can I use an optimised binary search, since I know only one root will be real?
You may want to look up the Newton-Raphson method, which is known to converge to the solution in just a few iterations.
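For illustration, a minimal Newton-Raphson sketch in R for A*x^5 + B*x^3 + C*x^2 = D (the coefficients in the call are borrowed from the example in the next answer; the starting point and tolerance are arbitrary):
newton_quintic <- function(A, B, C, D, x = 1, tol = 1e-9, max_iter = 200) {
  for (i in seq_len(max_iter)) {
    f  <- A * x^5 + B * x^3 + C * x^2 - D
    fp <- 5 * A * x^4 + 3 * B * x^2 + 2 * C * x   # derivative
    x_new <- x - f / fp
    if (abs(x_new - x) < tol) return(x_new)
    x <- x_new
  }
  x
}
newton_quintic(-15, 12, -203, -2.193113e+12)   # ~171, matching the uniroot example below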
You're just asking to find the zeros of a function when you've been guaranteed that there's no more than one zero. To put it concretely, let's assume you have the following equation:
-15x^5 + 12x^3 - 203x^2 = -2.193113e+12
You could use the root-finding function from your favorite statistical software package to find the root. For instance, here's how you would do it with uniroot in R:
uniroot(function(x) -15*x^5 + 12*x^3 - 203*x^2 + 2.193113e+12, c(-1000, 1000))$root
# [1] 171
You could try typing this into Wolfram Alpha.
Solve[3x^5 + 4x^3 + 5x^2 == 148, x]
I am trying to find the runtime of the recurrence:
T(n) = T(n-2) + n³.
Unrolling it, I arrive at the summation T(n) = T(n - 2k) + Σ_{i=0}^{k-1} (n - 2i)³, which for k = n/2 becomes Σ_{i=0}^{n/2-1} (n - 2i)³.
Solving that sum I get (1/8)·n²(n + 2)², which would make the runtime Θ(n⁴).
However, I think I did something wrong; does anyone have any ideas?
Why do you think it is wrong? This recurrence is clearly Θ(n⁴).
A more detailed solution can be obtained from WolframAlpha (did you know it solves recurrence equations?):
https://www.wolframalpha.com/input/?i=T%28n%29%3DT%28n-2%29%2Bn%5E3
You can also add some base cases, like T(0) = T(1) = 1:
https://www.wolframalpha.com/input/?i=T%28n%29%3DT%28n-2%29%2Bn%5E3%2C+T%281%29%3D1%2C+T%282%29%3D1
and finally an asymptotic plot, showing that it truly behaves like an n⁴ function.
Here is an attempt to show the steps of unrolling your recurrence, with the WolframAlpha engine solving the summation.
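For completeness, the unrolling can be spelled out by hand (a sketch, assuming n is even and T(0) = Θ(1)):
T(n) = T(n - 2k) + \sum_{i=0}^{k-1} (n - 2i)^3
Setting k = n/2 and substituting j = n/2 - i (so that n - 2i = 2j):
\sum_{j=1}^{n/2} (2j)^3 = 8 \sum_{j=1}^{n/2} j^3
                        = 8 \left( \frac{(n/2)(n/2 + 1)}{2} \right)^2
                        = \frac{n^2 (n + 2)^2}{8} = \Theta(n^4),
which matches the (1/8)·n²(n + 2)² in the question.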
R question: I'm looking for the fastest way to NUMERICALLY solve a bunch of arbitrary cubics known to have real coefficients and three real roots. The polyroot function in R is reported to use Jenkins-Traub's algorithm 419 for complex polynomials, but for real polynomials the authors refer to their earlier work. What are the faster options for a real cubic, or more generally for a real polynomial?
The numerical approach for doing this many times in a reliable, stable manner involves (1) forming the companion matrix, then (2) finding the eigenvalues of the companion matrix.
You may think this is a harder problem to solve than the original one, but this is how the solution is implemented in most production code (say, Matlab).
For the polynomial:
p(t) = c0 + c1 * t + c2 * t^2 + t^3
the companion matrix is:
[ 0  0  -c0 ]
[ 1  0  -c1 ]
[ 0  1  -c2 ]
Find the eigenvalues of such matrix; they correspond to the roots of the original polynomial.
For doing this very fast, download the eigenvalue subroutines from LAPACK, compile them, and link them to your code. Do this in parallel if you have too many sets of coefficients (say, about a million).
Notice that the coefficient of t^3 is one; if this is not the case for your polynomials, you will have to divide the whole polynomial by the leading coefficient and then proceed.
Good luck.
Edit: NumPy and Octave also rely on this methodology for computing the roots of polynomials. See, for instance, this link.
The fastest known way (that I'm aware of) to find the real solutions of a system of arbitrary polynomials in n variables is polyhedral homotopy. A detailed explanation is probably beyond a StackOverflow answer, but essentially it's a path-tracking algorithm that exploits the structure of each equation using toric geometry. Google will give you a number of papers.
Perhaps this question is better suited for mathoverflow?
Fleshing out Arietta's answer above:
> a <- c(1, 3, -4)   # c0, c1, c2 of p(t) = c0 + c1*t + c2*t^2 + t^3
> m <- matrix(c(0, 0, -a[1], 1, 0, -a[2], 0, 1, -a[3]), byrow = TRUE, nrow = 3)   # companion matrix
> roots <- eigen(m, symmetric = FALSE, only.values = TRUE)$values
Whether this is faster or slower than using the cubic solver in the GSL package (as suggested by knguyen above) is a matter of benchmarking it on your system.
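A rough benchmarking sketch (assuming the microbenchmark package is installed; the coefficients are the ones from the snippet above):
library(microbenchmark)
a <- c(1, 3, -4)
m <- matrix(c(0, 0, -a[1], 1, 0, -a[2], 0, 1, -a[3]), byrow = TRUE, nrow = 3)
microbenchmark(
  companion = eigen(m, symmetric = FALSE, only.values = TRUE)$values,
  polyroot  = polyroot(c(a, 1))   # same cubic, coefficients in increasing degree
)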
Do you need all 3 roots or just one? If just one, I would think Newton's method would work fine. If you need all 3, it might be problematic in circumstances where two of them are close together.
1) Solve for the roots of the derivative polynomial P' to locate the two critical points of P (see there for how to do it properly). Call those roots a and b (with a < b).
2) For the middle root, use a few steps of bisection between a and b, and when you're close enough, finish with Newton's method.
3) For the min and max roots, "hunt" for the solution. For the max root:
Start with x0 = b, x1 = b + (b - a) * lambda, where lambda is a moderate number (say 1.6).
Iterate x_n = b + (x_{n-1} - a) * lambda until P(x_n) and P(b) have different signs.
Then perform bisection + Newton between x_{n-1} and x_n; a sketch follows below.
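A sketch of steps 1–3 in R, using a cubic with known roots 1, 2, 3 for illustration (uniroot stands in for the bisection + Newton refinement):
P <- function(x) x^3 - 6 * x^2 + 11 * x - 6    # (x - 1)(x - 2)(x - 3)
crit <- sort(Re(polyroot(c(11, -12, 3))))      # roots of P': the a and b above
a <- crit[1]; b <- crit[2]
middle <- uniroot(P, c(a, b))$root             # middle root is bracketed by a and b
# hunt for the max root: step right geometrically until the sign changes
lambda <- 1.6
x_prev <- b
x_n <- b + (b - a) * lambda
while (sign(P(x_n)) == sign(P(b))) {
  x_prev <- x_n
  x_n <- b + (x_n - a) * lambda
}
maxroot <- uniroot(P, c(x_prev, x_n))$root     # refine within the sign-change bracket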
The common methods are available: Newton's Method, Bisection Method, Secant, Fixed point iteration, etc. Google any one of them.
If on the other hand you have a non-linear system (e.g. a system of N polynomial equations in N unknowns), a method such as high-order Newton may be used.
Have you tried looking into the GSL package http://cran.r-project.org/web/packages/gsl/index.html?
This is a really basic question, but this is the first time I've used MATLAB and I'm stuck.
I need to simulate a simple series RC network using 3 different numerical integration techniques. I think I understand how to use the ode solvers, but I have no idea how to enter the differential equation of the system. Do I need to do it via an m-file?
It's just a simple RC circuit in the form:
RC dy(t)/dt + y(t) = u(t)
with zero initial conditions. I have the values for R and C, the step length, and the simulation time, but I don't know how to use MATLAB particularly well.
Any help is much appreciated!
You are going to need a function file that takes t and y as input and gives dy as output. It would be its own file with the following header.
function dy = rigid(t,y)
Save it as rigid.m on the MATLAB path.
From there you would put in your differential equation. You now have a function. Here is a simple one:
function dy = rigid(t,y)
dy = sin(t);
From the command line or a script, you need to drive this function with ode45:
[T,Y] = ode45(@rigid,[0 2*pi],[0]);
This will give you your function (rigid.m) running from time 0 through time 2*pi with an initial y of zero.
Plot this:
plot(T,Y)
More of the MATLAB documentation is here:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/ode23tb.html
The Official MATLAB Crash Course (PDF warning) has a section on solving ODEs, as well as a lot of other resources I found useful when starting MATLAB.