I'm self-learning R and I keep seeing mentions of integrals or calculating integration in R when trawling through Google examples. The three books I'm using (Discovering Statistics Using R, R Cookbook, Beginning R) make no mention of calculating these.
I found a couple of questions on here, but they're not the same format/direction as this question, and they didn't provide much information either.
So my query is: how can you calculate one?
I guess h(x) is a uniform in the unit interval?
Given that I have only found "text" and no actual examples of this implementation, I may just dismiss it and move on if you guys can't verify anything. Thanks in advance.
I'm not quite sure whether you're asking a very straightforward question here or not, but
f <- function(x) x*(x^5-1)
integrate(f,0,1)
## -0.3571429 with absolute error < 4e-15
seems to work fine. Checking analytically:
f_indef <- function(x) x^7/7-x^2/2
f_indef(1)-f_indef(0)
## [1] -0.3571429
or for the lazy:
library(Ryacas)
(intres <- yacas("Integrate (x) x*(x^5-1)",addSemi=FALSE))
## x^7/7-x^2/2;
I can't get Eval.yacas to give me anything but NULL, though ...
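As a base-R workaround, assuming the antiderivative string that yacas printed above, you can parse it into an R function yourself and evaluate it at the bounds (antideriv_str and F are my own names):

```r
# Turn the antiderivative string printed by yacas into an R function,
# then evaluate it at the integration bounds.
antideriv_str <- "x^7/7 - x^2/2"
F <- function(x) eval(parse(text = antideriv_str), list(x = x))
F(1) - F(0)
## [1] -0.3571429
```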
Introduction
I'm doing research in computational contact mechanics, in which I try to solve a PDE using a finite difference method. Long story short, I need to solve a linear system like Ax = b.
The suspects
In the problem, the matrix A is sparse, and so I defined it accordingly. On the other hand, both x and b are dense arrays.
In fact, x is defined as x = A\b, the potential solution of the problem.
So, the least one might expect from this solution is that it satisfies Ax close to b in some sense. Great was my surprise when I found that
julia> norm(A*x-b) # Frobenius or 2-norm
5018.901093242197
The vector x does not solve the system! I've tried a lot of tricks to discover what is going on, but no clues so far. My first suspect is that I've found a bug, but I need more evidence to make this assertion.
The hints
Here are some tests that I've done to try to pinpoint the error
If you convert A to dense, the solution changes completely, and in fact it returns the correct solution.
I have repeated the process above in MATLAB, and it seems to work well with both sparse and dense matrices (that is, the sparse version does not agree with Julia's).
Not all sparse matrices cause a problem. I have tried other initial conditions and the solver seems to work quite well; I am not able to predict what property of the matrix might be causing this discrepancy. However:
A has a condition number of 120848.06, which is quite high, although MATLAB doesn't seem to complain. Also, the absolute error between the computed solution and the true solution is huge.
How to reproduce this "bug"
Download the .csv files in the following link
Run the following code in the folder containing the files (install the packages if necessary):
using DelimitedFiles, LinearAlgebra, SparseArrays;
A = readdlm("A.csv", ',');
b = readdlm("b.csv", ',');
x = readdlm("x.csv", ',');
A_sparse = sparse(A);
println(norm(A_sparse\b - x)); # You should get something close to zero, x is the solution of the sparse implementation
println(norm(A_sparse*x - b)); # You should get something not close to zero, something is not working!
Final words
It might easily be the case that I'm missing something. Are there any other implementations apart from the usual A\b to test against?
To solve a sparse square system, Julia chooses to do a sparse LU decomposition. For the specific matrix A in the question, this decomposition is numerically ill-conditioned, as evidenced by cond(lu(A_sparse).U) == 2.879548971708896e64. This in turn causes the solve routine to make numerical errors.
A quick solution is to use a QR decomposition instead, by running x = qr(A_sparse)\b.
The solve or LU routines might need to be fixed to handle this case, or at the least the maintainers need to know of this issue, so opening an issue on the GitHub repo would be a good idea.
(This is a rewrite of my comment on the question.)
Sage seems to want to evaluate derivatives as far as possible using the chain rule. A simple example is:
var('theta')
f = function('f')(theta)
g = function('g')(theta)
h = f*g
diff(h,theta)
which would display
g(theta)*diff(f(theta), theta) + f(theta)*diff(g(theta), theta)
My question is, is there a way to control just how far Sage will take derivatives? In the example above for instance, how would I get Sage to display instead:
diff(f(theta)*g(theta))
I'm working through some pretty intensive derivations in fluid mechanics, and being able to stop Sage from evaluating derivatives all the way, as discussed above, would really help with this. Thanks in advance; I'd appreciate any help on this.
This would be called "holding" the derivative.
Adding this possibility to Sage has already been considered.
Progress on this is tracked at:
Sage Trac ticket 24861
and the ticket even links to a branch with code implementing this.
Although progress on this has stalled and the branch has not been merged,
you could use the code from the branch.
Right up front: this is an issue I encountered when submitting an R package to CRAN, so
I don't have control of the stack size (the issue occurred on one of CRAN's platforms)
I can't provide a reproducible example (I don't know the exact configuration on CRAN)
Problem
When trying to submit the cSEM.DGP package to CRAN the automatic pretest (for Debian x86_64-pc-linux-gnu; not for Windows!) failed with the NOTE: C stack usage 7975520 is too close to the limit.
I know this is caused by a function with three arguments whose body is about 800 lines long. The function body consists of additions and multiplications of these arguments. It is the function varzeta6(), which you can find here (from line 647 onwards).
How can I address this?
Things I can't do:
provide a reproducible example (at least I would not know how)
change the stack size
Things I am thinking of:
try to break the function into smaller pieces, but I don't know how best to do that.
somehow precompile the function (to be honest, I am just guessing here) so CRAN doesn't complain?
Let me know your ideas!
Details / Background
The reason why varzeta6() (and varzeta4()/varzeta5(), and even more so varzeta7()) are so long and R-inefficient is that they are essentially copy-pasted from Mathematica (after simplifying the Mathematica code as much as possible and adapting it to be valid R code). Hence, the code is by no means R-optimized (which @MauritsEvers rightly pointed out).
Why do we need Mathematica? Because what we need is the general form of the model-implied construct correlation matrix of a recursive structural equation model with up to 8 constructs, as a function of the parameters of the model equations. In addition, there are constraints.
To get a feel for the problem, let's take a system of two equations that can be solved recursively:
Y2 = beta1*Y1 + zeta1
Y3 = beta2*Y1 + beta3*Y2 + zeta2
What we are interested in is the covariances: E(Y1*Y2), E(Y1*Y3), and E(Y2*Y3) as a function of beta1, beta2, beta3 under the constraint that
E(Y1) = E(Y2) = E(Y3) = 0,
E(Y1^2) = E(Y2^2) = E(Y3^2) = 1
E(Yi*zeta_j) = 0 (with i = 1, 2, 3 and j = 1, 2)
For such a simple model, this is rather trivial:
E(Y1*Y2) = E(Y1*(beta1*Y1 + zeta1)) = beta1*E(Y1^2) + E(Y1*zeta1) = beta1
E(Y1*Y3) = E(Y1*(beta2*Y1 + beta3*(beta1*Y1 + zeta1) + zeta2)) = beta2 + beta3*beta1
E(Y2*Y3) = ...
But you see how quickly this gets messy when you add Y4, Y5, until Y8.
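These two expressions are easy to sanity-check in R with a quick Monte Carlo simulation (a sketch with made-up parameter values; the zeta standard deviations are chosen so that Var(Y2) = Var(Y3) = 1):

```r
set.seed(1)
n <- 1e6
beta1 <- 0.3; beta2 <- 0.2; beta3 <- 0.4
Y1    <- rnorm(n)
zeta1 <- rnorm(n, sd = sqrt(1 - beta1^2))            # so Var(Y2) = 1
Y2    <- beta1*Y1 + zeta1
zeta2 <- rnorm(n, sd = sqrt(1 - (beta2^2 + beta3^2 + 2*beta1*beta2*beta3)))
Y3    <- beta2*Y1 + beta3*Y2 + zeta2                 # so Var(Y3) = 1
mean(Y1*Y2)  # close to beta1 = 0.3
mean(Y1*Y3)  # close to beta2 + beta3*beta1 = 0.32
```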
In general, the model-implied construct correlation matrix can be written as follows (the actual expression looks more complicated because we also allow for up to 5 exogenous constructs as well, which is why varzeta1() already looks complicated; ignore this for now):
V(Y) = (I - B)^-1 V(zeta)(I - B)'^-1
where I is the identity matrix and B is a lower triangular matrix of model parameters (the betas). V(zeta) is a diagonal matrix. The functions varzeta1(), varzeta2(), ..., varzeta7() compute the main diagonal elements. Since we constrain Var(Yi) to always be 1, the variances of the zetas follow. Take, for example, the equation Var(Y2) = beta1^2*Var(Y1) + Var(zeta1) --> Var(zeta1) = 1 - beta1^2. This looks simple here, but it becomes extremely complicated when we take the variance of, say, the 6th equation in such a chain of recursive equations, because Var(zeta6) depends on all previous covariances between Y1, ..., Y5, which are themselves dependent on their respective previous covariances.
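The matrix formula itself is straightforward to evaluate numerically in R. Here is a sketch for the three-construct example above, with made-up betas and the zeta variances chosen so that each Var(Yi) = 1:

```r
B <- matrix(0, 3, 3)                # lower triangular matrix of betas
B[2, 1] <- 0.3                      # beta1
B[3, 1] <- 0.2; B[3, 2] <- 0.4      # beta2, beta3
Vzeta <- diag(c(1,
                1 - 0.3^2,                             # Var(zeta1)
                1 - (0.2^2 + 0.4^2 + 2*0.2*0.4*0.3)))  # Var(zeta2)
IBinv <- solve(diag(3) - B)         # (I - B)^-1
V <- IBinv %*% Vzeta %*% t(IBinv)   # V(Y) = (I - B)^-1 V(zeta) (I - B)'^-1
diag(V)   # all 1, as constrained
V[1, 3]   # beta2 + beta3*beta1 = 0.32, matching the hand calculation above
```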
OK, I don't know if that makes things any clearer. Here are the main points:
The code for varzeta1(), ..., varzeta7() is copy-pasted from Mathematica and hence not R-optimized.
Mathematica is required because, as far as I know, R cannot handle symbolic calculations.
I could R-optimize "by hand", but that is extremely tedious.
I think the structure of the varzetaX() functions must be taken as given. The question therefore is: can I somehow use these functions anyway?
One conceivable approach is to try to convince the CRAN maintainers that there's no easy way for you to fix the problem. This is a NOTE, not a WARNING; the CRAN repository policy says:
In principle, packages must pass R CMD check without warnings or significant notes to be admitted to the main CRAN package area. If there are warnings or notes you cannot eliminate (for example because you believe them to be spurious) send an explanatory note as part of your covering email, or as a comment on the submission form
So, you could take a chance that your well-reasoned explanation (in the comments field on the submission form) will convince the CRAN maintainers. In the long run it would be best to find a way to simplify the computations, but it might not be necessary to do it before submission to CRAN.
This is a bit too long for a comment, but hopefully it will give you some ideas for optimising the code for the varzeta* functions; at the very least, it might give you some food for thought.
There are a few things that confuse me:
All varzeta* functions have arguments beta, gamma and phi, which seem to be matrices. However, in varzeta1 you don't use beta, yet beta is the first function argument.
I struggle to link the details you give at the bottom of your post with the code for the varzeta* functions. You don't explain where the gamma and phi matrices come from, nor what they denote. Furthermore, seeing that beta holds the model's parameter estimates, I don't understand why beta should be a matrix.
As I mentioned in my earlier comment, I would be very surprised if these expressions cannot be simplified. R can do a lot of matrix operations quite comfortably; there shouldn't really be a need to pre-calculate individual terms.
For example, you can use crossprod and tcrossprod to calculate cross products, and %*% implements matrix multiplication.
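For instance (random matrices purely for illustration):

```r
set.seed(42)
A <- matrix(rnorm(12), 4, 3)
B <- matrix(rnorm(12), 4, 3)
all.equal(crossprod(A, B), t(A) %*% B)  # TRUE: crossprod(A, B) == t(A) %*% B
all.equal(tcrossprod(A), A %*% t(A))    # TRUE: tcrossprod(A) == A %*% t(A)
```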
Secondly, a lot of mathematical operations in R are vectorised. I already mentioned that you can simplify
1 - gamma[1,1]^2 - gamma[1,2]^2 - gamma[1,3]^2 - gamma[1,4]^2 - gamma[1,5]^2
as
1 - sum(gamma[1, ]^2)
since the ^ operator is vectorised.
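A quick check that the two forms agree (toy gamma matrix with my own made-up values):

```r
set.seed(123)
gamma <- matrix(runif(10), nrow = 2)  # toy 2 x 5 matrix
long  <- 1 - gamma[1,1]^2 - gamma[1,2]^2 - gamma[1,3]^2 -
             gamma[1,4]^2 - gamma[1,5]^2
short <- 1 - sum(gamma[1, ]^2)        # vectorised equivalent
all.equal(long, short)  # TRUE
```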
Perhaps more fundamentally, this seems somewhat of an XY problem to me where it might help to take a step back. Not knowing the full details of what you're trying to model (as I said, I can't link the details you give to the cSEM.DGP code), I would start by exploring how to solve the recursive SEM in R. I don't really see the need for Mathematica here. As I said earlier, matrix operations are very standard in R; analytically solving a set of recursive equations is also possible in R. Since you seem to come from the Mathematica realm, it might be good to discuss this with a local R coding expert.
If you must use those scary varzeta* functions (and I really doubt that), an option may be to rewrite them in C++ and then compile them with Rcpp to turn them into R functions. Perhaps that will avoid the C stack usage limit?
Although my original question is more general, to keep things comprehensible I'm formulating below just a special case of it; I expect that a solution/answer for it will serve as an answer for the more general question.
Question:
How does one integrate the function f(x) = (...(((x^x)^x)^x)...^x)^x (raised to the power x, n times) on the interval (0, 1)?
Thanks a lot for any ideas!
P.S.: please do not try to solve the problem mathematically or to simplify the expression (e.g., approximating the result with a Taylor expansion), since that's not the main topic (however, I've tried to choose an example that should not admit any simple transformations).
P.S.2: The original question (which does not require an answer here, since an answer to the question posted above is expected to be valid for it too):
Is it possible in R to create a function with an arbitrarily long expression (avoiding "manual" definition)? For example, it's easy to set up the given function manually for n=5:
f<-function(x) {
((((x^x)^x)^x)^x)^x
}
But what if n = 1,000, or 1,000,000?
It seems that simple looping is not appropriate here...
Copied from R-help: You should look at:
# ?funprog Should have worked but didn't. Try instead ...
?Reduce
There are several examples of repeated applications of a functional argument. Also composition of list of functions.
One instance:
Funcall <- function(f, ...) f(...) # sort of like `do.call`
Iterate <- function(f, n = 1)
function(x) Reduce(Funcall, rep.int(list(f), n), x, right = TRUE)
Iterate(function(x) x^1.1, 30)(1.01)
#[1] 1.189612
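Applied to the power tower in the question, the same Reduce idea builds f for arbitrary n without writing the expression out by hand; powtower is my own name, and since the result is vectorised it can be passed straight to integrate():

```r
# f(x) = (...((x^x)^x)...)^x with ^x applied n times, built via Reduce
powtower <- function(n) {
  function(x) Reduce(function(acc, i) acc^x, seq_len(n), x)
}
f5 <- powtower(5)
f5(0.5) == ((((0.5^0.5)^0.5)^0.5)^0.5)^0.5  # TRUE: matches the manual n=5 form
integrate(powtower(1000), 0, 1)             # works even for n = 1000
```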
Is there a function or package in R for calculating the Sliding FFT of a sample? By this I mean that given the output of fft(x[n:m]), calculate fft(x[1+(n:m)]) efficiently.
Ideally I'd find both an online version (where I don't have access to the full time series at the beginning, or it's too big to fit in memory, and I'm not going to try to save the whole running FFT in memory either) and a batch version (where I give it the whole sample x and tell it the running window width w, resulting in a complex matrix of dimension c(w,length(x)/w)).
An example of such an algorithm is presented here (though I've never tried implementing it in any language):
http://cnx.org/content/m12029/latest/
If no such thing already exists in R, it doesn't look too hard to implement, I guess.
As usually happens when I post something here, I kept working on it and came up with a solution:
# One-step sliding-DFT update: given prev = fft(x[n:m]), the sample x1
# leaving the window and the sample xn entering it, return fft(x[1 + (n:m)]).
fft.up <- function(x1, xn, prev) {
  b <- length(prev)
  vec <- exp(2i*pi*seq.int(0, b-1)/b)  # twiddle factors exp(2*pi*1i*k/b)
  (prev - x1 + xn) * vec
}
# Test it out
x <- runif(6)
all.equal(fft.up(x[1], x[6], fft(x[1:5])), fft(x[2:6]))
# [1] TRUE
Still interested to know if some library offers this, because then it might offer other handy things too. =) But for now my problem's solved.
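For the batch version described in the question, the same one-step update can be iterated across the series to fill a matrix of running FFTs. A sketch (sliding_fft is my own name; note the output is w x (number of windows) rather than c(w, length(x)/w), since the windows overlap):

```r
sliding_fft <- function(x, w) {
  # k-th column is fft(x[k:(k + w - 1)]), computed incrementally.
  n_win <- length(x) - w + 1
  out <- matrix(0i, nrow = w, ncol = n_win)
  vec <- exp(2i*pi*seq.int(0, w - 1)/w)      # same twiddle factors as above
  out[, 1] <- fft(x[1:w])                    # full FFT only for the first window
  if (n_win > 1) for (k in 2:n_win) {
    out[, k] <- (out[, k - 1] - x[k - 1] + x[k + w - 1]) * vec
  }
  out
}

# Test it against fft() on one of the windows
set.seed(99)
x <- runif(10)
M <- sliding_fft(x, 4)
all.equal(M[, 6], fft(x[6:9]))  # TRUE
```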