In R, incomplete gamma function with complex input?

Incomplete gamma functions can be calculated in R with pgamma, or with gamma_inc_Q from library(gsl), or with gammainc from library(expint). However, all of these functions take only real input.
I need an implementation of the incomplete gamma function which will take complex input. Specifically, I have an integer for the first argument, and a complex number for the second argument (the limit in the integral).
This function is well-defined for complex inputs (see Wikipedia), and I've been calculating it in Mathematica. It doesn't seem to be built into R though, and I don't see it in any libraries.
So, can anyone suggest a shorter path to doing these calculations, than looking up an algorithm, implementing it in C, and writing an R interface?
(If I do have to implement it myself, here's the only algorithm for complex inputs that I've found: Kostlan & Gokhman 1987)

Here is an implementation, assuming you want the lower incomplete gamma function. I've compared a couple of values with Wolfram and they match.
library(CharFun)
## Lower incomplete gamma via the identity
## gamma(s, z) = z^s * exp(-z) * 1F1(1; s+1; z) / s
incgamma <- function(s, z) {
  z^s * exp(-z) * hypergeom1F1(z, 1, s+1) / s
}
Perhaps the evaluation fails for large s.
EDIT
Looks like CharFun has been removed from CRAN. You can use IncGamma in HypergeoMat:
> library(HypergeoMat)
> IncGamma(m=50, 2+2i, 5-6i)
[1] 0.3841221+0.3348439i
The result is the same on Wolfram.
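If you prefer not to depend on a package at all, here is a self-contained sketch based on the standard series gamma(s, z) = z^s * exp(-z) * sum_{n>=0} z^n / (s*(s+1)*...*(s+n)); it assumes you want the lower incomplete gamma function and that |z| is moderate so the series converges quickly. Base R arithmetic already handles complex z.
## Series expansion of the lower incomplete gamma function, valid for complex z
lower_inc_gamma <- function(s, z, tol = 1e-15, max_iter = 1000L) {
  term <- 1 / s          # n = 0 term of the sum
  total <- term
  for (n in seq_len(max_iter)) {
    term <- term * z / (s + n)
    total <- total + term
    if (Mod(term) < tol * Mod(total)) break
  }
  z^s * exp(-z) * total
}
lower_inc_gamma(3, 5 - 6i)   # compare against Mathematica or Wolfram Alpha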

Related

CRAN package submission: "Error: C stack usage is too close to the limit"

Right upfront: this is an issue I encountered when submitting an R package to CRAN, so I
don't have control of the stack size (the issue occurred on one of CRAN's platforms), and
can't provide a reproducible example (I don't know the exact configurations on CRAN).
Problem
When trying to submit the cSEM.DGP package to CRAN, the automatic pretest (for Debian x86_64-pc-linux-gnu; not for Windows!) failed with the NOTE: C stack usage 7975520 is too close to the limit.
I know this is caused by a function with three arguments whose body is about 800 lines long. The function body consists of additions and multiplications of these arguments. It is the function varzeta6(), which you find here (from row 647 onwards).
How can I address this?
Things I can't do:
provide a reproducible example (at least I would not know how)
change the stack size
Things I am thinking of:
try to break the function into smaller pieces. But I don't know how best to do that.
somehow precompile(?) the function (to be honest, I am just guessing) so CRAN doesn't complain?
Let me know your ideas!
Details / Background
The reason why varzeta6() (and varzeta4() / varzeta5() and even more so varzeta7()) are so long and R-inefficient is that they are essentially copy-pasted from Mathematica (after simplifying the Mathematica code as well as possible and adapting it to be valid R code). Hence, the code is by no means R-optimized (which #MauritsEvers rightly pointed out).
Why do we need Mathematica? Because what we need is the general form of the model-implied construct correlation matrix of a recursive structural equation model with up to 8 constructs, as a function of the parameters of the model equations. In addition, there are constraints.
To get a feel for the problem, let's take a system of two equations that can be solved recursively:
Y2 = beta1*Y1 + zeta1
Y3 = beta2*Y1 + beta3*Y2 + zeta2
What we are interested in is the covariances: E(Y1*Y2), E(Y1*Y3), and E(Y2*Y3) as a function of beta1, beta2, beta3 under the constraint that
E(Y1) = E(Y2) = E(Y3) = 0,
E(Y1^2) = E(Y2^2) = E(Y3^2) = 1,
E(Yi*zeta_j) = 0 (with i = 1, 2, 3 and j = 1, 2)
For such a simple model, this is rather trivial:
E(Y1*Y2) = E(Y1*(beta1*Y1 + zeta1)) = beta1*E(Y1^2) + E(Y1*zeta1) = beta1
E(Y1*Y3) = E(Y1*(beta2*Y1 + beta3*(beta1*Y1 + zeta1) + zeta2)) = beta2 + beta3*beta1
E(Y2*Y3) = ...
But you see how quickly this gets messy when you add Y4, Y5, until Y8.
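For illustration, here is a quick Monte Carlo check of these hand-derived covariances (a sketch with made-up beta values; the zeta variances are chosen so that Var(Yi) = 1):
set.seed(42)
n <- 1e6
beta1 <- 0.4; beta2 <- 0.3; beta3 <- 0.2
Y1    <- rnorm(n)
zeta1 <- rnorm(n, sd = sqrt(1 - beta1^2))                     # so Var(Y2) = 1
Y2    <- beta1 * Y1 + zeta1
vz2   <- 1 - (beta2^2 + beta3^2 + 2 * beta1 * beta2 * beta3)  # so Var(Y3) = 1
zeta2 <- rnorm(n, sd = sqrt(vz2))
Y3    <- beta2 * Y1 + beta3 * Y2 + zeta2
round(c(cov(Y1, Y2), beta1), 3)                  # simulated vs. analytic: beta1
round(c(cov(Y1, Y3), beta2 + beta3 * beta1), 3)  # simulated vs. analytic: beta2 + beta3*beta1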
In general the model-implied construct correlation matrix can be written as (the expression actually looks more complicated because we also allow for up to 5 exogenous constructs; this is why varzeta1() already looks complicated, but ignore this for now):
V(Y) = (I - B)^-1 V(zeta)(I - B)'^-1
where I is the identity matrix and B a lower triangular matrix of model parameters (the betas). V(zeta) is a diagonal matrix. The functions varzeta1(), varzeta2(), ..., varzeta7() compute the main diagonal elements. Since we constrain Var(Yi) to always be 1, the variances of the zetas follow. Take for example the equation Var(Y2) = beta1^2*Var(Y1) + Var(zeta1) --> Var(zeta1) = 1 - beta1^2. This looks simple here, but it becomes extremely complicated when we take the variance of, say, the 6th equation in such a chain of recursive equations, because Var(zeta6) depends on all previous covariances between Y1, ..., Y5, which are themselves dependent on their respective previous covariances.
OK, I don't know if that makes things any clearer. Here are the main points:
The code for varzeta1(), ..., varzeta7() is copy-pasted from Mathematica and hence not R-optimized.
Mathematica is required because, as far as I know, R cannot handle symbolic calculations.
I could R-optimize "by hand" (which is extremely tedious).
I think the structure of the varzetaX() functions must be taken as given. The question therefore is: can I somehow use these functions anyway?
One conceivable approach is to try to convince the CRAN maintainers that there's no easy way for you to fix the problem. This is a NOTE, not a WARNING; the CRAN Repository Policy says:
In principle, packages must pass R CMD check without warnings or significant notes to be admitted to the main CRAN package area. If there are warnings or notes you cannot eliminate (for example because you believe them to be spurious) send an explanatory note as part of your covering email, or as a comment on the submission form
So, you could take a chance that your well-reasoned explanation (in the comments field on the submission form) will convince the CRAN maintainers. In the long run it would be best to find a way to simplify the computations, but it might not be necessary to do it before submission to CRAN.
This is a bit too long as a comment, but hopefully this will give you some ideas for optimising the code for the varzeta* functions; or at the very least, it might give you some food for thought.
There are a few things that confuse me:
All varzeta* functions have arguments beta, gamma and phi, which seem to be matrices. However, in varzeta1 you don't use beta, yet beta is the first function argument.
I struggle to link the details you give at the bottom of your post with the code for the varzeta* functions. You don't explain where the gamma and phi matrices come from, nor what they denote. Furthermore, seeing that the betas are the model's parameter estimates, I don't understand why beta should be a matrix.
As I mentioned in my earlier comment, I would be very surprised if these expressions cannot be simplified. R can do a lot of matrix operations quite comfortably, there shouldn't really be a need to pre-calculate individual terms.
For example, you can use crossprod and tcrossprod to calculate cross products, and %*% implements matrix multiplication.
Secondly, a lot of mathematical operations in R are vectorised. I already mentioned that you can simplify
1 - gamma[1,1]^2 - gamma[1,2]^2 - gamma[1,3]^2 - gamma[1,4]^2 - gamma[1,5]^2
as
1 - sum(gamma[1, ]^2)
since the ^ operator is vectorised.
Perhaps more fundamentally, this seems somewhat of an XY problem to me where it might help to take a step back. Not knowing the full details of what you're trying to model (as I said, I can't link the details you give to the cSEM.DGP code), I would start by exploring how to solve the recursive SEM in R. I don't really see the need for Mathematica here. As I said earlier, matrix operations are very standard in R; analytically solving a set of recursive equations is also possible in R. Since you seem to come from the Mathematica realm, it might be good to discuss this with a local R coding expert.
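For instance, here is a minimal sketch (with a made-up 3-construct B matrix, not your cSEM.DGP code) of computing V(Y) = (I - B)^-1 V(zeta) (I - B)'^-1 directly with base R matrix operations, solving recursively for the zeta variances so that each Var(Yi) equals 1:
B <- matrix(c(0,   0,   0,
              0.3, 0,   0,
              0.2, 0.4, 0), nrow = 3, byrow = TRUE)   # lower triangular betas
IB <- solve(diag(3) - B)                              # (I - B)^-1
Vz <- diag(3)                                         # start with Var(zeta_i) = 1
for (i in 2:3) {                                      # solve so that Var(Y_i) = 1
  Vz[i, i] <- 0
  Vz[i, i] <- 1 - (IB %*% Vz %*% t(IB))[i, i]
}
V <- IB %*% Vz %*% t(IB)                              # model-implied covariance matrix
round(diag(V), 10)                                    # all 1 by construction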
If you must use those scary varzeta* functions (and I really doubt that), an option may be to rewrite them in C++ and then compile them with Rcpp to turn them into R functions. Perhaps that will avoid the C stack usage limit?

Basis provided by Ns() in R Epi package

As I was working out how Epi generates the basis for its spline functions (via the function Ns), I was a little confused by how it handles the detrend argument.
When detrend=T I would have expected that Epi::Ns(...) would more or less project the basis given by splines::ns(...) onto the orthogonal complement of the column space of [1 t] and finally extract the set of linearly independent columns (so that we have a basis).
However, this doesn't appear to be exactly the case; I tried
library(Epi)
x=seq(-0.75, 0.75, length.out=5)
Ns(x, knots=c(-0.5,0,0.5), Boundary.knots=c(-1,1), detrend=T)
and
library(splines)
detrend(ns(x, knots=c(-0.5,0,0.5), Boundary.knots=c(-1,1)), x)
The matrices produced by the above code are not the same; however, they do have the same column space (in this example), suggesting that if plugged into a linear model, the fitted coefficients will be different but the fit itself will be the same.
The first question I had was: is this true in general?
The second question is: why are the two different?
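(As a quick empirical check of the first question, here is a small sketch reusing the two bases computed above, with a made-up response y: two design matrices spanning the same column space should give identical fitted values, even though the coefficients differ.)
M1 <- Ns(x, knots=c(-0.5,0,0.5), Boundary.knots=c(-1,1), detrend=TRUE)
M2 <- detrend(ns(x, knots=c(-0.5,0,0.5), Boundary.knots=c(-1,1)), x)
set.seed(1)
y <- rnorm(length(x))                                  # made-up response
all.equal(fitted(lm(y ~ M1)), fitted(lm(y ~ M2)))      # TRUE if the column spaces coincide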
Regarding the second question - when detrend is specified, Epi::Ns gives a warning that fixsl is ignored.
Diving into NS.r on the Epi GitHub ... in the construction of the basis, in the call to Epi::Ns above with detrend=T, the worker ns.ld() is called (a function almost identical to the guts of splines::ns()), which passes c(NA,NA) along to splines::spline.des as the derivs argument when determining a matrix const:
const <- splines::spline.des( Aknots, Boundary.knots, 4, c(2-fixsl[1],2-fixsl[2]))$design
This is the difference between what happens in Ns(detrend=T) and the call to ns() above which passes c(2,2) to splineDesign as the derivs argument.
So that explains how they are different, but not why. Does anyone have an explanation for why fixsl=c(NA,NA) is used instead of fixsl=c(F,F) in Epi::Ns()?
And does anyone have a proof of, or an answer to, the first question?
I think the orthogonal complement of const's column space is used so that the second (or desired) derivatives are zero at the boundary (via projection of the general spline basis), but I'm not sure about this step as I haven't dug into the mathematics; I'm just going by my feel for it. Perhaps if I understood this better, the difference in const between the calls to splineDesign/spline.des (in ns() and Ns() respectively) would explain why the two matrices from the start are not the same, yet yield the same fit.
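For reference, here is a self-contained sketch of the projection construction I expected (project the ns() basis onto the orthogonal complement of span(1, x)); this is only the expected construction, not necessarily what Ns() does internally:
library(splines)
x  <- seq(-0.75, 0.75, length.out = 5)
N  <- ns(x, knots = c(-0.5, 0, 0.5), Boundary.knots = c(-1, 1))
Tm <- cbind(1, x)
P  <- diag(nrow(Tm)) - Tm %*% solve(crossprod(Tm), t(Tm))  # projector onto the complement of span(1, x)
ND <- P %*% N                                              # detrended basis, columns not yet reduced to a linearly independent set
qr(ND)$rank                                                # dimension of the resulting basis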
The fixsl=c(NA,NA) was a bug that has since been fixed. See the commits on the CRAN GitHub mirror.
I have still sent an email to the maintainer to ask if the fix could be made a little bit more consistent with the condition, but in principle this could be closed.

Change Objective Function in nls.lm() in "R"

I'm using the function nls.lm {package: minpack.lm} to optimize a parameterisation for a hydrological model. The function is working quite well, but I want to use another objective function (OF). Normally, the objective function "fn" in nls.lm is defined as
A function that returns a vector of residuals, the sum square of which
is to be minimized. The first argument of fn must be par.
Now I want to use the Nash-Sutcliffe efficiency (NSE), which is defined as
NSE <- 1 - (sum((obs - sim)^2) / sum((obs - mean(obs))^2))
or other OFs. The problem is that nls.lm minimizes the expression sum(x^2) and only the x is modifiable. I know that the best fit is NSE = 1. Thus 1 - NSE creates a real minimization problem.
BTW: Example 1 from the nls.lm help page is a good example; there
observed - getPred(p,xx)
is minimized, which actually means that
sum( (observed - getPred(p,xx))^2 )
is minimized by the nls.lm function, where getPred(p,xx) returns sim.
Any suggestion would be helpful. Thanks in advance.
Micha
nls.lm (and the related functions nls and nlsLM) are designed to minimize the sum of squares of the residuals. For the problem you seek to solve, I would try applying a general-purpose minimizer.
If the problem is 'not too hard' (that is, has a single global minimum, is smooth), you could try to apply 'optim' to it (I would try the 'Nelder-Mead' and 'BFGS' options first), or the 'bobyqa' function from the package 'minqa', among other functions.
If the problem requires a global optimizer, you could try the 'GenSA' function from package 'GenSA', the 'genoud' function from the package 'rgenoud', or the 'DEoptim' function from package 'DEoptim', among other options. A review on 'Global Optimization in R' is forthcoming in the Journal of Statistical Software, and that will give a more complete overview of applicable functions.
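For example, here is a minimal sketch with optim; sim_model() and start_values are hypothetical placeholders for your hydrological model wrapper and initial parameter vector:
## obs: observed series; sim_model(par): hypothetical wrapper returning the
## simulated series for a parameter vector par
obj <- function(par, obs) {
  sim <- sim_model(par)
  nse <- 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
  1 - nse                                # NSE = 1 is a perfect fit, so minimize 1 - NSE
}
fit <- optim(par = start_values, fn = obj, obs = obs, method = "Nelder-Mead")
fit$par                                  # parameter set maximizing the NSE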

numerical differentiation with Scipy

I was trying to learn Scipy, using it for mixed integrations and differentiations, but at the very initial step I encountered the following problems.
For numerical differentiation, it seems that the only Scipy function that works for callable functions is scipy.derivative(), if I'm right. However, I couldn't work with it:
1st) When I am not going to specify the point at which the differentiation is to be taken, e.g. when the differentiation is under an integral, so that it is the integral that should assign the numerical values to its integrand's variable, not me. As a simple example, I tried this code in Sage's notebook:
import scipy as sp
from scipy import integrate, derivative
var('y')
f=lambda x: 10^10*sin(x)
g=lambda x,y: f(x+y^2)
I=integrate.quad( sp.derivative(f(y),y, dx=0.00001, n=1, order=7) , 0, pi)[0]; show(I)
show( integral(diff(f(y),y),y,0,1).n() )
Also, it gives the warning "Warning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated.", and I don't know what this warning stands for, as it persists even with increasing "dx" and decreasing the "order".
2nd) When I want to find the derivative of a multivariable function like g(x,y) in the above example, something like sp.derivative(g(x,y), (x,0.5), dx=0.01, n=1, order=3) gives an error, as is easily expected.
Looking forward to hearing from you about how to resolve the above cited problems with numerical differentiation.
Best Regards
There are some strange problems with your code that suggest you need to brush up on some Python! I don't know how you even made these definitions in Python, since they are not legal syntax.
First, I think you are using an older version of scipy. In recent versions (at least from 0.12+) you need from scipy.misc import derivative. derivative is not in the scipy global namespace.
Second, var is not defined, although it is not necessary anyway (I think you meant to import sympy first and use sympy.var('y')). sin has also not been imported from math (or numpy, if you prefer). show is not a valid function in sympy or scipy.
^ is not the power operator in Python. You meant **.
You seem to be mixing up the idea of symbolic and numeric calculus operations here. scipy won't numerically differentiate an expression involving a symbolic object -- the second argument to derivative is supposed to be the point at which you wish to take the derivative (i.e. a number). As you say you are trying to do numeric differentiation, I'll resolve the issue for that purpose.
from scipy import integrate
from scipy.misc import derivative
from math import *
f = lambda x: 10**10*sin(x)
df = lambda x: derivative(f, x, dx=0.00001, n=1, order=7)
I = integrate.quad( df, 0, pi)[0]
Now, this last expression generates the warning you mentioned, and the value returned is not very close to zero at -0.0731642869874073 in absolute terms, although that's not bad relative to the scale of f. You have to appreciate the issues of roundoff error in finite differencing. Your function f varies on your interval between 0 and 10^10! It probably seems paradoxical, but making the dx value for differentiation too small can actually magnify roundoff error and cause numerical instability. See the second graph here ("Example showing the difficulty of choosing h due to both rounding error and formula error") for an explanation: http://en.wikipedia.org/wiki/Numerical_differentiation
In fact, in this case, you need to increase it, say to 0.001: df = lambda x: derivative(f, x, dx=0.001, n=1, order=7)
Then, you can integrate safely, with no terrible roundoff.
I=integrate.quad( df, 0, pi)[0]
I don't recommend throwing away the second return value from quad. It's an important verification of what happened, as it is "an estimate of the absolute error in the result". In this case, I == 0.0012846582250212652 and the abs error is ~ 0.00022, which is not bad (the interval that implies still does not include zero). Maybe some more fiddling with the dx and absolute tolerances for quad will get you an even better solution, but hopefully you get the idea.
For your second problem, you simply need to create a proper scalar function (call it gx) that represents g(x,y) along y=0.5 (this is called Currying in computer science).
g = lambda x, y: f(x+y**2)
gx = lambda x: g(x, 0.5)
derivative(gx, 0.2, dx=0.01, n=1, order=3)
gives you a value of the derivative at x=0.2. Naturally, the value is huge given the scale of f. You can integrate using quad like I showed you above.
If you want to be able to differentiate g itself, you need a different numerical differentiation function. I don't think scipy or numpy support this, although you could hack together a central difference calculation by making a 2D fine mesh (size dx) and using numpy.gradient. There are probably other library solutions that I'm not aware of, but I know my PyDSTool software contains a function diff that will do that (if you rewrite g to take one array argument instead). It uses Ridder's method and is inspired by the Numerical Recipes pseudocode.

How to vectorize equations?

I'm trying to implement the Softmax regression algorithm to solve the K-classifier problem after watching Professor Andrew Ng's lectures on GLM. I thought I understood everything he was saying until it finally came to writing the code to implement the cost function for Softmax regression, which is as follows:
The problem I am having is trying to figure out a way to vectorize this. Again I thought I understood how to go about vectorizing equations like this since I was able to do it for linear and logistic regression, but after looking at that formula I am stuck.
While I would love to figure out a vectorized solution for this (I realize there is a similar question posted already: Vectorized Implementation of Softmax Regression), what I am more interested in is whether any of you can tell me a way (your way) to methodically convert equations like this into vectorized forms. For example, for those of you who are experts or seasoned veterans in ML, when you read of new algorithms in the literature for the first time, and see them written in similar notation to the equation above, how do you go about converting them to vectorized forms?
I realize I might be coming off as being like the student who is asking Mozart, "How do you play the piano so well?" But my question is simply motivated from a desire to become better at this material, and assuming that not everyone was born knowing how to vectorize equations, and so someone out there must have devised their own system, and if so, please share! Many thanks in advance!
Cheers
This one looks pretty hard to vectorize since you are doing exponentials inside of your summations. I assume you are raising e to arbitrary powers. What you can vectorize is the second term of the expression, \sum \sum \theta^2; just make sure to use the .* operator in Matlab to compute \theta^2.
The same goes for the inner terms of the ratio that goes into the logarithm; \theta' x^(i) is a vectorizable expression.
You might also benefit from a memoization or dynamic programming technique and try to reuse the results of computations of e^\theta' x^(i).
Generally in my experience the way to vectorize is first to get non-vectorized implementation working. Then try to vectorize the most obvious parts of your computation. At every step tweak your function very little and always check if you get the same result as non-vectorized computation. Also, having multiple test cases is very helpful.
The help files that come with Octave have this entry:
19.1 Basic Vectorization
To a very good first approximation, the goal in vectorization is to
write code that avoids loops and uses whole-array operations. As a
trivial example, consider
for i = 1:n
  for j = 1:m
    c(i,j) = a(i,j) + b(i,j);
  endfor
endfor
compared to the much simpler
c = a + b;
This isn't merely easier to write; it is also internally much easier to
optimize. Octave delegates this operation to an underlying
implementation which, among other optimizations, may use special vector
hardware instructions or could conceivably even perform the additions in
parallel. In general, if the code is vectorized, the underlying
implementation has more freedom about the assumptions it can make in
order to achieve faster execution.
This is especially important for loops with "cheap" bodies. Often it
suffices to vectorize just the innermost loop to get acceptable
performance. A general rule of thumb is that the "order" of the
vectorized body should be greater or equal to the "order" of the
enclosing loop.
As a less trivial example, instead of
for i = 1:n-1
  a(i) = b(i+1) - b(i);
endfor
write
a = b(2:n) - b(1:n-1);
This shows an important general concept about using arrays for
indexing instead of looping over an index variable (see Index Expressions).
Also use boolean indexing generously. If a condition
needs to be tested, this condition can also be written as a boolean
index. For instance, instead of
for i = 1:n
  if (a(i) > 5)
    a(i) -= 20
  endif
endfor
write
a(a>5) -= 20;
which exploits the fact that 'a > 5' produces a boolean index.
Use elementwise vector operators whenever possible to avoid looping
(operators like '.*' and '.^'; see Arithmetic Ops). For simple
inline functions, the 'vectorize' function can do this automatically.
-- Built-in Function: vectorize (FUN)
Create a vectorized version of the inline function FUN by replacing
all occurrences of '*', '/', etc., with '.*', './', etc.
This may be useful, for example, when using inline functions with
numerical integration or optimization where a vector-valued
function is expected.
fcn = vectorize (inline ("x^2 - 1"))
=> fcn = f(x) = x.^2 - 1
quadv (fcn, 0, 3)
=> 6
See also: inline, formula, argnames.
Also exploit broadcasting in these elementwise operators, both to
avoid looping and to avoid unnecessary intermediate memory allocations
(see Broadcasting).
Use built-in and library functions if possible. Built-in and
compiled functions are very fast. Even with an m-file library function,
chances are good that it is already optimized, or will be optimized more
in a future release.
For instance, even better than
a = b(2:n) - b(1:n-1);
is
a = diff (b);
Most Octave functions are written with vector and array arguments in
mind. If you find yourself writing a loop with a very simple operation,
chances are that such a function already exists. The following
functions occur frequently in vectorized code:
Index manipulation
* find
* sub2ind
* ind2sub
* sort
* unique
* lookup
* ifelse / merge
Repetition
* repmat
* repelems
Vectorized arithmetic
* sum
* prod
* cumsum
* cumprod
* sumsq
* diff
* dot
* cummax
* cummin
Shape of higher dimensional arrays
* reshape
* resize
* permute
* squeeze
* deal
Also look at these pages from a Stanford ML wiki for some more guidance with examples.
http://ufldl.stanford.edu/wiki/index.php/Vectorization
http://ufldl.stanford.edu/wiki/index.php/Logistic_Regression_Vectorization_Example
http://ufldl.stanford.edu/wiki/index.php/Neural_Network_Vectorization
