Find a primitive function (antiderivative)? - r

I've been searching R help for a way to get a primitive function. In my case I can't do it with the integrate function, so is there any way to find a primitive function (antiderivative)?

If it is a one-off, you can use a computer-algebra system (Maxima, Maple, Wolfram Alpha, etc.).
If you want to do it from R, you can use the Ryacas package.
For instance, yacas(expression(integrate(sin))) returns -Cos(x).
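As a small sketch (the exact call depends on your Ryacas version; the newer string interface in the comment is an assumption on my part):
library(Ryacas)
yacas(expression(integrate(sin)))   # classic interface, returns -Cos(x)
# newer Ryacas versions expose a string interface instead, e.g.:
# yac_str("Integrate(x) Sin(x)")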

There is no general R function that will return an analytical F with F' = f for an arbitrary known f, but when the bounds are known you can always approximate the definite integral over those bounds, for instance with the trapezoid rule.
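For example, a base-R sketch (with sin as a stand-in integrand) that tabulates F(x), the integral of f from a to x, on a grid using the trapezoid rule:
f <- function(x) sin(x)                      # stand-in integrand
a <- 0; b <- pi; n <- 1000                   # bounds and grid size
x <- seq(a, b, length.out = n)
y <- f(x)
Fx <- c(0, cumsum(diff(x) * (head(y, -1) + tail(y, -1)) / 2))
# Fx[i] approximates the antiderivative at x[i] with F(a) = 0;
# here it should track 1 - cos(x):
max(abs(Fx - (1 - cos(x))))                  # small discretization error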


How can I define a function in the form of f(a;b) in R?

This has been a recurring problem for me, so I'll have to address it here.
I'm trying to build a non-linear optimization problem using optim or fsolve, and I need certain variables as fixed parameters.
So by f(a;b), I mean a function that treats b as given and immovable during the optimization.
Evidently MATLAB has a way to declare b as a fixed parameter and only allow a to move in the optimization; R doesn't seem to work the same way. Both optim and fsolve require that I specify initial values for both a and b, and they shift b as well as a when computing the results.
If anyone could let me know of a good way out of this that would be much appreciated.
Thank you!
You can pass b via the ... argument of optim.
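A minimal sketch (the objective f and the fixed vector b_fixed are placeholders): any named argument passed to optim that is not one of optim's own arguments is forwarded to fn, so b stays fixed while only the first argument is optimized.
f <- function(a, b) sum((a - b)^2)   # b enters the objective but is never varied
b_fixed <- c(1, 2, 3)
optim(par = c(0, 0, 0), fn = f, b = b_fixed)$par   # converges near b_fixed
An alternative with the same effect is to capture b in a closure, e.g. g <- function(a) f(a, b_fixed), and hand g to optim.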

2D optimize() in R

I would like to use optimize() to find the parameters of a 2-dimensional mixed normal distribution, but I don't know how to use the function.
I have the density function:
mixdnorm2 <- function(x, y, p, mu11, mu12, s11, s12, rho1, mu21, mu22, s21, s22, rho2) {
  # bivariate normal density with means m1, m2, sds s1, s2 and correlation r
  dnorm2 <- function(x, y, m1, m2, s1, s2, r) {
    U <- c(x - m1, y - m2)
    S <- matrix(c(s1^2, s1 * s2 * r, s1 * s2 * r, s2^2), 2, 2, byrow = TRUE)
    f <- 1 / (2 * pi * sqrt(det(S))) * exp(-0.5 * t(U) %*% solve(S) %*% U)
    return(as.numeric(f))
  }
  # two-component mixture with weight p on the first component
  f <- p * dnorm2(x, y, mu11, mu12, s11, s12, rho1) +
    (1 - p) * dnorm2(x, y, mu21, mu22, s21, s22, rho2)
  return(f)
}
but I don't know what to do with it.
optimize(mixdnorm2...)
Do you know how to use the function? I couldn't find anything about this problem, so I'd be glad for any advice :)
The optimize function is only for 1 dimension. The optim function is the one to use for 2 or more dimensions. So look up the help page for optim.
For some reason the help page for optimize does not mention optim, though the help page for optim does mention optimize.
There are also packages that provide additional optimization functions; see the CRAN Optimization task view.
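To make that concrete, here is a rough sketch of wrapping the density above in a negative log-likelihood and handing it to optim as a single parameter vector; the data, starting values and box constraints are made up for illustration, and the element-wise mapply call is slow, so treat this as a starting point only.
set.seed(1)
xy <- cbind(rnorm(200), rnorm(200))          # placeholder data

negloglik <- function(par, data) {
  names(par) <- c("p", "mu11", "mu12", "s11", "s12", "rho1",
                  "mu21", "mu22", "s21", "s22", "rho2")
  dens <- mapply(mixdnorm2, data[, 1], data[, 2], MoreArgs = as.list(par))
  -sum(log(dens))
}

start <- c(p = 0.5, mu11 = -1, mu12 = -1, s11 = 1, s12 = 1, rho1 = 0,
           mu21 = 1, mu22 = 1, s21 = 1, s22 = 1, rho2 = 0)

fit <- optim(start, negloglik, data = xy, method = "L-BFGS-B",
             lower = c(0.01, -Inf, -Inf, 0.01, 0.01, -0.99,
                       -Inf, -Inf, 0.01, 0.01, -0.99),
             upper = c(0.99, Inf, Inf, Inf, Inf, 0.99,
                       Inf, Inf, Inf, Inf, 0.99))
fit$par    # estimated mixture parameters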

Arc length parametrization using R

I am dealing with a 2-dimensional parametric curve f.
Is there any function in R (in any package) that gives the arc-length parametrization for any such given parametric f?
I know how to derive the arc-length parametrized function from a given function; it involves differentiation and integration, as here. But I am looking for whether there is an R function that computes the arc-length parametrization.
The pracma package seems to have the functions you are looking for. See arclength() in particular on page 23 of the pracma documentation.
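For example (assuming pracma's arclength(f, a, b) interface, where f maps the parameter to a coordinate vector):
library(pracma)
f <- function(t) c(cos(t), sin(t))     # unit circle, parametrized on [0, 2*pi]
arclength(f, 0, 2 * pi)$length         # close to 2*pi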

Is there an equivalent to matlab's rcond() function in Julia?

I'm porting some matlab code that uses rcond() to test for singularity, as also recommended here (for matlab singularity testing).
I see that there is a cond() function in Julia (as also in Matlab), but rcond() doesn't appear to be available by default:
ERROR: rcond not defined
I'd assume that rcond(), like the Matlab version, is more efficient than 1/cond(). Is there such a function in Julia, perhaps in an add-on package?
Julia calculates the condition number using the ratio of the largest to the smallest singular value (got to love open source, no more MATLAB black boxes!).
Julia doesn't have an rcond function in Base, and I'm unaware of one in any package. If it did, it would presumably just be the reciprocal: the ratio of the smallest to the largest. I'm not sure why it's more efficient in MATLAB, but it's quite possible that whatever the reason is, it doesn't carry through to Julia.
Matlab's rcond is an optimization based upon the fact that it's an estimate of the condition number for square matrices. In my testing, and given that its help mentions LAPACK's 1-norm estimator, it appears that it uses LAPACK's dgecon.f. In fact, this is exactly what Julia does when you ask for the condition number of a square matrix with the 1- or Inf-norm.
So you can simply define
rcond(A::StridedMatrix) = 1/cond(A,1)
You can save Julia from twice-inverting LAPACK's results by manually combining cond(::StridedMatrix) and cond(::LU), but the savings here will almost certainly be immeasurable. Where there is a measurable saving, however, is that you can take norm(A) directly instead of reconstructing a matrix similar to A through its LU factorization.
rcond(A::StridedMatrix) = LAPACK.gecon!('1', lufact(A).factors, norm(A, 1))
In my tests, this behaves identically to Matlab's rcond (2014b), and provides a decent speedup.

Operations on long numbers in R

I aim to use maximum likelihood methods (usually about 10^5 iterations) with a probability distribution that produces very big integers and very small float values that cannot be stored as a numeric nor in a float type.
I thought I would use as.bigq in the gmp package. My issue is that one can only add, subtract, multiply and divide two objects of class/type bigq, while my distribution actually contains logarithm, power, gamma and confluent hypergeometric functions.
What is my best option to deal with this issue?
Should I use another package?
Should I code all these functions for bigq objects?
Coding these functions in R may make them very slow, right?
How would I write the logarithm function using only the +, -, *, / operators? Should I approximate it using a Taylor series expansion?
How would I write the power function using only the +, -, *, / operators when the exponent is not an integer?
How would I write the confluent hypergeometric function (the equivalent of the Hypergeometric1F1Regularized[..] function in Mathematica)?
I could eventually write these functions in C and call them from R, but that sounds like complicated work for not much, especially if I have to use the gmp library in C as well to handle these big numbers.
Most likely all of your problems can be solved with Rmpfr, which allows you to use all of the functions returned by getGroupMembers("Math") with arbitrary accuracy.
Vignette: http://cran.r-project.org/web/packages/Rmpfr/vignettes/Rmpfr-pkg.pdf
Simple example of what it can do:
library(Rmpfr)
test <- mpfr(rnorm(100, mean = 0, sd = .0001), 240)   # 240 bits of precision
Reduce("*", test)   # product of all 100 values, kept at full precision
I don't THINK it has hypergeometric functions though...
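For the specific pieces mentioned in the question (log, non-integer powers, gamma), something along these lines should work; the 240 bits of precision are just carried over from the example above:
library(Rmpfr)
x <- mpfr(2, 240)
log(x)                   # high-precision logarithm
x^mpfr("0.3", 240)       # power with a non-integer exponent
gamma(mpfr(200, 240))    # about 3.9e372, which would overflow to Inf in double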
