I was writing this code to fit a skew-normal distribution to my data. Since I got a crazy value for alpha (alpha = 183), I wanted to choose a starting point and see if things go better. This is my code:
my.mle <- selm(Y2 ~ 1, start = list(xi = 1, omega = 1, alpha = 0))
but I get this error
Error in abs(alpha) : non-numeric argument to mathematical function
what's wrong?
If you choose alpha = 0, your Fisher information matrix becomes singular, and so your maximization doesn't work.
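A quick way to see why alpha = 0 is a degenerate point: writing the standard skew-normal density directly in base R (a sketch of the SN(0, 1, alpha) density, not the sn package's dsn()), the alpha = 0 case collapses exactly to the normal density, so the model loses its skewness information there:

```r
# Standard skew-normal density: f(x) = 2 * phi(x) * Phi(alpha * x)
# (a base-R sketch, not the sn package's dsn())
dsn_sketch <- function(x, alpha) 2 * dnorm(x) * pnorm(alpha * x)

x <- seq(-3, 3, by = 0.5)

# At alpha = 0, pnorm(0 * x) is identically 1/2, so the
# density reduces exactly to the normal density dnorm(x):
all.equal(dsn_sketch(x, alpha = 0), dnorm(x))  # TRUE
```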
I've got code that works with the data set, but I found out that it doesn't work with the ln(x) function. The data set can be found here.
LY <- ln(Apple$Close - Apple$Open)
Warning in log(x) : NaNs produced
Could you please help me to fix this problem?
Since stocks can go down as well as up (unfortunately), Close can be less than Open, so Close - Open can be negative. It just doesn't make sense to take the natural log of a negative number; it's like dividing by zero, or more precisely like taking the square root of a negative number.
Actually, you can take the logarithm of a complex number with a negative real part:
log(as.complex(-1))
## [1] 0+3.141593i
... but "i times pi" is probably not a very useful result for further data analysis ...
(In R, log() takes the natural logarithm. While the SciViews package provides ln() as a synonym, you might as well just get used to using log(); this is the convention across most programming languages.)
Depending on what you're trying to do, the logarithm of the close/open ratio can be a useful value (log(Close/Open)): it is negative when Close < Open and positive when Close > Open. As @jpiversen points out, this is called the logarithmic return; as @KarelZe points out, log(Close/Open) is mathematically equivalent to log(Close) - log(Open) (which might be what your professor wanted ...?)
Are you looking for logarithmic return? In that case the formula would be:
log(Apple$Close / Apple$Open)
Since A / B for two positive values is always positive, this will not produce NaNs.
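A toy check with made-up prices (the numbers below are illustrative, not from the Apple data):

```r
# Hypothetical prices: an up day, a down day, and a flat day
Open  <- c(100, 100, 100)
Close <- c(105,  95, 100)

log_return <- log(Close / Open)
log_return   # positive, negative, zero -- but never NaN

# Mathematically identical to the difference of logs:
all.equal(log_return, log(Close) - log(Open))  # TRUE
```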
I have tried log, square-root and arcsine transformations on my data and nothing worked.
I tried to use boxcox and I got the error response variable must be positive.
library(MASS)   # for boxcox()
md <- lm(Score1 ~ Location1 + Site1 + Trial1 + Stage1)
summary(md)
plot(md, which = 1)
bc <- boxcox(md, plotit = TRUE, lambda = seq(0.5, 1.5, by = 0.1))
This is what I ran in R when I got the error message.
Any idea on how I can fix my code?
The Box-Cox transformation is defined as BC(y) = (y^lambda - 1)/lambda (and as log(y) for lambda == 0). This transformation is not generally well-defined for negative y values, because it requires raising negative values to a power, which generates complex values in most cases.
The car package provides similar transformations that allow negative input, specifically the Yeo-Johnson transformation and an adjusted version of the Box-Cox transformation; these are available as car::yjPower() and car::bcnPower() respectively (see the documentation for more details).
(Now that you know the names of some alternatives, you can also search for other R functions/packages that provide them.)
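For concreteness, here is the Yeo-Johnson formula written out as a base-R sketch (car::yjPower() is the packaged, better-tested version); unlike Box-Cox, it happily accepts negative input:

```r
# Yeo-Johnson transformation, written out in base R as a sketch.
# For y >= 0 it matches Box-Cox applied to y + 1; for y < 0 it uses
# a mirrored power with exponent 2 - lambda, so negatives are fine.
yeo_johnson <- function(y, lambda) {
  out <- numeric(length(y))
  pos <- y >= 0
  out[pos] <- if (abs(lambda) > 1e-8) {
    ((y[pos] + 1)^lambda - 1) / lambda
  } else {
    log(y[pos] + 1)
  }
  out[!pos] <- if (abs(lambda - 2) > 1e-8) {
    -((-y[!pos] + 1)^(2 - lambda) - 1) / (2 - lambda)
  } else {
    -log(-y[!pos] + 1)
  }
  out
}

y <- c(-3, -0.5, 0, 0.5, 3)     # negative values are fine here
yeo_johnson(y, lambda = 0.5)    # no NaNs, unlike Box-Cox on negative y
```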
I want to program the maximum likelihood estimation of a gamma distribution in R; so far I have done the following:
library(stats4)
x<-scan("http://www.cmc.edu/pages/faculty/MONeill/Math152/Handouts/gamma-arrivals.txt")
loglike2 <- function(LL) {
  alpha <- LL$a
  beta  <- LL$b
  (alpha - 1) * sum(log(x)) - n * alpha * log(beta) - n * lgamma(alpha)
}
mle(loglike2,start=list(a=0.5,b=0.5))
but when I run it, the following message appears:
Error in mle(loglike2, start = list(a = 0.5, b = 0.5)) :
some named arguments in 'start' are not arguments to the supplied log-likelihood
What am I doing wrong?
From the error message it sounds like mle needs to be able to see the variable names listed in start= in the function call itself.
loglike2 <- function(a, b) {
  alpha <- a
  beta  <- b
  (alpha - 1) * sum(log(x)) - n * alpha * log(beta) - n * lgamma(alpha)
}
mle(loglike2,start=list(a=0.5,b=0.5))
If that doesn't work you should post a reproducible example with all variables defined and also explicitly indicate which package the mle function is coming from.
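For what it's worth, here is a self-contained variant with simulated data (the gamma parameters and start values below are illustrative, not from the linked data set). Two things worth noting: stats4::mle() minimizes its first argument, so you must hand it the negative log-likelihood ("minuslogl"), and the names in start= must match the function's formal arguments:

```r
library(stats4)

set.seed(1)
x <- rgamma(2000, shape = 2, rate = 0.5)  # simulated stand-in for the real data

# mle()'s first argument is the NEGATIVE log-likelihood,
# and the names in start= must match the formal arguments.
nll <- function(shape, rate) -sum(dgamma(x, shape = shape, rate = rate, log = TRUE))

fit <- mle(nll, start = list(shape = 1, rate = 1),
           method = "L-BFGS-B", lower = c(1e-8, 1e-8))
coef(fit)   # should be close to the true values (2, 0.5)
```

Using dgamma(..., log = TRUE) sidesteps writing out the log-likelihood by hand (and the undefined n above); the L-BFGS-B bounds keep the optimizer away from non-positive parameter values.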
The error message is unfortunately cryptic: it indicates missing values, owing to the fact that alpha and beta have to be positive while mle optimizes over the real numbers. Hence, you need to transform the parameters over which the function is being optimized, like so:
library(stats4)
x<-scan("http://www.cmc.edu/pages/faculty/MONeill/Math152/Handouts/gamma-arrivals.txt")
loglike<-function(alpha,beta){
(alpha-1)*sum(log(x))-n*alpha*log(beta)-n*lgamma(alpha)
}
fit <- mle(function(alpha, beta)
             # transform the parameters so they are positive
             loglike(exp(alpha), exp(beta)),
           start = list(alpha = log(.5), beta = log(.5)))
# of course you have to exponentiate the estimates too
exp(coef(fit))
Note that the error now is that you are using n in loglike(), which you have not defined. If you define n, you then get the error Lapack routine dgesv: system is exactly singular: U[1,1] = 0, which is caused either by a poor guess for the start values of alpha and beta or (more likely) by loglike() not having a minimum. (I think your deleted post from last night had a slightly different formula which I was able to get working, but I wasn't able to respond because the post was deleted...)
FYI, if you want to inspect the alpha and beta values that cause the errors, you can use the scoping assignment operator <<- to write the most recently used parameters into the environment in which loglike() is defined, as in:
loglike <- function(alpha, beta) {
  g <<- c(alpha, beta)
  (alpha - 1) * sum(log(x)) - n * alpha * log(beta) - n * lgamma(alpha)
}
I am solving a simple optimization problem. The data set has 26 columns and over 3000 rows.
The source code looks like this:
Means <- colMeans(Returns)
Sigma <- cov(Returns)
invSigma1 <- solve(Sigma)
Everything works perfectly, but when I do the same for a shorter period (only 261 rows), the solve function gives the following error:
solve(Sigma)
Error in solve.default(Sigma) :
Lapack routine dgesv: system is exactly singular
It's weird, because when I do the same with some random numbers:
Returns <- matrix(runif(6786, -1, 1), nrow = 261)
Means <- colMeans(Returns)
Sigma <- cov(Returns)
invSigma <- solve(Sigma)
no error occurs at all. Could someone explain to me where the problem could be and how to treat it?
Thank you very much,
Alex
Using solve with a single parameter is a request to invert a matrix. The error message is telling you that your matrix is singular and cannot be inverted.
I guess your code in the second case somewhere uses a singular (i.e. non-invertible) matrix, and the solve function needs to invert it. This has nothing to do with the size but with the fact that some of your vectors are (probably) collinear.
LAPACK is the linear algebra library that R (like nearly everything else) uses underneath solve(); dgesv throws this kind of error when the matrix you pass it is singular.
As an addendum: dgesv performs an LU decomposition which, with your matrix, forces a division by 0. Since that is ill-defined, it throws this error. This happens only when the matrix is singular, or when it is singular on your machine (due to floating-point approximation, a really small number can be treated as 0).
If the matrix you're using contains mostly integers and is not big, I'd suggest checking its determinant. If it's big, then take a look at this link.
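A minimal reproduction of the failure mode described above, using made-up returns with one perfectly collinear (duplicated) column:

```r
set.seed(1)
Returns <- matrix(rnorm(261 * 25), nrow = 261)
Returns <- cbind(Returns, Returns[, 1])   # 26th column duplicates the 1st

Sigma <- cov(Returns)
qr(Sigma)$rank                   # 25 < 26: Sigma is rank-deficient

# solve() now fails, just as in the question:
inherits(try(solve(Sigma), silent = TRUE), "try-error")  # TRUE

# A cheap diagnostic before inverting:
qr(Sigma)$rank == ncol(Sigma)    # FALSE here -- don't call solve()
```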
I can understand your question. The problem is that your matrix is singular: you can see that the first and last columns of your matrix are the same, and a matrix with identical columns cannot be inverted.
Is there a way to calculate the determinant of a complex matrix?
F4<-matrix(c(1,1,1,1,1,1i,-1,-1i,1,-1,1,-1,1,-1i,-1,1i),nrow=4)
det(F4)
Error in determinant.matrix(x, logarithm = TRUE, ...) :
determinant not currently defined for complex matrices
library(Matrix)
determinant(Matrix(F4))
Error in Matrix(F4) :
complex matrices not yet implemented in Matrix package
Error in determinant(Matrix(F4)) :
error in evaluating the argument 'x' in selecting a method for function 'determinant'
If you use prod(eigen(F4)$values), I'd recommend
prod(eigen(F4, only.values = TRUE)$values)
instead.
Note that qr() is the advocated approach if (and only if) you are only interested in the absolute value, or rather the Mod():
prod(abs(Re(diag(qr(x)$qr))))
gives Mod(determinant(x)).
(In X = QR, |det(Q)| = 1 and the diagonal of R is real, in R at least.)
BTW: did you notice the caveat
Often, computing the determinant is not what you should be doing to solve a given problem.
on the help(determinant) page?
If you know that the characteristic polynomial of a matrix A splits into linear factors, then det(A) is the product of the eigenvalues of A, and you can use eigenvalue functions like eigen() to work around your problem. I suspect you'll still want something better, but this might be a start.
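Putting the eigenvalue workaround together for the F4 matrix from the question (the 4-point DFT matrix); over the complex numbers the characteristic polynomial always splits, so the eigenvalue product is the determinant:

```r
# The matrix from the question; matrix() fills by column,
# so each line below is one column of F4.
F4 <- matrix(c(1,  1,   1,   1,
               1,  1i, -1,  -1i,
               1, -1,   1,  -1,
               1, -1i, -1,   1i), nrow = 4)

# det(A) = product of the eigenvalues; eigen() accepts complex matrices.
d <- prod(eigen(F4, only.values = TRUE)$values)
d        # approximately 0-16i for this 4-point DFT matrix
Mod(d)   # 16, i.e. 4^(4/2), as expected for an N-point DFT matrix
```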