I'm running different models of this form:
gamm(H_1_3 ~ s(wcomp.x.cum, bs='cr') + s(wcomp.y.cum, bs='cr') + s(h_AST, bs='cr'),
     na.action=na.omit, data=lag4_1DAY, method='REML', weights=vf)
R doesn't throw an error (i.e. I do get an output), but I get a warning like this one:
Warning message:
In logLik.reStruct(object, conLin):
Singular precision matrix in level -3, block 1
What does it mean?
Is it a problem, or can I live with it?
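For context, gamm() fits the smooths through lme(), so the warning refers to one of the random-effect blocks used internally to represent them; inspecting the fitted object is a reasonable first step. A minimal diagnostic sketch, assuming the call above is assigned to fit (lag4_1DAY and vf come from the question):
library(mgcv)
fit <- gamm(H_1_3 ~ s(wcomp.x.cum, bs='cr') + s(wcomp.y.cum, bs='cr') + s(h_AST, bs='cr'),
            na.action=na.omit, data=lag4_1DAY, method='REML', weights=vf)
summary(fit$gam)          # effective degrees of freedom of each smooth
nlme::VarCorr(fit$lme)    # variance components of the underlying lme representation
plot(fit$gam, pages = 1)  # do any smooths look essentially flat / shrunk to linear?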
Simply speaking, I have a function f(x, t2) and I want to find the value of x that maximizes the integral of f(x, t2) with respect to t2. I chose the PSO algorithm to do the optimization. The executable code is as follows:
library(pso)

# Problem constants and the integration region for t2
xl <- 0; xu <- 2000; n <- 1; t2l <- 100; t2u <- 2000; t1 <- 1

# Model function and weight function
g <- function(x, t2) t1 * x / (t2 + x)
h <- function(z) 1 / z^n

# Gradient of g with respect to the parameters (t1, t2)
gdot <- function(x, t2) {
  c(x / (t2 + x), -t1 * x / (t2 + x)^2)
}

# Log-determinant criterion for design points dp with weights dw at a given t2
logdetHinv <- function(dp, dw, t2) {
  gmat <- mapply(function(x) gdot(x, t2), dp)
  D0 <- gmat %*% diag(dw) %*% t(gmat)
  D1 <- gmat %*% diag(1 / h(g(dp, t2))) %*% diag(dw) %*% t(gmat)
  2 * log(det(D1)) - log(det(D0))
}

# Objective: minus the average of logdetHinv over t2 (psoptim minimises),
# with a penalty added when the second weight is not positive
obj <- function(x) {
  dp <- x[1:2]; dw <- c(x[3], 1 - x[3])
  fitness_value <- -integrate(
    Vectorize(function(t2) logdetHinv(dp, dw, t2) * 1 / (t2u - t2l)),
    t2l, t2u)$value
  return(ifelse(dw[2] > 0, fitness_value, fitness_value + 1e3))
}

x <- psoptim(rep(1, 3), fn = obj, lower = c(rep(xl, 2), 0.1),
             upper = c(rep(xu, 2), 0.9))$par
x
Because the global optimization involves a random procedure, it sometimes reports the correct result:
> x
[1] 2000.0000 754.4146 0.5000
and at other times it reports an error:
Error in integrate(Vectorize(function(t2) logdetHinv(dp, dw, t2) * 1/(t2u - :
non-finite function value
In addition: There were 11 warnings (use warnings() to see them)
> warnings()
Warning messages:
1: In log(det(D1)) : NaNs produced
2: In log(det(D0)) : NaNs produced
3: In log(det(D1)) : NaNs produced
4: In log(det(D0)) : NaNs produced
I suppose the algorithm tries to take the log of some negative values in logdetHinv, which produces NaN with a warning (not yet an error) and eventually causes the error in integrate.
I want to avoid such values, maybe with tryCatch: if a warning occurs in logdetHinv, it should return a very small value instead of NaN, so it does not cause an error in integrate, and psoptim is unlikely to choose such values when maximizing the objective function (i.e. minimizing -integrate(logdetHinv)). I am not familiar with tryCatch in such a complex situation. Where should I put the tryCatch? Thanks.
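What I have in mind is something like the sketch below (safe_logdetHinv is just a hypothetical wrapper name and -1e6 an arbitrary penalty value), but I am not sure this is the right way to structure it:
# Hypothetical wrapper: catch the "NaNs produced" warning (or any error),
# then replace any non-finite value with a large negative penalty so that
# psoptim avoids such points when minimising -integrate(logdetHinv).
safe_logdetHinv <- function(dp, dw, t2) {
  val <- tryCatch(
    logdetHinv(dp, dw, t2),
    warning = function(w) NaN,
    error   = function(e) NaN
  )
  if (!is.finite(val)) val <- -1e6
  val
}

obj <- function(x) {
  dp <- x[1:2]; dw <- c(x[3], 1 - x[3])
  fitness_value <- -integrate(
    Vectorize(function(t2) safe_logdetHinv(dp, dw, t2) * 1 / (t2u - t2l)),
    t2l, t2u)$value
  ifelse(dw[2] > 0, fitness_value, fitness_value + 1e3)
}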
Moreover, I would like to know whether there are debugging techniques in R that let me see which values (D0/D1) cause the error in this case. I guess it is a negative value inside log, but that should not happen, since the argument of the log is the determinant of a positive definite matrix. In traceback mode, in the browser, if I type D0 the object 'D0' is not found.
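For reference, one base-R way to stop at the first offending value is to promote warnings to errors and open the frame browser, from which D0 and D1 can be inspected inside logdetHinv's own frame (traceback() alone does not give access to that frame); a minimal sketch:
options(warn = 2, error = recover)  # warnings become errors; errors open recover()
x <- psoptim(rep(1, 3), fn = obj, lower = c(rep(xl, 2), 0.1),
             upper = c(rep(xu, 2), 0.9))$par
# At the "Enter a frame number" prompt, select the logdetHinv frame, then inspect
#   det(D0); det(D1); dp; dw; t2
# Alternatively, debugonce(logdetHinv) steps through the next call interactively.
options(warn = 0, error = NULL)     # restore the defaults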
In this case, I would not use tryCatch, which is usually more appropriate in testing than in your main code. Why don't you simply test the determinants in your function? Something like this should work:
logdetHinv <- function(dp, dw, t2) {
  gmat <- mapply(function(x) gdot(x, t2), dp)
  D0 <- gmat %*% diag(dw) %*% t(gmat)
  D1 <- gmat %*% diag(1 / h(g(dp, t2))) %*% diag(dw) %*% t(gmat)
  # Floor the determinants so log() never sees a non-positive value
  detD1 <- max(0.01, det(D1))
  detD0 <- max(0.01, det(D0))
  2 * log(detD1) - log(detD0)
}
I do not have a reproducible example here, but I am hoping the following error is diagnostic of an obvious problem.
Using miceadds, I am running correlations on data that have been imputed with MICE. I receive the following error when a continuous variable (column 6, ranging from -0.5 to 1.4) is added to the model:
micombine.cor(imp, variables=c(3:4:5:6))
The above results in the following error:
Error in stats::cor(dat_ii, dat_jj, method = method, use =
"pairwise.complete.obs") :
'y' must be numeric
In addition: Warning messages:
1: In 3:4:5 : numerical expression has 2 elements: only the first used
2: In 3:4:5:6 :
numerical expression has 3 elements: only the first used
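For reference, the two warnings come from the colon operator itself: 3:4:5:6 is parsed as ((3:4):5):6, and : uses only the first element of each vector operand. Writing the index vector explicitly avoids those warnings; the 'y' must be numeric error then usually points to a non-numeric column among the selected variables. A minimal sketch, assuming imp is the mids object from the question:
# c(3, 4, 5, 6) or 3:6 selects columns 3 to 6 without the colon warnings
micombine.cor(imp, variables = c(3, 4, 5, 6))
# Check that every selected column is numeric in the imputed data, e.g.:
str(mice::complete(imp, 1)[, c(3, 4, 5, 6)])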
I have been trying to find the glasso matrix for a covariance matrix input:
SP_glasso_matrix = glasso(SP_covar_matrix, rho = 0)
The warning message returned is:
Warning message: In glasso(SP_covar_matrix, rho = 0) : With rho=0,
there may be convergence problems if the input matrix is not of full
rank
Is there something wrong with my covariance matrix? What is rho and how do I set it?
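For context, rho is the lasso (L1) penalty applied to the entries of the estimated inverse covariance matrix; rho = 0 means no regularisation, which the package warns may not converge when the input matrix is not of full rank. A minimal sketch with a small positive penalty, reusing the matrix name from the question:
library(glasso)
SP_glasso_fit <- glasso(SP_covar_matrix, rho = 0.01)  # small positive penalty
SP_glasso_fit$w           # regularised covariance estimate
SP_glasso_fit$wi          # estimated inverse covariance (precision) matrix
qr(SP_covar_matrix)$rank  # quick full-rank check on the input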
I'm trying to run a factor analysis on a set of 80 dichotomous variables (1440 cases) using the hetcor function from the polycor package and the instructions I found here: http://researchsupport.unt.edu/class/Jon/Benchmarks/BinaryFA_L_JDS_Sep2014.pdf
Sadly, after I select just the variables of interest from the rest of my dataset and run the factor analysis on them, I consistently get the following error and warnings:
Error in optim(0, f, control = control, hessian = TRUE, method = "BFGS") :
non-finite finite-difference value [1]
In addition: Warning messages:
1: In log(P) : NaNs produced
2: In log(P) : NaNs produced
This happens at the step described in the PDF above, with the command:
testMat <- hetcor(data)$cor
No idea what this means or how to proceed... Your thoughts are appreciated. Thank you!
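For reference, hetcor() estimates the correlations by maximum likelihood, and log(P) NaNs together with the non-finite finite-difference error are often associated with (near-)constant items or item pairs whose 2x2 table has an empty cell. A minimal sketch of checks one might run first (data is the 80-item data frame from the question):
# Smallest category count per item: very small values flag near-constant items
sapply(data, function(v) min(table(v)))
# Spot-check a pair of items for empty cells in their 2x2 table
table(data[[1]], data[[2]])
# hetcor() treats factors as ordinal, so 0/1-coded items should be factors
data[] <- lapply(data, factor)
testMat <- hetcor(data)$cor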
I'm trying to reproduce the following example from David Ruppert's "Statistics and Data Analysis for Financial Engineering", which fits a Student's t-distribution to the historical risk-free rate:
library(MASS)
data(Capm, package = "Ecdat")
x <- Capm$rf
fitt <- fitdistr(x,"t", start = list(m=mean(x),s=sd(x)), df=3)
as.numeric(fitt$estimate)
0.437310595161651 0.152205764779349
The output is accompanied by the following warning, repeated several times:
Warning message:
In log(s) : NaNs produced
It appears from R's help file that MASS::fitdistr uses maximum likelihood to find the optimal parameters. However, when I do the optimization manually (same book), everything goes smoothly and there are no warnings:
library(fGarch)
loglik_t <- function(beta) {
  sum(-dt((x - beta[1]) / beta[2], beta[3], log = TRUE) + log(beta[2]))
}
start <- c(mean(x), sd(x), 5)
lower <- c(-1, 0.001, 1)
fit_t <- optim(start, loglik_t, hessian = TRUE, method = "L-BFGS-B", lower = lower)
fit_t$par
0.44232633269102 0.163306955396773 4.12343777572566
The fitted parameters are within acceptable standard errors, and, in addition to the mean and sd, I also get df.
Can somebody please advise me:
Why does MASS::fitdistr produce warnings whereas the manual optimization via optim succeeds without any?
Why is there no df in the MASS::fitdistr output?
Is there a way to run MASS::fitdistr on this data without warnings and get df?
Disclaimer:
a similar question was asked a couple of times without an answer here and here.
You are not passing the lower argument to fitdistr, which lets it search over both the positive and negative domain (so s can go negative inside log(s)). By passing the lower argument to the function
fitt <- fitdistr(x, "t", start = list(m = mean(x), s = sd(x)), df = 3, lower = c(-1, 0.001))
you get no NaNs, just as in your manual optimisation.
EDIT:
fitt <- fitdistr(x, "t", start = list(m = mean(x), s = sd(x), df = 3), lower = c(-1, 0.001, 1))
returns a non-integer degrees-of-freedom estimate. However, I guess its rounded value, round(fitt$estimate['df'], 0), can be used as the fitted degrees-of-freedom parameter.
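For example, a short usage sketch reusing x from the question:
library(MASS)
fitt <- fitdistr(x, "t", start = list(m = mean(x), s = sd(x), df = 3),
                 lower = c(-1, 0.001, 1))
fitt$estimate                   # m, s and a non-integer df estimate
round(fitt$estimate['df'], 0)   # rounded df, if an integer value is wanted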