Error in summary quantreg backsolve - r

When I run a quantile regression in R, using the quantreg package, and then I run summary(quantregObject), I get this error message:
Error in base::backsolve(r, x, k = k, upper.tri = upper.tri, transpose = transpose, : singular matrix in 'backsolve'. First zero in diagonal [1]
Any suggestions on how I could fix this problem?

In short, try:
summary(quantregObject, se = "iid")
which puts a strong assumption on your residuals. Or, if you need accuracy, use a bootstrap to get the standard errors:
summary(quantregObject, se = "boot")
If you call summary() on an object returned by quantreg::rq,
summary(quantregObject)
this will call summary.rq.
From ?summary.rq you can see that there are several options for computing the standard errors (se).
Depending on the sample size (the cutoff is around N = 1000), the default is se = "rank" for smaller samples and se = "nid" for larger ones.
"nid" computes a more involved local (sparsity-based) estimate, and along the way it can produce a singular matrix (a zero on the diagonal) which backsolve cannot handle.

Related

Is there a difference between summary() and stargazer() in calculating standard errors?

Recently I used quantreg::rq() to estimate a quantile regression. It returns the following error when I call summary():
Error in base::backsolve(r, x, k = k, upper.tri = upper.tri, transpose = transpose, : singular matrix in 'backsolve'. First zero in diagonal
I checked this question and found the solution, which is to set the se method to "iid": summary(df, se="iid"), and it worked. But if I use stargazer() and set stargazer(df, rq.se="iid"), it returns the same error message:
Error in base::backsolve(r, x, k = k, upper.tri = upper.tri, transpose = transpose, :
singular matrix in 'backsolve'. First zero in diagonal
So my question is: is there a difference between summary() and stargazer() in how they calculate standard errors?
Here is the code and the URL to test.RData:
library(quantreg)
library(stargazer)
df <- rq(logincome ~ ., data=rfmale1995,tau=c(0.2))
summary(df,se="iid")
stargazer(df,rq.se="iid")
Edit: I have checked stargazer's internal code. rq.se is nothing more than a direct pass-through to summary(model, se = ...). This means there should be no difference between summary() and stargazer() in how they calculate standard errors, which makes me even more confused...

Unknown error message when attempting to find MLE in R

I'm trying to find the MLE of distribution whose pdf is specified as 'mixture' in the code. I've provided the code below that gives an error of
"Error in optim(start, f, method = method, hessian = TRUE, ...) :
L-BFGS-B needs finite values of 'fn'"
"claims" is the dataset im using. I tried the same code with just the first two values of "claims" and encountered the same problem, so for a reproducible example the first two values are 1536.77007 and 1946.92409.
The limits on the parameters of the distribution is that 0<.p.<1 and a>0 and b>0, hence the lower and upper bounds in the MLE function. Any help is much appreciated.
#create mixture of two exponential distributions
mixture <- function(x, p, a, b) {
  p * a * exp(-a * x) + (1 - p) * b * exp(-b * x)
}
#find MLE of mixture distribution
library(stats4)  # mle() lives in stats4
LL <- function(p, a, b) {
  X <- mixture(claims, p, a, b)
  -sum(log(X))
}
mle(LL, start = list(p = 0.5, a = 1/100, b = 1/100),
    method = "L-BFGS-B", lower = c(0, 0, 0), upper = c(1, Inf, Inf))
Edit: not really sure why dput() was requested, but anyway:
#first two values of claims put into dput() (the actual values are above)
dput(claims[1:2])
c(307522.103, 195633.5205)
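Incidentally, a quick check (a sketch, assuming the LL and claims definitions above) shows why the optimizer sees non-finite values: with claims this large, exp(-a*x) underflows to zero at the starting values, so the density is numerically 0 and the negative log-likelihood is already infinite.
claims <- c(307522.103, 195633.5205)  # the dput() values above
LL(p = 0.5, a = 1/100, b = 1/100)
# [1] Inf  (log(0) from the underflowed density)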

Error when fitting a beta distribution: the function mle failed to estimate the parameters with error code 100

I'm trying to use the fitdist() function from the fitdistrplus package to fit my data to different distributions. Let's say my data look like:
x <- c(1.300000, 1.220000, 1.160000, 1.300000, 1.380000, 1.240000,
1.150000, 1.180000, 1.350000, 1.290000, 1.150000, 1.240000,
1.150000, 1.120000, 1.260000, 1.120000, 1.460000, 1.310000,
1.270000, 1.260000, 1.270000, 1.180000, 1.290000, 1.120000,
1.310000, 1.120000, 1.220000, 1.160000, 1.460000, 1.410000,
1.250000, 1.200000, 1.180000, 1.830000, 1.670000, 1.130000,
1.150000, 1.170000, 1.190000, 1.380000, 1.160000, 1.120000,
1.280000, 1.180000, 1.170000, 1.410000, 1.550000, 1.170000,
1.298701, 1.123595, 1.098901, 1.123595, 1.110000, 1.420000,
1.360000, 1.290000, 1.230000, 1.270000, 1.190000, 1.180000,
1.298701, 1.136364, 1.098901, 1.123595, 1.316900, 1.281800,
1.239400, 1.216989, 1.785077, 1.250800, 1.370000)
Next, if I run fitdist(x, "gamma") everything is fine, but if I use fitdist(x, "beta") instead I get the following error:
Error in start.arg.default(data10, distr = distname) :
values must be in [0-1] to fit a beta distribution
OK, I'm not a native English speaker, but as far as I understand it this method requires the data to be in the range [0, 1], so I scale it with x_scaled = (x - min(x)) / max(x). This gives me a vector with values in that range that correlates perfectly with the original vector x.
Because x_scaled is of class matrix, I convert it into a numeric vector using as.numeric() and then fit the model with fitdist(x_scale, "beta").
This time I get the following error:
Error in fitdist(x_scale, "beta") :
the function mle failed to estimate the parameters, with the error code 100
After that I did some search engine queries but couldn't find anything useful. Does anybody have an idea of what's going wrong here? Thank you.
Reading the source code, you can find that the default estimation method of fitdist is mle, which calls mledist from the same package; that function constructs a negative log-likelihood for the chosen distribution and uses optim or constrOptim to minimize it numerically. If anything goes wrong in that numerical optimization, you get the error message you saw.
The error occurs because when x_scaled contains 0 or 1 the negative log-likelihood of the beta distribution cannot be evaluated there, so the numerical optimization simply breaks. One dirty trick is to use x_scaled <- (x - min(x) + 0.001) / (max(x) - min(x) + 0.002), so that x_scaled contains neither 0 nor 1, and fitdist will work.
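Putting the trick together (a minimal sketch, reusing the x vector from the question):
library(fitdistrplus)
# shift and stretch slightly so the scaled values stay strictly inside (0, 1)
x_scaled <- (x - min(x) + 0.001) / (max(x) - min(x) + 0.002)
fit <- fitdist(x_scaled, "beta")
summary(fit)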

passing sparse xreg to stlf in R causes optimisation error

I am trying to forecast a time series with a regression on temperature. The residuals show different behaviour at low and high temperatures, so I want to use a piecewise linear approach and learn different coefficients for temperatures above and below 35 degrees.
The data is in a data frame with columns data$x, data$Season and data$Temp.
library(forecast)  # provides stlf() and auto.arima()
#Create data frame
len<-365*3 + 1 +31
x<-rnorm(len,mean=4000000,sd=100000)
Season<-c(rep(3,62),rep(4,91),rep(1,90),rep(2,92),rep(3,92),rep(4,91),rep(1,90),rep(2,92),rep(3,92),rep(4,91),rep(1,91),rep(2,92),rep(3,61))
Temp<-rnorm(len,mean=20,sd=5)
data<-data.frame(x,Season,Temp)
#Create model matrix
season_dummy<-model.matrix(~as.factor(data$Season)+0)
Temp_max=pmax(0,data$Temp-35) # creates 0, or a difference
Temp_restore<-restore_temp_up(Temp_max,data$Temp,35) # restores difference to original value
Temp_season_matrix_max=Temp_restore * season_dummy
#Create time-series and forecast
data_ts<-ts(data$x[1:1000],freq=365,start=c(2009,182))
len_train<-length(data_ts)
xreg1<-Temp_season_matrix_max[1:len_train,]
newxreg1<-Temp_season_matrix_max[(len_train+1):(len_train+30),]
stlf(data_ts,method="arima",h=30,xreg=xreg1,newxreg=newxreg1,s.window="periodic")
> Error in optim(init[mask], armaCSS, method = optim.method, hessian = FALSE, :
non-finite value supplied by optim
Error in auto.arima(x, xreg = xreg, seasonal = FALSE, ...) :
No suitable ARIMA model found
In addition: Warning message:
In auto.arima(x, xreg = xreg, seasonal = FALSE, ...) :
Unable to calculate AIC offset
>
Other threads suggest changing the fitting method from CSS to ML, but I can't pass these parameters through stlf. The help file mentions an optional forecastfunction argument, but there are no examples or real explanation of how to use it.
Note: when I set the threshold temperature to, say, 20 instead of 35, this works OK. I am sure it is because the xreg matrix of temperatures above 35 degrees is sparse (most temperatures are below this value), but I am not sure how to get around it.
(I have included the code for restore_temp_up below; possibly inefficient, but included for completeness.)
restore_temp_up <- function(x, original, k) {
  if (!is.vector(x))
    stop('x must be a vector')
  for (i in 1:length(x)) {
    if (!is.na(x[i])) {
      if (x[i] > 0) {
        x[i] <- x[i] + k
      }
      if (original[i] == k) {
        x[i] <- original[i]  ## if original was exactly k we cannot tell whether the excess was 0, so restore the original
      }
    }
  }
  return(x)
}
Your design matrix is rank deficient, so the regression is singular. To see this:
> eigen(t(xreg1) %*% xreg1)$val
[1] 1321.223 0.000 0.000 0.000
You cannot fit a regression model with a rank deficient design matrix.
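A quick diagnostic (a sketch, not from the original answer): with Temp drawn from N(20, 5), values above 35 are so rare that some season columns of xreg1 contain only zeros, which is what makes the cross-product singular.
qr(xreg1)$rank       # effective rank; less than ncol(xreg1) here
colSums(xreg1 != 0)  # non-zero entries per season column; all-zero columns are the culprit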

Model fitting with nls.lm in R, "Error: unused argument"

I'm trying to use the nls.lm function from the minpack.lm package to fit a non-linear model to some data from a psychophysics experiment.
I've had a search around and can't find much information about the package, so I have essentially copied the format of the example given on the nls.lm help page. Unfortunately my script still fails to run, and R throws this error:
Error in fn(par, ...) :
unused argument (observed = c(0.1429, 0.2857, 0.375, 0.3846, 0.4667, 0.6154))
It appears that the script thinks the data I want to fit the model to is irrelevant, which is definitely wrong.
I'm expecting it to fit the model and produce a value of 0.5403 for the free parameter (w).
Any help is greatly appreciated.
I'm making the transfer from Matlab to R, so apologies if my code looks sloppy.
Here's the script.
install.packages("pracma")
require(pracma)
install.packages("minpack.lm")
require(minpack.lm)
# Residual function, uses parameter w (e.g. .23) to predict accuracy error at a given ratio [e.g. 2:1]
residFun=function(w,n) .5 * erfc( abs(n[,1]-n[,2])/ ((sqrt(2)*w) * sqrt( (n[,1]^2) + (n[,2]^2) ) ) )
# example for residFun
# calculates an error rate of 2.59%
a=matrix(c(2,1),1,byrow=TRUE)
residFun(.23,a)
# Initial guess for parameter to be fitted (w)
parStart=list(w=0.2)
# Recorded accuracies in matrix, 1- gives errors to input into residFun
# i.e. the y-values I want to fit the model
Acc=1-(matrix(c(0.8571,0.7143,0.6250,0.6154,0.5333,0.3846),ncol=6))
# Ratios (converted to proportions) used in testing
# i.e. the points along the x-axis to fit the above data to
Ratios=matrix(c(0.3,0.7,0.4,0.6,0.42,0.58,0.45,0.55,0.47,0.53,0.49,0.51),nrow=6,byrow=TRUE)
# non-linear model fitting, attempting to calculate the value of w using the Levenberg-Marquardt nonlinear least-squares algorithm
output=nls.lm(par=parStart,fn=residFun,observed=Acc,n=Ratios)
# Error message shown after running
# Error in fn(par, ...) :
# unused argument (observed = c(0.1429, 0.2857, 0.375, 0.3846, 0.4667, 0.6154))
The error means you passed a function an argument that it did not expect. ?nls.lm has no argument named observed, so it is passed on (through ...) to the function supplied as fn, in your case residFun. However, residFun doesn't expect this argument either, hence the error. You need to redefine the function like this:
# Residual function, uses parameter w (e.g. .23) to predict accuracy error at a given ratio [e.g. 2:1]
residFun <- function(par, observed, n) {
  w <- par$w
  r <- observed - (.5 * erfc(abs(n[,1] - n[,2]) / ((sqrt(2) * w) * sqrt((n[,1]^2) + (n[,2]^2)))))
  return(r)
}
It gives the following result:
> output = nls.lm(par=parStart,fn=residFun,observed=Acc,n=Ratios)
> output
Nonlinear regression via the Levenberg-Marquardt algorithm
parameter estimates: 0.540285874836135
residual sum-of-squares: 0.02166
reason terminated: Relative error in the sum of squares is at most `ftol'.
Why that happened:
It seems that you were inspired by this example in the documentation:
## residual function
residFun <- function(p, observed, xx) observed - getPred(p,xx)
## starting values for parameters
parStart <- list(a=3,b=-.001, c=1)
## perform fit
nls.out <- nls.lm(par = parStart, fn = residFun, observed = simDNoisy,
                  xx = x, control = nls.lm.control(nprint = 1))
Note that observed is an argument of residFun here.
