I have checked previous questions but couldn't find an answer. Here is my function:
columnmean <- function(y) {
  n <- ncol(y)
  means <- numeric(n)
  for (i in 1:n) {
    means[i] <- mean(y[, i])
  }
  means
}
I simply cannot understand the error, even though the code seems right. Also, I get a dimension error if I put the value of n directly into this line:
means[i]<-mean(y[,i])
Here is a reproduction of the error:
columnmean <- function(y) {
  n <- ncol(y)
  means <- numeric(n)
  for (i in 1:n) {
    means[i] <- mean(y[, i])
  }
  means
}
columnmean(1:10)
If y is a vector, ncol(y) returns NULL, and the calculation that follows in your function then raises the error.
colMeans(1:10) will also cause an error (a different one, because of better internal checking of the argument).
So your code is correct for two-dimensional data, e.g.:
columnmean(BOD)
# [1] 3.666667 14.833333
The error depends on y: if y has only one dimension, i.e. it is a vector, you get the error.
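If you want the function itself to tolerate a plain vector, one defensive sketch (not from the original answer) is to coerce the input to a matrix first, so a vector becomes a one-column matrix:
columnmean <- function(y) {
  y <- as.matrix(y)        # a vector becomes a one-column matrix
  n <- ncol(y)
  means <- numeric(n)
  for (i in 1:n) {
    means[i] <- mean(y[, i])
  }
  means
}
columnmean(1:10)  # [1] 5.5
columnmean(BOD)   # [1]  3.666667 14.833333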
There are similar questions on StackOverflow; however, I am failing to find a solution identical (or even remotely similar) to my issue. I am running a simulation in R as follows:
iterSim <- 1000
for (i in 1:iterSim) {
  # Generate some random data
  Z <- NormalDGP(n = T, beta = coef_min, theta = Theta, rho = Rho, ...)
  y <- Z[1:(length(Z[, 1]) - 1), 1]
  x <- Z[2:length(Z[, 1]), 2]
  # Conduct the following tests; however, Dec_POS_Dep sometimes gives an error
  Dec_POS_Dep <- POS_Dep(y, x, simul = TRUE, trueBeta = coef_min, ...)
  Dec_POS_Fix <- POS_Fix(y, x, simul = TRUE, trueBeta = coef_min, ...)
  Dec_CD_95 <- CD_95(y, x, simul = TRUE)
}
where for each iteration i random numbers are generated and three tests are run, i.e. Dec_POS_Dep, Dec_POS_Fix and Dec_CD_95. Unfortunately, Dec_POS_Dep sometimes gives an error during the simulation and the whole thing terminates. I am not looking for the loop to skip an iteration when an error occurs (as per many suggestions on StackOverflow); instead, I would like that iteration to be repeated. E.g. if the code is on the 265th iteration and Dec_POS_Dep gives an error, I want it to take several more shots at the 265th iteration. Any solution to this would be very much appreciated.
Two things stand out as broken here:
1. Use of try (as MartinGal suggested) or tryCatch will allow things to continue. As far as re-running that iteration goes, you'll need to keep track of the failed runs somehow and run them yourself; there is no notion of telling R to repeat a for-loop iteration.
2. You are discarding data on each iteration: Dec_CD_95 is overwritten each time. Perhaps you mean to keep things around?
Here's a suggestion:
iterSim <- 1000
out <- list()
while (length(out) < iterSim) {
  try({
    # Generate some random data
    Z <- NormalDGP(n = T, beta = coef_min, theta = Theta, rho = Rho, ...)
    y <- Z[1:(length(Z[, 1]) - 1), 1]
    x <- Z[2:length(Z[, 1]), 2]
    # Conduct the tests; Dec_POS_Dep sometimes gives an error
    Dec_POS_Dep <- POS_Dep(y, x, simul = TRUE, trueBeta = coef_min, ...)
    Dec_POS_Fix <- POS_Fix(y, x, simul = TRUE, trueBeta = coef_min, ...)
    Dec_CD_95 <- CD_95(y, x, simul = TRUE)
    # keep all three results for this iteration as a single element of out,
    # so that length(out) counts completed iterations
    out[[length(out) + 1]] <- list(Dec_POS_Dep, Dec_POS_Fix, Dec_CD_95)
  }, silent = TRUE)
}
This is a little sloppy, admittedly, but it should always end up with 1000 iterations of your simulation.
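If you do want a for loop that literally retries the failing iteration in place, here is a hedged sketch using repeat plus tryCatch; the cap of 10 attempts per iteration is my own choice, and the ... stand for the arguments elided in the question:
iterSim <- 1000
results <- vector("list", iterSim)
for (i in seq_len(iterSim)) {
  attempt <- 1
  repeat {
    res <- tryCatch({
      Z <- NormalDGP(n = T, beta = coef_min, theta = Theta, rho = Rho, ...)
      y <- Z[1:(length(Z[, 1]) - 1), 1]
      x <- Z[2:length(Z[, 1]), 2]
      list(Dec_POS_Dep = POS_Dep(y, x, simul = TRUE, trueBeta = coef_min, ...),
           Dec_POS_Fix = POS_Fix(y, x, simul = TRUE, trueBeta = coef_min, ...),
           Dec_CD_95   = CD_95(y, x, simul = TRUE))
    }, error = function(e) NULL)
    if (!is.null(res) || attempt >= 10) break  # success, or give up after 10 tries
    attempt <- attempt + 1
  }
  results[[i]] <- res  # NULL here means iteration i failed 10 times in a row
}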
I have this part of my JAGS code. I really can't see where the code goes out of range. Can anyone spot an error that I can't recognize? These are the data sizes:
N = 96
L = c(4,4,4,4,4)
length(media1) = 96
length(weights1) = 4
for (t in 1:N) {
  current_window_x <- ifelse(t <= L[1], media1[1:t], media1[(t - L[1] + 1):t])
  t_in_window <- length(current_window_x)
  new_media1[t] <- ifelse(t <= L[1], inprod(current_window_x, weights1[1:t_in_window]),
                          inprod(current_window_x, weights1))
}
The error is the following (where line 41 corresponds to the first line in the loop):
Error in jags.model(model.file, data = data, inits = init.values, n.chains = n.chains, :
RUNTIME ERROR:
Compilation error on line 41.
Index out of range taking subset of media1
I actually just happened onto the answer to this earlier today for something I was working on. The answer is in this post. The gist is that ifelse() in JAGS is not a control-flow statement; it is a function, and both the TRUE and FALSE branches are evaluated. So, even though you are saying to use media1[1:t] if t <= L[1], the FALSE branch is also being evaluated, which produces the error.
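To see concretely why the FALSE branch blows up even when t <= L[1], you can evaluate the index it builds; this is just plain R arithmetic mirroring the JAGS expression:
t <- 1; L1 <- 4          # first pass of the loop, with L[1] = 4
(t - L1 + 1):t           # the index the FALSE branch constructs
# [1] -2 -1  0  1        # JAGS would try media1[-2:1], which is out of range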
The other problem, once you're able to fix that, is that you're re-defining the parameter current_window_x on every pass of the loop, which will also throw an error. I think the easiest way to deal with the variable window width is to hard-code the first few observations of new_media1 and then calculate the remaining ones in the loop, like this:
new_media1[1] <- media1[1] * weights1[1]
new_media1[2] <- inprod(media1[1:2], weights1[1:2])
new_media1[3] <- inprod(media1[1:3], weights1[1:3])
for (t in 4:N) {
  new_media1[t] <- inprod(media1[(t - L[1] + 1):t], weights1)
}
I am using the function plkhci from the library Bhat to construct profile-likelihood-based confidence intervals, and I got this warning:
Warning message: In dqstep(list(label = x$label, est = btrf(xt, x$low,
x$upp), low = x$low, : oops: unable to find stepsize, use default
when I run
r <- dfp(x,f=nlogf)
Can I ignore this warning, since I can still get the output?
Following is the complete code:
library(Bhat)
beta0 <- -8
beta1 <- 0.03
gamma <- 0.0105
alpha <- 0.05
n <- 100
u <- runif(n)
u
x <- rnorm(n)
x
c <- rexp(100, 1/1515)
c
t1 <- (1/gamma)*log(1 - ((gamma/(exp(beta0 + beta1*x)))*(log(1 - u))))
t1
t <- pmin(t1, c)
t
delta <- 1*(t1 > c)
delta
length(delta)
cp <- length(delta[delta == 1])/n
cp
delta[delta == 1] <- ifelse(rbinom(length(delta[delta == 1]), 1, 0.5), 1, 2)
delta
deltae <- ifelse(delta == 0, 1, 0)
deltar <- ifelse(delta == 1, 1, 0)
deltai <- ifelse(delta == 2, 1, 0)
dat <- data.frame(t, delta, deltae, deltar, deltai, x)
dat$interval[delta == 2] <- as.character(cut(dat$t[delta == 2], breaks = seq(0, 600, 100)))
labs <- cut(dat$t[delta == 2], breaks = seq(0, 600, 100))
dat$lower[delta == 2] <- as.numeric(sub("\\((.+),.*", "\\1", labs))
dat$upper[delta == 2] <- as.numeric(sub("[^,]*,([^]]*)\\]", "\\1", labs))
data0 <- dat[which(dat$delta == 0), ]  # uncensored data
data1 <- dat[which(dat$delta == 1), ]  # right-censored data
data2 <- dat[which(dat$delta == 2), ]  # interval-censored data
nlogf <- function(para) {
  b0 <- para[1]
  b1 <- para[2]
  g <- para[3]
  e <- sum((b0 + b1*data0$x) + g*data0$t + (1/g)*exp(b0 + b1*data0$x)*(1 - exp(g*data0$t)))
  r <- sum((1/g)*exp(b0 + b1*data1$x)*(1 - exp(g*data1$t)))
  i <- sum(log(exp((1/g)*exp(b0 + b1*data2$x)*(1 - exp(g*data2$lower))) - exp((1/g)*exp(b0 + b1*data2$x)*(1 - exp(g*data2$upper)))))
  l <- e + r + i
  return(-l)
}
x <- list(label = c("beta0", "beta1", "gamma"), est = c(-8, 0.03, 0.0105),
          low = c(-10, 0, 0), upp = c(10, 1, 1))
r <- dfp(x, f = nlogf)
x$est <- r$est
plkhci(x, nlogf, "beta0")
plkhci(x, nlogf, "beta1")
plkhci(x, nlogf, "gamma")
I am giving you a super long answer, but it will help you see that you can chase down your own error messages (most of the time; sometimes this way of looking at functions will not work). It is good to see what is happening inside a method when it throws a warning, because sometimes it is fine and sometimes you need to fix your data.
This function is REALLY involved! You can look at it by typing dfp into the R command line (NO TRAILING PARENTHESES) and it will print out the whole function.
17 lines from the end, you will see an assignment:
del <- dqstep(x, f, sens = 0.01)
You can see that this calls the function dqstep, which is reflected in your warning.
You can see this function by typing dqstep into the R command line again. Reading through this function, which is also long but not as tedious, you will find this section of boolean logic:
if (r < 0 | is.na(r) | b == 0) {
  warning("oops: unable to find stepsize, use default")
  cat("problem with ", x$label[i], "\n")
  break
}
This is the culprit: it returns the message you are getting. The line right above it spells out how r is calculated. You are feeding this function your default x from the prior function plus a sensitivity equation (which I assume dfp generates; it is huge and ugly, so I did not untangle all of it). When the previous nested function returns either an r value lower than zero, an r value of NA, or a b value of zero, that message is displayed.
The second error tells you that it was likely b == 0, because b is in the denominator and it returned an infinite value, so NO STEP SIZE IS RETURNED FROM THIS NESTED FUNCTION to the variable del in dfp.
The step is fed into THIS equation:
h <- logit.hessian(x, f, del, dapprox = FALSE, nfcn)
which you can look into by typing logit.hessian into the R command line.
When you do, you see that del is a step size on a logit scale, with a default value of del = rep(0.002, length(x$est))...which the function sets for you because running dqstep returned no value.
So, you now get to decide if using that step size in the calculation of your confidence interval seems right or if there is a problem with your data which needs resolving to make this work better for you.
When I ran it, line by line, I got this message:
Error in if (denom <= 0) { : missing value where TRUE/FALSE needed
at this line of code:
r <- dfp(x,f=nlogf(x))
Which makes me think I was correct.
That is how I chase down issues I have with messages from packages when I get a message like yours.
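If you would rather watch those r and b values yourself instead of reading the source, one option (not from the original answer, and assuming dqstep is visible on the search path, as typing its name suggests) is the base R debugger:
library(Bhat)
debugonce(dqstep)        # pause inside dqstep the next time it is called
r <- dfp(x, f = nlogf)   # step with 'n', inspect with print(r) and print(b), quit with 'Q'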
I'm optimising a simple function in R using 'nloptr' and I'm having difficulty passing arguments into the objective function. Here is the code I'm using:
require("nloptr")
Correl <- matrix(c(1, -.3, -.3, 1), nrow = 2, ncol = 2)
Wghts <- c(.01, .99)
floor <- .035
expret <- c(.05, .02)
pf.return <- function(r, x, threshold = 0) {
  return(r * x - threshold)
}
pf.vol <- function(x, C) {
  return(sqrt(x %*% C %*% x))
}
res <- nloptr(x0 = Wghts, eval_f = pf.vol, eval_g_ineq = pf.return,
              opts = list(algorithm = "NLOPT_GN_ISRES"), x = Wghts, C = Correl)
(I know I'm missing parameters here but I'm trying to highlight a behaviour I don't understand)
Running this gives the following error:
Error in .checkfunargs(eval_f, arglist, "eval_f") :
'x' passed to (...) in 'nloptr' but this is not required in the eval_f function.
Just to see what happens I can try running it without the x argument:
res <- nloptr(x0 = Wghts, eval_f = pf.vol, eval_g_ineq = pf.return,
              opts = list(algorithm = "NLOPT_GN_ISRES"), C = Correl)
which gives the error:
Error in .checkfunargs(eval_g_ineq, arglist, "eval_g_ineq") :
eval_g_ineq requires argument 'x' but this has not been passed to the 'nloptr' function.
So including x throws an error saying it is unnecessary, and omitting it throws an (at least understandable) error saying it has not been passed.
OK, for posterity:
I rewrote the functions so that they had the same set of arguments, in the same order.
I also omitted the x=Wghts bit, as that is the parameter I'm trying to search over.
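Here is a hedged sketch of what that rewrite might look like; the particular inequality (portfolio return at least the floor, in nloptr's g(x) <= 0 convention), the bounds, and the maxeval setting are my assumptions, not taken from the original post:
require("nloptr")
Correl <- matrix(c(1, -.3, -.3, 1), nrow = 2, ncol = 2)
Wghts  <- c(.01, .99)
floor  <- .035
expret <- c(.05, .02)
# Both user functions take the same arguments, in the same order;
# everything except x is supplied to nloptr() once as a named extra argument.
pf.vol <- function(x, C, r, threshold) {
  as.numeric(sqrt(x %*% C %*% x))
}
pf.return <- function(x, C, r, threshold) {
  threshold - sum(r * x)   # assumed constraint: portfolio return >= threshold
}
res <- nloptr(x0 = Wghts, eval_f = pf.vol, eval_g_ineq = pf.return,
              lb = c(0, 0), ub = c(1, 1),
              opts = list(algorithm = "NLOPT_GN_ISRES", maxeval = 1000),
              C = Correl, r = expret, threshold = floor)
res$solution
The key point is that the extra arguments C, r and threshold are forwarded to every user-supplied function, so all of them must list those arguments, and x is never passed explicitly because it is the variable being optimised.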
I'm just learning to create functions in R, so I'm trying to make a function that graphs residual lines for a linear regression. I've already tried it and the code works outside of the function, but once I put it all into a function I get the 'x' and 'y' lengths differ error.
Here is my function:
reslines <- function(x, y) {
  abline(lm(y ~ x))
  for (k in 1:length(y)) lines(c(x[k], x[k]), c(y[k], predict(lm(y ~ x))))
}
The traceback shows that the error occurs here:
6 stop("'x' and 'y' lengths differ")
5 xy.coords(x, y)
4 plot.xy(xy.coords(x, y), type = type, ...)
3 lines.default(c(x[k], x[k]), c(y[k], predict(lm(y ~ x))))
2 lines(c(x[k], x[k]), c(y[k], predict(lm(y ~ x))))
1 reslines(a, b)
I've checked the lengths of each data set I've tried using the length() function, and they all match, so something is happening inside the function which appears to change the length of 'x' or 'y' or both.
Can anyone tell me what the error is and how to fix it? Thanks.
I think I fixed it; it was not super easy. The main problem was in your predict() call, where you used the full y and x instead of y[k] and x[k]. But there was a little bit more:
reslines <- function(x, y) {
  plot(y ~ x)
  abline(lm(y ~ x))
  lm.xy <- lm(y ~ x)
  for (k in 1:length(y)) {
    lines(c(x[k], x[k]), c(y[k], predict(lm.xy, data.frame(x = x[k], y = y[k]))))
  }
}
Now a test
set.seed(123)
reslines(rnorm(10), rnorm(10))
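Not part of the original answer, but an equivalent and slightly simpler sketch fits the model once and draws the residual lines with segments(), avoiding predict() inside a loop entirely:
reslines2 <- function(x, y) {
  fit <- lm(y ~ x)                # fit the model once
  plot(y ~ x)
  abline(fit)
  segments(x, y, x, fitted(fit))  # one vertical line per observation
}
set.seed(123)
reslines2(rnorm(10), rnorm(10))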