Storing data in long or array format in a simulation study (R)

I have a simulation study which I would eventually like to plot the results of using ggplot2. However, this requires the data to be in long format, which I find not very convenient when doing a simulation study which naturally employs a kind of factorial design. My question concerns how to approach this.
Here's a dummy example just to illustrate. Suppose we want to compare the OLS estimator of the slope in a simple linear regression with and without an intercept, for two sample sizes, over R replications. We can store this using:
an R x 2 x 2 array (replications x estimators x sample sizes)
a data frame (tibble) with variables Replication, Sample size, Estimator and Value
Here's the array and data frame in R:
library(tidyverse)

# Settings
R <- 10
est <- c("OLS1", "OLS2")
n <- c(50, 100)

# Initialize array
res <- array(NA,
             dim = c(R, length(est), length(n)),
             dimnames = list(Replication = 1:R,
                             Estimator = est,
                             Sample_size = n))

# Initialize long-format tibble
tibb <- as_tibble(expand.grid(Replication = 1:R, Sample_size = n, Estimator = est)) %>%
  mutate(Value = NA)
To fill these with values, here's the main body of the simulation:
for (i in seq_along(n)) {
  nn <- n[i]
  x <- rnorm(nn)
  for (j in 1:R) {
    y <- 1 * x + rnorm(nn)
    mod1 <- lm(y ~ 0 + x)
    mod2 <- lm(y ~ 1 + x)
    res[j, 1, i] <- mod1$coefficients[1]
    res[j, 2, i] <- mod2$coefficients[2]
    tibb[tibb$Replication == j & tibb$Sample_size == nn & tibb$Estimator == "OLS1", "Value"] <- mod1$coefficients[1]
    tibb[tibb$Replication == j & tibb$Sample_size == nn & tibb$Estimator == "OLS2", "Value"] <- mod2$coefficients[2]
  }
}
Now, tibb is immediately ready for plotting with ggplot2. However, the row selection used to fill it is pretty awkward. On the other hand, filling the array feels natural and intuitive, but it needs more work to be transformed into the appropriate format for plotting.
So how should I best approach this? (Also bearing in mind that real simulations would usually have more dimensions than what I used here.) Are there other, better ways to do this?
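For reference, the extra reshaping step the array approach needs is fairly small: base R's as.data.frame.table() turns a named array straight into long format (a minimal sketch using the res array above):
res_long <- as.data.frame.table(res, responseName = "Value")
head(res_long)  # columns: Replication, Estimator, Sample_size, Value
ggplot(res_long, aes(x = Estimator, y = Value, colour = Sample_size)) +
  geom_boxplot()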

First of all, I suggest reading a good introduction to tidy data.
Keeping in mind that
each column is a variable,
each row is an observation,
you can build up a data frame containing all planned simulations. Define your simulation as a function and apply it to every row of the data frame:
library(dplyr)
library(ggplot2)

# pre-define your simulations
df <- expand.grid(Replication = 1:10, Sample_size = c(50, 100), Estimator = c("OLS1", "OLS2"))

# your simulation in a function:
# OLS1 = slope from a regression without an intercept, OLS2 = slope with an intercept
sim <- function(n, est) {
  x <- rnorm(n)
  y <- 1 * x + rnorm(n)
  fit <- if (est == "OLS1") lm(y ~ 0 + x) else lm(y ~ 1 + x)
  fit$coefficients["x"]
}
# simulate and plot
df %>%
  rowwise() %>%
  mutate(coefs = sim(Sample_size, Estimator)) %>%
  ggplot(aes(x = Replication, y = coefs, colour = as.factor(Sample_size), shape = Estimator)) +
  geom_point()
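The same row-by-row idea can also be written with purrr, which some find easier to extend when the design gains more columns; a minimal sketch (assumes purrr is installed, e.g. as part of the tidyverse):
library(purrr)

df %>%
  mutate(coefs = pmap_dbl(list(Sample_size, as.character(Estimator)), sim)) %>%
  ggplot(aes(x = Replication, y = coefs, colour = as.factor(Sample_size), shape = Estimator)) +
  geom_point()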

Related

update function in R not updating model

I generated some data in R
n <- 1000; p <- 30
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
beta <- c(rep(1, 10), rep(0, 10), rep(-2, 10))
y <- X %*% beta + rnorm(1000)
Next, I want to run a stepwise regression of y on the columns of X, from 1 to 30. First I include only the intercept, then the intercept and column one, then add column two, column three, and so on. I wrote the following code
model <- lm(y ~ 1)
for(i in 1:30){
  model <- update(model, ~ . + X[, i])
  print(model)
}
What I see in the output now is that for each iteration, the regression is of y on an intercept and X[, i], i.e. the i-th column of X, and not the previous columns, even though I'm updating at every step. For example, when i = 4, the model is a regression of y on an intercept and X[, 4], not all of columns 1, 2, 3, 4. Why does this happen?
Try this
model <- lm(y ~ 1)
for(i in 1:30){
  model <- update(model, ~ . + X[, 1:i])
  print(model)
}
The reason your proposed code doesn't work is that R updates the formula before it evaluates i.
The source code for the relevant update method can be viewed by running update.default at the command line. You'll see that after some error checking it runs call$formula <- update(formula(object), formula.), which calls the update.formula() function. update.formula() sees that you want to add the term X[, i] into the formula and does that. But update.formula() doesn't evaluate the value of i at this point; it relies on "lazy evaluation". This can be seen more clearly if we expand out the loop.
form <- y ~ 1
form
#> y ~ 1
i <- 1
form <- update.formula(form, ~. +X[, i])
form
#> y ~ X[, i]
i <- 2
form <- update.formula(form, ~. +X[, i])
form
#> y ~ X[, i]
The formula is being updated with the symbol X[, i] and then simplified to remove the duplicate symbol. This lazy evaluation is useful because it means that I don't need to actually define what X or y are for the above code to run. R trusts that I'll create appropriate objects before I try to use them.
After update() has updated the formula, it eval()'s the updated call. At this time i is evaluated and its current value is used. So in fact, this loop below gives the exact same output as your loop even though it doesn't try to change the formula at all. Each time lm() runs it looks for the current value of i to use.
for(i in 1:30){
  model <- lm(y ~ X[, i])
  print(model)
}
To achieve your desired effect you can build the formula programmatically outside the lm() call, without using update(). Like so:
n <- 1000; p <- 30
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
beta <- c(rep(1, 10), rep(0, 10), rep(-2, 10))
y <- X %*% beta + rnorm(1000)

xnames <- paste0("X", 1:ncol(X))
colnames(X) <- xnames
dat <- data.frame(y, X)

for(i in 1:30){
  form <- as.formula(paste0("y ~ ", paste(xnames[1:i], collapse = "+")))
  model <- lm(form, data = dat)
  print(model)
}
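A small variation on the same idea (not in the original answer): base R's reformulate() builds the formula from the term labels directly, which avoids the paste()/as.formula() step:
for(i in 1:30){
  form <- reformulate(xnames[1:i], response = "y")
  model <- lm(form, data = dat)
  print(model)
}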
EDIT:
After reading this post, https://notstatschat.rbind.io/2022/06/23/getting-strings-into-code-in-base-r/, an alternate way to perform the formula manipulations is to use bquote(). This has the advantage that the model summary contains the correct formula.
model <- lm(y ~ 1, data = dat)
for(i in 1:30){
  model <- eval(bquote(update(model, ~ . + .(as.name(xnames[[i]])))))
  print(model)
}
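To see what bquote() is doing here, evaluating just the bquote() part for, say, i = 3 shows the unevaluated call it builds (output shown as a comment):
i <- 3
bquote(update(model, ~ . + .(as.name(xnames[[i]]))))
#> update(model, ~. + X3)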

How to create the sampling matrices for Sobol sensitivity analysis in R (package "sensitivity")

I would like to perform a Sobol sensitivity analysis in R.
The package "sensitivity" should allow me to do so, but I don't understand how to generate the sampling matrices (X1, X2). I have a model that runs outside of R, with 6 parameters that follow uniform distributions.
In my textbook, the number of model runs is N = (2k+2)*M, with M = 2^b and b in [8, 12] (new sampling method: Wu et al. 2012).
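For example, with k = 6 parameters and b = 8, that gives M = 256 and N = (2*6 + 2)*256 = 3584 model runs.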
My understanding is that I should create two sampling matrices, X1 (M x k) and X2 (M x k), and feed both to the sobol function.
The final sampling matrix x$X then has (k+2)*M rows, because:
X <- rbind(X1, X2)
for (i in 1:k) {
  Xb <- X1
  Xb[, i] <- X2[, i]
  X <- rbind(X, Xb)
}
How should I conduct my sampling to get the right number of runs, (2*k+2)*M?
This script uses the old method; does someone know whether the new method is already implemented in the sensitivity package? Feel free to comment on this procedure.
name <- c("a", "b", "c", "d", "e", "f")
vals <- list(list(var="a", dist="unif", params=list(min=0.1,    max=1.5)),
             list(var="b", dist="unif", params=list(min=-0.3,   max=0.4)),
             list(var="c", dist="unif", params=list(min=-0.3,   max=0.3)),
             list(var="d", dist="unif", params=list(min=0,      max=0.5)),
             list(var="e", dist="unif", params=list(min=2.4E-5, max=2.4E-3)),
             list(var="f", dist="unif", params=list(min=3E-5,   max=3E-3)))
k <- 6
b <- 8
M <- 2^b
n <- 2*M
X1 <- makeMCSample(n, vals, p = 1)
X2 <- makeMCSample(n, vals, p = 2)
x <- sobol2007(model = NULL, X1, X2, nboot = 200)
If I understand correctly, I should provide a y for each row of x$X.
Then I can use the function tell(), which will generate the Sobol' first-order indices as well as the total indices:
tell(x, y)
ggplot(x)
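For completeness, a minimal sketch of producing that y, with my_model() as a stand-in for however the external model is actually run on each row of x$X (in practice one would export x$X, run the model, and read the outputs back in):
my_model <- function(params) sum(params)  # placeholder only, not a real model
y <- apply(x$X, 1, my_model)
tell(x, y)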
Supplemental R function, makeMCSample (SobolR):
makeMCSample <- function(n, vals, p = 1) {
  # Packages to generate quasi-random sequences
  # and rearrange the data
  require(randtoolbox)
  require(plyr)
  # Generate a Sobol' sequence (different seed for each of the two designs)
  if (p == 2) {
    sob <- sobol(n, length(vals), seed = 4321, scrambling = 1)
  } else {
    sob <- sobol(n, length(vals), seed = 1234, scrambling = 1)
  }
  # Fill a matrix with the values
  # inverted from uniform values to
  # distributions of choice
  samp <- matrix(rep(0, n*(length(vals)+1)), nrow = n)
  samp[, 1] <- 1:n
  for (i in 1:length(vals)) {
    l <- vals[[i]]
    dist <- l$dist
    params <- l$params
    fname <- paste("q", dist, sep = "")
    samp[, i+1] <- do.call(fname, c(list(p = sob[, i]), params))
  }
  # Convert matrix to data frame and add labels
  samp <- as.data.frame(samp)
  names(samp) <- c("n", laply(vals, function(l) l$var))
  return(samp)
}
ref: Qiong-Li Wu, Paul-Henry Cournède, Amélie Mathieu, 2012, Efficient computational method for global sensitivity analysis and its application to tree growth modelling

How can I filter() within a function() in R?

I am trying to write a function that runs a linear regression on a subset of my data. I want to run a separate regression for each id, and these regressions should be used to add a new column that gives the residuals of each model. The variable e_hat is the desired outcome I want to create.
# create sample data
x <- rnorm(10, 10, 1)
id <- rep("1", 10)
e <- rnorm(10, 0, 1)
data <- data.frame(cbind(id, x, e))
data$y <- 27 + 1.2*as.numeric(data$x) + as.numeric(data$e)

x <- rnorm(10, 10, 3)
id <- rep("2", 10)
e <- rnorm(10, 0, 2)
data2 <- data.frame(cbind(id, x, e))
data2$y <- 10 + 1.6*as.numeric(data2$x) + as.numeric(data2$e)

data <- rbind(data, data2)
# my code
unex_changes <- function(x, y, z){
  model <- lm(as.numeric(y) ~ as.numeric(x), data = filter(data, id == z))
  data$y - predict(model)
}

data <- mutate(data, e_hat = unex_changes(x, y, id))
However, the filtering approach I used does not work properly because the regression parameters are estimated from the entire dataset. Does someone have another solution to this problem?
The problem with your code is that in id == z, z contains both ids, 1 and 2, so filter(data, id == z) does not actually filter anything and the filtered table is the same as before.
Let me know whether this works for you:
# my code
unex_changes <- function(x, y, z){
  predictions <- c()
  for (i in unique(z)){
    model <- lm(as.numeric(y) ~ as.numeric(x), data = filter(data, id == i))
    predictions <- c(predictions, filter(data, id == i)$y - predict(model))
  }
  return(predictions)
}

data <- mutate(data, e_hat = unex_changes(x, y, id))
data
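For what it's worth, a more idiomatic dplyr sketch of the same idea fits one model per id inside a grouped mutate, so no explicit filtering or looping is needed (resid() extracts the residuals of each within-group fit):
library(dplyr)

data %>%
  group_by(id) %>%
  mutate(e_hat = resid(lm(as.numeric(y) ~ as.numeric(x)))) %>%
  ungroup()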

Plot cumulative value for different series

I have run a short simulation and want to plot the outcome of each simulation run in terms of the running sum over the k bets. For reference, I want to end up with a plot that looks similar to the ones in this article:
https://www.pinnacle.com/en/betting-articles/Betting-Strategy/betting-bankroll-management/VDM2GY6UX3B552BG
The following is the code for the simulation:
## Simulating returns over k bets.
odds <- 1.5
k <- 100
return <- odds - 1
edge <- 0.04
pw <- 1/(odds/(1-edge))
pl <- 1-pw
nsims <- 10000
set.seed(42)
sims <- replicate(nsims, {
  x <- sample(c(-1, return), k, TRUE, prob = c(pl, pw))
})
rownames(sims) <- c(1:k)
colnames(sims) <- c(1:nsims)
If I wasn't being clear in the description let me know.
Okay, so here is how you can achieve the plot of the cumulative value over bets (I set nsims <- 10 so that the plot stays readable).
First I generate the data:
## Simulating returns over k bets.
odds <- 1.5
k <- 100
return <- odds - 1
edge <- 0.04
pw <- 1/(odds/(1-edge))
pl <- 1-pw
nsims <- 10
set.seed(42)
sims <- replicate(nsims, {
  x <- sample(c(-1, return), k, TRUE, prob = c(pl, pw))
})
rownames(sims) <- c(1:k)
colnames(sims) <- c(1:nsims)
Then I create a data frame containing the results of the n simulations (10 here):
df <- as.data.frame(sims)
What we want to plot is the cumulative sum, not the result of a specific bet, so we iterate through the columns (i.e. the simulations) to compute it:
library(dplyr)
library(reshape2)

for (i in colnames(df)){
  df[[i]] <- cumsum(df[[i]])
}
df <- mutate(df, bets = as.numeric(rownames(df)))
output <- melt(df, id.vars = "bets", variable.name = 'simulation')
Now we can plot our data :
ggplot(output, aes(bets,value,group=simulation)) + geom_line(aes(colour = simulation))
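As an aside, the cumulative sums and the reshaping can also be done in one step on the sims matrix itself; a compact sketch, still relying on reshape2's melt():
output <- melt(apply(sims, 2, cumsum), varnames = c("bets", "simulation"))
ggplot(output, aes(bets, value, group = simulation)) +
  geom_line(aes(colour = factor(simulation)))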

Multi-data likelihood function and mle2 function from bbmle package in R

I have written a custom likelihood function that fits a multi-data model integrating mark-recapture and telemetry data (sensu Royle et al. 2013, Methods in Ecology and Evolution). The likelihood function is designed to be flexible in terms of whether and how many covariates are specified for the different linear models in the different likelihood components, which is determined by values supplied as function arguments (i.e., the data matrices "detcovs" and "dencovs" in my code).
The likelihood function works when I supply it directly to optimization functions (e.g., optim or nlm), but it does not play nice with the mle2 function in the bbmle package. My problem is that I continually run into the following error: "some named arguments in 'start' are not arguments to the specified log-likelihood function". This is my first attempt at writing custom likelihood functions, so I'm sure there are general coding conventions of which I'm unaware that would make such tasks much more efficient and amenable to the mle2 function.
Below are my likelihood function, the code creating the starting-value objects, and the code calling the mle2 function. Any advice on how to solve the error and general comments on writing cleaner functions are welcome. Many thanks in advance.
Edit: As requested, I have simplified the likelihood function and provided code to simulate reproducible data to which the model can be fit. Included in the simulation code are 2 custom functions and use of the raster function from the raster package. Hopefully, I have sufficiently simplified everything to enable others to troubleshoot. Again, many thanks for your help!
Jared
Likelihood function:
CSCR.RSF.intlik2.EXAMPLE <- function(alpha0, sigma, alphas=NULL, betas=NULL, n0, yscr=NULL, K=NULL, X=X,
                                     trapcovs=NULL, Gden=NULL, Gdet=NULL, ytel=NULL, stel=NULL,
                                     dencovs=NULL, detcovs=NULL){
  #
  # this version of the code handles a covariate on log(Density). This is starting value 5
  #
  # start = vector of starting values
  # yscr = nind x ntraps encounter matrix
  # K = number of occasions
  # X = trap locations
  # Gden = matrix with grid cell coordinates for density raster
  # Gdet = matrix with grid cell coordinates for RSF raster
  # dencovs = all covariate values for all nGden pixels in density raster
  # trapcovs = covariate value at trap locations
  # detcovs = all covariate values for all nGrsf pixels in RSF raster
  # ytel = nguys x nGdet matrix of telemetry fixes in each nGdet pixels
  # stel = home range center of telemetered individuals, IF you wish to estimate it. Not necessary
  # alphas = starting values for RSF/detfn coefficients excluding sigma and intercept
  # alpha0 = starting values for RSF/detfn intercept
  # sigma = starting value for RSF/detfn sigma
  # betas = starting values for density function coefficients
  # n0 = starting value for number of undetected individuals on log scale
  #
  n0 = exp(n0)
  nGden = nrow(Gden)
  D = e2dist(X, Gden)
  nGdet <- nrow(Gdet)
  alphas = alphas
  loglam = alpha0 - (1/(2*sigma*sigma))*D*D + as.vector(trapcovs%*%alphas) # ztrap recycled over nG
  psi = exp(as.vector(dencovs%*%betas))
  psi = psi/sum(psi)
  probcap = 1 - exp(-exp(loglam))
  #probcap = (exp(theta0)/(1+exp(theta0)))*exp(-theta1*D*D)
  Pm = matrix(NA, nrow=nrow(probcap), ncol=ncol(probcap))
  ymat = yscr
  ymat = rbind(yscr, rep(0, ncol(yscr)))
  lik.marg = rep(NA, nrow(ymat))
  for(i in 1:nrow(ymat)){
    Pm[1:length(Pm)] = (dbinom(rep(ymat[i,], nGden), rep(K, nGden), probcap[1:length(Pm)], log=TRUE))
    lik.cond = exp(colSums(Pm))
    lik.marg[i] = sum(lik.cond*psi)
  }
  nv = c(rep(1, length(lik.marg)-1), n0)
  part1 = lgamma(nrow(yscr)+n0+1) - lgamma(n0+1)
  part2 = sum(nv*log(lik.marg))
  out = -1*(part1 + part2)
  lam = t(exp(a0 - (1/(2*sigma*sigma))*t(D2) + as.vector(detcovs%*%alphas))) # recycle zall over all ytel guys
  # lam is now nGdet x nG!
  denom = rowSums(lam)
  probs = lam/denom # each column is the probs for a guy at column [j]
  tel.loglik = -1*sum(ytel*log(probs))
  out = out + tel.loglik
  out
}
Data simulation code:
library(raster)
library(bbmle)

# matrix of pairwise Euclidean distances between the rows of x and the rows of y
e2dist <- function (x, y){
  i <- sort(rep(1:nrow(y), nrow(x)))
  dvec <- sqrt((x[, 1] - y[i, 1])^2 + (x[, 2] - y[i, 2])^2)
  matrix(dvec, nrow = nrow(x), ncol = nrow(y), byrow = F)
}

# simulate a standardized, spatially correlated covariate on the grid of coordinates R
spcov <- function(R) {
  v <- sqrt(nrow(R))
  D <- as.matrix(dist(R))
  V <- exp(-D/2)
  cov1 <- t(chol(V)) %*% rnorm(nrow(R))
  Rd <- as.data.frame(R)
  colnames(Rd) <- c("x", "y")
  Rd$C <- as.numeric((cov1 - mean(cov1)) / sd(cov1))
  return(Rd)
}
set.seed(1234)

# trap locations
co <- seq(0.3, 0.7, length=5)
X <- cbind(rep(co, each=5),
           rep(co, times=5))

# density covariates on a 10 x 10 grid
B <- 10
co <- seq(0, 1, length=B)
Z <- cbind(rep(co, each=B), rep(co, times=B))
dencovs <- cbind(spcov(Z), spcov(Z)[,3]) # ordered as reading raster image from left to right, bottom to top
dimnames(dencovs)[[2]][3:4] <- c("dencov1","dencov2")
denr.list <- vector("list", 2)
for(i in 1:2){
  denr.list[[i]] <- raster(
    list(x=seq(0,1,length=10),
         y=seq(0,1,length=10),
         z=t(matrix(dencovs[,i+2],10,10,byrow=TRUE)))
  )
}

# detection (RSF) covariates on a 20 x 20 grid
B <- 20
co <- seq(0, 1, length=B)
Z <- cbind(rep(co, each=B), rep(co, times=B))
detcovs <- cbind(spcov(Z), spcov(Z)[,3]) # ordered as reading raster image from left to right, bottom to top
dimnames(detcovs)[[2]][3:4] <- c("detcov1","detcov2")
detr.list <- vector("list", 2)
trapcovs <- matrix(0, nrow(X), 2)
for(i in 1:2){
  detr.list[[i]] <- raster(
    list(x=seq(0,1,length=20),
         y=seq(0,1,length=20),
         z=t(matrix(detcovs[,i+2],20,20,byrow=TRUE)))
  )
  trapcovs[,i] <- extract(detr.list[[i]], X)
}
alpha0 <- -3
sigma <- 0.15
alphas <- c(1,-1)
beta0 <- 3
betas <- c(-1,1)

pixelArea <- (dencovs$y[2] - dencovs$y[1])^2
mu <- exp(beta0 + as.matrix(dencovs[,3:4])%*%betas)*pixelArea
EN <- sum(mu)
N <- rpois(1, EN)
pi <- mu/sum(mu)
s <- dencovs[sample(1:nrow(dencovs), size=N, replace=TRUE, prob=pi), 1:2]

J <- nrow(X)
K <- 10
yc <- d <- p <- matrix(NA, N, J)
D <- e2dist(s, X)
loglam <- t(alpha0 - t((1/(2*sigma*sigma))*D*D) + as.vector(trapcovs%*%alphas))
p <- 1 - exp(-exp(loglam))
for(i in 1:N) {
  for(j in 1:J) {
    yc[i,j] <- rbinom(1, K, p[i,j])
  }
}
detected <- apply(yc>0, 1, any)
yscr <- yc[detected,]

ntel <- 5
nfixes <- 100
poss.tel <- which(s[,1]>0.2 & s[,1]<0.8 & s[,2]>0.2 & s[,2]<0.8)
stel.id <- sample(poss.tel, ntel)
stel <- s[stel.id,]
ytel <- matrix(NA, ntel, nrow(detcovs))
d <- e2dist(stel, detcovs[,1:2])
lam <- t(exp(1 - t((1/(2*sigma*sigma))*d*d) + as.vector(as.matrix(detcovs[,3:4])%*%alphas)))
for(i in 1:ntel){
  ytel[i,] <- rmultinom(1, nfixes, lam[i,]/sum(lam[i,]))
}
Specify starting values and call mle2 function:
start1 <- list(alpha0=alpha0, sigma=sigma, alphas=alphas, betas=betas, n0=log(N-nrow(yscr)))
parnames(CSCR.RSF.intlik2.EXAMPLE) <- names(start1)
out1 <- mle2(CSCR.RSF.intlik2.EXAMPLE, start=start1, method="SANN", optimizer="optim",
             data=list(yscr=yscr, K=K, X=X, trapcovs=trapcovs, Gden=dencovs[,1:2], Gdet=detcovs[,1:2],
                       ytel=ytel, stel=stel, dencovs=as.matrix(dencovs[,3:4]), detcovs=as.matrix(detcovs[,3:4]))
)
