R -- Simulate sigmoidally correlated covariates

I am attempting to simulate weight and age values for a population of children. These data should be sigmoidally related, such that at low ages weight changes slowly, weight gain accelerates at approximately 30 weeks postmenstrual age, and it begins to level off past about 50 weeks.
I have been able to use the code below to get a linear correlation between weight and age to work fairly well. The part I am having trouble with is adapting this code to give the data a more sigmoidal shape. Any suggestions would be greatly appreciated.
# Load required packages
library(MASS)
library(ggplot2)
# Set the number of simulated data points
n <- 100
# Set the mean and standard deviations for
# the two variables
mean_age <- 50
sd_age <- 20
mean_wt <- 10
sd_wt <- 4
# Set the desired level of correlation
# between the two variables
cor_agewt <- 0.9
# Build the covariance matrix
covmat <- matrix(c(sd_age^2, cor_agewt * sd_age * sd_wt,
cor_agewt * sd_age * sd_wt, sd_wt^2),
nrow = 2, ncol = 2, byrow = TRUE)
# Simulate the correlated results
res <- mvrnorm(n, c(mean_age, mean_wt), covmat)
# Reorganize the simulated data into a data frame
df <- data.frame(age = res[,1],
wt = res[,2])
# Plot the results and fit a loess spline
# to the data
ggplot(df, aes(x = age, y = wt)) +
geom_point() +
stat_smooth(method = 'loess')
Current output:
Ideal output (albeit over a smaller range of ages and weights):

One approach is to specify the functional form between weight and age more precisely than with just a single correlation. After specifying the functional form weight ~ f(age) + e, you just draw (age, e) and then calculate the weight. A simple example follows:
set.seed(1234)
mean_age <- 50; sd_age <- 20
mean_wt <- 3.5; sd_wt <- 2.2
n<-400
age.seq<-rnorm(n,mean_age,sd_age)
age.seq<-age.seq[order(age.seq)]
#functional form: (here a "logistic" with a location and scale)
f<-function(x,loc,sca) 1/(1+exp(-(x-loc)/sca))
wt<-f(age.seq,65,20) #wt
m<-mean_wt/mean(wt) #simple adjustment of the mean
sdfit<-sqrt( sd_wt^2-var(m*wt) )
sim_wt<-m*wt+rnorm(n,0,sdfit) #simulated wt
plot(age.seq,sim_wt)
lines(age.seq,m*wt)
with mean & sd:
>sd(age.seq); sd(sim_wt); mean(sim_wt); mean(age.seq) #check
[1] 20.29432
[1] 2.20271
[1] 3.437339
[1] 50.1549
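Since the original question targeted a correlation of about 0.9, a quick check (my addition, not in the original answer) is the correlation the simulation actually induces:
# Check (an addition): the realized age-weight correlation
cor(age.seq, sim_wt)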
EDIT (partly in response to a comment):
A restriction on the sample space, e.g. requiring weights to be nonzero, makes the problem much harder. But if you drop the mean and SD restriction on the weights, it is easy to extend the example to a flexible specification of the functional form. The following is a simple example using a truncated normal distribution:
set.seed(1234)
mean_age<-30
sd_age<-10
n<-500
#ex. of control of functional-form
loc<-40 #location
scale<-10 #scaling
sd_wt <- 0.8 #in the truncated normal
ey_min<-c(0,0.2) #in the truncated normal
ey_max<-c(55,6) #in the truncated normal
age.seq<-rnorm(n,mean_age,sd_age)
#age.seq<-0:55
n<-length(age.seq)
age.seq<-age.seq[order(age.seq)]
#functional form: (here a "logistic" with a location and scale)
f<-function(x,loc,sca) 1/(1+exp(-(x-loc)/sca))
wt<-f(age.seq,loc,scale) #wt
#correct lower:
corr_lower<-ey_min[2]-f(ey_min[1],loc,scale) #add. correction lower
wt<-wt+corr_lower
#correct upper
mult<-(ey_max[2]-ey_min[2])/(f(ey_max[1],loc,scale)+corr_lower) #mult. correction
wt<-ey_min[2]+wt*mult*(age.seq/ey_max[1])
plot(age.seq,wt,type="l",ylim=c(0,8)) #plot mean used as par in the truncated normal
sim_wt<-truncnorm::rtruncnorm(n,0,,mean=wt,sd=sd_wt)
points(age.seq,sim_wt)
abline(h=0.2,col=2);abline(v=0,col=2)
abline(h=6,col=2);abline(v=55,col=2)
which gives (red lines illustrating the controls):
Of course you could also try to control the variance with respect to age; a simplified example:
plot(age.seq,wt,type="l",ylim=c(0,8)) #plot mean used as par in the truncated normal
sim_wt<-truncnorm::rtruncnorm(n,0,,mean=wt,sd=sd_wt*seq(0.3,1.3,len=n))
points(age.seq,sim_wt)
The point here is that you need more structure to simulate specific data like this (not going into, e.g., bootstrap methods); there is no built-in R function to the rescue. Of course it gets harder to sample from the distribution when introducing more restrictions. You can always consult Cross Validated for different approaches, choice of distribution, etc.

Related

Plotting distribution of variances

My dataset has 2 fields:
Time stamp t --- varies between 0 and 60
Variable x --- the variation in the value of a variable (say, A) from t-1 to t; varies between -100% and 100%
There are roughly 500 records for each value of the time stamp, e.g.
500 records where t= 0 and x takes any value between -100% to 100%
490 records where t= 1 and x takes any value between -100% to 100%, and so on.
Note, the value of x is 0 for ~80% of the records
The aim here is to determine at what value of t (it can be one value or a range, e.g. t = 22, or t between 20 and 25) the day-on-day change in A is smallest. This effectively translates to finding the t at which x is very frequently 0 and, when it is not, is at least close to zero.
To this end, I aim to plot the variance of x for each day. I can think of using a violin plot with x (y-axis) and t (x-axis), but having 60 values of t makes it difficult to plot them all in one chart. Can you suggest an alternative plot for the intended visual analysis?
Does it help if you take the absolute value of the variance (so it's concentrated in 0-100) and try logs, as in here? https://stats.stackexchange.com/questions/251066/boxplot-for-data-with-a-large-number-of-zero-values
When you say smallest, you mean closest to 0, right? In that case it's better to work with the absolute variance rescaled to a 0-1 scale, as you can then treat it like zero-inflated binomial data, e.g. with the VGAM package: https://rdrr.io/cran/VGAM/man/zibinomial.html
I've had a play around, and below is an example that I think makes sense. I've only had some experience with zero-inflated models, so it would be good if anyone has feedback :)
library(ggplot2)
library(data.table)
library(VGAM)
# simulate some data
N_t <- 60 # number of t
N_o <- 500 # number of observations at t
t_smallest <- 30 # best value
# simulate some data crudely
set.seed(1)
dataL <- lapply(1:N_t, function(t){
  dist <- abs(t_smallest-t)+10
  values <- round(rbeta(N_o, 10/dist, 300/dist), 2) * sample(c(-1,1), N_o, replace=TRUE)
  data.table(t, values)
})
data <- rbindlist(dataL)
# raw
ggplot(data, aes(factor(t), values)) + geom_boxplot() +
coord_cartesian(ylim=c(0, 0.1))
# log transformed - may look better with your data
ggplot(data, aes(factor(t), log(abs(values)+1))) +
geom_violin()
# use absolute values, package needs it as integer p & n, so approximate these
data[, abs.values := abs(values)]
data[, p := round(1000*abs.values, 0)]
data[, n := 1000]
# with a gam, so smooth fit on t. Found it to be unstable though
fit <- vgam(cbind(p, n-p) ~ s(t), zibinomialff, data = data, trace = TRUE)
# glm, with a coefficient for each t, so treats independently
fit2 <- vglm(cbind(p, n-p) ~ factor(t), zibinomialff, data = data, trace = TRUE)
# predict
output <- data.table(t=1:N_t)
output[, prediction := predict(fit, newdata=output, type="response")]
output[, prediction2 := predict(fit2, newdata=output, type="response")]
# plot out with predictions
ggplot(data, aes(factor(t), abs.values)) +
geom_boxplot(col="darkgrey") +
geom_line(data=output, aes(x=t, y=prediction2)) +
geom_line(data=output, aes(x=t, y=prediction), col="darkorange") +
geom_vline(xintercept = output[prediction==min(prediction), t]) +
coord_cartesian(ylim=c(0, 0.1))
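As a simpler, summary-based alternative (my addition, not part of the original answer), you could collapse the data to one row per t before plotting; this directly targets the question of where x is most often exactly zero and otherwise small. The column names follow the simulated data above.
# Summary-based alternative (an addition): per t, the share of exact zeros
# and the median absolute change; a good t maximises the former and
# minimises the latter.
summ <- data[, .(prop_zero = mean(values == 0),
                 med_abs = median(abs(values))), by = t]
ggplot(summ, aes(t, med_abs)) +
  geom_line() +
  geom_point(aes(size = prop_zero))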

Maximum pseudo-likelihood estimator for soft-core point process

I am trying to fit a soft-core point process model on a set of point pattern using maximum pseudo-likelihood. I followed the instructions given in this paper by Baddeley and Turner
And here is the R-code I came up with
library(deldir)
library(tidyverse)
library(fields)
#MPLE
# irregular parameter k
k <- 0.4
## Generate dummy points 50X50. "RA" and "DE" are x and y coordinates
dum.x <- seq(ramin, ramax, length = 50)
dum.y <- seq(demin, demax, length = 50)
dum <- expand.grid(dum.x, dum.y)
colnames(dum) <- c("RA", "DE")
## Combine with data and specify which is data point and which is dummy, X is the point pattern to be fitted
bind.x <- bind_rows(X, dum) %>%
mutate(Ind = c(rep(1, nrow(X)), rep(0, nrow(dum))))
## Calculate Quadrature weights using Voronoi cell area
w <- deldir(bind.x$RA, bind.x$DE)$summary$dir.area
## Response
y <- bind.x$Ind/w
# the sum of distances between all pairs of points (the sufficient statistics)
tmp <- cbind(bind.x$RA, bind.x$DE)
t1 <- rdist(tmp)^(-2/k)
t1[t1 == Inf] <- 0
t1 <- rowSums(t1)
t <- -t1
# fit the model using quasipoisson regression
fit <- glm(y ~ t, family = quasipoisson, weights = w)
However, the fitted parameter for t is negative, which is obviously not a correct value for a soft-core point process. Also, my point pattern is actually simulated from a soft-core process, so it does not make sense that the fitted parameter is negative. I tried my best to find any bugs in the code but can't seem to find one. The only potential issue I see is that my sufficient statistic is extremely large (on the order of 10^14), which I fear may cause numerical issues. But the statistic is large because my observation window spans a very small unit and the average distance between a pair of points is around 0.006, so a sufficient statistic based on this will certainly be very large; my intuition tells me that this should not cause a numerical problem and make the fitted parameter negative.
Can anybody help and check if my code is correct? Thanks very much!
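(An added sketch, not from the original thread.) Assuming X is a data frame with columns RA and DE, and that ramin/ramax/demin/demax define the observation window as in the code above, one sanity check would be to compare the hand-rolled fit against spatstat's built-in maximum pseudo-likelihood fit for a soft-core interaction:
# Cross-check sketch (an addition, not from the question): spatstat's ppm()
# also fits by maximum pseudo-likelihood, so its interaction estimate
# should be broadly comparable to the glm fit above.
library(spatstat)
X_ppp <- ppp(X$RA, X$DE,
             window = owin(c(ramin, ramax), c(demin, demax)))
fit_ppm <- ppm(X_ppp ~ 1, Softcore(kappa = 0.4))
fit_ppm         # prints the fitted interaction parameters
coef(fit_ppm)   # canonical coefficients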

How to generate a population of random numbers within a certain exponentially increasing range

I have 16068 datapoints with values that range between 150 and 54850 (mean = 3034.22). What would the R code be to generate a set of random numbers that grow in frequency exponentially between 54850 and 150?
I've tried using the rexp() function in R, but can't figure out how to set the range to between 150 and 54850. In my actual data population, the lambda value is 25.
set.seed(123)
myrange <- c(54850, 150)
rexp(16068, 1/25, myrange)
The call produces an error.
Error in rexp(16068, 1/25, myrange) : unused argument (myrange)
The hypothesized population should increase exponentially the closer the data values are to 150. I have 25 data points with a value of 150 and only one with a value of 54850. The simulated population should fall in this range.
This is really more of a question for math.stackexchange, but out of curiosity I provide this solution. Maybe it is sufficient for your needs.
First, ?rexp tells us that it has only two arguments, so we generate a random exponential distribution with the desired length.
set.seed(42) # for sake of reproducibility
n <- 16068
mr <- c(54850, 150) # your 'myrange' with less typing
y0 <- rexp(n, 1/25) # simulate exp. dist.
y <- y0[order(-y0)] # sort
Now we need a mathematical approach to rescale the distribution.
# f(x) = (b-a)(x - min(x))/(max(x)-min(x)) + a
y.scaled <- (mr[1] - mr[2]) * (y - min(y)) / (max(y) - min(y)) + mr[2]
Proof:
> range(y.scaled)
[1] 150.312 54850.312
That's not too bad.
Plot:
plot(y.scaled, type="l")
Note: There might be some mathematical issues; see, e.g., this answer.
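A different route (my addition, not part of the original answer) is to sample directly from an exponential distribution truncated to the desired range via inverse-CDF sampling, which keeps the exponential decay inside [150, 54850] instead of rescaling linearly. The rate below is a guess based on the reported mean of about 3034, not a value taken from the question.
# Truncated-exponential sketch (an addition): inverse-CDF sampling keeps
# all values in [150, 54850], with many points near 150 and few near 54850.
set.seed(42)
n    <- 16068
rate <- 1/3034.22                # assumed rate, roughly 1/mean of the data
lo   <- 150; hi <- 54850
u    <- runif(n, pexp(lo, rate), pexp(hi, rate))
y.trunc <- qexp(u, rate)
range(y.trunc)                   # stays inside the requested range
hist(y.trunc, breaks = 100)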

Errors running Maximum Likelihood Estimation on a three parameter Weibull cdf

I am working with the cumulative emergence of flies over time (taken at irregular intervals) over many summers (though first I am just trying to make one year work). The cumulative emergence follows a sigmoid pattern and I want to create a maximum likelihood estimate of a three-parameter Weibull cumulative distribution function. The three-parameter models I've been trying to use in the fitdistrplus package keep giving me an error. I think this must have something to do with how my data are structured, but I cannot figure it out. Obviously I want it to read each point as an x (degree days) and a y (emergence) value, but it seems to be unable to read two columns.
The main errors I'm getting say "Non-numeric argument to mathematical function" or (with slightly different code) "data must be a numeric vector of length greater than 1".
Below is my code, including added columns in the df_dd_em data frame for cumulative emergence and percent emergence in case that is useful.
degree_days <- c(998.08,1039.66,1111.29,1165.89,1236.53,1293.71,
1347.66,1387.76,1445.47,1493.44,1553.23,1601.97,
1670.28,1737.29,1791.94,1849.20,1920.91,1967.25,
2036.64,2091.85,2152.89,2199.13,2199.13,2263.09,
2297.94,2352.39,2384.03,2442.44,2541.28,2663.90,
2707.36,2773.82,2816.39,2863.94)
emergence <- c(0,0,0,1,1,0,2,3,17,10,0,0,0,2,0,3,0,0,1,5,0,0,0,0,
0,0,0,0,1,0,0,0,0,0)
cum_em <- cumsum(emergence)
df_dd_em <- data.frame (degree_days, emergence, cum_em)
df_dd_em$percent <- ave(df_dd_em$emergence, FUN = function(df_dd_em) 100*(df_dd_em)/46)
df_dd_em$cum_per <- ave(df_dd_em$cum_em, FUN = function(df_dd_em) 100*(df_dd_em)/46)
x <- pweibull(df_dd_em[c(1,3)],shape=5)
dframe2.mle <- fitdist(x, "weibull",method='mle')
Here's my best guess at what you're after:
Set up data:
dd <- data.frame(degree_days=c(998.08,1039.66,1111.29,1165.89,1236.53,1293.71,
1347.66,1387.76,1445.47,1493.44,1553.23,1601.97,
1670.28,1737.29,1791.94,1849.20,1920.91,1967.25,
2036.64,2091.85,2152.89,2199.13,2199.13,2263.09,
2297.94,2352.39,2384.03,2442.44,2541.28,2663.90,
2707.36,2773.82,2816.39,2863.94),
emergence=c(0,0,0,1,1,0,2,3,17,10,0,0,0,2,0,3,0,0,1,5,0,0,0,0,
0,0,0,0,1,0,0,0,0,0))
dd <- transform(dd,cum_em=cumsum(emergence))
We're actually going to fit an "interval-censored" distribution (i.e. the probability of emergence between successive degree-day observations). This version assumes that the first observation refers to emergences before the first degree-day observation; you could change it to refer to emergences after the last observation.
library(bbmle)
## y*log(p) allowing for 0/0 occurrences:
y_log_p <- function(y,p) ifelse(y==0 & p==0,0,y*log(p))
NLLfun <- function(scale,shape,x=dd$degree_days,y=dd$emergence) {
  prob <- pmax(diff(pweibull(c(-Inf,x), ## or (c(x,Inf))
                             shape=shape,scale=scale)),1e-6)
  ## multinomial probability
  -sum(y_log_p(y,prob))
}
library(bbmle)
I should probably have used something more systematic like the method of moments (i.e. matching the mean and variance of a Weibull distribution with the mean and variance of the data), but I just hacked around a bit to find plausible starting values:
## preliminary look (method of moments would be better)
scvec <- 10^(seq(0,4,length=101))
plot(scvec,sapply(scvec,NLLfun,shape=1))
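For completeness, here is a rough method-of-moments sketch along the lines mentioned above (my addition, not from the original answer). It treats the degree-day values weighted by the emergence counts as the sample whose mean and standard deviation we match, which is only an approximation:
# Method-of-moments starting values (a sketch, my addition): pick the shape
# whose coefficient of variation matches the data, then back out the scale.
mom_weibull <- function(m, s) {
  cv_fun <- function(k) sqrt(gamma(1 + 2/k)/gamma(1 + 1/k)^2 - 1) - s/m
  k <- uniroot(cv_fun, c(0.1, 50))$root        # shape
  c(shape = k, scale = m/gamma(1 + 1/k))       # scale from the mean
}
m <- with(dd, weighted.mean(degree_days, emergence))
s <- with(dd, sqrt(weighted.mean((degree_days - m)^2, emergence)))
mom_weibull(m, s)   # candidate starting values for mle2()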
It's important to use parscale to let R know that the parameters are on very different scales:
startvals <- list(scale=1000,shape=1)
m1 <- mle2(NLLfun,start=startvals,
control=list(parscale=unlist(startvals)))
Now try with a three-parameter Weibull (as originally requested) -- requires only a slight modification of what we already have:
library(FAdist)
NLLfun2 <- function(scale,shape,thres,
                    x=dd$degree_days,y=dd$emergence) {
  prob <- pmax(diff(pweibull3(c(-Inf,x),shape=shape,scale=scale,thres)),
               1e-6)
  ## multinomial probability
  -sum(y_log_p(y,prob))
}
startvals2 <- list(scale=1000,shape=1,thres=100)
m2 <- mle2(NLLfun2,start=startvals2,
control=list(parscale=unlist(startvals2)))
Looks like the three-parameter fit is much better:
library(emdbook)
AICtab(m1,m2)
## dAIC df
## m2 0.0 3
## m1 21.7 2
And here's the graphical summary:
with(dd,plot(cum_em~degree_days,cex=3))
with(as.list(coef(m1)),curve(sum(dd$emergence)*
pweibull(x,shape=shape,scale=scale),col=2,
add=TRUE))
with(as.list(coef(m2)),curve(sum(dd$emergence)*
pweibull3(x,shape=shape,
scale=scale,thres=thres),col=4,
add=TRUE))
(could also do this more elegantly with ggplot2 ...)
These don't seem like spectacularly good fits, but they're sane. (You could in principle do a chi-squared goodness-of-fit test based on the expected number of emergences per interval, and accounting for the fact that you've fitted a three-parameter model, although the values might be a bit low ...)
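Here is a rough sketch of that chi-squared idea (my addition, not from the original answer); the pmax() guard mirrors NLLfun2 and avoids a 0/0 at the duplicated degree-day value, and the small expected counts mean the p-value should be taken with a grain of salt.
# Approximate chi-squared goodness of fit (a sketch, my addition):
# expected emergences per interval under the fitted 3-parameter Weibull.
p_hat <- with(as.list(coef(m2)),
              pmax(diff(pweibull3(c(-Inf, dd$degree_days),
                                  shape = shape, scale = scale, thres = thres)),
                   1e-6))
expected <- sum(dd$emergence) * p_hat
chisq <- sum((dd$emergence - expected)^2/expected)
dof <- length(dd$emergence) - 1 - 3   # heuristic: subtract the fitted parameters
pchisq(chisq, dof, lower.tail = FALSE)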
Confidence intervals on the fit are a bit of a nuisance; your choices are (1) bootstrapping; (2) parametric bootstrapping (resample parameters assuming a multivariate normal sampling distribution of the estimates); (3) the delta method.
Using bbmle::mle2 makes it easy to do things like get profile confidence intervals:
confint(m1)
## 2.5 % 97.5 %
## scale 1576.685652 1777.437283
## shape 4.223867 6.318481
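Here is a minimal sketch of option (2) above, the parametric bootstrap (my addition, not from the original answer): resample parameter vectors from a multivariate normal with the fitted means and covariance matrix, and look at the spread of the implied cumulative curves.
# Parametric bootstrap sketch (my addition): pointwise 95% band for the
# two-parameter cumulative curve, from MVN-resampled parameter vectors.
library(MASS)
set.seed(101)
pars <- mvrnorm(1000, mu = coef(m1), Sigma = vcov(m1))
xx <- seq(1000, 2900, length = 101)
curves <- apply(pars, 1, function(p)
  sum(dd$emergence) * pweibull(xx, shape = p["shape"], scale = p["scale"]))
band <- apply(curves, 1, quantile, c(0.025, 0.975))
matlines(xx, t(band), lty = 2, col = 2)   # add to the earlier base plot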
dd <- data.frame(degree_days=c(998.08,1039.66,1111.29,1165.89,1236.53,1293.71,
1347.66,1387.76,1445.47,1493.44,1553.23,1601.97,
1670.28,1737.29,1791.94,1849.20,1920.91,1967.25,
2036.64,2091.85,2152.89,2199.13,2199.13,2263.09,
2297.94,2352.39,2384.03,2442.44,2541.28,2663.90,
2707.36,2773.82,2816.39,2863.94),
emergence=c(0,0,0,1,1,0,2,3,17,10,0,0,0,2,0,3,0,0,1,5,0,0,0,0,
0,0,0,0,1,0,0,0,0,0))
dd$cum_em <- cumsum(dd$emergence)
dd$percent <- ave(dd$emergence, FUN = function(dd) 100*(dd)/46)
dd$cum_per <- ave(dd$cum_em, FUN = function(dd) 100*(dd)/46)
dd <- transform(dd)
#start 3 parameter model
library(FAdist)
## y*log(p) allowing for 0/0 occurrences:
y_log_p <- function(y,p) ifelse(y==0 & p==0,0,y*log(p))
NLLfun2 <- function(scale,shape,thres,
                    x=dd$degree_days,y=dd$percent) {
  prob <- pmax(diff(pweibull3(c(-Inf,x),shape=shape,scale=scale,thres)),
               1e-6)
  ## multinomial probability
  -sum(y_log_p(y,prob))
}
startvals2 <- list(scale=1000,shape=1,thres=100)
m2 <- mle2(NLLfun2,start=startvals2,
control=list(parscale=unlist(startvals2)))
summary(m2)
#graphical summary
windows(5,5)
with(dd,plot(cum_per~degree_days,cex=3))
with(as.list(coef(m2)),curve(sum(dd$percent)*
pweibull3(x,shape=shape,
scale=scale,thres=thres),col=4,
add=TRUE))

Bootstrapping to compare two groups

In the following code I use bootstrapping to calculate the C.I. and the p-value under the null hypothesis that two different fertilizers applied to tomato plants have no effect on plant yields (the alternative being that the "improved" fertilizer is better). The first random sample (x) comes from plants where a standard fertilizer has been used, while the "improved" one has been used on the plants where the second sample (y) comes from.
x <- c(11.4,25.3,29.9,16.5,21.1)
y <- c(23.7,26.6,28.5,14.2,17.9,24.3)
total <- c(x,y)
library(boot)
diff <- function(x,i) mean(x[i[6:11]]) - mean(x[i[1:5]])
b <- boot(total, diff, R = 10000)
ci <- boot.ci(b)
p.value <- sum(b$t>=b$t0)/b$R
What I don't like about the code above is that resampling is done as if there was only one sample of 11 values (separating the first 5 as belonging to sample x leaving the rest to sample y).
Could you show me how this code should be modified in order to draw resamples of size 5 with replacement from the first sample and separate resamples of size 6 from the second sample, so that bootstrap resampling would mimic the “separate samples” design that produced the original data?
EDIT 2:
Hack deleted, as it was a wrong solution. Instead one has to use the strata argument of the boot function:
total <- c(x,y)
id <- as.factor(c(rep("x",length(x)),rep("y",length(y))))
b <- boot(total, diff, strata=id, R = 10000)
...
Be aware you're not going to get even close to a correct estimate of your p-value:
x <- c(1.4,2.3,2.9,1.5,1.1)
y <- c(23.7,26.6,28.5,14.2,17.9,24.3)
total <- c(x,y)
b <- boot(total, diff, strata=id, R = 10000)
ci <- boot.ci(b)
p.value <- sum(b$t>=b$t0)/b$R
> p.value
[1] 0.5162
How would you explain a p-value of 0.51 for two samples where all values of the second are higher than the highest value of the first?
The above code is fine for getting a (biased) estimate of the confidence interval, but the significance testing of the difference should be done by permutation over the complete dataset, as sketched below.
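A minimal sketch of that permutation test (my addition, not from the original answer):
# Permutation test sketch (an addition): shuffle the group labels over the
# pooled 11 observations and recompute the difference in means each time
# (one-sided: the "improved" fertilizer y gives higher mean yield).
set.seed(1)
obs_diff <- mean(y) - mean(x)
perm_diff <- replicate(10000, {
  idx <- sample(length(total))                      # shuffled labels
  mean(total[idx[6:11]]) - mean(total[idx[1:5]])
})
p.perm <- mean(perm_diff >= obs_diff)
p.perm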
Following John, I think the appropriate way to use the bootstrap to test whether the sums of these two samples are significantly different is as follows:
x <- c(1.4,2.3,2.9,1.5,1.1)
y <- c(23.7,26.6,28.5,14.2,17.9,24.3)
# the statistic must resample the data via the index vector
b_x <- boot(x, function(d, i) sum(d[i]), R = 10000)
b_y <- boot(y, function(d, i) sum(d[i]), R = 10000)
z<-(b_x$t0-b_y$t0)/sqrt(var(b_x$t[,1])+var(b_y$t[,1]))
pnorm(z)
So we can clearly reject the null that they are the same population. I may have missed a degrees-of-freedom adjustment; I am not sure how bootstrapping works in that regard, but such an adjustment would not change your results drastically.
While the actual soil beds could be considered a stratification variable in some instances, this is not one of them. You only have the one manipulation, between the groups of plants. Therefore, your null hypothesis is that they really do come from the exact same population. Treating the items as if they're from a single set of 11 samples is the correct way to bootstrap in this case.
If you had two plots, and in each plot you tried the different fertilizers over different seasons in a counterbalanced fashion, then the plots would be stratified samples and you'd want to treat them as such. But that isn't the case here.
