I want to test a model where the distribution of a random variable, assumed normal, is conditional on the regime of another random variable that switches state according to a Markov chain. The first step would be:
Assuming the simple linear model:
lm(y~x, data=data)
I want to find the parameters of the distribution assuming that x switches regime.
For example:
mkt.bull <- rnorm(150, 2, 1.5)
mkt.bear <- rnorm(150, -1, 2.5)
x <- c(mkt.bear,mkt.bull)
portfolio.bull <- rnorm(150, 1.75, 1.6)
portfolio.bear <- rnorm(150, -0.5, 2.3)
y <- c(portfolio.bear,portfolio.bull)
In the example above, x can be modelled as a Markov switching model (msmFit) with two states, one bull and one bear. Instead of approaching the problem with an lm,
lm(y~x)
since the two series are clearly non-linear, I want to run a regression where the parameters are conditional on the regime. This can be done with maximum likelihood, but the first step is to define the distribution of y as:
y_i | x, S_t ~ N(alpha + beta_{S_t} x_i, sigma^2)
How can I code the above formula? I guess this cannot be done using rnorm. Is there another way?
Thanks
Data
Here I prepared and visualized the data.
# Load packages
library(tidyverse)
library(rjags)
# Set seed for reproducibility
set.seed(199)
mkt.bull <- rnorm(150, 2, 1.5)
mkt.bear <- rnorm(150, -1, 2.5)
x <- c(mkt.bear,mkt.bull)
portfolio.bull <- rnorm(150, 1.75, 1.6)
portfolio.bear <- rnorm(150, -0.5, 2.3)
y <- c(portfolio.bear,portfolio.bull)
# Create example data frame
dat <- data.frame(x = x, y = y, regime = c(rep("bear", 150), rep("bull", 150)),
stringsAsFactors = FALSE)
# Plot the sample distribution
dat$regime <- factor(dat$regime, levels = c("bear", "bull"))
# Create a plot
ggplot(dat, aes(x = y, color = regime)) +
geom_density()
There are two regimes, bear and bull. The y for these regimes are both normally distributed. It seems like the OP wants to estimate the mean and standard deviation of y conditioned on these states.
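As an aside to the question "I guess this cannot be done using rnorm": rnorm's mean and sd arguments are vectorized, so regime-dependent parameters can be simulated in a single call. A minimal sketch using the true regime means and standard deviations above (mu and sigma are my own names):
# Simulate y_i | S_t ~ N(mu[S_t], sigma[S_t]^2) with a single vectorized rnorm call
mu    <- c(bear = -0.5, bull = 1.75)
sigma <- c(bear = 2.3,  bull = 1.6)
y_sim <- rnorm(nrow(dat), mean = mu[as.character(dat$regime)],
               sd = sigma[as.character(dat$regime)])
head(y_sim)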
Maximum Likelihood
Here is one way to use maximum likelihood to estimate the parameters using the stats4 package.
# Load the stats4 package
library(stats4)
# Split the data
y_bull <- dat %>% filter(regime %in% "bull") %>% pull("y")
y_bear <- dat %>% filter(regime %in% "bear") %>% pull("y")
# Define the negative log-likelihood functions (mle() minimises these)
LogLike_bull <- function(Mean, Sigma){
  R <- suppressWarnings(dnorm(y_bull, Mean, Sigma))
  return(-sum(log(R)))
}
LogLike_bear <- function(Mean, Sigma){
  R <- suppressWarnings(dnorm(y_bear, Mean, Sigma))
  return(-sum(log(R)))
}
mle(minuslogl = LogLike_bull, start = list(Mean = 1, Sigma = 1))
# Call:
# mle(minuslogl = LogLike_bull, start = list(Mean = 1, Sigma = 1))
#
# Coefficients:
# Mean Sigma
# 1.703099 1.482619
mle(minuslogl = LogLike_bear, start = list(Mean = 1, Sigma = 1))
# Call:
# mle(minuslogl = LogLike_bear, start = list(Mean = 1, Sigma = 1))
#
# Coefficients:
# Mean Sigma
# -0.616106 2.340852
The parameters for bull are mean = 1.703 and standard deviation = 1.483. The parameters for bear are mean = -0.616 and standard deviation = 2.341. They are close to the true values.
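Both regimes can also be fit jointly with a single negative log-likelihood whose mean and standard deviation are indexed by the regime, which is closer to the conditional formula in the question. A minimal sketch with optim (the log-sd parameterisation is my own choice):
# Joint negative log-likelihood with regime-indexed parameters (1 = bear, 2 = bull)
regime_idx <- as.integer(dat$regime)
nll <- function(par) {
  mu    <- par[1:2]
  sigma <- exp(par[3:4])  # optimise on the log scale so the sd stays positive
  -sum(dnorm(dat$y, mean = mu[regime_idx], sd = sigma[regime_idx], log = TRUE))
}
fit <- optim(c(0, 0, 0, 0), nll)
rbind(mean = fit$par[1:2], sd = exp(fit$par[3:4]))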
Bayesian Analysis
Here is an attempt to use Bayesian analysis to solve this question with JAGS and the rjags package.
I ran a Bayesian model for 10,000 iterations to estimate alpha (the mean of y in the bear regime), beta (the difference between the bear and bull means), and the regime-specific standard deviations of y.
# Define the Bayesian model
model <- "model{
for(i in 1:length(Y)) {
Y[i] ~ dnorm(Mean[i], s[X[i]]^(-2))
Mean[i] <- alpha + beta[X[i]]
}
alpha ~ dnorm(0, 5^(-2))
beta[1] <- 0
beta[2] ~ dnorm(0, 5^(-2))
s[1] ~ dunif(0, 10)
s[2] ~ dunif(0, 10)
}"
# Compile the model
jags_model <- jags.model(
textConnection(model),
data = list(Y = dat$y, X = as.numeric(dat$regime)),  # regime as 1 = bear, 2 = bull
n.chains = 3,
inits = list(.RNG.name = "base::Wichmann-Hill", .RNG.seed = 10)
)
# Simulate the posterior
jags_sim <- coda.samples(model = jags_model,
variable.names = c("alpha", "beta", "s"),
n.iter = 10000)
# Plot the posterior
plot(jags_sim)
The plot shows that estimates are well mixed.
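As an aside, a more formal convergence check is the Gelman-Rubin diagnostic from the coda package (loaded along with rjags); beta[1] is excluded because it is a fixed constant:
# Potential scale reduction factors for the sampled parameters
gelman.diag(jags_sim[, c("alpha", "beta[2]", "s[1]", "s[2]")], multivariate = FALSE)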
# See the summary
summary(jags_sim)
Iterations = 1001:11000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 10000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
alpha -0.614 0.19436 0.0011222 0.0027201
beta[1] 0.000 0.00000 0.0000000 0.0000000
beta[2] 2.315 0.23099 0.0013336 0.0032666
s[1] 2.369 0.13768 0.0007949 0.0010393
s[2] 1.500 0.08836 0.0005102 0.0006727
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
alpha -0.9838 -0.7471 -0.6147 -0.4857 -0.2225
beta[1] 0.0000 0.0000 0.0000 0.0000 0.0000
beta[2] 1.8582 2.1622 2.3174 2.4760 2.7586
s[1] 2.1225 2.2722 2.3632 2.4564 2.6611
s[2] 1.3368 1.4390 1.4959 1.5579 1.6813
The posterior mean of alpha (the bear-regime mean) is -0.614, close to the true value of -0.5. The posterior mean of beta[2] is 2.315; adding alpha and beta[2] gives 1.701, close to the true bull mean of 1.75. The estimates of s[1] and s[2] are 2.369 and 1.500, close to the true values of 2.3 and 1.6, respectively.
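For reference, the posterior for the bull-regime mean itself can be computed by stacking the chains with as.matrix and adding the alpha and beta[2] draws:
# Combine the three chains and summarise alpha + beta[2] (the bull-regime mean)
draws <- as.matrix(jags_sim)
mean(draws[, "alpha"] + draws[, "beta[2]"])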
Bootstrapping
Here is another approach that uses the bootstrap to estimate the mean and standard deviation of y in each regime, based on the infer package.
# Load the infer package
library(infer)
set.seed(199)
# Split the data
dat_bull <- dat %>% filter(regime %in% "bull")
dat_bear <- dat %>% filter(regime %in% "bear")
# Calculate the values in bull
dat_bull2 <- dat_bull %>%
  # Specify the response variable
  specify(response = y) %>%
  # Generate 10000 bootstrap samples
  generate(reps = 10000, type = "bootstrap")
summary_bull <- dat_bull2 %>%
  summarise(mean_y = mean(y), sd_y = sd(y))
# Calculate the values in bear
dat_bear2 <- dat_bear %>%
  # Specify the response variable
  specify(response = y) %>%
  # Generate 10000 bootstrap samples
  generate(reps = 10000, type = "bootstrap")
summary_bear <- dat_bear2 %>%
  summarise(mean_y = mean(y), sd_y = sd(y))
Now we can print the results. They are all similar to the true values.
# The mean of bull
mean(summary_bull$mean_y)
# [1] 1.702693
# The standard deviation of bull
mean(summary_bull$sd_y)
# [1] 1.480158
# The mean of bear
mean(summary_bear$mean_y)
# [1] -0.6165585
# The standard deviation of bear
mean(summary_bear$sd_y)
# [1] 2.337042
Related
I'm trying to demonstrate that there is an important difference between two ways of making linear model predictions. The first way, which my heart tells me is more correct, uses predict.lm, which as I understand it preserves the correlations between coefficients. The second approach tries to use the parameters independently.
Is this the correct way to show the difference? The two approaches seem somewhat close.
Also, is the StdErr of the coefficients the same as the standard deviation of their distributions? Or have I misunderstood what the model table is saying?
Below is a quick reprex to show what I mean:
# fake dataset
xs <- runif(200, min = -1, max = 1)
true_inter <- -1.3
true_slope <- 3.1
ybar <- true_inter + true_slope*xs
ys <- rnorm(200, ybar, sd = 1)
model <- lm(ys~xs)
# predictions
coef_sterr <- summary(model)$coefficients
inters <- rnorm(500, mean = coef_sterr[1,1], sd = coef_sterr[1,2])
slopes <- rnorm(500, mean = coef_sterr[2,1], sd = coef_sterr[2,2])
newx <- seq(from = -1, to= 1, length.out = 20)
avg_predictions <- cbind(1, newx) %*% rbind(inters, slopes)
conf_predictions <- apply(avg_predictions, 1, quantile, probs = c(.25, .975), simplify = TRUE)
# from confint
conf_interval <- predict(model, newdata=data.frame(xs = newx),
interval="confidence",
level = 0.95)
# plot to visualize
plot(ys~xs)
# averages are exactly the same
abline(model)
abline(a = coef(model)[1], b = coef(model)[2], col = "red")
# from predict, using parameter covariance
matlines(newx, conf_interval[,2:3], col = "blue", lty=1, lwd = 3)
# from simulated lines, ignoring parameter covariance
matlines(newx, t(conf_predictions), col = "orange", lty = 1, lwd = 2)
Created on 2022-04-05 by the reprex package (v2.0.1)
In this case, they would be close because there is very little correlation between the model parameters, so drawing them from two independent normals versus a multivariate normal is not that different:
set.seed(519)
xs <- runif(200, min = -1, max = 1)
true_inter <- -1.3
true_slope <- 3.1
ybar <- true_inter + true_slope*xs
ys <- rnorm(200, ybar, sd = 1)
model <- lm(ys~xs)
cov2cor(vcov(model))
# (Intercept) xs
# (Intercept) 1.00000000 -0.08054106
# xs -0.08054106 1.00000000
Also, it is probably worth calculating both of the intervals the same way, though it shouldn't make that much difference. That said, 500 simulated draws may not be enough to get reliable estimates of the 2.5th and 97.5th percentiles of the distribution. Let's consider a slightly more complex example. Here, the two X variables are correlated - the correlation of the parameters derives in part from the correlation of the columns of the design matrix, X.
set.seed(519)
X <- MASS::mvrnorm(200, c(0,0), matrix(c(1,.65,.65,1), ncol=2))
b <- c(-1.3, 3.1, 2.5)
ytrue <- cbind(1,X) %*% b
y <- ytrue + rnorm(200, 0, .5*sd(ytrue))
dat <- data.frame(y=y, x1=X[,1], x2=X[,2])
model <- lm(y ~ x1 + x2, data=dat)
cov2cor(vcov(model))
# (Intercept) x1 x2
# (Intercept) 1.00000000 0.02417386 -0.01515887
# x1 0.02417386 1.00000000 -0.73228003
# x2 -0.01515887 -0.73228003 1.00000000
In this example, the coefficients for x1 and x2 are correlated around -0.73. As you'll see, this still doesn't result in a huge difference. Let's calculate the relevant statistics.
First, we draw B1 using the multivariate method that you rightly suspect is correct. Then, we'll draw B2 from a bunch of independent normals (actually, I'm using a multivariate normal with a diagonal variance-covariance matrix, which is the same thing).
b_est <- coef(model)
v <- vcov(model)
B1 <- MASS::mvrnorm(2500, b_est, v, empirical=TRUE)
B2 <- MASS::mvrnorm(2500, b_est, diag(diag(v)), empirical = TRUE)
Now, let's make a hypothetical X matrix and generate the relevant predictions:
hypX <- data.frame(x1=seq(-3,3, length=50),
x2 = mean(dat$x2))
yhat1 <- as.matrix(cbind(1, hypX)) %*% t(B1)
yhat2 <- as.matrix(cbind(1, hypX)) %*% t(B2)
Then we can calculate confidence intervals, etc...
yh1_ci <- t(apply(yhat1, 1, function(x)unname(quantile(x, c(.025,.975)))))
yh2_ci <- t(apply(yhat2, 1, function(x)unname(quantile(x, c(.025,.975)))))
yh1_ci <- as.data.frame(yh1_ci)
yh2_ci <- as.data.frame(yh2_ci)
names(yh1_ci) <- names(yh2_ci) <- c("lwr", "upr")
yh1_ci$fit <- c(as.matrix(cbind(1, hypX)) %*% b_est)
yh2_ci$fit <- c(as.matrix(cbind(1, hypX)) %*% b_est)
yh1_ci$method <- factor(1, c(1,2), labels=c("Multivariate", "Independent"))
yh2_ci$method <- factor(2, c(1,2), labels=c("Multivariate", "Independent"))
yh1_ci$x1 <- hypX[,1]
yh2_ci$x1 <- hypX[,1]
yh <- rbind(yh1_ci, yh2_ci)
We could then plot the two confidence intervals as you did.
ggplot(yh, aes(x=x1, y=fit, ymin=lwr, ymax=upr, fill=method)) +
geom_ribbon(colour="transparent", alpha=.25) +
geom_line() +
theme_classic()
Perhaps a better visual would be to compare the widths of the intervals.
w1 <- yh1_ci$upr - yh1_ci$lwr
w2 <- yh2_ci$upr - yh2_ci$lwr
ggplot() +
geom_point(aes(x=hypX[,1], y=w2-w1)) +
theme_classic() +
labs(x="x1", y="Width (Independent) - Width (Multivariate)")
This shows that for small values of x1, the independent confidence intervals are wider than the multivariate ones. For values of x1 above 0, it's a more mixed bag.
This tells you that there is some difference, but you don't need the simulation to know which one is 'right'. That's because the prediction is a linear combination of constants and random variables.
In this case, the b terms are the random variables and the x values are the constants. We know that the variance of a linear combination can be calculated as Var(sum_i x_i*b_i) = sum_i x_i^2*Var(b_i) + 2*sum_{i<j} x_i*x_j*Cov(b_i, b_j), or x'Vx in matrix form, so dropping the covariance terms changes the prediction variance whenever the coefficients are correlated.
All that is to say that your intuition is correct.
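To make that concrete without re-running the simulation, here is a small sketch (reusing model, v, and hypX from above) comparing the analytic prediction standard errors computed with the full covariance matrix against those computed with the covariances zeroed out:
# Analytic prediction standard errors: full vcov versus diagonal-only vcov
Xh <- as.matrix(cbind(1, hypX))
se_full  <- sqrt(diag(Xh %*% v %*% t(Xh)))
se_indep <- sqrt(diag(Xh %*% diag(diag(v)) %*% t(Xh)))
head(cbind(se_full, se_indep))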
Question
Say I have an unknown density a.
All I know is a probability grid (probs) of quantiles (quants).
How can I generate random samples from the unknown density?
This is what I have so far.
I am giving rejection sampling a try, but I am not tied to this method. Here I fit a polynomial (degree 6) to the quantiles. The purpose of this is to convert discrete quantiles into a smooth continuous function, which gives me an empirical CDF. I then use rejection sampling to get actual samples from the CDF. Is there a convenient way in R to convert samples from the CDF to density samples, or did I go about this in a convoluted way when there is a better alternative?
library(tidyverse)  # for tibble, the pipe, and ggplot2
# unknown and probably not normal, but I use rnorm here because it is easy
a <- c(exp(rnorm(200, 5, .8)))
cum_probs <- seq(0.05, 0.95, 0.05)
quants <- quantile(a, cum_probs)
df_quants <- tibble::tibble(cum_probs, quants)
fit <- lm(quants ~ poly(cum_probs, 6), df_quants)
df_quants$fit <- predict(fit, df_quants)
p <- df_quants %>%
ggplot(aes(x = cum_probs, y = quants))+
geom_line(aes(y = quants), color = "black", size = 1) +
geom_line(aes(y = fit), color = "red", size = 1)
CDF
count <- 1
accept <- c()
X <- runif(50000, 0, 1)
U <- runif(50000, 0, 1)
# Evaluate the fitted quantile curve at a given probability
estimate <- function(x){
  predict(fit, data.frame(cum_probs = c(x)))
}
# Rejection sampling: accept X[count] with probability proportional to estimate(X[count])
while(count <= 50000 & length(accept) < 40000){
  test_u <- U[count]
  test_x <- estimate(X[count]) / (1000 * dunif(X[count], 0, 1))
  if(test_u <= test_x){
    accept <- c(accept, X[count])
  }
  count <- count + 1
}
p2 <- tibble(V1 = accept) %>%
  ggplot(aes(x = V1)) +
  geom_histogram(bins = 45)
CDF Samples
I don't think rejection sampling is needed; with a B-spline fit I was able to generate sensible samples via the inverse transform, although I also needed a higher-resolution grid. The tails are a little off.
The assumption I am making here is that a B-spline fit to a tight grid of quantiles approximates the inverse CDF. Once this curve is fit, I can just feed it random uniforms on [0, 1].
library(splines2)
library(tidyverse)  # for tibble, the pipe, and ggplot2
a <- c(exp(rnorm(200, 5, .8)))
cum_probs <- seq(0.01, 0.99, 0.01)
quants <- quantile(a, cum_probs)
df_quants <- tibble::tibble(cum_probs, quants)
fit_spline <- lm(quants ~ bSpline(cum_probs, df = 9), df_quants)
df_quants$fit_spline <- predict(fit_spline, df_quants)
estimate <- function(x){
new_x <- predict(fit_spline, data.frame(cum_probs = c(x)))
return(new_x)
}
e <- runif(10000, 0, 1)
y <-(estimate(e))
df_density <- tibble(y)
df_densitya <- tibble(a)
py <- df_density %>%
ggplot(aes(x = y)) +
geom_histogram()
pa <- df_densitya %>%
ggplot(aes(x = a)) +
geom_histogram(bins = 45)
original density
Inverse Transformation samples
summary stats
original dist a
Min. 1st Qu. Median Mean 3rd Qu. Max.
20.36 80.84 145.25 195.72 241.22 1285.24
generated from quantiles y
Min. 1st Qu. Median Mean 3rd Qu. Max.
28.09 81.78 149.53 189.07 239.62 667.27
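As a simpler alternative (an aside, not part of the original approach), base R's approxfun can linearly interpolate the quantile grid directly, giving an approximate inverse CDF for inverse-transform sampling:
# Approximate inverse CDF by linear interpolation of the quantile grid
inv_cdf <- approxfun(cum_probs, quants, rule = 2)  # rule = 2 clamps to the end quantiles
y_interp <- inv_cdf(runif(10000))
summary(y_interp)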
I'm testing the maximum likelihood method when only the maximum value of a sample is provided. I'm assuming the sample is from a Gaussian distribution.
First I generate 10,000 random numbers with mean = 2.45 and sd = 1.
library(tidyverse)
set.seed(91)
n <- 10000
mean <- 2.45
sd <- 1
random_numbers <- rnorm(n, mean, sd)
Then I extract the max value, and I assume that's the only value I know.
maximum <- random_numbers[which.max(random_numbers)]
Then, I estimate the density of that maximum value for a range of candidate means, using the density of the sample maximum, f_max(x) = n * F(x)^(n-1) * f(x), where F and f are the normal CDF and density:
mean_space <- seq(0, 10, by = 0.01)
densities <- n * (pnorm(maximum, mean_space, sd)^(n - 1)) * dnorm(maximum, mean_space, sd)
df <- data.frame(x = mean_space, y = densities)
g <- ggplot(df, aes(x = x, y = y)) +
geom_line() +
geom_vline(xintercept = mean)
print(g)
df %>% filter(y == max(y))
However, I'm getting density values higher than 1 which I think are not correct.
In general: I want to calculate the (log) likelihood of data N given the estimated model parameters from data O.
More specifically, I want to know if my ll_given_modPars function below exists in one of the many R packages dealing with data modeling (lme4, glmm, etc.), as shown in this abstract example (not run):
library(lme4)
o_model <- lmer(observed ~ fixed.id + (1|random.id), data = O, REML = F)
n_logLik <- ll_given_modPars(model.estimates = o_model, data = N)
The fictional example above is on a linear mixed model for simplicity but I would like to eventually do this in a generalized linear mixed model which deals with the Poisson family or directly the negative binomial (for lme4: glmer(..., family="poisson") or glmer.nb ).
From what I could see, most packages deal with parameter estimation (great, I need that) but then compare models on the same data with different combinations of fixed and random effects using anova or something to that effect, which is not what I want to do.
I want the log likelihood for the same parameters on different data.
The main attempts made:
After not finding a function that seems to do this, I thought of 'simply' tweaking the lme4 code for my purposes: it already calculates the log-likelihood for parameters given the data, so I thought I could use the same framework but, instead of having it optimize over different parameters, isolate the likelihood calculation and just give it the parameters and the data. Unfortunately the code is a bit above my current skills (https://github.com/lme4/lme4/blob/master/R/nbinom.R); I get a bit lost in how they use the objects over which they optimize.
I also thought of doing the likelihood calculation myself, starting with a linear mixed model and then working my way up to more involved ones. But already with this example I'm having a hard time following the math, and even when using the formula as specified the obtained log-likelihood is still different (I don't know why, see code in appendix). I fear it will take me too long before I'll be able to do it for the more involved models (such as Poisson or negative binomial).
At this point I'm not sure what avenue is best to pursue and would appreciate any input you might have.
Appendix: Trying to calculate the log-likelihood (or find a closed-form approximation) based on the question "How does lmer (from the R package lme4) compute log likelihood?". lmer (from lme4) gives a log-likelihood of -17.8 and I get -45.56.
library(lme4)
set.seed(7)
n <- 2 # number of groups
m <- 4 # number of instances per group
fixed.effect <- c(0, -2, -1, 1)
tau <- 5 # standard deviation of random effects
sigma <- 2 # standard deviation of error
random.effect <- rnorm(n, mean=0, sd=tau)
sim.data <- data.frame(GROUP.ID=as.factor(rep(1:n, each=m)),
GROUP.EFFECT=rep(random.effect, each=m),
INSTANCE.ID=as.factor(rep(1:m, times=n)),
INSTANCE.EFFECT=rep(fixed.effect, times=n))
# calculate expected Y value
sim.data$EXPECT.Y <- sim.data$GROUP.EFFECT + sim.data$INSTANCE.EFFECT
# now observe Y value, assuming normally distributed with fixed std. deviation
sim.data$OBS.Y <- rnorm(nrow(sim.data), mean=sim.data$EXPECT.Y, sigma)
model <- lmer(OBS.Y ~ INSTANCE.ID + (1|GROUP.ID), data = sim.data, REML=F)
summary(model)
toy.model.var <- VarCorr(model)
toy.model.sigma <- attr(toy.model.var, 'sc') # corresponds to the epsilon, residual standard deviation
toy.model.tau.squared <- toy.model.var[[1]][1] # corresponds to variance of random effects
toy.model.betas <- model@beta
# left product, spread within groups
toy.data <- rbind(sim.data$OBS.Y[1:4], sim.data$OBS.Y[5:8])
toy.mean.adj <- rbind(toy.data[1,] - mean(unlist(toy.data[1,])), toy.data[2,] - mean(unlist(toy.data[2,])))
toy.mean.adj.prod1 <- prod(dnorm(unlist(toy.mean.adj[1,]), mean = 0, sd = toy.model.sigma))
toy.mean.adj.prod2 <- prod(dnorm(unlist(toy.mean.adj[2,]), mean = 0, sd = toy.model.sigma))
toy.mean.adj.final.prod <- toy.mean.adj.prod1 * toy.mean.adj.prod2
# right product, spread between groups
toy.mean.beta.adj <- rbind(mean(unlist(toy.data[1,])) - toy.model.betas, mean(unlist(toy.data[2,])) - toy.model.betas)
toy.mean.beta.adj[1,] <- toy.mean.beta.adj[1,] - c(0, toy.model.betas[1], toy.model.betas[1], toy.model.betas[1])
toy.mean.beta.adj[2,] <- toy.mean.beta.adj[2,] - c(0, toy.model.betas[1], toy.model.betas[1], toy.model.betas[1])
toy.mean.beta.adj.prod1 <- prod(dnorm(unlist(toy.mean.beta.adj[1,]), mean = 0, sd = sqrt(toy.model.sigma^2/4 + toy.model.tau.squared)) * sqrt(2/4*pi*toy.model.sigma^2))
toy.mean.beta.adj.prod2 <- prod(dnorm(unlist(toy.mean.beta.adj[2,]), mean = 0, sd = sqrt(toy.model.sigma^2/4 + toy.model.tau.squared)) * sqrt(2/4*pi*toy.model.sigma^2))
toy.mean.beta.adj.final.prod <- toy.mean.beta.adj.prod1 * toy.mean.beta.adj.prod2
toy.total.prod <- toy.mean.adj.final.prod * toy.mean.beta.adj.final.prod
log(toy.total.prod)
EDIT: A helpful link was provided in the comments (https://stats.stackexchange.com/questions/271903/understand-marginal-likelihood-of-mixed-effects-models). Converting my example from above, I can replicate the log-likelihood:
library(mvtnorm)
z = getME(model, "Z")
zt = getME(model, "Zt")
psi = bdiag(replicate(2, toy.model.tau.squared, simplify=FALSE))
betw = z%*%psi%*%zt
err = Diagonal(8, sigma(model)^2)
v = betw + err
dmvnorm(sim.data$OBS.Y, predict(model, re.form=NA), as.matrix(v), log=TRUE)
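Building on this, evaluating the same fitted parameters on a different data set (the original goal) amounts to rebuilding Z and V for that data and evaluating the same multivariate normal density. A minimal sketch, assuming a hypothetical data frame new.data with the same GROUP.ID and INSTANCE.ID structure as sim.data:
# new.data is hypothetical; it must contain OBS.Y, GROUP.ID and INSTANCE.ID
new.z   <- Matrix::sparse.model.matrix(~ 0 + GROUP.ID, data = new.data)
new.psi <- bdiag(replicate(nlevels(new.data$GROUP.ID), toy.model.tau.squared,
                           simplify = FALSE))
new.v   <- new.z %*% new.psi %*% t(new.z) + Diagonal(nrow(new.data), sigma(model)^2)
dmvnorm(new.data$OBS.Y, predict(model, newdata = new.data, re.form = NA),
        as.matrix(new.v), log = TRUE)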
While I did not manage to come up with a closed-form solution for all of them, I did manage to reproduce the log-likelihoods using numerical integration. Below are small examples of how this works in the LMM setting (assuming normal residuals and random effects) as well as for the GLMM with Poisson and negative-binomial responses. Note that the latter in particular tends to differ ever so slightly as the sample size increases; my guess is that there is some rounding happening somewhere, but for my purposes the precision achieved here is good enough. I will for now accept my own answer, but if someone posts a closed form for the Poisson or the negative binomial I will happily accept that answer :)
library(lme4)
library(mvtnorm)
################################################################################
# LMM numerical integration
set.seed(7)
n <- 2 # number of groups
m <- 4 # number of instances per group
fixed.effect <- c(0, -2, -1, 1)
tau <- 5 # standard deviation of random effects
sigma <- 2 # standard deviation of error
random.effect <- rnorm(n, mean=0, sd=tau)
normal.data <- data.frame(GROUP.ID=as.factor(rep(1:n, each=m)),
GROUP.EFFECT=rep(random.effect, each=m),
INSTANCE.ID=as.factor(rep(1:m, times=n)),
INSTANCE.EFFECT=rep(fixed.effect, times=n))
# calculate expected Y value
normal.data$EXPECT.Y <- normal.data$GROUP.EFFECT + normal.data$INSTANCE.EFFECT
# now observe Y value, assuming normally distributed with fixed std. deviation
normal.data$OBS.Y <- rnorm(nrow(normal.data), mean=normal.data$EXPECT.Y, sigma)
normal.model <- lmer(OBS.Y ~ INSTANCE.ID + (1|GROUP.ID), data = normal.data, REML=F)
summary(normal.model)
normal.model.var <- VarCorr(normal.model)
normal.model.sigma <- attr(normal.model.var, 'sc') # corresponds to the epsilon, residual standard deviation
normal.model.tau.squared <- normal.model.var[[1]][1] # corresponds to variance of random effects
normal.model.betas <- normal.model@beta
normal.group.tau <- sqrt(normal.model.tau.squared)
normal.group.sigma <- sigma(normal.model)
normal.group.beta <- predict(normal.model, re.form=NA)[1:4]
integrate_group1 <- function(x){
p1 <- dnorm(normal.data$OBS.Y[1] - normal.group.beta[1] - x, mean = 0, sd = normal.group.sigma) * dnorm(x, mean = 0, sd = normal.group.tau)
p2 <- dnorm(normal.data$OBS.Y[2] - normal.group.beta[2] - x, mean = 0, sd = normal.group.sigma)
p3 <- dnorm(normal.data$OBS.Y[3] - normal.group.beta[3] - x, mean = 0, sd = normal.group.sigma)
p4 <- dnorm(normal.data$OBS.Y[4] - normal.group.beta[4] - x, mean = 0, sd = normal.group.sigma)
p_out <- p1 * p2 * p3 * p4
p_out
}
normal.group1.integration <- integrate(integrate_group1, lower = -10*normal.group.tau, upper = 10*normal.group.tau, subdivisions = 10000L, rel.tol = 1e-10, abs.tol = 1e-50)$value[1]
integrate_group2 <- function(x){
p1 <- dnorm(normal.data$OBS.Y[5] - normal.group.beta[1] - x, mean = 0, sd = normal.group.sigma) * dnorm(x, mean = 0, sd = normal.group.tau)
p2 <- dnorm(normal.data$OBS.Y[6] - normal.group.beta[2] - x, mean = 0, sd = normal.group.sigma)
p3 <- dnorm(normal.data$OBS.Y[7] - normal.group.beta[3] - x, mean = 0, sd = normal.group.sigma)
p4 <- dnorm(normal.data$OBS.Y[8] - normal.group.beta[4] - x, mean = 0, sd = normal.group.sigma)
p_out <- p1 * p2 * p3 * p4
p_out
}
normal.group2.integration <- integrate(integrate_group2, lower = -10*normal.group.tau, upper = 10*normal.group.tau, subdivisions = 10000L, rel.tol = 1e-10, abs.tol = 1e-50)$value[1]
log(normal.group1.integration) + log(normal.group2.integration)
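As a quick sanity check (an aside), the numerically integrated value can be compared against lme4's own log-likelihood:
# Should agree with the integration result up to numerical error
logLik(normal.model)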
#################################
# Poisson numerical integration
set.seed(13) #13
n <- 2 # number of groups
m <- 4 # number of instances per group
# effect sizes are much smaller since they are exponentiated
fixed.effect <- c(0, -0.2, -0.1, 0.2)
tau <- 1.5 # standard deviation of random effects
# sigma <- 1.5 # standard deviation of error
random.effect <- rnorm(n, mean=0, sd=tau) # guide effect
poisson.data <- data.frame(GROUP.ID=as.factor(rep(1:n, each=m)),
GROUP.EFFECT=rep(random.effect, each=m),
INSTANCE.ID=as.factor(rep(1:m, times=n)),
INSTANCE.EFFECT=rep(fixed.effect, times=n))
# calculate expected Y value
poisson.data$EXPECT.Y <- exp(poisson.data$GROUP.EFFECT + poisson.data$INSTANCE.EFFECT)
# now observe Y value, assuming Poisson distributed with the expected value as the rate
poisson.data$OBS.Y <- rpois(nrow(poisson.data), poisson.data$EXPECT.Y)
poisson.model <- glmer(OBS.Y ~ INSTANCE.ID + (1|GROUP.ID), data = poisson.data, family="poisson")
summary(poisson.model)
poisson.model.var <- VarCorr(poisson.model)
poisson.model.sigma <- attr(poisson.model.var, 'sc') # corresponds to the epsilon, residual standard deviation
poisson.model.tau.squared <- poisson.model.var[[1]][1] # corresponds to variance of random effects
poisson.model.betas <- poisson.model@beta
poisson.group.tau <- sqrt(poisson.model.tau.squared)
poisson.group.sigma <- sigma(poisson.model)
poisson.group.beta <- predict(poisson.model, re.form=NA)[1:4]
integrate_group1 <- function(x){
p1 <- dpois(poisson.data$OBS.Y[1], lambda = exp(poisson.group.beta[1] + x)) * dnorm(x, mean = 0, sd = poisson.group.tau)
p2 <- dpois(poisson.data$OBS.Y[2], lambda = exp(poisson.group.beta[2] + x))
p3 <- dpois(poisson.data$OBS.Y[3], lambda = exp(poisson.group.beta[3] + x))
p4 <- dpois(poisson.data$OBS.Y[4], lambda = exp(poisson.group.beta[4] + x))
p_out <- p1 * p2 * p3 * p4
p_out
}
poisson.group1.integration <- integrate(integrate_group1, lower = -10*poisson.group.tau, upper = 10*poisson.group.tau, subdivisions = 10000L, rel.tol = 1e-10, abs.tol = 1e-50)$value[1]
integrate_group2 <- function(x){
p1 <- dpois(poisson.data$OBS.Y[5], lambda = exp(poisson.group.beta[1] + x)) * dnorm(x, mean = 0, sd = poisson.group.tau)
p2 <- dpois(poisson.data$OBS.Y[6], lambda = exp(poisson.group.beta[2] + x))
p3 <- dpois(poisson.data$OBS.Y[7], lambda = exp(poisson.group.beta[3] + x))
p4 <- dpois(poisson.data$OBS.Y[8], lambda = exp(poisson.group.beta[4] + x))
p_out <- p1 * p2 * p3 * p4
p_out
}
poisson.group2.integration <- integrate(integrate_group2, lower = -10*poisson.group.tau, upper = 10*poisson.group.tau, subdivisions = 10000L, rel.tol = 1e-10, abs.tol = 1e-50)$value[1]
log(poisson.group1.integration) + log(poisson.group2.integration)
#############
# Negative-Binomial numerical integration
set.seed(13) #13
n <- 100 # number of groups
m <- 4 # number of instances per group
# effect sizes are much smaller since they are exponentiated
fixed.effect <- c(0, -0.2, -0.1, 0.2)
tau <- 1.5 # standard deviation of random effects
theta <- 0.5
# sigma <- 1.5 # standard deviation of error
random.effect <- rnorm(n, mean=0, sd=tau) # guide effect
nb.data <- data.frame(GROUP.ID=as.factor(rep(1:n, each=m)),
GROUP.EFFECT=rep(random.effect, each=m),
INSTANCE.ID=as.factor(rep(1:m, times=n)),
INSTANCE.EFFECT=rep(fixed.effect, times=n))
# calculate expected Y value
nb.data$EXPECT.Y <- exp(nb.data$GROUP.EFFECT + nb.data$INSTANCE.EFFECT)
# now observe Y value, assuming negative-binomial distributed with the expected value as the mean
nb.data$OBS.Y <- rnbinom(nrow(nb.data), mu = nb.data$EXPECT.Y, size = theta)
nb.model <- glmer.nb(OBS.Y ~ INSTANCE.ID + (1|GROUP.ID), data = nb.data)
summary(nb.model)
nb.model.var <- VarCorr(nb.model)
nb.model.sigma <- attr(nb.model.var, 'sc') # corresponds to the epsilon, residual standard deviation
nb.model.tau.squared <- nb.model.var[[1]][1] # corresponds to variance of random effects
nb.model.betas <- nb.model@beta
nb.group.tau <- sqrt(nb.model.tau.squared)
nb.group.beta <- predict(nb.model, re.form=NA)[1:4]
nb.group.dispersion <- getME(nb.model, "glmer.nb.theta")
integration_function_generator <- function(input.obs, input.beta, input.dispersion, input.tau){
function(x){
p1 <- dnbinom(input.obs[1], mu = exp(input.beta[1] + x), size = input.dispersion) * dnorm(x, mean = 0, sd = input.tau)
p2 <- dnbinom(input.obs[2], mu = exp(input.beta[2] + x), size = input.dispersion)
p3 <- dnbinom(input.obs[3], mu = exp(input.beta[3] + x), size = input.dispersion)
p4 <- dnbinom(input.obs[4], mu = exp(input.beta[4] + x), size = input.dispersion)
p_out <- p1 * p2 * p3 * p4
p_out
}
}
nb.all.group.integrations <- c()
for(i in 1:n){
temp.obs <- nb.data$OBS.Y[(1:4)+(i-1)*4]
temp_integrate_function <- integration_function_generator(temp.obs, nb.group.beta, nb.group.dispersion, nb.group.tau)
temp.integration <- integrate(temp_integrate_function, lower = -10*nb.group.tau, upper = 10*nb.group.tau, subdivisions = 10000L, rel.tol = 1e-10, abs.tol = 1e-50)$value[1]
nb.all.group.integrations <- c(nb.all.group.integrations, temp.integration)
}
sum(log(nb.all.group.integrations))
Say I have 200 subjects, 100 in group A and 100 in group B, and for each I measure some continuous parameter.
require(ggplot2)
set.seed(100)
value <- c(rnorm(100, mean = 5, sd = 3), rnorm(100, mean = 10, sd = 3))
group <- c(rep('A', 100), rep('B', 100))
data <- data.frame(value, group)
ggplot(data = data, aes(x = value)) +
geom_bar(aes(color = group))
I would like to determine the value (Threshold? Breakpoint?) that maximizes separation and minimizes misclassification between the groups. Does such a function exist in R?
I've tried searching along the lines of "r breakpoint maximal separation between groups" and "r threshold minimize misclassification," but my google-fu seems to be off today.
EDIT:
Responding to @Thomas's comment, I have tried to fit the data using logistic regression and then solve for the threshold, but I haven't gotten very far.
lr <- glm(group~value)
coef(lr)
# (Intercept) value
# 1.1857435 -0.0911762
So B0 = 1.1857435 and B1 = -0.0911762.
From Wikipedia, I see that F(x) = 1/(1 + e^-(B0 + B1*x)), and solving for x:
x = (ln(F(x) / (1 - F(x))) - B0)/B1
But trying this in R, I get an obviously incorrect answer:
(log(0.5/(1 - 0.5)) - 1.1857435)/-0.0911762 # 13.00497
A simple approach is to write a function that calculates the accuracy given a threshold:
accuracy = Vectorize(function(th) mean(c("A", "B")[(value > th) + 1] == group))
Then find the maximum using optimize:
optimize(accuracy, c(min(value), max(value)), maximum=TRUE)
# $maximum
# [1] 8.050888
#
# $objective
# [1] 0.86
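Since accuracy is a step function of the threshold, optimize may return any point inside the optimal flat region; as a small aside, the midpoints between consecutive sorted values can also be checked exhaustively:
# Evaluate accuracy at every midpoint between consecutive sorted values
cand <- (head(sort(value), -1) + tail(sort(value), -1)) / 2
cand[which.max(accuracy(cand))]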
I've gotten the answer I need thanks to help from @Thomas and @BenBolker.
Summary
The problem with my attempt at solving it through logistic regression was that I hadn't specified family = binomial.
The dose.p() function in MASS will do the work for me given a glm fit.
Code
# Include libraries
require(ggplot2)
require(MASS)
# Set seed
set.seed(100)
# Put together some dummy data
value <- c(rnorm(100, mean = 5, sd = 3), rnorm(100, mean = 10, sd = 3))
group <- c(rep(0, 100), rep(1, 100))
data <- data.frame(value, group)
# Plot the distribution -- visually
# The answer appears to be b/t 7 and 8
ggplot(data = data, aes(x = value)) +
geom_bar(aes(color = group))
# Fit a glm model, specifying the binomial distribution
my.glm <- glm(group~value, data = data, family = binomial)
b0 <- coef(my.glm)[[1]]
b1 <- coef(my.glm)[[2]]
# See what the probability function looks like
lr <- function(x, b0, b1) {
prob <- 1 / (1 + exp(-1*(b0 + b1*x)))
return(prob)
}
# The line appears to cross 0.5 just above 7.5
x <- 0:12
y <- lr(x, b0, b1)
lr.val <- data.frame(x, y)
ggplot(lr.val, aes(x = x, y = y)) +
geom_line()
# The inverse of this function computes the threshold for a given probability
inv.lr <- function(p, b0, b1) {
x <- (log(p / (1 - p)) - b0)/b1
return(x)
}
# With the betas from this function, we get 7.686814
inv.lr(0.5, b0, b1)
# Or, feeding the glm model into dose.p from MASS, we get the same answer
dose.p(my.glm, p = 0.5)
Thanks, everyone, for your help!