What formula does prop.test use?

The prop.test function apparently doesn't use the formula given here to create a confidence interval, so what formula is being used? Below, CI is a confidence interval computed with prop.test and CI.2 is one computed with the usual Wald formula $\hat p \pm z_{1-\alpha/2}\sqrt{\hat p(1-\hat p)/n}$.
CI <- prop.test(5, 10)$conf.int
se.hat <- 0.5/sqrt(10)          # sqrt(p*(1-p)/n) with p = 0.5, n = 10
z <- qnorm(0.975)
CI.2 <- 0.5 + c(-1, 1)*z*se.hat
CI
CI.2 # not the same

It uses the Wilson score interval with continuity correction, i.e. the interval obtained by inverting Yates' continuity-corrected chi-squared test.
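Concretely, with $p = x/n$, $q = 1 - p$, $n$ the sample size, and $z$ the $1-\alpha/2$ normal quantile, the limits are

$$L = \frac{2np + z^2 - 1 - z\sqrt{z^2 - 2 - 1/n + 4p(nq + 1)}}{2(n + z^2)}, \qquad U = \frac{2np + z^2 + 1 + z\sqrt{z^2 + 2 - 1/n + 4p(nq - 1)}}{2(n + z^2)},$$

with the result truncated to $[0, 1]$.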

We can confirm Ryan's answer by comparing the results from IC.wc and prop.test on the example below:
IC.wc <- function(x, n, conf.level=0.95){
  p <- x/n ; q <- 1-p
  alpha <- 1 - conf.level
  z <- qnorm(p=alpha/2, lower.tail=FALSE)
  const1 <- z * sqrt(z^2 - 2 - 1/n + 4*p*(n*q+1))
  const2 <- z * sqrt(z^2 + 2 - 1/n + 4*p*(n*q-1))
  L <- (2*n*p + z^2 - 1 - const1) / (2*(n+z^2))  # lower limit
  U <- (2*n*p + z^2 + 1 + const2) / (2*(n+z^2))  # upper limit
  c(L, U)
}
IC.wc(x=35, n=50)
prop.test(x=35, n=50, correct=TRUE)$conf.int

I figured out a way to get a CI that exactly matches the textbook formula:
library(BSDA)
x <- 35
n <- 50
phat <- x/n
xvar <- c(rep(1, x), rep(0, n - x)) # replicate the original 0/1 variable!
s <- sqrt(phat*(1 - phat))
z.test(xvar, sigma.x = s)
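As a quick check, the same Wald interval can be computed by hand (a sketch using the phat and n defined above):
phat + c(-1, 1)*qnorm(0.975)*sqrt(phat*(1 - phat)/n)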

Related

Dirichlet Process manually

I am implementing the Dirichlet Mixture Model using the EM algorithm in R, but am experiencing issues with the results. I generated two binomial distributions with fractions of (70%, 30%) and means of (0.05, 0.18), and trimmed 5% of the data set near 0. However, I am using a Beta distribution for clustering instead of a binomial distribution. Additionally, I am updating the mean and variance of the distributions rather than the alpha and beta parameters in order to impose constraints on the variance of each distribution.
I expected to obtain results similar to the ground truth settings, but instead I am getting pi values of (1, 0) and means of (0.09, 0.21). I am not sure if there are errors in my EM algorithm implementation or issues with parameter initialization.
I am including my R code for the data generation and DMM below. I would appreciate any help in identifying the cause of the problem and suggestions for how to resolve it.
library(dplyr)
library(data.table)
library(tidyverse)
set.seed(42)
#read count
cover <- 100
#Ground Truth Setting
subclone_f <- c(0.7, 0.3) # Ground Truth Setting - proportion
subclone_vaf <- c(0.05, 0.18) # Ground Truth Setting - mean
n_muts <- 45000
n_clone <-length(subclone_f)
#generating the virtual mutation notation: subclonal if 2, clonal if 1
mut_type <- sample.int(2, n_muts, prob = subclone_f, replace = TRUE)
mut_type
#generating binomial distribution (read counts) for the given coverage
mut_reads <- rbinom(n_muts, cover, prob = subclone_vaf[mut_type]) %>% data.frame()
mut_reads
vaf <- (mut_reads/cover) %>% data.frame()
# Truncate the low count reads
n <- 0.95 * nrow(vaf) # cut-off setting
vaf_trim <- sapply(vaf, function(x) sort(x, decreasing = TRUE)[1:n])
colnames(vaf_trim) <- c("vaf")
hist(vaf_trim, breaks=seq(0,0.75,by=0.0001))
# Mixture Model
# Parameter Initialization (for 2 subclonality)
pi <- c(0.5, 0.5) # mixture proportion weights: sum to 1
alpha <- c(2, 3)
beta <- c(20, 5)
Mu <- var <- numeric(2) # declare vectors before indexed assignment
Mu[1] <- alpha[1] / (alpha[1] + beta[1])
Mu[2] <- alpha[2] / (alpha[2] + beta[2])
var[1] <- alpha[1]*beta[1] / ((alpha[1] + beta[1])^2 * (alpha[1] + beta[1] + 1))
var[2] <- alpha[2]*beta[2] / ((alpha[2] + beta[2])^2 * (alpha[2] + beta[2] + 1))
tau <- c(0.05, 0.05)
loglike <- c(0.5, 0.5) # two starting values so the convergence check below works
k <- 2
# control the variance, shared across the distributions
# (I originally wanted the same Nu for both distributions but don't know how to do that)
Nu <- 1/(alpha + beta + 1)
n_cluster <- length(pi)
logdbeta <- function(x, alpha, beta) {
  # dbeta is vectorized over x, so the log-densities can be summed directly
  sum(dbeta(x, alpha, beta, log = TRUE))
}
# method-of-moments conversion from (mean, variance) to (alpha, beta)
estBetaParams <- function(mu, var) {
  alpha <- ((1 - mu) / var - (1 / mu)) * mu^2
  beta <- alpha * (1 / mu - 1)
  list(alpha = alpha, beta = beta)
}
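# Sanity check (a sketch): estBetaParams inverts the mean/variance mapping above;
# e.g. estBetaParams(Mu[1], var[1]) at this point should recover alpha = 2, beta = 20.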
# Loop for the EM algorithm
while (abs(loglike[k] - loglike[k-1]) >= 0.00001) {
  # E step: responsibilities of each component for each observation
  total <- pi[1]*dbeta(vaf_trim, alpha[1], beta[1]) + pi[2]*dbeta(vaf_trim, alpha[2], beta[2])
  tau1 <- pi[1]*dbeta(vaf_trim, alpha[1], beta[1]) / total
  tau2 <- pi[2]*dbeta(vaf_trim, alpha[2], beta[2]) / total
  # M step
  pi[1] <- sum(tau1)/length(vaf_trim) # update pi (weights)
  pi[2] <- sum(tau2)/length(vaf_trim)
  Mu[1] <- sum(tau1*vaf_trim)/sum(tau1) # update Mu
  Mu[2] <- sum(tau2*vaf_trim)/sum(tau2)
  Nu <- 1/(alpha + beta + 1) # main aim: share the same coefficient for both distributions
  var[1] <- Mu[1]*(1 - Mu[1])*Nu[1] # update variance
  var[2] <- Mu[2]*(1 - Mu[2])*Nu[2]
  # update alpha and beta: the results must be assigned back,
  # otherwise the E step keeps using the initial values
  ab1 <- estBetaParams(Mu[1], var[1]); alpha[1] <- ab1$alpha; beta[1] <- ab1$beta
  ab2 <- estBetaParams(Mu[2], var[2]); alpha[2] <- ab2$alpha; beta[2] <- ab2$beta
  # expected complete-data log-likelihood (dbeta takes alpha/beta, not mean/variance)
  loglike[k+1] <- sum(tau1*(log(pi[1]) + dbeta(vaf_trim, alpha[1], beta[1], log=TRUE))) +
                  sum(tau2*(log(pi[2]) + dbeta(vaf_trim, alpha[2], beta[2], log=TRUE)))
  k <- k + 1
}
# Print estimates
EM <- data.table(component = 1:2, pi = pi, Mean = Mu)
knitr::kable(EM)

Bridge sampling Monte Carlo method in RStudio for variance gamma

I am trying to use bridge sampling in RStudio to simulate paths of the variance gamma process. My code is:
sigma = 0.5054
theta = 0.2464
nu = 0.1184
mu=1
k=5
N=2^(k)
V_<-rep(NA,252)
V_[0]<-0
G_[N]<-rgamma(1, shape=N*1/nu, scale=nu)
G_<-0
V<-rnorm(theta*G[N],sigma^2*G[N])
for(l in 1:k){
n<-2^(k-l)
for(j in 1:2^i-1){
i<-(2*j-1)*n
d1<-(n)*mu^2/nu
d2<-(n)*mu^2/nu
Y<-rbeta(1,d1,d2)
G_[i]<-G_[i-1]+(G[i+n]-G[i-n])*Y
G[i]
print(G_[i])
Z<-rnorm(0,(G_[i+n]-G_[i])*sigma^2*Y)
V_[i]<-Y*V_[i+n]+(1-Y)*V_[i-n]+Z
print(V_[i])
}
}
ts.plot(V[i])
I'm not sure what I've done wrong. The algorithm I am trying to follow is as below in the picture:
Based on your code, a numerical sequence was simulated, and it can be roughly validated by using VarianceGamma::vgFit to estimate the parameters.
Note that the time index starts from 1 because R vectors are 1-indexed. The square root of the variance is used for the standard deviation in rnorm. I probably shouldn't add the drift due to the interest rate vgC at the end, since it is not included in your algorithm; please set it to 0 if it doesn't make sense.
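In symbols, each bridge step of the code below draws

$$Y \sim \mathrm{Beta}\!\left(\tfrac{t_i - t_{i-n}}{\nu},\ \tfrac{t_{i+n} - t_i}{\nu}\right), \qquad G(t_i) = G(t_{i-n}) + \bigl(G(t_{i+n}) - G(t_{i-n})\bigr)\,Y,$$
$$V(t_i) = Y\,V(t_{i+n}) + (1 - Y)\,V(t_{i-n}) + Z, \qquad Z \sim N\!\bigl(0,\ (G(t_{i+n}) - G(t_i))\,\sigma^2\,Y\bigr).$$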
Simulation by Brownian bridge:
# Brownian-Gamma Bridge Sampling (BGBS) of a VG process
set.seed(1)
M <- 10
nt <- 2^M + 1 #number of observations
T <- nt - 1 #total time
T_ <- seq(0, T, length.out=nt) #fixed time increments
#random time increments
#T_ = c(0, runif(nt-2), 1)
#T_ = sort(T_) * T
r <- 1 + 0.2 #interest rate
vgC <- (r-1)
sigma <- 0.5054
theta <- 0.2464
nu <- 0.1184
V_ <- G_ <- rep(NA,nt)
V_[1] <- 0
G_[1] <- 0
G_[nt] <- rgamma(1, shape=T/nu, scale=nu)
V_[nt] <- rnorm(1, theta*G_[nt], sqrt(sigma^2*G_[nt]))
for (k in 1:M)
{
n <- 2^(M-k)
for (j in 1:2^(k-1))
{
i <- (2*j-1) * n
Y <- rbeta(1, (T_[i+1]-T_[i-n+1])/nu, (T_[i+n+1]-T_[i+1])/nu)
G_[i+1] <- G_[i-n+1] + (G_[i+n+1] - G_[i-n+1]) * Y
Z <- rnorm(1, sd=sqrt((G_[i+n+1] - G_[i+1]) * sigma^2 * Y))
V_[i+1] <- Y * V_[i+n+1] + (1-Y) * V_[i-n+1] + Z
}
}
V_ <- V_ + vgC*T_ # changes due to interest rate
plot(T_, V_)
The estimated parameters roughly match the real ones:
#Estimated parameters:
library(VarianceGamma)
dV <- V_[2:nt] - V_[1:(nt-1)]
vgFit(dV)
> vgC sigma theta nu
> 0.2996 0.5241 0.1663 0.1184
#Real parameters:
c(vgC, sigma, theta, nu)
> vgC sigma theta nu
> 0.2000 0.5054 0.2464 0.1184
EDIT
As you commented, there is another similar algorithm, and it can be implemented in a similar way.
Your code could be modified as below:
set.seed(1)
M <- 7
nt <- 2^M + 1
T <- nt - 1
T_ <- seq(0, T, length.out=nt)
sigma=0.008835
theta= -0.003856
nu=0.263743
vgc=0.004132
V_ <- G_ <- rep(1,nt)
G_[T+1] <- rgamma(1, shape=T/nu, scale=nu) #
V_[T+1] <- rnorm(1, theta*G_[T+1], sqrt(sigma^2*G_[T+1])) #
V_[1] <- 0
G_[1] <- 0
for (m in 1:M){ #
Y <- rbeta(1,T/(2^m*nu), T/(2^m*nu))
for (j in 1:2^(m-1)){ #
i <- (2*j-1)
G_[i*T/(2^m)+1] = G_[(i-1)*T/(2^m)+1]+(-G_[(i-1)*T/(2^m)+1]+G_[(i+1)*T/(2^m)+1])*Y #
b=G_[T*(i+1)/2^m+1] - G_[T*(i)/2^m+1] #
Z_i <- rnorm(1, sd=b*sigma^2*Y)
#V_[i] <- Y* V_[i+1] + (1-Y)*V_[i-1] + Z_i
V_[i*T/(2^m)+1] <- Y* V_[(i+1)*T/(2^m)+1] + (1-Y)*V_[(i-1)*T/(2^m)+1] + Z_i
}
}
V_ <- V_ + vgc*T_
V_
ts.plot(V_, main="BRIDGE", xlab="Time increment")
Ryan again: I have found another algorithm for bridge sampling, which I tried on my own, but I am not convinced that my answers are correct. I have added my code, output, and algorithm below, along with the output I think it should look like. I have used a similar format to your code:
set.seed(1)
M <- 7
nt <- 2^M + 1 #number of observations
T <- nt - 1 #total time
T_ <- seq(0, T, length.out=nt) #fixed time increments
sigma=0.008835
theta= -0.003856
nu=0.263743
vgc=0.004132
V_ <- G_ <- rep(1,nt)
G_[T] <- rgamma(1, shape=T/nu, scale=nu)
V_[T] <- rnorm(1, theta*G_[T], sqrt(sigma^2*G_[T]))
V_[1] <- 0
G_[1] <- 0
for (m in 2:M){
Y <- rbeta(1,T/(2^m*nu), T/(2^m*nu))
for (j in 2:2^(m-1)){
i <- (2*j-1)
G_[i*T/(2^m)] = G_[(i-1)*T/(2^m)]+(G_[(i-1)*T/(2^m)]+G_[(i+1)*T/(2^m)])*Y
b=G_[T*(i)/2^m] - G_[T*(i-1)/2^m]
Z_i <- rnorm(1, sd=b*sigma^2*Y)
V_[i] <- Y* V_[i+1] + (1-Y)*V_[i-1] + Z_i
}
}
V_ <- V_ + vgc*T_ # changes due to interest rate
V_
ts.plot(V_, main="BRIDGE", xlab="Time increment")
However, this is how the plot from my output looks (figure 1):
But as the variance gamma process is a jump process with finite activity, the path should look like this: (this is just an image from Google of variance gamma paths; the sequential-sampling one looks like that). My aim is to compare sequential sampling to bridge sampling for simulating paths, but my output looks really different. Please let me know your thoughts, and whether there is an issue in my code. Here is the algorithm for it, much like the one above but slightly different:

Equation for 95% CI on regression?

I have calculated and plotted a linear fit with a 95% CI, and also computed a 95% CI on the model parameters, as follows:
lm <- lm(cars$speed~cars$dist)
conf <- predict(lm, interval='confidence')
conf <- cbind(cars,conf)
CI <- as.data.frame(confint(lm))
library(ggplot2)
plot<-ggplot(conf,aes(dist,speed)) +
geom_line(aes(y=fit),color='black') +
geom_line(aes(y=lwr),color='red',linetype='dashed') +
geom_line(aes(y=upr),color='red',linetype='dashed')
plot
I am wondering what the equation is for calculating the lower and upper limits (the red lines) on the plot. I assumed these could be calculated using the values from the confint() function?
I tried calculating the lwr and upr values like so, but I did not get the same result.
lower <- CI[1,1] + CI[2,1]*cars$dist
upper <- CI[1,2] + CI[2,2]*cars$dist
Here is how the confidence interval is calculated in predict.lm. (Note that confint gives marginal intervals for the individual coefficients, so combining their endpoints does not reproduce the pointwise band for the fitted mean.) For each observation the band is

$$\hat{y}_i \;\pm\; t_{\alpha/2,\,n-2}\,\sqrt{MS_{res}\left(\frac{1}{n} + \frac{(x_i - \bar{x})^2}{S_{xx}}\right)}$$

which can be implemented as follows:
my.lm <- lm(cars$speed~cars$dist)
intercept <- model.matrix(delete.response(terms(my.lm)), cars) # model matrix (intercept and x columns)
fit.values <- c(intercept %*% coef(my.lm))
data.fit <- data.frame(x=cars$dist, fit=fit.values)
# compute the t-value (lower tail, so tval is negative; c(1, -1) below then yields (lwr, upr))
tval <- qt((1-0.95)/2, df=nrow(data.fit)-2)
# compute Sxx, the sum of squared deviations of x
Sxx <- sum((data.fit$x - mean(data.fit$x))^2)
# compute the residual mean square, MSres
MSres <- sum(my.lm$residuals^2)/(nrow(data.fit)-2)
# calculate the confidence interval for each fitted value
CI <- data.frame(t(apply(data.fit, 1, FUN = function(row){
  sqrt(MSres * (1/nrow(data.fit) + (as.numeric(row[1]) - mean(data.fit$x))^2/Sxx)) * tval * c(1, -1) + as.numeric(row[2])
})))
names(CI) <- c("lwr","upr")
head(CI)
# lwr upr
#1 6.917090 10.31299
#2 8.472965 11.40620
#3 7.307526 10.58483
#4 10.764584 13.08820
#5 9.626909 12.23906
#6 8.472965 11.40620
You may compare the results with the ones you obtained from predict.
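For instance, a quick check using the my.lm fit from above:
head(predict(my.lm, interval = "confidence"))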
Hope it helps.

How to implement a matrix equation in R?

I'm studying 'Latent Aspect Rating Analysis' and I'm trying to implement the method in R,
but I have no idea how to solve the following in R.
Here is the equation:

$$-\frac{\alpha^{\top} S_d - r_d}{\delta^2}\, S_d \;-\; \Sigma^{-1}(\alpha - \mu) \;=\; 0$$
Here is the R code so far:
-((t(alpha) %*% Sd - rd) / delta) * Sd - sigma %*% (alpha - mu)
I have to figure out the alpha that makes this expression zero.
delta and rd are constants; alpha, Sd, and mu are k x 1 matrices.
sigma is a k x k matrix. In this case, k = 3.
Define a function f as follows, which performs the calculations of your equation:
f <- function(alpha) {
  z <- matrix(alpha, nrow=k)
  # [1,1] extracts the scalar; alternatively use as.numeric(t(z) %*% sd - rd)
  y <- -((t(z) %*% sd - rd)[1,1]/delta^2) * matrix(sd, nrow=k) - solve(sigma) %*% (z - mu)
  y
}
Note: the expression you gave in R had at least one mistake; delta should have been delta^2.
Create some fake data:
# some fake data
set.seed(401)
k <- 3
sd <- runif(k)
rd <- runif(k)
delta <- 1
rd <- .04
mu <- 1
sigma <- matrix(runif(k*k,1,4),nrow=k,ncol=k)
sigma
alpha <- rep(1,k)
Show the value of f for this constellation of variables
f(alpha)
Use a nonlinear equation solver to solve for alpha as follows:
library(nleqslv)
nleqslv(alpha,f)
If you are going to evaluate f many times it is advisable to compute solve(sigma) (the inverse of sigma) once beforehand.
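For example, a minimal sketch of that refactoring:
Sinv <- solve(sigma)  # invert once, outside the function
f2 <- function(alpha) {
  z <- matrix(alpha, nrow=k)
  -((t(z) %*% sd - rd)[1,1]/delta^2) * matrix(sd, nrow=k) - Sinv %*% (z - mu)
}
nleqslv(alpha, f2)  # should agree with nleqslv(alpha, f)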
We want
((alpha'*s - r)*s)/(d*d) + inv(Sigma)*(alpha - mu) = 0
noting that
alpha'*s = s'*alpha
we can rearrange to
(s*s')*alpha/(d*d) - r*s/(d*d) + inv(Sigma)*alpha - inv(Sigma)*mu = 0
and then to
(inv(Sigma) + (s*s')/(d*d))*alpha = (r/(d*d))*s + inv(Sigma)*mu
so
alpha = inv( (inv(Sigma) + (s*s')/(d*d)))* ( (r/(d*d))*s + inv(Sigma)*mu)
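In R, with the fake data above (a sketch; mu is expanded to a length-k vector so the matrix product conforms), this closed form can be checked against the nleqslv root:
Sinv <- solve(sigma)
alpha_closed <- solve(Sinv + (sd %*% t(sd))/delta^2,
                      (rd/delta^2)*sd + Sinv %*% rep(mu, k))
alpha_closed  # should match nleqslv(alpha, f)$x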

R - Fitting a constrained AutoRegression time series

I have a time series to which I need to fit an AR (autoregression) model.
The AR model has the form:
x(t) = a0 + a1*x(t-1) + a2*x(t-2) + ... + aq*x(t-q) + noise.
I have two constraints:
Find the best AR fit with lag.max = 50.
The sum of all coefficients a0 + a1 + ... + aq must equal 1.
I wrote the below code:
require(FitAR)
data(lynx) # my real data comes from the stock market.
z <- -log(lynx)
#find best model
step <- SelectModel(z, ARModel = "AR", lag.max = 50, Criterion = "AIC", Best = 10)
summary(step) # display results
# fit the model and get coefficients
arfit <- ar(z, order.max = ceiling(mean(step[,1])), aic = FALSE) # ceiling(), not ceil(); ar() has no 'p' argument
#check if sum of coefficients are 1
sum(arfit$ar)
[1] 0.5784978
My question is: how do I add the constraint that the sum of all coefficients equals 1?
I looked at this question, but I do not see how to apply it here.
UPDATE
I think I managed to solve my question as follows.
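The idea is to recast the fit as a quadratic program: minimize the squared residuals subject to the coefficients summing to one. solve.QP minimizes $\frac{1}{2}a^{\top}Da - d^{\top}a$ subject to $A^{\top}a \ge b_0$ (with the first meq constraints treated as equalities), so here

$$D = X^{\top}X, \qquad d = X^{\top}y, \qquad \mathbf{1}^{\top}a = 1.$$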
library(quadprog)
coeff <- arfit$ar
y <- 0
for (i in 1:length(coeff)) {
  y <- y + coeff[i]*c(z[(i+1):length(z)], rep(0, i))
  if (i == 1) {
    X <- c(z[2:length(z)], 0)
  } else {
    X <- cbind(X, c(z[(i+1):length(z)], rep(0, i)))
  }
}
Dmat <- t(X) %*% X
# Amat is a column of ones; meq=1 makes it an equality constraint: sum(a) = 1
s <- solve.QP(Dmat, t(y) %*% X, matrix(1, nrow=length(coeff), ncol=1), 1, meq=1)
s$solution
# The coefficients should sum up to 1
sum(s$solution)
