I have code that calculates the Kaplan-Meier product-limit estimate of the mean and SD:
km_mean <- function(x,nd) {
library(tidyverse)
# first remove any missing data
df <- tibble(x,nd) %>% filter(!is.na(x))
x <- df %>% pull(x); nd <- df %>% pull(nd)
# handle cases of all detects or all nondetects; in these situations, no Kaplan-Meier
# estimate is possible or necessary; instead treat all detects as actual concentration estimates
# and all NDs as imputed at half their reporting limits
if (all(nd==0)) return(tibble(mean=mean(x),sd=sd(x)))
if (all(nd==1)) return(tibble(mean=mean(x/2),sd=sd(x/2)))
# for cases with mixed detects and NDs, table by nd status;
# determine unique x values; first subtract a small epsilon from each nondetect so that
# detects tied with NDs at the same reporting limit get the larger rank
eps <- 1e-6
x <- x - nd*eps
nn <- nlevels(factor(x))
# determine number of at-risk values; build kaplan-meier CDF and survival function;
# note: need to augment and adjust <tab> for calculation below to work correctly
km.lev <- as.numeric(levels(factor(x)))
xa <- c(x,max(x)+1); nda <- c(nd,0)
tab <- table(xa,nda)
tab[nn+1,1] <- 0
km.rsk <- cumsum(tab[,1] + tab[,2])
km.cdf <- rev(cumprod(1 - rev(tab[,1])/rev(km.rsk)))[-1]
names(km.cdf) <- as.character(km.lev)
km.surv <- 1 - km.cdf
km.out <- tibble(km.lev, km.rsk = km.rsk[-length(km.rsk)], km.cdf, km.surv)
# estimate adjusted mean and SD
xm <- km.lev[1] + sum(diff(km.lev)*km.surv[-length(km.surv)])
dif <- diff(c(0,km.cdf))
xsd <- sqrt(sum(dif*(km.lev - xm)^2))
names(xm) <- NULL; names(xsd) <- NULL
tibble(mean=xm,sd=xsd)
}
My data has three columns: a sample ID, a value (x), and a detect/non-detect flag (nd).
a1 0.23 0
a1 2.3 0
a1 1.6 0
a2 3.0 1
a2 3.1 0
a2 2.76 0
How can I adapt the function to run on all the a1 samples as a group, then a2, and so on?
I've tried group_by, but can't seem to get it to work.
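One way to do this (a sketch, assuming your data frame is called dat with your x and nd columns plus a sample ID column that I'll call sample_id; dat and sample_id are my own names) is dplyr's group_modify(), which applies a function that returns a data frame to each group:

library(dplyr)

# example data from the question (data frame and sample ID column names are assumptions)
dat <- tibble::tribble(
  ~sample_id,   ~x, ~nd,
  "a1",       0.23,   0,
  "a1",        2.3,   0,
  "a1",        1.6,   0,
  "a2",        3.0,   1,
  "a2",        3.1,   0,
  "a2",       2.76,   0
)

# apply km_mean() to each sample_id; group_modify() expects a function
# that returns a data frame (here the one-row mean/sd tibble) per group
dat %>%
  group_by(sample_id) %>%
  group_modify(~ km_mean(.x$x, .x$nd)) %>%
  ungroup()

This should give one row per sample ID, with the mean and sd columns produced by km_mean().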
I can do this for the two-sample t test, but not for the median test, the Wilcoxon test, or the Hodges-Lehmann test.
data_2000 <- c(500,450,600,700,550,551,552)
data_2019 <- c(560,460,620,720,540,600,750)
mean(data_2000)
mean(data_2019)
mean(data_2019) - mean(data_2000)
combined_data <- c(data_2000, data_2019)
set.seed(123)
null_dist <- c()
for (i in 1:100000) {
shuffled_data <- sample(combined_data)
shuffled_2000 <- shuffled_data[1:7]
shuffled_2019 <- shuffled_data[8:14]
null_dist[i] <- mean(shuffled_2019) - mean(shuffled_2000)
}
(p_value <- (sum(null_dist >= 49.57143) + sum(null_dist <= -49.57143)) / length(null_dist))
I think this is what you're trying to do; I altered your code as little as possible. There are packages like infer that will do this for you, and the for loop is not the most efficient approach, but it's plenty good enough and may help you learn. As long as we're looping, I did the mean and the median at the same time, since all other parts of the code are identical. ifelse() is a nice, easy way to make 1s and 0s to sum.
data_2000 <- c(500,450,600,700,550,551,552)
data_2019 <- c(560,460,620,720,540,600,750)
delta_mean <- mean(data_2019) - mean(data_2000)
delta_median <- median(data_2019) - median(data_2000)
combined_data <- c(data_2000, data_2019)
trials <- 100000
set.seed(123)
mean_diff <- c()
median_diff <- c()
for (i in 1:trials) {
shuffled_data <- sample(combined_data)
shuffled_2000 <- shuffled_data[1:7]
shuffled_2019 <- shuffled_data[8:14]
mean_diff[i] <- mean(shuffled_2019) - mean(shuffled_2000)
median_diff[i] <- median(shuffled_2019) - median(shuffled_2000)
}
p_mean <- sum(ifelse(mean_diff > delta_mean | mean_diff < -1 * delta_mean, 1, 0)) / trials
p_median <- sum(ifelse(median_diff > delta_median | median_diff < -1 * delta_median, 1, 0)) / trials
p_mean
#> [1] 0.31888
p_median
#> [1] 0.24446
Following up on your question about the Hodges-Lehmann test, quoting Wikipedia:
The Hodges–Lehmann statistic also estimates the difference between two populations. For two sets of data with m and n observations, the set of two-element sets made of them is their Cartesian product, which contains m × n pairs of points (one from each set); each such pair defines one difference of values. The Hodges–Lehmann statistic is the median of the m × n differences.
You could run it on your data with the following code. Do NOT run it 100,000 times; the answer is the same every time, because you're already making all 49 possible pairings.
hl_df <- expand.grid(data_2019, data_2000)
hl_df$pair_diffs <- hl_df$Var1 - hl_df$Var2
median(hl_df$pair_diffs)
[1] 49
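If you also want a permutation p-value for the Hodges-Lehmann shift (an assumption about what you're after, since the code above only gives the point estimate), one possible sketch that reuses combined_data and trials from above:

set.seed(123)
hl_obs <- median(outer(data_2019, data_2000, "-"))  # observed HL shift (49)
hl_null <- numeric(trials)
for (i in 1:trials) {
  shuffled_data <- sample(combined_data)
  # HL shift recomputed on each shuffled split of the pooled data
  hl_null[i] <- median(outer(shuffled_data[8:14], shuffled_data[1:7], "-"))
}
# two-sided permutation p-value
(p_hl <- mean(abs(hl_null) >= abs(hl_obs)))

outer() builds the same m x n grid of pairwise differences as expand.grid() above, just in one step.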
You can do the Wilcoxon test with wilcox.test() from the stats package (loaded by default with base R). You need to set exact = FALSE because an exact p-value is not possible when there are ties.
wilcox.test(data_2019, data_2000, exact = FALSE)
Wilcoxon rank sum test with continuity correction
data: data_2019 and data_2000
W = 33.5, p-value = 0.2769
alternative hypothesis: true location shift is not equal to 0
I'll update this when I figure out how to do the other tests.
I am interested in finding Pearson correlation coefficients for a list of genes. Basically, I have an Affymetrix gene-level expression matrix (genes in the rows and sample IDs in the columns), and I have annotation data from the microarray experiment with sample IDs in the rows and descriptive variables in the columns.
data
> expr_mat[1:8, 1:3]
Tarca_001_P1A01 Tarca_003_P1A03 Tarca_004_P1A04
1_at 6.062215 6.125023 5.875502
10_at 3.796484 3.805305 3.450245
100_at 5.849338 6.191562 6.550525
1000_at 3.567779 3.452524 3.316134
10000_at 6.166815 5.678373 6.185059
100009613_at 4.443027 4.773199 4.393488
100009676_at 5.836522 6.143398 5.898364
10001_at 6.330018 5.601745 6.137984
> anodat[1:8, 1:3]
V1 V2 V3
1 SampleID GA Batch
2 Tarca_001_P1A01 11 1
3 Tarca_013_P1B01 15.3 1
4 Tarca_025_P1C01 21.7 1
5 Tarca_037_P1D01 26.7 1
6 Tarca_049_P1E01 31.3 1
7 Tarca_061_P1F01 32.1 1
8 Tarca_051_P1E03 19.7 1
goal:
I want to see how each gene is correlated with the GA value of the corresponding samples in the annotation data, and then generate a sub-expression matrix that keeps only the genes highly correlated with anodat$GA.
my attempt:
gene_corrs <- function(expr_mat, anno_mat){
stopifnot(ncol(expr_mat)==nrow(anno_mat))
res <- list()
lapply(colnames(expr_mat), function(x){
lapply(x, rownames(y){
if(colnames(x) %in% rownames(anno_mat)){
cor_mat <- stats::cor(y, anno_mat$GA, method = "pearson")
ncor <- ncol(cor_mat)
cmatt <- col(cor_mat)
ord <- order(-cmat, cor_mat, decreasing = TRUE)- (ncor*cmatt - ncor)
colnames(ord) <- colnames(cor_mat)
res <- cbind(ID=c(cold(ord), ID2=c(ord)))
res <- as.data.frame(cbind(out, cor=cor_mat[res]))
res <- cbind(res, cor=cor_mat[out])
res <- as.dara.frame(res)
}
})
})
return(res)
}
However, my implementation above didn't return what I expected. I need to filter the genes by finding those that have a strong correlation with anodat$GA.
Another attempt:
I read a few posts about similar issues, and some people discussed using the limma package. Here is my attempt using limma, with anodat$GA as a covariate in the linear model:
library(limma)
fit <- limma::lmFit(expr_mat, design = model.matrix(~ 0 + anodat$GA))
fit <- eBayes(fit)
topTable(fit, coef=2)
I am then expecting to get a correlation matrix from the above code, and would like to do the following to get the filtered sub-expression matrix:
idx <- which( (abs(cor) > 0.8) & (upper.tri(cor)), arr.ind=TRUE)
idx <- unique(c(idx[, 1], idx[, 2]))
correlated.genes <- matrix[idx, ]
but I still didn't get the right answer. I am fairly confident about the limma approach, but I couldn't figure out what went wrong in the code above. Can anyone point out how to make this work? Is there a more efficient way to do this?
I don't have your data, so it's hard to double-check, but in the abstract I would try this:
library(matrixTests)
cors <- row_cor_pearson(expr_mat, anodat$GA)
which(cors$cor > 0.9) # to get the indices of genes with correlation > 0.9
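If you also want the sub-expression matrix itself, or a base-R alternative that needs no extra package, here is a sketch, assuming anodat$GA is numeric and its rows line up with the columns of expr_mat, and using 0.8 as an arbitrary cutoff:

# correlation of each gene (row) with GA
cors <- apply(expr_mat, 1, function(g) cor(g, anodat$GA, method = "pearson"))

# keep genes whose absolute correlation exceeds the cutoff
keep <- which(abs(cors) > 0.8)
correlated.genes <- expr_mat[keep, , drop = FALSE]

Using abs() keeps strongly negatively correlated genes as well; drop it if you only want positive correlations.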
When using the matchit function for full matching, the results depend on the order of the input data frame. That is, if the order of the data is changed, the results change, too. This is surprising because, in my understanding, optimal full matching should yield a single best solution.
Am I missing something or is this an error?
Similar differences occur with the optimal algorithm.
Below is a reproducible example. The subclasses should be identical for the two data sets, but they are not.
Thank you for your help!
library(MatchIt)
# create data
nr <- c(1:100)
x1 <- rnorm(100, mean=50, sd=20)
x2 <- c(rep("a", 20),rep("b", 60), rep("c", 20))
x3 <- rnorm(100, mean=230, sd=2)
outcome <- rnorm(100, mean=500, sd=20)
group <- c(rep(0, 50),rep(1, 50))
df <- data.frame(x1=x1, x2=x2, outcome=outcome, group=group, row.names=nr, nr=nr)
df_neworder <- df[order(outcome),] # re-order data.frame
# perform matching
model_oldorder <- matchit(group~x1, data=df, method="full", distance ="logit")
model_neworder <- matchit(group~x1, data=df_neworder, method="full", distance ="logit")
# store matching results
matcheddata_oldorder <- match.data(model_oldorder, distance="pscore")
matcheddata_neworder <- match.data(model_neworder, distance="pscore")
# Results based on original data.frame
head(matcheddata_oldorder[order(nr),], 10)
x1 x2 outcome group nr pscore weights subclass
1 69.773776 a 489.1769 0 1 0.5409943 1.0 27
2 63.949637 a 529.2733 0 2 0.5283582 1.0 32
3 52.217666 a 526.7928 0 3 0.5028106 0.5 17
4 48.936397 a 492.9255 0 4 0.4956569 1.0 9
5 36.501507 a 512.9301 0 5 0.4685876 1.0 16
# Results based on re-ordered data.frame
head(matcheddata_neworder[order(matcheddata_neworder$nr),], 10)
x1 x2 outcome group nr pscore weights subclass
1 69.773776 a 489.1769 0 1 0.5409943 1.0 25
2 63.949637 a 529.2733 0 2 0.5283582 1.0 31
3 52.217666 a 526.7928 0 3 0.5028106 0.5 15
4 48.936397 a 492.9255 0 4 0.4956569 1.0 7
5 36.501507 a 512.9301 0 5 0.4685876 2.0 14
Apparently, the assignment of objects to subclasses differs. In my understanding, this should not be the case.
The developers of the optmatch package (which the matchit function calls) provided useful help:
I think what we're seeing here is the result of the tolerance argument
that fullmatch has. The matching algorithm requires integer distances,
so we have to scale then truncate floating point distances. For a
given set of integer distances, there may be multiple matchings that
achieve the minimum, so the solver is free to pick among these
non-unique solutions.
Developing your example a little more:
library(optmatch)
nr <- c(1:100)
x1 <- rnorm(100, mean=50, sd=20)
outcome <- rnorm(100, mean=500, sd=20)
group <- c(rep(0, 50), rep(1, 50))
df_oldorder <- data.frame(x1=x1, outcome=outcome, group=group, row.names=nr, nr=nr)
df_neworder <- df_oldorder[order(outcome),] # re-order data.frame
glm_oldorder <- match_on(glm(group~x1, data=df_oldorder), data = df_oldorder)
glm_neworder <- match_on(glm(group~x1, data=df_neworder), data = df_neworder)
fm_old <- fullmatch(glm_oldorder, data=df_oldorder)
fm_new <- fullmatch(glm_neworder, data=df_neworder)
mean(sapply(matched.distances(fm_old, glm_oldorder), mean))
## 0.06216174
mean(sapply(matched.distances(fm_new, glm_neworder), mean))
## 0.062058
mean(sapply(matched.distances(fm_old, glm_oldorder), mean)) -
  mean(sapply(matched.distances(fm_new, glm_neworder), mean))
## 0.00010373
which we can see is smaller than the default tolerance of 0.001. You can always decrease the tolerance level, which may
require increased run time, in order to get closer to the true
floating point minimum. We found 0.001 seemed to work well in practice,
but there is nothing special about this value.
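In practice, this means you can pass a tighter tolerance yourself. A sketch, reusing the objects from the example above (tol is fullmatch's tolerance argument; as far as I know, matchit forwards extra arguments like this on to fullmatch):

# tighter tolerance; may noticeably increase run time
fm_old_tight <- fullmatch(glm_oldorder, data = df_oldorder, tol = 1e-6)
fm_new_tight <- fullmatch(glm_neworder, data = df_neworder, tol = 1e-6)

# the same idea through matchit()
model_tight <- matchit(group ~ x1, data = df, method = "full", distance = "logit", tol = 1e-6)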
For each reported study, I want to do 1000 simulations of a parameter X using a normal or log-normal distribution (based on a flag), and then combine all the simulations into one data frame. I am looking for an automated way of doing this.
What I have is a data frame with the following columns:
SOURCE NSUB MEAN SD DIST
Study1 10 1.5 0.3 0
Study2 5 2.5 0.4 1
Study1 4 3.5 0.3 0
When DIST==0 the distribution is normal; when DIST==1 it is log-normal.
I am able to do the simulations and combine them with hard-coding, for example:
#for Study1:
set.seed(1)
NSUB <- 10
MEAN <- 1.5
SD <- 0.3
DIST <- 0 #Normal distribution
df1 <- data.frame("SOURCE"="Study1","NSUB"=NSUB,"DIST"=DIST, "VALUE" = rnorm(1000, mean=MEAN, sd=SD))
#For study2
set.seed(2)
NSUB <- 5
MEAN <- 2.5
SD <- 0.4
DIST <- 1 #log-normal distribution
df2 <- data.frame("SOURCE"="Study2","NSUB"=NSUB,"DIST"=DIST, "VALUE" = rlnorm(1000, meanlog=log(MEAN), sdlog=SD))
#Combine all
dfall <- rbind(df1,df2)
However, this would be tedious because I have a lot of reported means and SDs for the parameter. I need help making this automated, so that it does 1000 simulations for each row (using MEAN and SD) and then combines all the simulated data into one data frame.
In the interest of implementing readable and general code, you should do two things here:
Write a function that takes each row of your simulation configuration dataset and returns the simulated values as a data_frame (doSim below). This makes it easier to test your simulation code separately from your iteration over simulation configurations.
Use dplyr to pass each row of the configuration dataset to this function, and collect up the results as a data_frame.
Here is some sample code:
library(dplyr)
# read in the simultation configuration dataset
dfX = read.table(textConnection("
SOURCE NSUB MEAN SD DIST
Study1 10 1.5 0.3 0
Study2 5 2.5 0.4 1
Study1 4 3.5 0.3 0"),
header = TRUE, stringsAsFactors = FALSE)
# write a function that takes each row of the configuration
# data.frame and returns the simulations
doSim = function(simConfig, seed = 12345) {
set.seed(seed)
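  # note: rnorm() uses mean/sd while rlnorm() uses meanlog/sdlog; the dist(1000, mean = ..., sd = ...)
  # call below still works for rlnorm because R's partial argument matching maps mean/sd to meanlog/sdlog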
dist = if(simConfig[["DIST"]] == 0) rnorm else rlnorm
mean = if(simConfig[["DIST"]] == 0) simConfig[["MEAN"]] else log(simConfig[["MEAN"]])
return(
data_frame(
source = simConfig[["SOURCE"]],
nsub = simConfig[["NSUB"]],
value = dist(1000, mean = mean, sd = simConfig[["SD"]])
)
)
}
# test the function
doSim(dfX[1, ])
# apply over dfX
dfX %>%
rowwise() %>%
do(doSim(.))
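Since do() is superseded in recent versions of dplyr, a minimal alternative (a sketch reusing dfX and doSim from above) is to loop over the rows yourself and bind the pieces:

# one data frame of simulations per configuration row, bound into a single data frame
dfall <- dplyr::bind_rows(
  lapply(seq_len(nrow(dfX)), function(i) doSim(dfX[i, ], seed = i))
)

Passing seed = i gives each study its own seed; drop that argument if you prefer the fixed default in doSim().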
I have some data showing a long list of regions, the population of each region and the number of people in each region with a certain disease. I'm trying to show the confidence intervals for each proportion (but I'm not testing whether the proportions are statistically different).
One approach is to manually calculate the standard errors and confidence intervals but I'd like to use a built-in tool like prop.test, because it has some useful options. However, when I use prop.test with vectors, it runs a chi-square test across all the proportions.
I've solved this with a while loop (see dummy data below), but I sense there must be a better and simpler way to approach this problem. Would apply work here, and how? Thanks!
dat <- data.frame(1:5, c(10, 50, 20, 30, 35))
names(dat) <- c("X", "N")
dat$Prop <- dat$X / dat$N
ConfLower = 0
x = 1
while (x < 6) {
a <- prop.test(dat$X[x], dat$N[x])$conf.int[1]
ConfLower <- c(ConfLower, a)
x <- x + 1
}
ConfUpper = 0
x = 1
while (x < 6) {
a <- prop.test(dat$X[x], dat$N[x])$conf.int[2]
ConfUpper <- c(ConfUpper, a)
x <- x + 1
}
dat$ConfLower <- ConfLower[2:6]
dat$ConfUpper <- ConfUpper[2:6]
Here's an attempt using Map, essentially stolen from a previous answer here:
https://stackoverflow.com/a/15059327/496803
res <- Map(prop.test,dat$X,dat$N)
dat[c("lower","upper")] <- t(sapply(res,"[[","conf.int"))
# X N Prop lower upper
#1 1 10 0.1000000 0.005242302 0.4588460
#2 2 50 0.0400000 0.006958623 0.1485882
#3 3 20 0.1500000 0.039566272 0.3886251
#4 4 30 0.1333333 0.043597084 0.3164238
#5 5 35 0.1428571 0.053814457 0.3104216
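Since the question asked specifically whether apply would work: sapply() over the row indices does the same job (a sketch using the dat defined above):

# each prop.test() call returns a 2-element confidence interval,
# so sapply() simplifies the results to a 2 x nrow(dat) matrix
ci <- sapply(seq_len(nrow(dat)), function(i) prop.test(dat$X[i], dat$N[i])$conf.int)
dat$ConfLower <- ci[1, ]
dat$ConfUpper <- ci[2, ]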