I am currently working on survival analysis in R. I have two data frames: geneDf for gene expression and survDf for the follow-up data, as in the following samples:
#Data frame: geneDf
geneID=c("EGFR","Her2","E2F1","PTEN")
patient1=c(12,23,56,23)
patient2=c(23,34,11,6)
patient3=c(56,44,32,45)
patient4=c(23,64,45,23)
geneDf=data.frame(patient1,patient2,patient3,patient4,geneID)
> geneDf
patient1 patient2 patient3 patient4 geneID
1 12 23 56 23 EGFR
2 23 34 44 64 Her2
3 56 11 32 45 E2F1
4 23 6 45 23 PTEN
#Data frame: survDf
ID=c("patient1","patient2","patient3","patient4")
time=c(23,7,34,56)
status=c(1,0,1,1)
survDf=data.frame(ID,time,status)
> survDf
ID time status
1 patient1 23 1
2 patient2 7 0
3 patient3 34 1
4 patient4 56 1
I extract the expression data for a specific gene from geneDf, use the median expression as the cut-off to define two groups, perform survival analysis with the "survival" package, and obtain the p-value from survdiff. In the following code I use the "EGFR" gene as an example.
#extract the expression of a certain gene
targetGene <- subset(geneDf, geneID == "EGFR")  # exact match; grepl("EGFR", ...) could also hit other IDs containing "EGFR"
targetGene$geneID <- NULL
#Transpose the table and adjust its format
targetGene <- t(targetGene)
targetGene <- data.frame(as.factor(rownames(targetGene)), targetGene)
colnames(targetGene) <- c("ID", "Expression")
rownames(targetGene) <- NULL
#dichotomize at the median: 1 = low expression, 2 = high expression
targetGene$Expression1 <- ifelse(targetGene$Expression >= median(targetGene$Expression), 2, 1)
#Survival analysis
library(survival)
##add a survival object
survDf$SurvObj <- with(survDf, Surv(time, status == 1))
##Kaplan-Meier estimator by expression group
##(this assumes survDf and targetGene list the same patients in the same order)
km <- survfit(SurvObj ~ targetGene$Expression1, data = survDf, conf.type = "log-log")
sdf <- survdiff(Surv(time, status) ~ targetGene$Expression1, data = survDf)
#obtain the p-value
p.val <- 1 - pchisq(sdf$chisq, length(sdf$n) - 1)
> p.val
[1] 0.1572992
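Equivalently, the p-value can be computed with lower.tail = FALSE, which avoids precision loss when p is very small:
p.val <- pchisq(sdf$chisq, length(sdf$n) - 1, lower.tail = FALSE)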
I can do this for different genes one by one. But the question is: there are more than 10,000 genes to be analyzed. I want to obtain all of their p-values and put them into a new data frame. Should I use a loop, or apply?
This is an ugly script, but it works.
In Data10, the first column must contain the time, the second the status, and the remaining columns the treatments you want to test (with patients as row names).
loopsurff <- function(Data10) {
  library(plyr)
  library(survival)
  ## each column of 'combos' holds c(time column, status column, treatment column)
  combos <- rbind(rep(1, ncol(Data10) - 2),
                  rep(2, ncol(Data10) - 2),
                  3:ncol(Data10))
  vv <- adply(combos, 2, function(x) {
    fit <- survdiff(Surv(Data10[, 1], Data10[, 2]) ~ Data10[, x[3]], data = Data10)
    p <- 1 - pchisq(fit$chisq, 1)
    data.frame(var1 = colnames(Data10)[x[3]],
               p.value = as.numeric(sprintf("%.3f", p)))
  })
  vv
}
You will get a data frame with the column names of yourdata[, 3:ncol(yourdata)] and the p-value for each one.
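Alternatively, staying with the question's own objects, a minimal sketch (assuming the patient columns of geneDf line up one-to-one with the rows of survDf; genePval is a hypothetical helper name) wraps the single-gene analysis in a function and applies it over all genes with sapply:
library(survival)
## log-rank p-value for one gene, dichotomized at the median expression
genePval <- function(gene) {
  expr  <- unlist(geneDf[geneDf$geneID == gene, setdiff(names(geneDf), "geneID")])
  group <- ifelse(expr >= median(expr), 2, 1)  # 2 = high, 1 = low, as above
  sdf   <- survdiff(Surv(time, status) ~ group, data = survDf)
  1 - pchisq(sdf$chisq, length(sdf$n) - 1)
}
## one row per gene, with its log-rank p-value
p.values <- data.frame(geneID  = geneDf$geneID,
                       p.value = sapply(as.character(geneDf$geneID), genePval))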
I am trying to loop over multiple variables in a mixed model (using the rptGaussian function from the rptR package), but I have been unable to despite several efforts. The following code without a loop works fine:
rptGaussian(Arg ~ (1|class)+(1|kit)+(1|sex),
            grname=c("class","kit","sex","Fixed"),
            data=ggm2, nboot=10, npermut=10, adjusted=FALSE)
However, when I try to loop over multiple variables I get the error
Error in terms.default(formula) : no terms component nor attribute
This is the code I am trying for the loop:
varlist<-c("var1", "var2")
blups.models <- lapply(varlist, function(x) {
rptGaussian(substitute(i ~ (1|class)+(1|kit)+(1|sex),
list(i = as.name(x))),
grname=c("class","kit","lab","Fixed"),
data=ggm2, nboot=10, npermut=10, adjusted=FALSE)
})
Here is a dummy data table:
sex class kit var1 var2 var3 var4
Female A Cont 10.79730768 10 20 18
Female A Exp 11.2474347 17 1 17
Female A Cont 11.64820939 10 5 17
Female A Exp 15.62800413 20 8 4
Female B Cont 12.41705885 5 16 8
Female B Exp 12.80249244 9 10 1
Female B Cont 10.76949177 6 13 2
Female B Exp 14.71370141 7 12 11
Male A Cont 8.931529823 8 3 6
Male A Exp 10.46899683 3 12 13
Male A Cont 8.363257621 3 13 17
Male A Exp 8.753117911 10 16 10
Male B Cont 9.110946315 9 13 4
Male B Exp 9.595131886 18 10 17
Male B Cont 9.454670188 1 10 11
Male B Exp 10.59379123 11 1 3
In general this kind of looping is easier (IMO) with string-based solutions, especially the reformulate() wrapper function, than with substitute().
I used read.table(header=TRUE,text="...") to read the data above and this slightly modified code for the single model:
library(rptR)
r1 <- rptGaussian(var1 ~ (1|class)+(1|kit)+(1|sex),
grname=c("class","kit","sex","Fixed"),
data=ggm2, nboot=10, npermut=10, adjusted=FALSE)
For multiple models:
varlist <- c("var1", "var2")
Make list of formulas:
formulas <- lapply(varlist,
reformulate,
termlabels="(1|class)+(1|kit)+(1|sex)")
Apply rptGaussian to formulas:
blups.models <- lapply(formulas,
rptGaussian,
grname=c("class","kit","sex","Fixed"),
data=ggm2, nboot=10, npermut=10, adjusted=FALSE)
If you want to collapse the results into a nice form, you have to figure out how to extract the results from a single fit into a data frame or similar structure. In this case the result is an rpt object, and methods(class="rpt") tells you that there are only print, plot, and summary methods, but the summary() method returns an object that has lots of potentially useful bits. Here's an example:
## extract estimates and standard errors of estimates as a 1-row data frame
sumfun <- function(x) {
ss <- summary(x)
se.names <- paste(rownames(ss$se),"se",sep=".")
cbind(ss$R,setNames(as.data.frame(t(ss$se)),se.names))
}
A possibly better alternative would be to return data.frame(term=names(ss$R), rpt=unlist(ss$R), se=ss$se) (an n-row, 3-column data frame) instead.
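Transcribed as code, that alternative might look like this (a sketch; it assumes, as sumfun above implies, that ss$R is a one-row data frame and ss$se has one entry per term):
## long-format extractor (sketch): one row per term
sumfun_long <- function(x) {
  ss <- summary(x)
  data.frame(term = names(ss$R),
             rpt  = unlist(ss$R),
             se   = unlist(ss$se))
}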
I'm going to use dplyr::bind_rows() because it's handy, but you could use base-R tools (do.call(rbind, ...)) instead if you prefer.
names(blups.models) <- varlist
dplyr::bind_rows(lapply(blups.models,sumfun),
.id="var")
var class kit sex Fixed class.se kit.se sex.se Fixed.se
1 var1 0 0.1444659 0.65887365 0 0.04992624 0.2136589 0.2954982 0
2 var2 0 0.3322780 0.01734343 0 0.01981748 0.2243989 0.1158878 0
Are you sure it makes sense to do repeatability scores across sexes and other categories with small numbers of levels?
This is my dummy data:
income      <- sample(1000:10000, 1000, replace = TRUE)
individuals <- sample(1:50, 1000, replace = TRUE)
datatest    <- data.frame(income, individuals)
I know I can sample by individual rows with this code:
sample <- datatest[sample(nrow(datatest), replace=TRUE),]
Now I want to extract random samples from the dataset with replacement and equal probabilities, but sampling complete blocks of observations that share the same individual code.
Note that there are 50 individuals, but 1000 observations. Some observations belong to the same individual, so I want to sample by individuals (clusters, in this case), not observations. I don't mind if the extracted samples differ slightly in the number of observations. How can I do that?
I have tried:
library(sampling)
samplecluster <- cluster(datatest, clustername = "individuals", size = 50,
                         method = "srswr")
But the outcome is not the sampled data. Am I missing something?
Well, it seems I was indeed missing something: after the cluster command you need to apply the getdata command (both from the sampling package). This way I do get the sample I wanted, plus some additional columns.
samplecluster <- cluster(datatest, clustername = "individuals", size = 50, method = "srswr")
Gives you:
head(samplecluster)
individuals ID_unit Replicates Prob
1 1 259 2 0.63583
2 1 178 2 0.63583
3 1 110 2 0.63583
4 1 153 2 0.63583
5 1 941 2 0.63583
6 1 667 2 0.63583
Then using getdata, I also get the original data on income sampled by whole clusters:
datasample <- getdata (datatest, samplecluster)
head(datasample)
income individuals ID_unit Replicates Prob
1 8567 1 259 2 0.63583
2 2701 1 178 2 0.63583
3 4998 1 110 2 0.63583
4 3556 1 153 2 0.63583
5 2893 1 941 2 0.63583
6 7581 1 667 2 0.63583
I am not sure if I am missing something. If you just want some of your individuals, you can create a smaller sample of them:
ind.sample <- sample(1:50, size = 10)
print(ind.sample)
# [1] 17 43 38 39 28 23 35 47 9 13
my.sample <- datatest[datatest$individuals %in% ind.sample, ]
head(my.sample)
# income individuals
#21 9072 17
#97 5928 35
#122 9130 43
#252 4388 43
#285 8083 28
#287 1065 35
I guess a more generic approach would be to generate random indices:
ind.unique <- unique(individuals)
ind.sample <- sample(ind.unique, size = 10)
print(ind.sample[order(ind.sample)])
my.sample <- datatest[datatest$individuals %in% ind.sample, ]
ind.counts <- aggregate(income ~ individuals, my.sample, FUN = length)
print(ind.counts)
I think it's important to note that the dataset still needs to be expanded to include all the replicates:
sw <- data.frame(datasample[rep(seq_len(nrow(datasample)), datasample$Replicates), , drop = FALSE],
                 row.names = NULL)
Might be helpful to someone.
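As an aside, the same whole-cluster bootstrap can be sketched in base R without the sampling package, by drawing individual IDs with replacement and stacking their rows (assuming datatest from the question; bootsample is a hypothetical name):
ids <- unique(datatest$individuals)
## draw as many individuals as exist, with replacement, then collect all of their rows
boot.ids   <- sample(ids, size = length(ids), replace = TRUE)
bootsample <- do.call(rbind,
                      lapply(boot.ids, function(i) datatest[datatest$individuals == i, ]))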
I have a data frame of values and, for each value, the count of observations with that value, so plotting counts against values gives me the histogram. There are three types: a, b, and c.
value counts type
0 139648267 a
1 34945930 a
2 5396163 a
3 1400683 a
4 485924 a
5 204631 a
6 98599 a
7 53056 a
8 30929 a
9 19556 a
10 12873 a
11 8780 a
12 6200 a
13 4525 a
14 3267 a
15 2489 a
16 1943 a
17 1588 a
... ... ...
How do I get from this to a CDF?
So far my approach is super inefficient: I first write a function that sums the counts up to each value:
get_cumulative <- function(x) {
  result <- numeric(nrow(x))
  for (i in seq_along(result)) {
    result[i] <- sum(x$counts[x$value <= x$value[i]])
  }
  x$cumulative <- result
  x
}
Then I wrap this in a ddply call that splits by type. This is obviously not the best way, and I'd love suggestions on how to proceed.
You can use ave and cumsum (assuming your data is in df and sorted by value):
transform(df, cdf=ave(counts, type, FUN=function(x) cumsum(x) / sum(x)))
Here is a toy example:
df <- data.frame(counts=sample(1:100, 10), type=rep(letters[1:2], each=5))
transform(df, cdf=ave(counts, type, FUN=function(x) cumsum(x) / sum(x)))
that produces:
counts type cdf
1 55 a 0.2750000
2 61 a 0.5800000
3 27 a 0.7150000
4 20 a 0.8150000
5 37 a 1.0000000
6 45 b 0.1836735
7 79 b 0.5061224
8 12 b 0.5551020
9 63 b 0.8122449
10 46 b 1.0000000
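One caveat worth making explicit: ave() accumulates in row order, so if your real data are not already sorted by value within each type, sort first, e.g.:
## sort by type, then value, before computing the CDF
df <- df[order(df$type, df$value), ]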
If your data is in a data frame DF, then the following should do:
do.call(rbind, lapply(split(DF, DF$type),
                      function(d) transform(d, cdf = cumsum(counts) / sum(counts))))
The HistogramTools package on CRAN has several functions for converting between Histograms and CDFs, calculating information loss or error margins, and plotting functions to help with this.
If you have a histogram h then calculating the Empirical CDF of the underlying dataset is as simple as:
library(HistogramTools)
h <- hist(runif(100), plot=FALSE)
plot(HistToEcdf(h))
If you first need to convert your input data of breaks and counts into an R Histogram object, then see the PreBinnedHistogram function first.
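For example, a rough sketch for type a (untested; it assumes the question's value/counts/type data frame is df, that PreBinnedHistogram takes breaks and counts, and that each value is the left edge of a unit-width bin):
library(HistogramTools)
a <- df[df$type == "a", ]
## breaks needs one more entry than counts
h <- PreBinnedHistogram(breaks = c(a$value, max(a$value) + 1), counts = a$counts)
plot(HistToEcdf(h))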
I searched SO, but I could not seem to find the right code that is applicable to my question. It is similar to this question: Linear Regression calculation several times in one dataframe
I got a dataframe of LR coefficients following Andrie's code:
Cddply <- ddply(test, .(sumtest), function(test)coef(lm(Area~Conc, data=test)))
sumtest (Intercept) Conc
1 -108589.2726 846.0713372
2 -49653.18701 811.3982918
3 -102598.6252 832.6419926
4 -72607.4017 727.0765558
5 54224.28878 391.256075
6 -42357.45407 357.0845661
7 -34171.92228 367.3962888
8 -9332.569856 289.8631555
9 -7376.448899 335.7047756
10 -37704.92277 359.1457617
My question is how to apply each of these LR models (1-10) to specific row intervals in another data frame, in order to put x, the independent variable, into a third column. For example, I would like to apply sumtest 1 to samples 6:29, sumtest 2 to samples 35:50, sumtest 3 to samples 56:79, and so on, in alternating intervals of 24 and 16 samples. The sample numbers repeat after 200, so sumtest 9 applies to samples 6:29 again.
Sample Area
6 236211
7 724919
8 1259814
9 1574722
10 268836
11 863818
12 1261768
13 1591845
14 220322
15 608396
16 980182
17 1415859
18 276276
19 724532
20 1130024
21 1147840
22 252051
23 544870
24 832512
25 899457
26 285093
27 4291007
28 825922
29 865491
35 246707
36 538092
37 767269
38 852410
39 269152
40 971471
41 1573989
42 1897208
43 261321
44 481486
45 598617
46 769240
47 229695
48 782691
49 1380597
50 1725419
The resulting dataframe would look like this:
Sample Area Calc
6 236211 407.5312917
7 724919 985.1525288
8 1259814 1617.363812
9 1574722 1989.564693
10 268836 446.0919309
...
35 246707 365.2452551
36 538092 724.3591324
37 767269 1006.805521
38 852410 1111.736505
39 269152 392.9073207
Thank you for your assistance.
Is this what you want? I made up a slightly larger dummy data set of 'area' to make it easier to see how the code worked when I tried it out.
# create 400 rows of area data
set.seed(123)
df <- data.frame(area = round(rnorm(400, mean = 1000000, sd = 100000)))
# "sample numbers repeats after 200" -> add a sample nr 1-200, 1-200
df$sample_nr <- 1:200
# create a factor which cuts the vector of sample_nr into pieces of length 16, 24, 16, 24...
# repeat to a total length of the pieces is 200
# i.e. 5 repeats of (16, 24)
grp <- cut(df$sample_nr, breaks = c(-Inf, cumsum(rep(c(16, 24), 5))))
# add a numeric version of the chunks to data frame
# this number indicates the model from which coefficients will be used
# row 1-16 (16 rows): model 1; row 17-40 (24 rows): model 2;
# row 41-56 (16 rows): model 3; and so on.
df$mod <- as.numeric(grp)
# read coefficients
coefs <- read.table(text = "intercept beta_conc
1 -108589.2726 846.0713372
2 -49653.18701 811.3982918
3 -102598.6252 832.6419926
4 -72607.4017 727.0765558
5 54224.28878 391.256075
6 -42357.45407 357.0845661
7 -34171.92228 367.3962888
8 -9332.569856 289.8631555
9 -7376.448899 335.7047756
10 -37704.92277 359.1457617", header = TRUE)
# add model number (as.numeric so it matches df$mod in the join)
coefs$mod <- as.numeric(rownames(coefs))
head(df)
head(coefs)
# join area data and coefficients by model number
# (use 'join' instead of merge to avoid sorting)
library(plyr)
df2 <- join(df, coefs)
# calculate conc from area and model coefficients
# area = intercept + beta_conc * conc
# conc = (area - intercept) / beta_conc
df2$conc <- (df2$area - df2$intercept) / df2$beta_conc
head(df2, 41)
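If you prefer base R, merge() can do the same lookup, but it may reorder rows; here is a sketch that preserves the original order with an explicit row index (df2b is a hypothetical name):
df$row <- seq_len(nrow(df))
df2b <- merge(df, coefs, by = "mod")
df2b <- df2b[order(df2b$row), ]
df2b$conc <- (df2b$area - df2b$intercept) / df2b$beta_conc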
I have a data frame as shown below. I need to extract sub-data-frames based on the regions available in RL:
>avg_data
region SN value
beta 1 32
alpha 2 44
beta 3 55
beta 4 60
atp 5 22
> RL
V1
1 beta
2 alpha
The result should be stored in something array-like, say REGR[beta], containing the beta-related information as below:
region SN value
beta 1 32
beta 3 55
beta 4 60
Similarly for REGR[alpha]
region SN value
alpha 2 44
So that I can pass REGR as an argument for plotting graphs. My attempt:
REGR <- data.frame()
for (i in levels(RL$V1)){
REGR[i,] <- avg_data[avg_data$region==i, ];
}
I have made a mistake somewhere in the above code; please correct me. Thank you.
The split function may be of interest to you. From the help page, split divides the data in the vector x into the groups defined by f.
So for your data, it may look something like:
> split(avg_data, avg_data$region)
$alpha
region SN value
2 alpha 2 44
$atp
region SN value
5 atp 5 22
$beta
region SN value
1 beta 1 32
3 beta 3 55
4 beta 4 60
If you want to filter out the records that do not occur in RL, I'd probably do that in a preprocessing step using the %in% function and [ for extraction:
x <- avg_data[avg_data$region %in% RL$V1,]
#-----
region SN value
1 beta 1 32
2 alpha 2 44
3 beta 3 55
4 beta 4 60
That's what I'd feed to split if you want to drop atp.
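Since split() returns a named list, you can store it and index it by region name, much like the REGR[beta] notation in the question:
REGR <- split(x, factor(x$region))  # factor() drops the now-empty "atp" level
REGR[["beta"]]   # all rows for region beta
REGR[["alpha"]]  # all rows for region alpha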
The approach above may be overkill if you just want to plot. Here's an example using sapply to iterate through each level of region and make a plot:
sapply(as.character(unique(x$region)), function(z)
  plot(x[x$region == z, "value"], main = z))