I have 7 dataframes of experiments, each subdivided into 15 repetitions (or iterations). I am now interested in all 105 subsets of variable x for calculations later in the analysis.
Imagine you have the following dataframes with randomized numbers and, for the sake of simplicity, pretend that all dataframes contain different numbers:
set.seed(2)
a <- runif(100, -1.5, 1.5)
b <- pnorm(rnorm(100))
c <- rnorm(100)
d <- rnorm(100)
e <- dnorm(rnorm(100))
iteration <- sort(sample(1:7, 100, replace=T), decreasing=F)
x <- sample(1:1000, 100, replace=T)
df1 <- data.frame(a,b,c,d,e,iteration,x)
df2 <- data.frame(a,b,c,d,e,iteration,x)
df3 <- data.frame(a,b,c,d,e,iteration,x)
df4 <- data.frame(a,b,c,d,e,iteration,x)
df5 <- data.frame(a,b,c,d,e,iteration,x)
df6 <- data.frame(a,b,c,d,e,iteration,x)
df7 <- data.frame(a,b,c,d,e,iteration,x)
How can I break down all 105 combinations of variable x (df1$x of iteration 1, df1$x of iteration 2, ..., df7$x of iteration 7) so that I can calculate the following example nonsense equation for all 105 combinations?
mean(df1$x of iteration 1) - sd(df1$x of iteration 1)
mean(df1$x of iteration 2) - sd(df1$x of iteration 2)
...
mean(df7$x of iteration 7) - sd(df7$x of iteration 7)
I have the following commands to "extract" variable df1$x of iteration 1, but this approach would require 208 more lines for the remaining variables:
df_1 <- df1[which(df1$iteration=='1'),]
df_1_final <- df_1[grepl("1", df_1$iteration), c(6, 7)]
Does this make sense? Is there not a better way to do that in GNU R?
A possibility using dplyr. It is probably easier to work with your data.frames in a list (from comments by @akrun):
library(dplyr)
bind_rows(mget(paste0('df', 1:7))) %>%       # put your data.frames in a list -> one data.frame
  mutate(group = rep(1:7, each = 100)) %>%   # add a grouping column
  group_by(group, iteration) %>%             # group
  summarise(mean(x) - sd(x))                 # do your stuff
Or in data.table:
library(data.table)
rbindlist(mget(paste0('df', 1:7)))[, mean(x) - sd(x), .(gr = rep(1:7, each = 100), iteration)]
You could create a nonsense-equation function and then use it in tapply() with iteration as the INDEX argument, for each df. So for df1: tapply(df1$x, INDEX = df1$iteration, nonsenseFunction), which will return a list/array with the computation for each group (iteration) of df1.
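For instance, a minimal sketch (nonsenseFunction is an illustrative name, not a function from the original post):
# the example nonsense equation from the question
nonsenseFunction <- function(v) mean(v) - sd(v)
# per-iteration results for one data frame
tapply(df1$x, INDEX = df1$iteration, FUN = nonsenseFunction)
# or for all seven data frames at once
lapply(mget(paste0("df", 1:7)), function(d) tapply(d$x, d$iteration, nonsenseFunction))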
I'm looking for an easy way to make a table in R that shows each variable as a row of the dataframe and each variable category as a column. Each cell should display the frequency of that category, and the sum should be the last column. The point is to easily display the distributions of different variables that have the same categories. I have included a picture to show what I'm looking for.
I have managed to produce some code that achieves what I want, but it takes a lot of time to do this for each variable I want to include in the table.
mydata <- as.data.frame(table(mydat$var))
mydata <- as.data.frame(t(mydata))
mydata <- lapply(mydata, as.numeric)
mydata <- as.data.frame(mydata)
mydata$sum <- mydata$V1 + mydata$V2 + mydata$V3 # t() leaves the default column names V1, V2, V3
mydata <- mydata[-1, ]
The result is a single row holding the frequency of each category, with the sum as the last column.
To add more variables I imagine that I could use rbind(), but there might be some easier way to achieve something similar?
Here is a reproducible example using the mtcars dataset.
data("mtcars")
tdata <- as.data.frame(table(mtcars$cyl))
tdata1 <- as.data.frame(t(tdata))
tdata2 <- lapply(tdata1, as.numeric)
tdata3 <- as.data.frame(tdata2)
tdata3$sum <- (tdata3$V1 + tdata3$V2 + tdata3$V3)
tdata3 <- tdata3[-c(1),]
tdata3
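For reference, mtcars has 11 four-cylinder, 7 six-cylinder and 14 eight-cylinder cars, so tdata3 prints:
#   V1 V2 V3 sum
# 2 11  7 14  32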
Assuming you have a data.frame where each variable has the same categories (as in your example):
df <- data.frame(Var1 = c(rep("Cat1", 30),
                          rep("Cat2", 10),
                          rep("Cat3", 20)),
                 Var2 = c(rep("Cat1", 10),
                          rep("Cat2", 20),
                          rep("Cat3", 30)),
                 Var3 = c(rep("Cat1", 5),
                          rep("Cat2", 25),
                          rep("Cat3", 30)))
You could use lapply() to apply the table() function to every column in your data.frame:
tab <- lapply(colnames(df), function(x) table(df[, x]))
As lapply() outputs a list, use do.call to bind them, and rowSums() to create the sum column:
tab <- data.frame(do.call(rbind, tab))
tab$Sum <- rowSums(tab)
# add variable labels as rows
rownames(tab) <- colnames(df)
The output will look like this:
Cat1 Cat2 Cat3 Sum
Var1 30 10 20 60
Var2 10 20 30 60
Var3 5 25 30 60
And you could wrap all of this in a function:
my_tab_fun <- function(df) {
  tab <- lapply(colnames(df), function(x) table(df[, x]))
  tab <- data.frame(do.call(rbind, tab))
  tab$Sum <- rowSums(tab)
  rownames(tab) <- colnames(df)
  return(tab)
}
my_tab_fun(df)
Say I have some data of the following kind:
df<-as.data.frame(matrix(rnorm(10*10000, 1, .5), ncol=10))
I want a new dataframe that keeps the 10 original columns, but for every column retains only the highest 10 and lowest 10 values. Importantly, the rows have names corresponding to id values that need to be kept in the new data frame.
Thus, the end result data.frame will have dimensions m by 10, where m is very likely to be more than 20. But for every column, I want only 20 valid values.
The only way I can think of doing this is doing it manually per column, using dplyr and arrange, grabbing the top and bottom rows, and then creating a matrix from all the individual vectors. Clearly this is inefficient. Help?
Assuming you want to keep all the rows from the original dataset, where there is at least one value satisfying your condition (value among ten largest or ten smallest in the given column), you could do it like this:
# create a data frame
df<-as.data.frame(matrix(rnorm(10*10000, 1, .5), ncol=10))
# function to keep the lowest 10 and highest 10 values and NA out the rest
lowHigh <- function(x)
{
  test <- x
  # rank() gives each value's position in sorted order; order() would give indices, not ranks
  test[!(rank(x) <= 10 | rank(x) > (length(x) - 10))] <- NA
  test
}
# apply the function defined above
test2 <- apply(df, 2, lowHigh)
# use the original rownames
rownames(test2) <- rownames(df)
# keep only rows where there is value of interest
finalData <- test2[rowSums(is.na(test2)) < ncol(test2), ] # keep rows with at least one retained value
Please note that there is definitely some smarter way of doing it...
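One tidier variant of the same idea, computing the keep-mask in a single pass (a sketch, not from the original answer; mask, test3 and finalData2 are illustrative names):
# logical mask: TRUE where the value is among the 10 smallest or 10 largest of its column
mask <- apply(df, 2, function(x) rank(x) <= 10 | rank(x) > length(x) - 10)
test3 <- df
test3[!mask] <- NA                        # blank out everything else
finalData2 <- test3[rowSums(mask) > 0, ]  # keep rows with at least one retained value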
Here is the data matrix with the 10 highest and 10 lowest values in each column:
x <- apply(df, 2, function(k) k[order(k, decreasing = TRUE)[c(1:10, (length(k) - 9):length(k))]])
x is your 20 by 10 matrix.
Your rownames requirement conflicts from column to column: this matrix has only 20 rows altogether, and the 20 selected rows cannot be the same for all 10 columns. Instead, here is your order matrix:
x_roworder <- apply(df, 2, function(k) order(k, decreasing = TRUE)[c(1:10, (length(k) - 9):length(k))])
This will give you the corresponding rows of the original data matrix within each column.
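For example, to translate those positions into the original row names (a sketch assuming df carries meaningful row names; name_matrix is an illustrative name):
# row names of the 10 highest and 10 lowest values, column by column
name_matrix <- apply(x_roworder, 2, function(idx) rownames(df)[idx])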
I offer a couple of answers to this.
A base R implementation (I have used %>% to make it easier to read):
library(magrittr) # for %>%
ix = lapply(df, function(x) order(x)[-(1:(length(x) - 20) + 10)]) %>%
  unlist %>% unique %>% sort
df[ix, ]
This exploits the fact that data frames are lists: it finds the row indices satisfying the condition for each column, then takes the unique ones, in order, as the row indices to keep. This should retain any row names attached to df.
An alternative using dplyr (since you mentioned it), which if I remember correctly doesn't particularly like row names:
library(dplyr)
library(tidyr) # for gather()
# add id as a variable
df$id = 1:nrow(df) # or row names
df %>%
  gather("col", value, -id) %>%
  group_by(col) %>%
  filter(min_rank(value) <= 10 | min_rank(desc(value)) <= 10) %>%
  ungroup %>%
  select(id) %>%
  left_join(df)
Edited: To fix code alignment and make a neater filter
I'm not entirely sure what you're expecting for your return/output, but this will get you the appropriate indices:
# example data
set.seed(41234L)
N <- 1000
df <- data.frame(id = 1:N, matrix(rnorm(10 * N, 1, .5), ncol = 10))
# for each column, extract IDs for the top 10 and bottom 10 values
l1 <- lapply(df[, 2:11], function(x, y, n) {
  xy <- data.frame(x, y)
  xy <- xy[order(xy[, 1]), ]
  return(xy[c(1:10, (n - 9):n), 2])
}, y = df[, 1], n = N)
# check:
xx <- sort(df[,2])
all.equal(sort(df[l1[[1]], 2]), xx[c(1:10, 991:1000)])
[1] TRUE
If you want an m * 10 matrix with these unique values, where m is the number of unique indices, you could do:
l2 <- do.call("c", l1)
l2 <- unique(l2)
df2 <- df[l2,] # in this case, m == 189
This doesn't set to 0 or NA the cells of a row that fall outside the top/bottom 10 of a given column. But it's unclear whether your question requires that.
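If you did want those cells blanked out, here is a small follow-up sketch reusing l1 and df2 from above (df3 is an illustrative name):
# NA out cells whose id is not among the top/bottom 10 of that column
df3 <- df2
for (j in seq_along(l1)) {
  df3[!(df2$id %in% l1[[j]]), j + 1] <- NA # j + 1 skips the id column
}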
Note
This isn't as efficient as using data.table since you're going to get a copy of the data in xy <- data.frame(x,y)
Benchmark
library(microbenchmark)
microbenchmark(ira = {
  test2 <- apply(df[, 2:11], 2, lowHigh);
  rownames(test2) <- rownames(df);
  finalData <- test2[apply(apply(test2, 2, is.na), 1, sum) < 10, ]
},
alex = {
  l1 <- lapply(df[, 2:11], function(x, y, n) {
    xy <- data.frame(x, y)
    xy <- xy[order(xy[, 1]), ]
    return(xy[c(1:10, (n - 9):n), 2])
  }, y = df[, 1], n = N);
  l2 <- unique(do.call("c", l1));
  df2 <- df[l2, ]
}, times = 50L)
Unit: milliseconds
expr min lq mean median uq max neval cld
ira 4.360452 4.522082 5.328403 5.140874 5.560295 8.369525 50 b
alex 3.771111 3.854477 4.054388 3.936716 4.158801 5.654280 50 a
I have a dataframe with something like 90 variables and over 1 million observations. I want to calculate the percentage of rows that are NA for each variable. I have the following code:
sum(is.na(dataframe$variable)) / nrow(dataframe) * 100
My question is, how can I apply this function to all 90 variables, without having to type all variable names in the code?
Use lapply() with your method:
lapply(df, function(x) sum(is.na(x))/nrow(df)*100)
If you want to return a data.frame rather than a list (via lapply()) or a vector (via sapply()), you can use summarise_each from the dplyr package:
library(dplyr)
df %>%
summarise_each(funs(sum(is.na(.)) / length(.)))
or, even more concisely:
df %>% summarise_each(funs(mean(is.na(.))))
data
df <- data.frame(
x = 1:10,
y = 1:10,
z = 1:10
)
df$x[c(2, 5, 7)] <- NA
df$y[c(4, 5)] <- NA
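As an aside (not from the original answers), base R's colMeans() over is.na(df) gives the same percentages in one vectorised call; with the sample data above:
colMeans(is.na(df)) * 100
#  x  y  z
# 30 20  0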
I have the following dataframe:
df = data.frame(id=c("A","A","A","A","B","B","B","B","C","C","C","C","D","D","D","D"),
sub=rep(c(1:4),4),
acc1=runif(16,0,3),
acc2=runif(16,0,3),
acc3=runif(16,0,3),
acc4=runif(16,0,3))
What I want is to obtain the mean rows for each id: that is, the mean acc1, acc2, acc3 and acc4 for each of the levels A, B, C and D, averaged over the values of sub (4 levels per id). The end result would look something like this (with the NAs replaced by the means I want, of course):
dfavg = data.frame(id=c("A","B","C","D"),meanacc1=NA,meanacc2=NA,meanacc3=NA,meanacc4=NA)
Thanks in advance!
Try one of the specialized packages dplyr or data.table, or base R. Because you have a lot of columns that start with acc to take the mean of, I chose dplyr here. The idea is to first group the data by id and then use summarise_each to get the mean of each column whose name starts_with "acc":
library(dplyr)
df1 <- df %>%
  group_by(id) %>%
  summarise_each(funs(mean=mean(., na.rm=TRUE)), starts_with("acc")) %>%
  rename(meanacc1=acc1, meanacc2=acc2, meanacc3=acc3, meanacc4=acc4) # this works but requires more typing
I would instead rename using paste0:
# colnames(df1)[-1] <- paste0("mean", colnames(df1)[-1])
which gives the result:
# id meanacc1 meanacc2 meanacc3 meanacc4
#1 A 1.7061929 2.401601 2.057538 1.643627
#2 B 1.7172095 1.405389 2.132378 1.769410
#3 C 1.4424233 1.737187 1.998414 1.137112
#4 D 0.5468509 1.281781 1.790294 1.429353
Or using data.table
library(data.table)
nm1 <- paste0("acc", 1:4) #names of columns to do the `means`
dt1 <- setDT(df)[, lapply(.SD, mean, na.rm=TRUE), by=id, .SDcols=nm1]
Here, .SD means Subset of Data.table, and .SDcols specifies the columns to which we apply the mean operation.
setnames(dt1, 2:5, paste0("mean", nm1)) #change the names of the concerned columns in the result
dt1
(This must have been asked at least 20 times.) The aggregate() function applies the same function (given as its third argument) to all the columns of its first argument, within groups defined by its second argument:
df2 <- aggregate(df[-(1:2)], df[1], mean)
If you want to append the letters "mean" to the column names:
names(df2)[-1] <- paste0("mean", names(df2)[-1]) # [-1] leaves the id column's name alone
If you had wanted to do the column selection automatically, then grep or grepl would work:
aggregate(df[grepl("acc", names(df))], df[1], mean)
Here are a couple of other base R options:
split + vapply (since we know vapply would simplify to a matrix whenever possible)
t(vapply(split(df[-c(1, 2)], df[, 1]), colMeans, numeric(4L)))
by (with a do.call(rbind, ...) to get the final structure)
do.call(rbind, by(data = df[-c(1, 2)], INDICES = df[[1]], FUN = colMeans))
Both will give you something like this as your result:
# acc1 acc2 acc3 acc4
# A 1.337496 2.091926 1.978835 1.799669
# B 1.287303 1.447884 1.297933 1.312325
# C 1.870008 1.145385 1.768011 1.252027
# D 1.682446 1.413716 1.582506 1.274925
The sample data used here was (with set.seed, for reproducibility):
set.seed(1)
df = data.frame(id = rep(LETTERS[1:4], 4),
sub = rep(c(1:4), 4),
acc1 = runif(16, 0, 3),
acc2 = runif(16, 0, 3),
acc3 = runif(16, 0, 3),
acc4 = runif(16, 0, 3))
Scaling up to 1M rows, these both perform quite well (though obviously not as fast as "dplyr" or "data.table").
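For anyone wanting to reproduce that scaling test, one way the 1M-row data could be generated (a sketch; bigdf is an illustrative name, not from the original answer):
set.seed(1)
n <- 1e6
bigdf <- data.frame(id = rep(LETTERS[1:4], n / 4),
                    sub = rep(1:4, n / 4),
                    acc1 = runif(n, 0, 3), acc2 = runif(n, 0, 3),
                    acc3 = runif(n, 0, 3), acc4 = runif(n, 0, 3))
system.time(t(vapply(split(bigdf[-c(1, 2)], bigdf[, 1]), colMeans, numeric(4L))))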
You can do this in the base package itself using a loop:
a <- list()
ids <- levels(factor(df$id)) # works whether id is a factor or a character vector
for (i in seq_along(ids))
{
  # select the columns of df whose means you want; here columns 3, 4, 5 and 6 (acc1 to acc4)
  a[[i]] <- colMeans(subset(df, id == ids[i])[, c(3, 4, 5, 6)])
}
meanDF <- cbind(data.frame(id = ids),
                data.frame(matrix(unlist(a), nrow = 4, ncol = 4, byrow = TRUE)))
colnames(meanDF) <- c("id", "meanacc1", "meanacc2", "meanacc3", "meanacc4")
meanDF
id meanacc1 meanacc2 meanacc3 meanacc4
A 1.464635 1.645898 1.7461862 1.026917
B 1.807555 1.097313 1.7135346 1.517892
C 1.350708 1.922609 0.8068907 1.607274
D 1.458911 0.726527 2.4643733 2.141865
After using the G.test on all rows of my data subset
apply(datamixG + 1, 1, G.test)
I get output for each row that looks like this:
[[1]]
G-test for given probabilities
data:  newX[i, ]
G = 3.9624, df = 1, p-value = 0.04653
I have 46 rows. I need to sum the df and G-values. Is there a way to have R report the G-values differently and/or sum all of the G-values and df?
I'll assume you're using the G.test function from the RVAideMemoire package:
# Sample data (always a good idea to post!)
dat <- matrix(1:4, nrow=2)
library(RVAideMemoire)
tests <- apply(dat, 1, G.test)
You can use unlist and lapply to extract a single value from each element in a list and to return a vector of the results:
dfs <- unlist(lapply(tests, "[[", "parameter"))
dfs
# df df
# 1 1
sum(dfs)
# [1] 2
Gs <- unlist(lapply(tests, "[[", "statistic"))
Gs
# G G
# 1.0464963 0.6795961
sum(Gs)
# [1] 1.726092
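Equivalently, sapply() performs the unlist(lapply(...)) simplification in one step:
sum(sapply(tests, "[[", "parameter")) # total df
sum(sapply(tests, "[[", "statistic")) # total G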