Following a previous question I asked, I got an awesome answer.
Here is a quick summary:
I want to compute a multidimensional development index based on South African data for several years. My list contains individual-level information for each year: df1 covers year 1 and df2 covers year 2.
df1 <- data.frame(var1 = c(1, 1, 1), var2 = c(0, 0, 1), var3 = c(1, 1, 0))
df2 <- data.frame(var1 = c(1, 0, 1), var2 = c(1, 0, 1), var3 = c(0, 1, 0))
mylist <- list(df1, df2)
var1 could be each person's stance on religion, var2 how they voted in the last national election, etc. In my very simple case, I have data for 3 different persons each year.
From there, I compute an index based on a number of variables (not all of them).
Here is a very simplified working index function that uses only 2 of the 3 variables, named dimX and dimY:
myindex <- function(x, dimX, dimY) {
  econ_i <- x[dimX] + x[dimY]
  return((1 / length(econ_i)) * sum(econ_i))
}
myindex(df1, "var2", "var3")
and
myindex2 <- function(x, d) {
  myindex(x, d[1], d[2])
}
Then I have my dataframe of variables I want to use for my index. I am trying to compute the index for several sets of variables.
args <- data.frame(set1=c("var1", "var2"), set2=c("var2", "var3"), stringsAsFactors = F)
I'd like to have the result as follows: (a) list(set1 = list(df1, df2), set2 = list(df1, df2)) instead of (b) list(df1 = list(set1, set2), df2 = list(set1, set2)).
Case (a) represents a time series, meaning I have a list of results of my indexes each year for only one set of variables. Case (b) is the opposite where I have the index results of one year for every set of variables. Each individual result should be a unique numeric value. Hence, I am expecting to get a list of 2 sublists df1 and df2, each sublist containing 3 numeric values.
I was advised to use this great command:
lapply(mylist, function(m) lapply(args, myindex2, x = m))
It's working great, but I get the result in the "wrong" format, namely the second one (b) shown above.
How could I get the results ordered per set (i.e. case (a) as time series) instead of per year?
Thanks a lot for your help!
PJ
EDIT: I've managed to find a solution that doesn't really answer the question, but still lets me get my data in the desired order.
Namely, I transform my list of lists into a matrix that I simply transpose.
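A sketch of a more direct route (untested beyond this toy example) is to swap the two lapply calls so that the sets form the outer loop:
# outer loop over sets, inner loop over years
resA <- lapply(args, function(d) lapply(mylist, myindex2, d = d))
# resA$set1 then holds the yearly results for set1, i.e. one time series per set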
This answer will be edited!
Currently, your function myindex() does this:
myindex <- function(x, dimX, dimY) {
  econ_i <- x[dimX] + x[dimY]
  return((1 / length(econ_i)) * sum(econ_i))
}
Aren't you after this, however?
myindex <- function(x, dimX, dimY) {
  econ_i <- x[, dimX] + x[, dimY]
  return((1 / length(econ_i)) * sum(econ_i))
}
The way you have it right now, length(econ_i) always returns 1 because econ_i is a single-column data.frame() rather than a vector. length() on a data.frame() returns its number of columns (here, 1), while length() on a vector returns the number of elements within it.
Note the difference between the two forms of indexing in R:
df1["var1"]
var1
1 1
2 1
3 1
returns a data.frame()
df1[,"var1"]
[1] 1 1 1
returns a vector.
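This difference changes the result of the example call. A quick check with the df1 defined above (values computed by hand): the corrected version averages over the three persons, while the original sums a one-column data.frame.
myindex(df1, "var2", "var3")
# [1] 1   (the original data.frame-indexing version returns 3 instead)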
I will adjust this post to answer your question when you respond. I think it's important to solve this part first.
In case it helps, here is my actual index function, based on this article:
RCI_a_3det <- function(x, econ1, econ2, econ3, perso1, perso2, perso3, civic1, civic2, civic3) {
  # average the three variables within each dimension
  econ_i  <- (1/3) * (x[econ1] + x[econ2] + x[econ3])
  perso_i <- (1/3) * (x[perso1] + x[perso2] + x[perso3])
  civic_i <- (1/3) * (x[civic1] + x[civic2] + x[civic3])
  daf <- data.frame(econ_i, perso_i, civic_i)
  colnames(daf) <- c("econ_i", "perso_i", "civic_i")
  # keep only individuals whose score differs from 1 on every dimension
  df1 <- subset(daf, daf$econ_i != 1 & daf$perso_i != 1 & daf$civic_i != 1)
  sum_xik <- df1$econ_i + df1$perso_i + df1$civic_i
  return(1 / (3 * nrow(df1)) * sum(sum_xik, na.rm = TRUE))
}
Edit:
x is a list of all personal information, for every variable and for every year. It is pretty large.
I am using 9 variables to compute this index, but I actually have 30 such variables in my data, so I have set up a dataframe of sets of variables I could use to compute this index. This is the equivalent of my args df in the simple example. I am actually using 200 such combinations.
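A rough sketch of how those combinations might be mapped over the yearly data (args_big is hypothetical here: a data.frame with 200 columns of 9 variable names each, analogous to args above):
results_by_set <- lapply(args_big, function(s)
  lapply(mylist, function(m) do.call(RCI_a_3det, c(list(m), as.list(s)))))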
I need to count mutations in the genome that occur at certain spots, or rather ranges. Each mutation has a genomic position (chromosome and base pair, e.g. Chr1, 10658324). The range, or spot, is defined as 10,000 base pairs up- and downstream (±) of a given position in the genome. Both the positions of the mutations and the positions of the "spots" are stored in data frames.
Example:
set.seed(1)
Chr <- 1
Pos <- as.integer(runif(5000, 0, 1e8))
mutations <- data.frame(Pos, Chr)

Chr <- 1
Pos <- as.integer(runif(50, 0, 1e8))
spots <- data.frame(Pos, Chr)
So the question I am asking is: how many mutations are present ±10k base pairs around the positions given in "spots"? (E.g. if the spot is at 100k, the range would be 90k-110k.)
The real data would of course contain all 24 chromosomes, but for the sake of simplicity we can focus on one chromosome for now.
The final data should contain the "spot" and the number of mutations in its vicinity, ideally in a data frame or matrix.
Many thanks in advance for any suggestions or help!
Here's a first attempt, but I am pretty sure there is a much more elegant way of doing it.
library(dplyr)  # for filter()

w <- 10000            # setting range to 10k base pairs
loop <- spots$Pos     # creating vector of positions to loop through
out <- data.frame(0, 0)
colnames(out) <- c("Pos", "Count")
for (l in loop) {
  temp <- nrow(filter(mutations, Pos >= l - w, Pos <= l + w))
  temp2 <- cbind(l, temp)
  colnames(temp2) <- c("Pos", "Count")
  out <- rbind(out, temp2)
}
out <- out[-1, ]
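A somewhat less clunky base-R sketch of the same idea (still assuming a single chromosome, as in the example) replaces the growing data frame with sapply:
w <- 10000
out2 <- data.frame(
  Pos = spots$Pos,
  Count = sapply(spots$Pos, function(p) sum(mutations$Pos >= p - w & mutations$Pos <= p + w)))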
Using data.table foverlaps, then aggregate:
library(data.table)
# set the flank
myFlank <- 100000

# convert spots to ranges, with the flank added on both sides
spotsRange <- data.table(
  chr = spots$Chr,
  start = spots$Pos - myFlank,
  end = spots$Pos + myFlank,
  posSpot = spots$Pos,
  key = c("chr", "start", "end"))

# convert mutations to ranges, with start = end = pos
mutationsRange <- data.table(
  chr = mutations$Chr,
  start = mutations$Pos,
  end = mutations$Pos,
  key = c("chr", "start", "end"))

# merge by overlap
res <- foverlaps(mutationsRange, spotsRange, nomatch = 0)

# count mutations per spot
resCnt <- data.frame(table(res$posSpot))
colnames(resCnt) <- c("Pos", "MutationCount")
merge(spots, resCnt, by = "Pos")
# Pos Chr MutationCount
# 1 3439618 1 10
# 2 3549952 1 15
# 3 4375314 1 11
# 4 7337370 1 13
# ...
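Note that spots with no mutation in range are dropped by the default inner merge; to keep them (with NA as the count), use:
merge(spots, resCnt, by = "Pos", all.x = TRUE)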
I'm not familiar with bed manipulations in R, so I'm going to propose an answer using bedtools; someone here can try to convert it to GRanges or another R bioinformatics library.
Essentially, you have two bed files, one with your spots and the other with your mutations (I'm assuming a 1 bp coordinate for each in the latter). In this case, you'd use closestBed to get the closest spot and its distance in bp for each mutation, and then filter those that are within 10 kb of the spots. The code in a UNIX environment would look something like this:
# Assuming 4-column file structure (chr start end name)
closestBed -d -a mutations.bed -b spots.bed | awk '$9 <= 10000 {print}'
Here column 9 ($9) is the distance in bp from the closest spot. Depending on how specific you want to be, you can check the manual page at http://bedtools.readthedocs.io/en/latest/content/tools/closest.html. I'm pretty sure there's at least one bedtools-like package in R; if the functionality is similar, you can apply this exact same solution.
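If your data only exist as the R data frames above, a rough sketch for exporting them as 4-column BED files (BED is 0-based, half-open; the name column here is just a row index) could be:
# sketch: write the mutations data frame out as a BED file
write.table(
  data.frame(chr = paste0("chr", mutations$Chr),
             start = mutations$Pos - 1,
             end = mutations$Pos,
             name = seq_len(nrow(mutations))),
  "mutations.bed", sep = "\t", quote = FALSE, row.names = FALSE, col.names = FALSE)
# spots.bed can be written the same way from the spots data frame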
Hope that helps!
I am struggling a bit with a probably fairly simple task. I want to create a function whose arguments are a data frame (df), two column names of that data frame (T and R), and one value from each of the selected columns (a and b). I know that the function reads the data frame, but I don't know how the columns are selected, and I'm getting an error.
fun <- function(df,T,a,R,b)
{
  col <- ds[c("x","y")]
  omit <- na.omit(col)
  data1 <- omit[omit$x == 'a',]
  data2 <- omit[omit$x == 'b',]
  nrow(data2)/nrow(data1)
}
fun(jugs,Place,UK,Price,10)
I'm new to the R language, so please help me.
There are several errors you're making.
col <- ds[c("x","y")]
What are x and y? Presumably they're arguments that you're passing, but you specify T and R in your function, not x and y.
data1 <- omit[omit$x == 'a',]
data2 <- omit[omit$x == 'b',]
Again, presumably you want a and b to be arguments passed to the function, but you specified the literals 'a' and 'b', not the arguments. Also, I assume that the second "omit$x" should be "omit$y" (or vice versa). And since you just made this into a new data frame with two columns, you can simply use the column index.
nrow(data2)/nrow(data1)
You should print this line, or return it. Either one should suffice.
fun(jugs,Place,UK,Price,10)
Finally, you should use quotes on Place, UK, and Price, at least the way I've done it.
fun <- function(df, col1, val1, col2, val2){
  new_cols <- df[, c(col1, col2)]
  omit <- na.omit(new_cols)
  data1 <- omit[omit[, 1] == val1, ]
  data2 <- omit[omit[, 2] == val2, ]
  print(nrow(data2) / nrow(data1))
}
fun(jugs, "Place", "UK", "Price", 10)
And if I understand what you're trying to do, it may be easier to avoid creating multiple dataframes that you don't need and just use counts instead.
fun <- function(df, col1, val1, col2, val2){
  new_cols <- df[, c(col1, col2)]
  omit <- na.omit(new_cols)
  n1 <- sum(omit[, 1] == val1)
  n2 <- sum(omit[, 2] == val2)
  print(n2 / n1)
}
fun(jugs, "Place", "UK", "Price", 10)
I would write this function as follows:
fun <- function(df,T,a,R,b) {
data <- na.omit(df[c(T,R)]);
sum(data[[R]]==b)/sum(data[[T]]==a);
};
As you can see, you can combine the first two lines into one, because in your code col was not reused anywhere. Secondly, since you only care about the number of rows of the two subsets of the intermediate data.frame, you don't actually need to construct those two data.frames; instead, you can just compute the logical vectors that result from the two comparisons, and then call sum() on those logical vectors, which naturally treats FALSE as 0 and TRUE as 1.
Demo:
fun <- function(df,T,a,R,b) { data <- na.omit(df[c(T,R)]); sum(data[[R]]==b)/sum(data[[T]]==a); };
df <- data.frame(place=c(rep(c('p1','p2'),each=4),NA,NA), price=c(10,10,20,NA,20,20,20,NA,20,20), stringsAsFactors=F );
df;
## place price
## 1 p1 10
## 2 p1 10
## 3 p1 20
## 4 p1 NA
## 5 p2 20
## 6 p2 20
## 7 p2 20
## 8 p2 NA
## 9 <NA> 20
## 10 <NA> 20
fun(df,'place','p1','price',20);
## [1] 1.333333
I have a follow up to this question.
I am creating a data.frame conditional on the column names and specific row entries of an existing data.frame. Below is how I resolved it using a for loop (thanks to #Roland's suggestion... the real data violated requirements of #eddi's answer), but it has been running on the actual data set (200 × 500,000+ rows × cols) for more than two hours now...
(The following generated data.frames are very similar to the actual data.)
set.seed(1)
a <- data.frame(year=c(1986:1990),
events=round(runif(5,0,5),digits=2))
b <- data.frame(year=c(rep(1986:1990,each=2,length.out=40),1986:1990),
region=c(rep(c("x","y"),10),rep(c("y","z"),10),rep("y",5)),
state=c(rep(c("NY","PA","NC","FL"),each=10),rep("AL",5)),
events=round(runif(45,0,5),digits=2))
d <- matrix(rbinom(200,1,0.5),10,20, dimnames=list(c(1:10), rep(1986:1990,each=4)))
e <- data.frame(id=sprintf("%02d",1:10), as.data.frame(d),
region=c("x","y","x","z","z","y","y","z","y","y"),
state=c("PA","AL","NY","NC","NC","NC","FL","FL","AL","AL"))
for (i in seq_len(nrow(d))) {
  for (j in seq_len(ncol(d))) {
    d[i, j] <- ifelse(d[i, j] == 0,
                      a$events[a$year == colnames(d)[j]],
                      b$events[b$year == colnames(d)[j] &
                               b$state == e$state[i] &
                               b$region == e$region[i]])
  }
}
Is there a better/faster way to do this?
A simpler way to do it (I think - it does not involve melting, dcasting and merging) is as follows:
First, your a and b tables should be indexed by year (for a) and by year/state/region (for b):
at = a$events; names(at) = a$year
bt = tapply(b$events, list(b$year, b$state, b$region), function(x) min(x))
# note: I used min(x) in tapply just to be on the safe side, so that the function always returns a scalar

# we now create the lookup index for the more complex case (lookup in b)
ids = cbind(colnames(d)[col(d)],
            as.character(e$state[row(d)]),
            as.character(e$region[row(d)]))
vals = bt[ids]; dim(vals) = dim(d)

# and compute your desired result with the ifelse
result = ifelse(d == 0, at[colnames(d)[col(d)]], vals)
# and that's it!
This should be faster (it avoids the nested loops), but I haven't profiled it. Let us know how it works for you on the full data.
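The key trick above is indexing a named 3-dimensional array with a character matrix; here is a tiny standalone illustration (names invented for the demo):
arr <- array(1:8, dim = c(2, 2, 2),
             dimnames = list(c("a", "b"), c("x", "y"), c("p", "q")))
idx <- cbind(c("a", "b"), c("y", "x"), c("q", "p"))
arr[idx]  # element-wise lookup: c(arr["a", "y", "q"], arr["b", "x", "p"]) = c(7, 2)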
# This will require a couple of merges,
# but first let's convert the data to long form and extract year as integer
# I convert result to data.table, since that's easier and faster to deal with
# Note: it *is* possible to do the melt/dcast entirely in data.table framework,
# but it's a hassle right now - there is a FR iirc about that
library(reshape2)
library(data.table)
dt = data.table(melt(e))[, year := as.integer(sub('X([0-9]*).*','\\1',variable))]
# set key for merging and merge with b and a
setkey(dt, year, region, state)
dt.result = data.table(a, key = 'year')[
data.table(b, key = c('year', 'region', 'state'))[dt]]
# now we can compute the value we want
dt.result[, final.value := value * events.1 + (!value) * events]
# dcast back
e.result = dcast(dt.result, id + region + state ~ variable,
value.var = 'final.value')
How can I use apply or a related function to create a new data frame that contains the results of the row averages of each pair of columns in a very large data frame?
I have an instrument that outputs n replicate measurements on a large number of samples, where each single measurement is a vector (all measurements are the same length vectors). I'd like to calculate the average (and other stats) on all replicate measurements of each sample. This means I need to group n consecutive columns together and do row-wise calculations.
For a simple example, with three replicate measurements on two samples, how can I end up with a data frame that has two columns (one per sample): one that is the row-wise average of the replicates in dat$a, dat$b and dat$c, and one that is the row-wise average of dat$d, dat$e and dat$f?
Here's some example data
dat <- data.frame( a = rnorm(16), b = rnorm(16), c = rnorm(16), d = rnorm(16), e = rnorm(16), f = rnorm(16))
a b c d e f
1 -0.9089594 -0.8144765 0.872691548 0.4051094 -0.09705234 -1.5100709
2 0.7993102 0.3243804 0.394560355 0.6646588 0.91033497 2.2504104
3 0.2963102 -0.2911078 -0.243723116 1.0661698 -0.89747522 -0.8455833
4 -0.4311512 -0.5997466 -0.545381175 0.3495578 0.38359390 0.4999425
5 -0.4955802 1.8949285 -0.266580411 1.2773987 -0.79373386 -1.8664651
6 1.0957793 -0.3326867 -1.116623982 -0.8584253 0.83704172 1.8368212
7 -0.2529444 0.5792413 -0.001950741 0.2661068 1.17515099 0.4875377
8 1.2560402 0.1354533 1.440160168 -2.1295397 2.05025701 1.0377283
9 0.8123061 0.4453768 1.598246016 0.7146553 -1.09476532 0.0600665
10 0.1084029 -0.4934862 -0.584671816 -0.8096653 1.54466019 -1.8117459
11 -0.8152812 0.9494620 0.100909570 1.5944528 1.56724269 0.6839954
12 0.3130357 2.6245864 1.750448404 -0.7494403 1.06055267 1.0358267
13 1.1976817 -1.2110708 0.719397607 -0.2690107 0.83364274 -0.6895936
14 -2.1860098 -0.8488031 -0.302743475 -0.7348443 0.34302096 -0.8024803
15 0.2361756 0.6773727 1.279737692 0.8742478 -0.03064782 -0.4874172
16 -1.5634527 -0.8276335 0.753090683 2.0394865 0.79006103 0.5704210
I'm after something like this
X1 X2
1 -0.28358147 -0.40067128
2 0.50608365 1.27513471
3 -0.07950691 -0.22562957
4 -0.52542633 0.41103139
5 0.37758930 -0.46093340
6 -0.11784382 0.60514586
7 0.10811540 0.64293184
8 0.94388455 0.31948189
9 0.95197629 -0.10668118
10 -0.32325169 -0.35891702
11 0.07836345 1.28189698
12 1.56269017 0.44897971
13 0.23533617 -0.04165384
14 -1.11251880 -0.39810121
15 0.73109533 0.11872758
16 -0.54599850 1.13332286
which I did with this, but it is obviously no good for my much larger data frame...
data.frame(cbind(
apply(cbind(dat$a, dat$b, dat$c), 1, mean),
apply(cbind(dat$d, dat$e, dat$f), 1, mean)
))
I've tried apply and loops and can't quite get it together. My actual data has some hundreds of columns.
This may be more generalizable to your situation in that you pass a list of indices. If speed is an issue (large data frame) I'd opt for lapply with do.call rather than sapply:
x <- list(1:3, 4:6)
do.call(cbind, lapply(x, function(i) rowMeans(dat[, i])))
Works if you just have col names too:
x <- list(c('a','b','c'), c('d', 'e', 'f'))
do.call(cbind, lapply(x, function(i) rowMeans(dat[, i])))
EDIT
Just happened to think maybe you want to automate this to do every three columns. I know there's a better way, but here it is on a 100-column data set:
dat <- data.frame(matrix(rnorm(16*100), ncol=100))
n <- 1:ncol(dat)
ind <- matrix(c(n, rep(NA, 3 - ncol(dat)%%3)), byrow=TRUE, ncol=3)
ind <- data.frame(t(na.omit(ind)))
do.call(cbind, lapply(ind, function(i) rowMeans(dat[, i])))
EDIT 2
Still not happy with the indexing. I think there's a better/faster way to pass the indices; here's a second, though not satisfying, method:
n <- 1:ncol(dat)
ind <- data.frame(matrix(c(n, rep(NA, 3 - ncol(dat)%%3)), byrow=F, nrow=3))
nonna <- sapply(ind, function(x) all(!is.na(x)))
ind <- ind[, nonna]
do.call(cbind, lapply(ind, function(i)rowMeans(dat[, i])))
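A possibly cleaner way to build the index list (a suggestion, not benchmarked) is to split the column positions directly:
ind <- split(seq_len(ncol(dat)), ceiling(seq_len(ncol(dat)) / 3))
do.call(cbind, lapply(ind, function(i) rowMeans(dat[, i, drop = FALSE])))
# note: unlike the versions above, this keeps a final partial group instead of dropping it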
A similar question was asked here by #david: averaging every 16 columns in r (now closed), which I answered by adapting #TylerRinker's answer above, following a suggestion by #joran and #Ben. Because the resulting function might be of help to OP or future readers, I am copying that function here, along with an example for OP's data.
# Function to apply 'fun' to object 'x' over every 'by' columns
# Alternatively, 'by' may be a vector of groups
byapply <- function(x, by, fun, ...)
{
  # Create index list
  if (length(by) == 1)
  {
    nc <- ncol(x)
    split.index <- rep(1:ceiling(nc / by), each = by, length.out = nc)
  } else # 'by' is a vector of groups
  {
    nc <- length(by)
    split.index <- by
  }
  index.list <- split(seq(from = 1, to = nc), split.index)

  # Pass index list to fun using sapply() and return object
  sapply(index.list, function(i)
  {
    do.call(fun, list(x[, i], ...))
  })
}
Then, to find the mean of the replicates:
byapply(dat, 3, rowMeans)
Or, perhaps the standard deviation of the replicates:
byapply(dat, 3, apply, 1, sd)
Update
by can also be specified as a vector of groups:
byapply(dat, c(1,1,1,2,2,2), rowMeans)
Mean for rows from columns a, b, c:
rowMeans(dat[1:3])
Means for rows from columns d, e, f:
rowMeans(dat[4:6])
All in one call:
results <- cbind(rowMeans(dat[1:3]), rowMeans(dat[4:6]))
If you only know the names of the columns and not their order, then you can use:
rowMeans(cbind(dat["a"], dat["b"], dat["c"]))
rowMeans(cbind(dat["d"], dat["e"], dat["f"]))
# I don't know how much damage this does to speed, but it should still be quick
The rowMeans solution will be faster, but for completeness here's how you might do this with apply:
t(apply(dat,1,function(x){ c(mean(x[1:3]),mean(x[4:6])) }))
Inspired by #joran's suggestion I came up with this (actually a bit different from what he suggested, though the transposing suggestion was especially useful):
Make a data frame of example data with p columns to simulate a realistic data set (following #TylerRinker's answer above, and unlike my poor example in the question):
p <- 99 # how many columns?
dat <- data.frame(matrix(rnorm(4*p), ncol = p))
Rename the columns in this data frame to create groups of n consecutive columns, so that if I'm interested in groups of three columns I get column names like 1,1,1,2,2,2,3,3,3, etc., or for groups of four columns 1,1,1,1,2,2,2,2,3,3,3,3, etc. I'm going with three for now (I guess this is a kind of indexing, for people like me who don't know much about indexing):
n <- 3 # how many consecutive columns in the groups of interest?
names(dat) <- rep(seq(1:(ncol(dat)/n)), each = n, len = (ncol(dat)))
Now use apply and tapply to get row means for each of the groups:
dat.avs <- data.frame(t(apply(dat, 1, tapply, names(dat), mean)))
The main downsides are that the column names in the original data are replaced (though this could be overcome by putting the grouping numbers in a new row rather than in the colnames) and that the column names are returned by the apply-tapply combination in an unhelpful order.
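The ordering problem can be patched after the fact (a sketch, assuming the X-prefixed names that data.frame() produces):
dat.avs <- dat.avs[, order(as.numeric(sub("^X", "", names(dat.avs))))]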
Further to #joran's suggestion, here's a data.table solution:
p <- 99 # how many columns?
dat <- data.frame(matrix(rnorm(4*p), ncol = p))
dat.t <- data.frame(t(dat))
n <- 3 # how many consecutive columns in the groups of interest?
dat.t$groups <- as.character(rep(seq(1:(ncol(dat)/n)), each = n, len = (ncol(dat))))
library(data.table)
DT <- data.table(dat.t)
setkey(DT, groups)
dat.av <- DT[, lapply(.SD,mean), by=groups]
Thanks everyone for your quick and patient efforts!
There is a beautifully simple solution if you are interested in applying a function to each unique combination of columns, in what is known as combinatorics.
combinations <- combn(colnames(dat), 2, function(x) rowMeans(dat[x]))
To calculate statistics for every unique combination of three columns, etc., just change the 2 to a 3. The operation is vectorized and thus faster than loops, such as the apply-family functions used above. If the order of the columns matters, then you instead need a permutation algorithm designed to produce ordered sets: combinat::permn.
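To keep track of which columns each result came from, the output columns can be labeled with the corresponding combinations (a small, optional addition):
colnames(combinations) <- combn(colnames(dat), 2, paste, collapse = "_")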