Split vector in quantile in R [duplicate] - r

I see a lot of questions and answers re order and sort. Is there anything that sorts vectors or data frames into groupings (like quartiles or deciles)? I have a "manual" solution, but there's likely a better solution that has been group-tested.
Here's my attempt:
temp <- data.frame(name=letters[1:12], value=rnorm(12), quartile=rep(NA, 12))
temp
# name value quartile
# 1 a 2.55118169 NA
# 2 b 0.79755259 NA
# 3 c 0.16918905 NA
# 4 d 1.73359245 NA
# 5 e 0.41027113 NA
# 6 f 0.73012966 NA
# 7 g -1.35901658 NA
# 8 h -0.80591167 NA
# 9 i 0.48966739 NA
# 10 j 0.88856758 NA
# 11 k 0.05146856 NA
# 12 l -0.12310229 NA
temp.sorted <- temp[order(temp$value), ]
temp.sorted$quartile <- rep(1:4, each=12/4)
temp <- temp.sorted[order(as.numeric(rownames(temp.sorted))), ]
temp
# name value quartile
# 1 a 2.55118169 4
# 2 b 0.79755259 3
# 3 c 0.16918905 2
# 4 d 1.73359245 4
# 5 e 0.41027113 2
# 6 f 0.73012966 3
# 7 g -1.35901658 1
# 8 h -0.80591167 1
# 9 i 0.48966739 3
# 10 j 0.88856758 4
# 11 k 0.05146856 2
# 12 l -0.12310229 1
Is there a better (cleaner/faster/one-line) approach? Thanks!

There's a handy ntile function in package dplyr. It's flexible in the sense that you can very easily define the number of *tiles or "bins" you want to create.
Load the package (install first if you haven't) and add the quartile column:
library(dplyr)
temp$quartile <- ntile(temp$value, 4)
Or, if you want to use dplyr syntax:
temp <- temp %>% mutate(quartile = ntile(value, 4))
Result in both cases is:
temp
# name value quartile
#1 a -0.56047565 1
#2 b -0.23017749 2
#3 c 1.55870831 4
#4 d 0.07050839 2
#5 e 0.12928774 3
#6 f 1.71506499 4
#7 g 0.46091621 3
#8 h -1.26506123 1
#9 i -0.68685285 1
#10 j -0.44566197 2
#11 k 1.22408180 4
#12 l 0.35981383 3
Data used: note that you don't need to create the "quartile" column in advance; use set.seed to make the randomization reproducible:
set.seed(123)
temp <- data.frame(name=letters[1:12], value=rnorm(12))

The method I use is one of these or Hmisc::cut2(value, g=4):
temp$quartile <- with(temp, cut(value,
                                breaks=quantile(value, probs=seq(0, 1, by=0.25), na.rm=TRUE),
                                include.lowest=TRUE))
An alternate might be:
temp$quartile <- with(temp, factor(
  findInterval(value, c(-Inf,
                        quantile(value, probs=c(0.25, .5, .75), na.rm=TRUE), Inf)),
  labels=c("Q1","Q2","Q3","Q4")
))
The first one has the side effect of labeling the quartiles with the break values, which I consider a "good thing", but if that is not "good for you", or if the valid problems raised in the comments are a concern, you can go with the second version. You can use labels= in cut, or you can add this line to your code:
temp$quartile <- factor(temp$quartile, levels=c("1","2","3","4") )
Or, quicker still but slightly more obscure in how it works (the result is no longer a factor but a numeric vector):
temp$quartile <- as.numeric(temp$quartile)

I'll add the data.table version for anyone else Googling it (i.e., #BondedDust's solution translated to data.table and pared down a tad):
library(data.table)
setDT(temp)
temp[ , quartile := cut(value,
                        breaks = quantile(value, probs = 0:4/4),
                        labels = 1:4, right = FALSE)]
Which is much better (cleaner, faster) than what I had been doing:
temp[ , quartile :=
        as.factor(ifelse(value < quantile(value, .25), 1,
                  ifelse(value < quantile(value, .5), 2,
                  ifelse(value < quantile(value, .75), 3, 4))))]
Note, however, that this approach requires the quantiles to be distinct, e.g. it will fail on rep(0:1, c(100, 1)); what to do in this case is open ended so I leave it up to you.
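To see the failure and one possible workaround, here is a minimal sketch (illustrative variable names x, qs, grp): deduplicating the breaks with unique() lets cut() run, but at the cost of fewer, unequally sized bins; in this extreme example everything collapses into a single bin.
x <- rep(0:1, c(100, 1))
qs <- quantile(x, probs = 0:4/4)   # 0 0 0 0 1: the quartile breaks are not distinct
# cut(x, breaks = qs, labels = 1:4) would error: 'breaks' are not unique
grp <- cut(x, breaks = unique(qs), labels = FALSE, include.lowest = TRUE)
table(grp)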

Adapting dplyr::ntile to take advantage of data.table optimizations provides a faster solution.
library(data.table)
setDT(temp)
temp[order(value) , quartile := floor( 1 + 4 * (.I-1) / .N)]
Probably doesn't qualify as cleaner, but it's faster and one-line.
Timing on bigger data set
Comparing this solution to ntile and cut for data.table as proposed by #docendo_discimus and #MichaelChirico.
library(microbenchmark)
library(dplyr)
set.seed(123)
n <- 1e6
temp <- data.frame(name=sample(letters, size=n, replace=TRUE), value=rnorm(n))
setDT(temp)
microbenchmark(
"ntile" = temp[, quartile_ntile := ntile(value, 4)],
"cut" = temp[, quartile_cut := cut(value,
breaks = quantile(value, probs = seq(0, 1, by=1/4)),
labels = 1:4, right=FALSE)],
"dt_ntile" = temp[order(value), quartile_ntile_dt := floor( 1 + 4 * (.I-1)/.N)]
)
Gives:
Unit: milliseconds
expr min lq mean median uq max neval
ntile 608.1126 647.4994 670.3160 686.5103 691.4846 712.4267 100
cut 369.5391 373.3457 375.0913 374.3107 376.5512 385.8142 100
dt_ntile 117.5736 119.5802 124.5397 120.5043 124.5902 145.7894 100

You can use the quantile() function, but you need to handle rounding/precision when using cut(). So
set.seed(123)
temp <- data.frame(name=letters[1:12], value=rnorm(12), quartile=rep(NA, 12))
brks <- with(temp, quantile(value, probs = c(0, 0.25, 0.5, 0.75, 1)))
temp <- within(temp, quartile <- cut(value, breaks = brks, labels = 1:4,
include.lowest = TRUE))
Giving:
> head(temp)
name value quartile
1 a -0.56047565 1
2 b -0.23017749 2
3 c 1.55870831 4
4 d 0.07050839 2
5 e 0.12928774 3
6 f 1.71506499 4
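As a quick sanity check (a sketch assuming the same seed and data as above): with twelve distinct values and breaks taken from the data quantiles, each bin holds exactly three observations.
table(temp$quartile)
# 1 2 3 4
# 3 3 3 3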

Sorry for being a bit late to the party. I wanted to add my one-liner using cut2, as I didn't know the max/min for my data and wanted the groups to be equally sized. I read about cut2 in an issue which was marked as a duplicate (link below).
library(Hmisc) #For cut2
set.seed(123) #To keep answers below identical to my random run
temp <- data.frame(name=letters[1:12], value=rnorm(12), quartile=rep(NA, 12))
temp$quartile <- as.numeric(cut2(temp$value, g=4)) #as.numeric to number the factors
temp$quartileBounds <- cut2(temp$value, g=4)
temp
Result:
> temp
name value quartile quartileBounds
1 a -0.56047565 1 [-1.265,-0.446)
2 b -0.23017749 2 [-0.446, 0.129)
3 c 1.55870831 4 [ 1.224, 1.715]
4 d 0.07050839 2 [-0.446, 0.129)
5 e 0.12928774 3 [ 0.129, 1.224)
6 f 1.71506499 4 [ 1.224, 1.715]
7 g 0.46091621 3 [ 0.129, 1.224)
8 h -1.26506123 1 [-1.265,-0.446)
9 i -0.68685285 1 [-1.265,-0.446)
10 j -0.44566197 2 [-0.446, 0.129)
11 k 1.22408180 4 [ 1.224, 1.715]
12 l 0.35981383 3 [ 0.129, 1.224)
Similar issue where I read about cut2 in detail

# rank each value (count of values it is >=), divide by the quartile size (n/4), and round up
temp$quartile <- ceiling(sapply(temp$value, function(x) sum(x - temp$value >= 0)) / (length(temp$value)/4))

Try this function
getQuantileGroupNum <- function(vec, group_num, decreasing=FALSE) {
  if(decreasing) {
    abs(cut(vec, quantile(vec, probs=seq(0, 1, 1 / group_num), type=8, na.rm=TRUE),
            labels=FALSE, include.lowest=TRUE) - group_num - 1)
  } else {
    cut(vec, quantile(vec, probs=seq(0, 1, 1 / group_num), type=8, na.rm=TRUE),
        labels=FALSE, include.lowest=TRUE)
  }
}
> t1 <- runif(7)
> t1
[1] 0.4336094 0.2842928 0.5578876 0.2678694 0.6495285 0.3706474 0.5976223
> getQuantileGroupNum(t1, 4)
[1] 2 1 3 1 4 2 4
> getQuantileGroupNum(t1, 4, decreasing=T)
[1] 3 4 2 4 1 3 1

I would like to propose a version which seems to be more robust, since I ran into a lot of problems using quantile() in the breaks option of cut() on my dataset.
I am using the ntile function from dplyr, but it also works with ecdf as input.
# assumes temp is a data.table (setDT(temp)) and dplyr is loaded for ntile()
temp[, `:=`(quartile = .bincode(x = ntile(value, 100), breaks = seq(0, 100, 25), right = TRUE, include.lowest = TRUE),
            decile   = .bincode(x = ntile(value, 100), breaks = seq(0, 100, 10), right = TRUE, include.lowest = TRUE)
)]
temp[, `:=`(quartile = .bincode(x = ecdf(value)(value), breaks = seq(0, 1, 0.25), right = TRUE, include.lowest = TRUE),
            decile   = .bincode(x = ecdf(value)(value), breaks = seq(0, 1, 0.1),  right = TRUE, include.lowest = TRUE)
)]
Is that correct?

Take care with ntile() if your original values are clustered at some values. To create equally sized groups, it will allocate rows with the same original value into different groups. This may not be desirable.
I had a case where scores of individuals were clustered at certain values and it was important that individuals with the same original score were placed in the same group (e.g. allocating students to groups based on test score). ntile() allocated individuals with the same score to different groups (unfair in this case), but cut() with quantile() does not (but groups are only approximately equal in size).
library(dplyr)
library(reshape2)
library(ggplot2)
# awkward data: cannot be fairly and equally divided into quartiles or quintiles
# (similar results are obtained from more realistic cases of clustered values)
example <- data.frame(id = 1:49, x = c(rep(1:7, each=7))) %>%
  mutate(ntileQuartile = ntile(x, 4),
         cutQuartile = cut(x, breaks=quantile(x, seq(0, 1, by=1/4)),
                           include.lowest=TRUE, labels=1:4),
         ntileQuintile = ntile(x, 5),
         cutQuintile = cut(x, breaks=quantile(x, seq(0, 1, by=1/5)),
                           include.lowest=TRUE, labels=1:5))
# graph: x axis is original score, colour is group allocation
# ntile creates equal groups, but some values of original score are split
# into separate groups. cut creates different sized groups, but score
# exactly determines the group.
melt(example, id.vars=c("id", "x"),
     variable.name = "method", value.name="groupNumber") %>%
  ggplot(aes(x, fill=groupNumber)) +
  geom_histogram(colour="black", bins=13) +
  facet_wrap(vars(method))

There is possibly a quicker way, but I would do:
a <- rnorm(100) # Our data
q <- quantile(a) # You can supply your own breaks, see ?quantile
# Define a simple function that checks in which quantile a number falls
getQuant <- function(x) {
  for (i in 1:(length(q)-1)) {
    if (x >= q[i] && x < q[i+1])
      break
  }
  i
}
# Apply the function to the data
res <- unlist(lapply(as.matrix(a), getQuant))
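For comparison, here is a sketch of a vectorized equivalent using findInterval (the same idea as in an earlier answer); res_vec is just an illustrative name.
# one vectorized call instead of looping over each element;
# rightmost.closed = TRUE keeps the maximum in the top quartile
res_vec <- findInterval(a, q, rightmost.closed = TRUE)
all(res == res_vec)  # should be TRUE for continuous data such as rnorm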

Related

How to make a fuzzy join in R using more than one variable on each side

I would like to join the two data frames :
a <- data.frame(x=c(1,3,5))
b <- data.frame(start=c(0,4),end=c(2,6),y=c("a","b"))
with a condition like (x>start)&(x<end) in order to get such a result:
# x y
#1 1 a
#2 3 <NA>
#3 5 b
I don't want to build a potentially large Cartesian product and then keep only the few rows matching the condition, and I'd like a solution using the tidyverse (I am not interested in a solution using SQL, which would be a confession of failure). I thought of the 'fuzzyjoin' package but I cannot find examples fitting my need: the function to apply for the condition takes only two arguments. I also tried to put 'start' and 'end' into a single argument with data.frame(z=I(purrr::map2(b$start,b$end,list)),y=b$y)
# z y
#1 0, 2 a
#2 4, 6 b
but although the data looks fine fuzzy_left_join doesn't accept it.
I search for solutions working in more general cases (n variables on the LHS, m on the RHS, not necessarily numeric with arbitrary conditions).
UPDATE
I also want to be able to express conditions like (x=start+1)|(x=end+1) giving here:
# x y
#1 1 a
#2 3 a
#3 5 b
For this case you don't need multi_by or multi_match_fun; this works:
library(fuzzyjoin)
fuzzy_left_join(a, b, by = c(x = "start", x = "end"), match_fun = list(`>`, `<`))
# x start end y
# 1 1 0 2 a
# 2 3 NA NA <NA>
# 3 5 4 6 b
I eventually went to the code of fuzzy_join and found a way to do what I want even without proper documentation. fuzzy_left_join doesn't work here, but there is the following way (not really pretty, and it actually does a Cartesian product):
g <- function(x, y) (x > y[,"start"]) & (x < y[,"end"])
fuzzy_join(a, b, multi_by = list(x = "x", y = c("start", "end")),
           multi_match_fun = g, mode = "left") %>% select(x, y)
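The same (undocumented) multi_match_fun route appears to cover the OR condition from the UPDATE as well; a sketch, untested, with g2 as an illustrative helper:
g2 <- function(x, y) (x == y[, "start"] + 1) | (x == y[, "end"] + 1)
fuzzy_join(a, b, multi_by = list(x = "x", y = c("start", "end")),
           multi_match_fun = g2, mode = "left") %>% select(x, y)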
data.table approach could be
library(data.table)
name1 <- setdiff(names(setDT(b)), names(setDT(a)))
#perform left outer join and then select required columns
a[b, (name1) := mget(name1), on = .(x > start, x < end)][, .(x, y)]
which gives
x y
1: 1 a
2: 3 <NA>
3: 5 b
Sample data:
a <- data.frame(x = c(1, 3, 5))
b <- data.frame(start = c(0, 4), end = c(2, 6), y = c("a", "b"))
Update: In case you want to join both dataframes on (x=start+1)|(x=end+1) condition then you can try
library(data.table)
DT1 <- as.data.table(a)
DT2 <- as.data.table(b)
#Perform 1st join on "x = start+1" and then another on "x = end+1". Finally row-bind both results.
DT <- rbindlist(list(DT1[DT2[, start_temp := start+1], on = c(x = "start_temp"), .(x, y), nomatch = 0],
DT1[DT2[, end_temp := end+1], on = c(x = "end_temp"), .(x, y), nomatch = 0]))
DT
# x y
#1: 1 a
#2: 5 b
#3: 3 a
A possible answer, to explain what I am trying to do: extending dplyr in some way. I will be happy to hear about ways to improve this solution or problems I didn't see.
The solution avoids the Cartesian product, but it duplicates both one of the input data frames and the result into lists of data frames. I didn't include the final selection of the x and y columns, which is easy to code.
my_left_join <- function(.DATA1, .DATA2, .WHERE) {
  call <- as.list(match.call())
  df1 <- .DATA1
  df1$._row_ <- 1:nrow(df1)
  dfl1 <- replyr::replyr_split(df1, "._row_")
  eval(substitute(
    dfl2 <- mapply(function(.x) {
      filter(.DATA2, with(.x, WHERE)) %>%
        mutate(._row_ = .x$._row_)
    }, dfl1, SIMPLIFY = FALSE),
    list(WHERE = call$.WHERE)))
  df2 <- replyr::replyr_bind_rows(dfl2)
  left_join(df1, df2, by = "._row_") %>% select(-._row_)
}
my_left_join(a,b,(x>start)&(x<end))
# x start end y
#1 1 0 2 a
#2 3 NA NA <NA>
#3 5 4 6 b
my_left_join(a,b,(x==(start+1))|(x==(end+1)))
# x start end y
#1 1 0 2 a
#2 3 0 2 a
#3 5 4 6 b
You can try a GenomicRanges solution
library(GenomicRanges)
# setup GRanges objects
a_gr <- GRanges(1, IRanges(a$x,a$x))
b_gr <- GRanges(1, IRanges(b$start, b$end))
# find overlaps between the two data sets
res <- as.data.frame(findOverlaps(a_gr,b_gr))
# create the expected output
a$y <- NA
a$y[res$queryHits] <- as.character(b$y)[res$subjectHits]
a
x y
1 1 a
2 3 <NA>
3 5 b

Generate random numbers by group with replacement

** edited because I'm a doofus - with replacement, not without **
I have a large-ish (>500k rows) dataset with 421 groups, defined by two grouping variables. Sample data as follows:
df<-data.frame(group_one=rep((0:9),26), group_two=rep((letters),10))
head(df)
group_one group_two
1 0 a
2 1 b
3 2 c
4 3 d
5 4 e
6 5 f
...and so on.
What I want is some number (k = 12 at the moment, but that number may vary) of stratified samples, by membership in (group_one x group_two). Membership in each group should be indicated by a new column, sample_membership, which has a value of 1 through k (again, 12 at the moment). I should be able to subset by sample_membership and get up to 12 distinct samples, each of which is representative when considering group_one and group_two.
Final data set would thus look something like this:
group_one group_two sample_membership
1 0 a 1
2 0 a 12
3 0 a 5
4 1 a 5
5 1 a 7
6 1 a 9
Thoughts? Thanks very much in advance!
Maybe something like this?:
library(dplyr)
df %>%
  group_by(group_one, group_two) %>%
  mutate(sample_membership = sample(1:12, n(), replace = TRUE))
Here's a one-line data.table approach, which you should definitely consider if you have a long data.frame.
library(data.table)
setDT(df)
df[, sample_membership := sample.int(12, .N, replace=TRUE), keyby = .(group_one, group_two)]
df
# group_one group_two sample_membership
# 1: 0 a 9
# 2: 0 a 8
# 3: 0 c 10
# 4: 0 c 4
# 5: 0 e 9
# ---
# 256: 9 v 4
# 257: 9 x 7
# 258: 9 x 11
# 259: 9 z 3
# 260: 9 z 8
For sampling without replacement, use replace=FALSE, but as noted elsewhere, make sure no group has more than k members. Alternatively:
If you want to use "sampling without unnecessary replacement" (making this up -- not sure what the right terminology is here) because you have more than k members per group but still want to keep the groups as evenly sized as possible, you could do something like:
# example with bigger groups
k <- 12L
big_df <- data.frame(group_one=rep((0:9),260), group_two=rep((letters),100))
setDT(big_df)
big_df[, sample_round := rep(1:.N, each=k, length.out=.N), keyby = .(group_one, group_two)]
big_df[, sample_membership := sample.int(k, .N, replace=FALSE), keyby = .(group_one, group_two, sample_round)]
head(big_df, 15) # you can see first repeat does not occur until row k+1
Within each "sampling round" (first k observations in the group, second k observations in the group, etc.) there is sampling without replacement. Then, if necessary, the next sampling round makes all k assignments available again.
This approach would really evenly stratify the sample (but perfectly even is only possible if you have a multiple of k members in each group).
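A quick way to check how even the allocation ends up (a sketch using big_df from above): within each group, the per-membership counts should differ by at most one.
# count rows per sample_membership within each group, then look at the spread
# between the most- and least-used membership in each group
chk <- big_df[, .N, by = .(group_one, group_two, sample_membership)]
chk[, max(N) - min(N), by = .(group_one, group_two)][, table(V1)]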
Here is a base R method, that assumes that your data.frame is sorted by groups:
# get number of observations for each group
groupCnt <- with(df, aggregate(group_one, list(group_one, group_two), FUN=length))$x
# for reproducibility, set the seed
set.seed(1234)
# get sample by group
df$sample <- c(sapply(groupCnt, function(i) sample(12, i, replace=TRUE)))
Untested example using dplyr; if it doesn't work, it might point you in the right direction.
library(dplyr)
set.seed(123)
df <- data.frame(
  group_one = as.integer(runif(1000, 1, 6)),
  group_two = sample(LETTERS[1:6], 1000, TRUE)
) %>%
  group_by(group_one, group_two) %>%
  mutate(
    sample_membership = sample(seq(1, length(group_one)), length(group_one), FALSE)
  )
Good luck!

Apply different functions to different sets of columns by group

I have a data.table with the following features:
bycols: columns that divide the data into groups
nonvaryingcols: columns that are constant within each group (so that taking the first item from within each group and carrying that through would be sufficient)
datacols: columns to be aggregated / summarized (e.g. sum them within group)
I'm curious what the most efficient way to do what you might call a mixed collapse, taking all three of the above inputs as character vectors. It doesn't have to be the absolute fastest, but fast enough with reasonable syntax would be ideal.
Example data, where the different sets of columns are stored in character vectors.
require(data.table)
set.seed(1)
bycols <- c("g1","g2")
datacols <- c("dat1","dat2")
nonvaryingcols <- c("nv1","nv2")
test <- data.table(
g1 = rep( letters, 10 ),
g2 = rep( c(LETTERS,LETTERS), each = 5 ),
dat1 = runif( 260 ),
dat2 = runif( 260 ),
nv1 = rep( seq(130), 2),
nv2 = rep( seq(130), 2)
)
Final data should look like:
g1 g2 dat1 dat2 nv1 nv2
1: a A 0.8403809 0.6713090 1 1
2: b A 0.4491883 0.4607716 2 2
3: c A 0.6083939 1.2031960 3 3
4: d A 1.5510033 1.2945761 4 4
5: e A 1.1302971 0.8573135 5 5
6: f B 1.4964821 0.5133297 6 6
I have worked out two different ways of doing it, but one is horridly inflexible and unwieldy, and one is horridly slow. Will post tomorrow if no one has come up with something better by then.
As always with this sort of programmatic use of [.data.table, the general strategy is to construct an expression e that can be evaluated in the j argument. Once you understand that (as I'm sure you do), it just becomes a game of computing on the language to get a j-slot expression that looks like what you'd write at the command line.
Here, for instance, and given the particular values in your example, you'd like a call that looks like:
test[, list(dat1=sum(dat1), dat2=sum(dat2), nv1=nv1[1], nv2=nv2[1]),
by=c("g1", "g2")]
so the expression you'd like evaluated in the j-slot is
list(dat1=sum(dat1), dat2=sum(dat2), nv1=nv1[1], nv2=nv2[1])
Most of the following function is taken up with constructing just that expression:
f <- function(dt, bycols, datacols, nvcols) {
  e <- c(sapply(datacols, function(x) call("sum", as.symbol(x))),
         sapply(nvcols, function(x) call("[", as.symbol(x), 1)))
  e <- as.call(c(as.symbol("list"), e))
  dt[, eval(e), by = bycols]
}
f(test, bycols=bycols, datacols=datacols, nvcols=nonvaryingcols)
## g1 g2 dat1 dat2 nv1 nv2
## 1: a A 0.8403809 0.6713090 1 1
## 2: b A 0.4491883 0.4607716 2 2
## 3: c A 0.6083939 1.2031960 3 3
## 4: d A 1.5510033 1.2945761 4 4
## 5: e A 1.1302971 0.8573135 5 5
## ---
## 126: v Z 0.5627018 0.4282380 126 126
## 127: w Z 0.7588966 1.4429034 127 127
## 128: x Z 0.7060596 1.3736510 128 128
## 129: y Z 0.6015249 0.4488285 129 129
## 130: z Z 1.5304034 1.6012207 130 130
Here's what I had come up with. It works, but very slowly.
test[, {
cbind(
as.data.frame( t( sapply( .SD[, ..datacols], sum ) ) ),
.SD[, ..nonvaryingcols][1]
)
}, by = bycols ]
Benchmarks
FunJosh <- function() {
f(test, bycols=bycols, datacols=datacols, nvcols=nonvaryingcols)
}
FunAri <- function() {
test[, {
cbind(
as.data.frame( t( sapply( .SD[, ..datacols], sum ) ) ),
.SD[, ..nonvaryingcols][1]
)
}, by = bycols ]
}
FunEddi <- function() {
cbind(
test[, lapply(.SD, sum), by = bycols, .SDcols = datacols],
test[, lapply(.SD, "[", 1), by = bycols, .SDcols = nonvaryingcols][, ..nonvaryingcols]
)
}
library(microbenchmark)
identical(FunJosh(), FunAri())
# [1] TRUE
microbenchmark(FunJosh(), FunAri(), FunEddi())
#Unit: milliseconds
# expr min lq median uq max neval
# FunJosh() 2.749164 2.958478 3.098998 3.470937 6.863933 100
# FunAri() 246.082760 255.273839 284.485654 360.471469 509.740240 100
# FunEddi() 5.877494 6.229739 6.528205 7.375939 112.895573 100
At least two orders of magnitude slower than #joshobrien's solution. Edit: #Eddi's solution is much faster as well, and shows that cbind wasn't optimal but could be fairly fast in the right hands. The slowness might be all the transforming and sapplying I was doing rather than just directly using lapply.
Just for a bit of variety, here is a variant of #Josh O'brien's solution that uses the bquote operator instead of call. I did try to replace the final as.call with a bquote, but because bquote doesn't support list splicing (e.g., see this question), I couldn't get that to work.
f <- function(dt, bycols, datacols, nvcols) {
  datacols <- sapply(datacols, as.symbol)
  nvcols <- sapply(nvcols, as.symbol)
  e <- c(lapply(datacols, function(x) bquote(sum(.(x)))),
         lapply(nvcols, function(x) bquote(.(x)[1])))
  e <- as.call(c(as.symbol("list"), e))
  dt[, eval(e), by = bycols]
}
> f(test, bycols=bycols, datacols=datacols, nvcols=nonvaryingcols)
g1 g2 dat1 dat2 nv1 nv2
1: a A 0.8404 0.6713 1 1
2: b A 0.4492 0.4608 2 2
3: c A 0.6084 1.2032 3 3
4: d A 1.5510 1.2946 4 4
5: e A 1.1303 0.8573 5 5
---
126: v Z 0.5627 0.4282 126 126
127: w Z 0.7589 1.4429 127 127
128: x Z 0.7061 1.3737 128 128
129: y Z 0.6015 0.4488 129 129
130: z Z 1.5304 1.6012 130 130

R: Reuse length

I have a data frame in which I want to find the reuse length of (x, y). Can someone suggest the quickest method to analyze it? For example:
df <- data.frame(
time=c(0,1,2,3,4,5,6),
x=c(1,4,2,1,6,1,4),
y=c(2,5,3,2,7,2,5)
)
I want the average or median of the re-occurrence times of the same (x, y).
Here, (1,2) repeats at time 0, 3, 5. So average = ((3-0) + (5-3))/2 = 2.5
And average for (4,5) is 5.
So, overall average is 3.75.
Can someone suggest how to do this?
Thanks.
Perhaps you're looking for something like this:
out <- aggregate(time ~ x + y, df, function(blah) {
mean(diff(blah))
})
out
# x y time
# 1 1 2 2.5
# 2 2 3 NaN
# 3 4 5 5.0
# 4 6 7 NaN
sum(out$time, na.rm=TRUE)
# [1] 7.5
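If the goal is the overall figure quoted in the question (3.75), which is the average of the per-pair averages rather than their sum, take the mean instead (a small tweak using the same out object):
mean(out$time, na.rm = TRUE)
# [1] 3.75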
A data.table approach:
library(data.table)
DT <- data.table(df, key = "x,y")
DT[, mean(diff(time)), by = key(DT)][, sum(V1, na.rm=TRUE)]
# [1] 7.5

R help on aggregation function

for my question I created a dummy data frame:
set.seed(007)
DF <- data.frame(a = rep(LETTERS[1:5], each=2), b = sample(40:49), c = sample(1:10))
DF
a b c
1 A 49 2
2 A 43 3
3 B 40 7
4 B 47 1
5 C 41 9
6 C 48 8
7 D 45 6
8 D 42 5
9 E 46 10
10 E 44 4
How can I use the aggregation function on column a so that, for instance, for "A" the following value is calculated: (49-43) / (2+3)?
I started like:
aggregate(DF, by=list(DF$a), FUN=function(x) {
...
})
The problem I have is that I do not know how to access the four different cells 49, 43, 2 and 3.
I tried x[[1]][1] and similar approaches but can't get it working.
Inside aggregate, the function FUN is applied independently to each column of your data. Here you want to use a function that takes two columns as inputs, so a priori, you can't use aggregate for that.
Instead, you can use ddply from the plyr package:
ddply(DF, "a", summarize, res = (b[1] - b[2]) / sum(c))
# a res
# 1 A 1.2000000
# 2 B -0.8750000
# 3 C -0.4117647
# 4 D 0.2727273
# 5 E 0.1428571
When you aggregate, the FUN argument can be anything you want. Keep in mind that the value passed will either be a vector (if x is one column) or a little data.frame or matrix (if x is more than one). However, aggregate doesn't let you access the columns of a multi-column argument. For example:
aggregate( . ~ a, data = DF, FUN = function(x) diff(x[,1]) / sum(x[,2]) )
That fails with an error even though I used . (which takes all of the columns of DF that I'm not using elsewhere). To see what aggregate is trying to do there look at the following.
aggregate( . ~ a, data = DF, FUN = sum )
The two columns, b, and c, were aggregated but from the first attempt we know that you can't do something that accesses each column separately. So, strictly sticking with aggregate you need two passes and three lines of code.
diffb <- aggregate( b ~ a, data = DF, FUN = diff )
Y <- aggregate( c ~ a, data = DF, FUN = sum )
Y$c <- diffb$b / Y$c
Now Y contains the result you want.
The by function is simpler than aggregate and all it does is split the original data.frame using the indices and then apply the FUN function.
l <- by( data = DF, INDICES = DF$a, FUN = function(x) diff(x$b)/sum(x$c), simplify = FALSE )
unlist(l)
You have to do a little to get the result back into a data.frame if you really want one.
data.frame(a = names(l), x = unlist(l))
Using data.table could be faster and easier.
library(data.table)
DT <- data.table(DF)
DT[, (-1*diff(b))/sum(c), by=a]
a V1
1: A 1.2000000
2: B -0.8750000
3: C -0.4117647
4: D 0.2727273
5: E 0.1428571
Using aggregate, not so good. I didn't find a better way to do it using aggregate :( but here's an attempt.
B <- aggregate(DF$b, by=list(DF$a), diff)
C <- aggregate(DF$c, by=list(DF$a), sum)
data.frame(a=B[,1], Result=(-1*B[,2])/C[,2])
a Result
1 A 1.2000000
2 B -0.8750000
3 C -0.4117647
4 D 0.2727273
5 E 0.1428571
A data.table solution - for efficiency of time and memory.
library(data.table)
DT <- as.data.table(DF)
DT[, list(calc = diff(b) / sum(c)), by = a]
You can use the base by() function:
listOfRows <-
  by(data = DF,
     INDICES = DF$a,
     FUN = function(x) data.frame(a = x$a[1], res = (x$b[1] - x$b[2]) / (x$c[1] + x$c[2])))
newDF <- do.call(rbind,listOfRows)
