Backwards rollapply with zoo object - r

Suppose I have a zoo object:
> df <- data.frame(col1=c(1,2,3,4), col2=c("a","b","c","d"))
> v <- zoo(df, order.by = df$col2)
> v
col1 col2
a 1 a
b 2 b
c 3 c
d 4 d
I can calculate the mean as:
> rollapply(v, 2, by.column = F, function(x) { mean(as.numeric(x[,"col1"])) })
a b c
1.5 2.5 3.5
How do I rollapply mean in DESCENDING order? (please no solutions where you just reverse the results AFTER applying the regular rollapply)
I would like my output to look like:
d c b
3.5 2.5 1.5

The oo in zoo stands for ordered observations, and such objects are always ordered by the index; however, what is shown in the question is not ordered by the index, so it cannot be a valid zoo object.
Also, the line starting v <- in the question is likely not what is wanted, since it mixes numeric and character data (zoo would coerce everything to character). Fixing that line and using the data frame in the order shown, we have:
library(zoo)
v <- read.zoo(df, index = "col2", FUN = c)  # univariate zoo series indexed by col2
r <- rollapplyr(v, 2, mean)                 # right-aligned rolling mean of width 2
fortify.zoo(r)[length(r):1, ]               # print rows in descending index order
giving:
Index r
3 d 3.5
2 c 2.5
1 b 1.5

Per G. Grothendieck:
rollapply(rev.zoo(v), 2, by.column = F, function(x) { mean(as.numeric(x[,"col1"])) })
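For completeness, a minimal base-R sketch (using the cleaned-up v from the answer above) that walks the index in descending order, so the result is computed backwards rather than reversed after the fact:
library(zoo)
df <- data.frame(col1 = c(1, 2, 3, 4), col2 = c("a", "b", "c", "d"))
v <- read.zoo(df, index = "col2", FUN = c)
n <- length(v)
# average each value with its predecessor, visiting positions n, n-1, ..., 2
out <- sapply(n:2, function(i) mean(coredata(v)[(i - 1):i]))
names(out) <- index(v)[n:2]
out
#   d   c   b
# 3.5 2.5 1.5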

Related

How to make a fuzzy join in R using more than one variable on each side

I would like to join the two data frames :
a <- data.frame(x=c(1,3,5))
b <- data.frame(start=c(0,4),end=c(2,6),y=c("a","b"))
with a condition like (x>start)&(x<end) in order to get such a result:
# x y
#1 1 a
#2 3 <NA>
#3 5 b
I don't want to make a potentially large Cartesian product and then select only the few rows matching the condition, and I'd like a solution using the tidyverse (I am not interested in a solution using SQL, which would be a confession of failure). I thought of the fuzzyjoin package, but I cannot find examples fitting my need: the function to apply for the condition has only two arguments. I also tried to put start and end into a single argument with data.frame(z=I(purrr::map2(b$start,b$end,list)),y=b$y)
# z y
#1 0, 2 a
#2 4, 6 b
but although the data looks fine, fuzzy_left_join doesn't accept it.
I am searching for solutions that work in more general cases (n variables on the LHS, m on the RHS, not necessarily numeric, with arbitrary conditions).
UPDATE
I also want to be able to express conditions like (x=start+1)|(x=end+1) giving here:
# x y
#1 1 a
#2 3 a
#3 5 b
For this case you don't need multi_by or multi_match_fun; this works:
library(fuzzyjoin)
fuzzy_left_join(a, b, by = c(x = "start", x = "end"), match_fun = list(`>`, `<`))
# x start end y
# 1 1 0 2 a
# 2 3 NA NA <NA>
# 3 5 4 6 b
I eventually went through the code of fuzzy_join and found a way to do what I want, even without proper documentation. fuzzy_left_join doesn't work here, but the following does (not really pretty, and it actually does a Cartesian product):
g <- function(x, y) (x > y[, "start"]) & (x < y[, "end"])
fuzzy_join(a, b, multi_by = list(x = "x", y = c("start", "end")),
           multi_match_fun = g, mode = "left") %>% select(x, y)
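The same multi_match_fun mechanism covers the OR condition from the update. A sketch, assuming the same calling convention as g above (two matrices carrying the named columns; the same Cartesian-product caveat applies):
library(fuzzyjoin)
library(dplyr)
g2 <- function(x, y) (x == y[, "start"] + 1) | (x == y[, "end"] + 1)
fuzzy_join(a, b, multi_by = list(x = "x", y = c("start", "end")),
           multi_match_fun = g2, mode = "left") %>% select(x, y)
# expected: 1 a, 3 a, 5 b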
A data.table approach could be
library(data.table)
name1 <- setdiff(names(setDT(b)), names(setDT(a)))
#perform left outer join and then select required columns
a[b, (name1) := mget(name1), on = .(x > start, x < end)][, .(x, y)]
which gives
x y
1: 1 a
2: 3 <NA>
3: 5 b
Sample data:
a <- data.frame(x = c(1, 3, 5))
b <- data.frame(start = c(0, 4), end = c(2, 6), y = c("a", "b"))
Update: in case you want to join both data frames on the (x=start+1)|(x=end+1) condition, you can try
library(data.table)
DT1 <- as.data.table(a)
DT2 <- as.data.table(b)
#Perform 1st join on "x = start+1" and then another on "x = end+1". Finally row-bind both results.
DT <- rbindlist(list(DT1[DT2[, start_temp := start+1], on = c(x = "start_temp"), .(x, y), nomatch = 0],
DT1[DT2[, end_temp := end+1], on = c(x = "end_temp"), .(x, y), nomatch = 0]))
DT
# x y
#1: 1 a
#2: 5 b
#3: 3 a
A possible answer to illustrate what I am trying to do: extending dplyr in some way. I will be happy to know if there are ways to improve this solution or problems I didn't see.
The solution avoids the Cartesian product, but it duplicates both one of the input data frames and the result into lists of data frames. I didn't include the final selection of columns x and y, which is easy to code.
library(dplyr)
my_left_join <- function(.DATA1, .DATA2, .WHERE)
{
  call <- as.list(match.call())
  df1 <- .DATA1
  df1$._row_ <- 1:nrow(df1)
  # split the left table into a list of one-row data frames
  dfl1 <- replyr::replyr_split(df1, "._row_")
  # for each left-hand row, filter the right table with the captured condition
  eval(substitute(
    dfl2 <- mapply(function(.x)
      {filter(.DATA2, with(.x, WHERE)) %>%
        mutate(._row_ = .x$._row_)}
      , dfl1, SIMPLIFY = FALSE)
    , list(WHERE = call$.WHERE)))
  df2 <- replyr::replyr_bind_rows(dfl2)
  left_join(df1, df2, by = "._row_") %>% select(-._row_)
}
my_left_join(a,b,(x>start)&(x<end))
# x start end y
#1 1 0 2 a
#2 3 NA NA <NA>
#3 5 4 6 b
my_left_join(a,b,(x==(start+1))|(x==(end+1)))
# x start end y
#1 1 0 2 a
#2 3 0 2 a
#3 5 4 6 b
You can try a GenomicRanges solution
library(GenomicRanges)
# setup GRanges objects
a_gr <- GRanges(1, IRanges(a$x,a$x))
b_gr <- GRanges(1, IRanges(b$start, b$end))
# find overlaps between the two data sets
res <- as.data.frame(findOverlaps(a_gr,b_gr))
# create the expected output
a$y <- NA
a$y[res$queryHits] <- as.character(b$y)[res$subjectHits]
a
x y
1 1 a
2 3 <NA>
3 5 b

How to apply scale rule for many columns in new dataset

I have the following task:
a = data.frame(a= c(1,2,3,4,5,6)) # dataset
range01 <- function(x){(x-min(a$a))/(max(a$a)-min(a$a))} # rule for scale
b = data.frame(a = 6) # newdaset
lapply(b$a, range01) # we can apply range01 for this dataset because we use min(a$a) in the rule
But how can I apply this when I have many columns in my dataset, like below?
a = data.frame(a= c(1,2,3,4,5,6))
b = data.frame(b= c(1,2,3,3,2,1))
c = data.frame(c= c(6,2,4,4,5,6))
df = cbind(a,b,c)
df
new = data.frame(a = 1, b = 2, c = 3)
Of course I can make a rule for every variable:
range01a <- function(x){(x-min(df$a))/(max(df$a)-min(df$a))}
But that is a very long way to do it. How can I make it convenient?
You can redefine your scale function so that it takes two arguments, one to be scaled and one the scaler, as follows, and then use Map on the two data frames:
scale_custom <- function(x, scaler) (x - min(scaler)) / (max(scaler) - min(scaler))
Map(scale_custom, new, df)
#$a
#[1] 0
#$b
#[1] 0.5
#$c
#[1] 0.25
If you need the data frame as result:
as.data.frame(Map(scale_custom, new, df))
# a b c
#1 0 0.5 0.25
You can exploit the fact that the column names of new and df are the same. This could be helpful if the order of the columns in the two data frames is not the same.
sapply(names(new), function(x) (new[x]-min(df[x]))/(max(df[x])-min(df[x])))
#$a.a
#[1] 0
#$b.b
#[1] 0.5
#$c.c
#[1] 0.25
To put the result in a data.frame:
data.frame(lapply(names(new), function(x) (new[x]-min(df[x]))/(max(df[x])-min(df[x]))))
# a b c
#1 0 0.5 0.25
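A base-R alternative, as a minimal sketch (it assumes new and df share the same columns): sweep twice, first subtracting the per-column minima of df, then dividing by the per-column ranges.
mins <- sapply(df, min)
rngs <- sapply(df, max) - mins
sweep(sweep(new, 2, mins, "-"), 2, rngs, "/")
#   a   b    c
# 1 0 0.5 0.25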

Fast convert of many rows to json character

I have ~15 data.frames with 100K-300K rows. I want to condense the other columns into a character JSON format, grouped by the variable v, for condensed storage reasons. Note that each group in v will have multiple rows (1 or more; likely more). I have code below that uses the jsonlite package to convert, but because of how I set up the splits it's slow and not memory efficient. How could I do this faster and more memory efficiently? I don't need to use the jsonlite package; I only did because it's the only way I knew. I'm thinking there's a way to build the character JSON directly with data.table in a fast way, but I can't think of how to do it.
PS, if it helps to know the motivation: I'm doing this to have a hash table that I can look up v in and then convert the JSON back to an R data.frame on the fly. Maybe there's a way to use jsonlite more directly than I am, but toJSON(dat) is not what I'm after.
MWE
set.seed(10)
dat <- data.frame(
v = rep(c('red', 'blue'), each =3),
w = sample(LETTERS, 6),
x = sample(1:3, 6, TRUE),
y = sample(1:3, 6, TRUE),
z = sample(1:3, 6, TRUE),
stringsAsFactors = FALSE
)
dat
Data View
v w x y z
1 red N 1 1 2
2 red H 1 2 3
3 red K 2 2 3
4 blue P 2 2 2
5 blue B 2 1 3
6 blue E 2 1 2
Converting
library(jsonlite)
jsonlist <- lapply(split(dat[-1], dat$v), function(x) as.character(toJSON(x)))
data.frame(
v = names(jsonlist),
json = unlist(jsonlist, use.names=FALSE),
stringsAsFactors = FALSE
)
Desired Result
v json
1 blue [{"w":"P","x":2,"y":2,"z":2},{"w":"B","x":2,"y":1,"z":3},{"w":"E","x":2,"y":1,"z":2}]
2 red [{"w":"N","x":1,"y":1,"z":2},{"w":"H","x":1,"y":2,"z":3},{"w":"K","x":2,"y":2,"z":3}]
Using a data.table, you can group by v and pass .SD to toJSON:
library(data.table)
setDT(dat)
dat[, toJSON(.SD), by = v]
# v V1
#1: red [{"w":"N","x":1,"y":1,"z":2},{"w":"H","x":1,"y":2,"z":3},{"w":"K","x":2,"y":2,"z":3}]
#2: blue [{"w":"P","x":2,"y":2,"z":2},{"w":"B","x":2,"y":1,"z":3},{"w":"E","x":2,"y":1,"z":2}]
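Since the stated motivation is a hash table keyed on v whose values are converted back to data frames on the fly, the reverse direction is just fromJSON on the stored string. A minimal sketch building on the data.table result above (json_tbl and lookup are illustrative names):
json_tbl <- dat[, .(json = as.character(toJSON(.SD))), by = v]
lookup <- setNames(json_tbl$json, json_tbl$v)
fromJSON(lookup[["red"]])  # back to a data.frame of the red rows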
I'm still not convinced what you're doing makes sense, but:
library(dplyr)
dat %>%
group_by(v) %>%
do(json = select(., -v) %>% toJSON ) %>%
mutate(json = unlist(json))

Mutable version of apply?

I am trying to get an average value for each subset in a data frame and incorporate that info into a column.
I can do that with lapply, but I can't make it "stick". Is there a variant of the apply family of functions with side effects? Anything in the plyr library would be fine too.
library(plyr)  # rbind.fill below comes from plyr
data <- data.frame(
A = sample(LETTERS[1:3], 20, replace=TRUE),
B = runif(20),
C = LETTERS[1:20])
# split by A
dataByA <- split(data, factor(data$A))
# get average of B per set
lapply(dataByA, function(df) {df$Bmean <- mean(df$B)}) # does nothing!
# remerge subsets
data <- rbind.fill(dataByA)
Thanks
Try this:
data$Bmean <- ave(data$B, data$A)
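ave returns a vector as long as its input, with every element replaced by its group's mean (FUN defaults to mean), which is why the single assignment works. A quick illustration:
ave(c(1, 2, 10), c("a", "a", "b"))
# [1]  1.5  1.5 10.0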
There are many options for this sort of thing, but to correct your immediate mistake, your anonymous function in lapply simply isn't returning anything. Just make it return the piece it's operating on:
{df$Bmean <- mean(df$B); df}
I will leave it to the masses to show you your options using by, ddply + mutate or transform and data.table.
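Putting that correction together with the split/recombine approach from the question, a sketch (rbind.fill is from plyr, as in the question):
library(plyr)
dataByA <- split(data, factor(data$A))
dataByA <- lapply(dataByA, function(df) { df$Bmean <- mean(df$B); df })
data <- rbind.fill(dataByA)  # now the new column sticks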
This may work:
library(plyr)
data1<-ddply(data,.(A),transform,Bmean=mean(B))
head(data1, 9)
A B C Bmean
1 A 0.616156407 E 0.5492000
2 A 0.568187293 G 0.5492000
3 A 0.899395311 H 0.5492000
4 A 0.113060973 K 0.5492000
5 B 0.872838203 A 0.7885643
6 B 0.906216467 B 0.7885643
7 B 0.944196701 N 0.7885643
8 B 0.445983319 O 0.7885643
9 B 0.773586589 T 0.7885643
As per #joran, I will be one of the masses ;)
The solution in data.table is as follows
DT[ , Bmean := mean(B), by=A]
Where DT is simply
library(data.table)
DT <- data.table( <your data frame> )
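An end-to-end sketch with the data frame from the question, to show that the column is added by reference:
library(data.table)
DT <- data.table(data)           # 'data' as defined in the question
DT[, Bmean := mean(B), by = A]   # adds Bmean in place, no copy
head(DT)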

R help on aggregation function

For my question I created a dummy data frame:
set.seed(007)
DF <- data.frame(a = rep(LETTERS[1:5], each=2), b = sample(40:49), c = sample(1:10))
DF
a b c
1 A 49 2
2 A 43 3
3 B 40 7
4 B 47 1
5 C 41 9
6 C 48 8
7 D 45 6
8 D 42 5
9 E 46 10
10 E 44 4
How can I use the aggregation function on column a so that, for instance, for "A" the following value is calculated: (49 - 43) / (2 + 3)?
I started like:
aggregate(DF, by=list(DF$a), FUN=function(x) {
...
})
The problem I have is that I do not know how to access the four different cells 49, 43, 2 and 3.
I tried x[[1]][1] and similar things but can't get it working.
Inside aggregate, the function FUN is applied independently to each column of your data. Here you want to use a function that takes two columns as inputs, so a priori, you can't use aggregate for that.
Instead, you can use ddply from the plyr package:
ddply(DF, "a", summarize, res = (b[1] - b[2]) / sum(c))
# a res
# 1 A 1.2000000
# 2 B -0.8750000
# 3 C -0.4117647
# 4 D 0.2727273
# 5 E 0.1428571
When you aggregate, the FUN argument can be anything you want. Keep in mind that the value passed will be either a vector (if x is one column) or a little data.frame or matrix (if x is more than one). However, aggregate doesn't let you access the columns of a multi-column argument. For example:
aggregate( . ~ a, data = DF, FUN = function(x) diff(x[,1]) / sum(x[,2]) )
That fails with an error even though I used . (which takes all of the columns of DF that I'm not using elsewhere). To see what aggregate is trying to do there look at the following.
aggregate( . ~ a, data = DF, FUN = sum )
The two columns, b and c, were aggregated, but from the first attempt we know that you can't do something that accesses each column separately. So, strictly sticking with aggregate, you need two passes and three lines of code (note that diff(b) is b[2] - b[1], hence the sign flip):
diffb <- aggregate( b ~ a, data = DF, FUN = diff )
Y <- aggregate( c ~ a, data = DF, FUN = sum )
Y$c <- -diffb$b / Y$c
Now Y contains the result you want.
The by function is simpler than aggregate and all it does is split the original data.frame using the indices and then apply the FUN function.
l <- by( data = DF, INDICES = DF$a, FUN = function(x) -diff(x$b) / sum(x$c), simplify = FALSE )
unlist(l)
You have to do a little to get the result back into a data.frame if you really want one.
data.frame(a = names(l), x = unlist(l))
Using data.table could be faster and easier.
library(data.table)
DT <- data.table(DF)
DT[, (-1*diff(b))/sum(c), by=a]
a V1
1: A 1.2000000
2: B -0.8750000
3: C -0.4117647
4: D 0.2727273
5: E 0.1428571
Using aggregate: not so good. I didn't find a better way to do it using aggregate :( but here's an attempt.
B <- aggregate(DF$b, by=list(DF$a), diff)
C <- aggregate(DF$c, by=list(DF$a), sum)
data.frame(a=B[,1], Result=(-1*B[,2])/C[,2])
a Result
1 A 1.2000000
2 B -0.8750000
3 C -0.4117647
4 D 0.2727273
5 E 0.1428571
A data.table solution - for efficiency of time and memory.
library(data.table)
DT <- as.data.table(DF)
DT[, list(calc = -diff(b) / sum(c)), by = a]
You can use the base by() function:
listOfRows <-
by(data=DF,
INDICES=DF$a,
FUN=function(x){data.frame(a=x$a[1],res=(x$b[1] - x$b[2])/(x$c[1] + x$c[2]))})
newDF <- do.call(rbind,listOfRows)
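For completeness, a dplyr sketch of the same computation (not from the original answers; it gives the same result as the ddply version above):
library(dplyr)
DF %>%
  group_by(a) %>%
  summarize(res = (b[1] - b[2]) / sum(c))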
