Why is it so difficult to pass a non-aggregate function to dcast? - r

I am using the data.table package for a table like this:
DT <- data.table(id=rep(1:100, each=50),
                 grp=rep(letters[1:4], each=1250),
                 time=rep(1:50,100),
                 outcome=rnorm(5000),
                 seconds=rep(1:500,10),
                 weights=rnorm(5000),
                 response=rep(1:200, each=25),
                 key=c("grp", "time"))
I would like to create a new (possibly rbindlisted) data table of some summary statistics from this table. I first created two intermediate tables a and b,
a <- DT[, list(mean = weighted.mean(outcome, weights),
               median = median(outcome), seconds), by = c("grp", "time")]
b <- DT[, list(mean = weighted.mean(response, seconds),
               median = median(response)), by = c("grp", "time")]
and am now trying to row-bind these together across all groups while still preserving the grouping along the rows. This does not work:
DTfinal <- data.table(DT$grp, DT$time,
                      outcomemean = a$mean, responsemean = b$mean,
                      outcomemedian = a$median, responsemedian = b$median)
I don't think a merge works since a and b have different lengths. Rowbinding a and b also mixes up the different means and medians of a and b; ideally, I would like an rbindlist that adds some kind of suffix to each column, like c(".a", ".b").
Update:
I get an error (since a and b have different dimensions) when doing
DTfinal <- rbindlist(setNames(list(a[, c("grp", "time", "mean", "median"), with = FALSE],
                                   b[, c("grp", "time", "mean", "median"), with = FALSE]),
                              c("a", "b")),
                     idcol = "id")
dcast(DTfinal, grp + time ~id, value.var = c('mean', 'median'))
where it returns
Aggregate function missing, defaulting to 'length'
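In general, dcast falls back to length() (hence the "Aggregate function missing" message) whenever the formula variables do not uniquely identify rows. A quick diagnostic, as a sketch assuming the DTfinal built above, is to count duplicated grp/time/id combinations:
DTfinal[, .N, by = .(grp, time, id)][N > 1L]
Here the duplicates come from keeping the non-aggregated seconds column in a; dropping seconds (or wrapping a in unique() after removing it) avoids the warning.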

We can use rbindlist after placing the datasets in a list
DTfinal <- rbindlist(list(a,b))
dim(DTfinal)
#[1] 400 4
dim(a)
#[1] 200 4
dim(b)
#[1] 200 4
Suppose the two datasets have different numbers of columns and we have a vector of the column names that we need to keep:
nm1 <- intersect(names(a), names(b))
rbindlist(list(a[, nm1, with = FALSE], b[, nm1, with = FALSE]), idcol= "id")
Update
If we need to convert to 'wide' format
DTfinal <- rbindlist(setNames(list(a,b), c("a", "b")), idcol= "id")
dcast(DTfinal, grp + time ~id, value.var = c('mean', 'median'))
# grp time mean_a mean_b median_a median_b
# 1: a 1 0.52171471 25.99502 -0.06558068 25
# 2: a 2 0.36445108 25.99010 0.13518412 25
# 3: a 3 0.08993721 25.98522 0.20128790 25
# 4: a 4 -64.04617391 25.98039 0.40999376 25
# 5: a 5 0.81730847 25.97561 -0.03481697 25
# ---
#196: d 46 1.62818374 176.67568 -0.26695999 176
#197: d 47 -1.45259871 176.67340 0.14893356 176
#198: d 48 9.59796683 176.67114 -0.05834959 176
#199: d 49 -2.74285453 176.66890 -0.22094347 176
#200: d 50 1.22109043 176.66667 -0.08172928 176

Related

Is there a way to replace rows in one dataframe with another in R?

I'm trying to figure out how to replace rows in one dataframe with another by matching the values of one of the columns. Both dataframes have the same column names.
Ex:
df1 <- data.frame(x = c(1,2,3,4), y = c("a", "b", "c", "d"))
df2 <- data.frame(x = c(1,2), y = c("f", "g"))
Is there a way to replace the rows of df1 with the same row in df2 where they share the same x variable? It would look like this.
data.frame(x = c(1,2,3,4), y = c("f","g","c","d"))
I've been working on this for a while and this is the closest I've gotten -
df1[which(df1$x %in% df2$x),]$y <- df2[which(df1$x %in% df2$x),]$y
But it just replaces the values with NA.
Does anyone know how to do this?
We can use match():
inds <- match(df1$x, df2$x)
df1$y[!is.na(inds)] <- df2$y[na.omit(inds)]
df1
# x y
#1 1 f
#2 2 g
#3 3 c
#4 4 d
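For reference, match() returns, for each value of df1$x, the position of its first match in df2$x (or NA when there is none), so only the matched positions of df1$y get overwritten. With the question's data:
inds
# [1]  1  2 NA NA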
First off, well done in producing a nice reproducible example that's directly copy-pastable. That always helps, especially with an example of expected output. Nice one!
You have several options, but let's look at why your solution doesn't quite work:
First of all, I tried copy-pasting your last line into a new session and got the dreaded factor-error:
Warning message:
In `[<-.factor`(`*tmp*`, iseq, value = 1:2) :
invalid factor level, NA generated
If we look at your data frames df1 and df2 with the str function, you will see that they do not contain text but factors. These are not text; in short, they represent categorical data (male vs. female; scores A, B, C, D, and F; etc.) and are really integers that have text labels. So that could be your issue.
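A quick look confirms this (a sketch, assuming the pre-R 4.0 default stringsAsFactors = TRUE):
str(df1)
# 'data.frame': 4 obs. of  2 variables:
#  $ x: num  1 2 3 4
#  $ y: Factor w/ 4 levels "a","b","c","d": 1 2 3 4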
Running your code gives a warning because you are trying to import new factors (labels) into df1 that don't exist. And R doesn't know what to do with them, so it just inserts NA-values.
As r2evans answered, he used the stringsAsFactors argument to disable converting strings to factors - you can even go as far as disabling it on a session-wide basis using options(stringsAsFactors=FALSE) (and I've heard it will be disabled as the default in the forthcoming R 4.0 - yay!).
After disabling stringsAsFactors, your code works - or does it? Try this on for size:
df2 <- df2[c(2,1),]
df1[which(df1$x %in% df2$x),]$y <- df2[which(df1$x %in% df2$x),]$y
What's in df1 now? Not quite right anymore.
In the first line, I swapped the two rows in df2 and lo and behold, the replaced values in df1 were swapped. Why is that?
Let's deconstruct your statement df2[which(df1$x %in% df2$x),]$y
The call df1$x %in% df2$x returns a logical (boolean) vector of which elements in df1$x are found in df2 - i.e. the first two and not the last two. But it doesn't relate which positions in the first vector correspond to which in the second.
Calling which(df1$x %in% df2$x) then reduces the logical vector to the indices that were TRUE. Again, we do not know which elements correspond to which.
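Concretely, with the question's data:
df1$x %in% df2$x
# [1]  TRUE  TRUE FALSE FALSE
which(df1$x %in% df2$x)
# [1] 1 2
Neither result carries information about which position in df2 each match came from, which is why the row order of df2 ends up mattering.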
For solutions, I would recommend r2evans's, as it doesn't rely on extra packages (although data.table and dplyr are two powerful packages worth getting to know).
In his solution, he uses merge to perform a "full join" which matches rows based on the value, rather than - well, what you did. With transform, he assigns new variables within the context of the data.frame returned from the merge function called in the first argument.
I think what you need here is a "merge" or "join" operation.
(I add stringsAsFactors=FALSE to the frames so that the merging and later work go through without issue, as factors can sometimes be disruptive.)
Base R:
df1 <- data.frame(x = c(1,2,3,4), y = c("a", "b", "c", "d"), stringsAsFactors = FALSE)
df2 <- data.frame(x = c(1,2), y = c("f", "g"), stringsAsFactors = FALSE)
merge(df1, df2, by = "x", all = TRUE)
# x y.x y.y
# 1 1 a f
# 2 2 b g
# 3 3 c <NA>
# 4 4 d <NA>
transform(merge(df1, df2, by = "x", all = TRUE), y = ifelse(is.na(y.y), y.x, y.y))
# x y.x y.y y
# 1 1 a f f
# 2 2 b g g
# 3 3 c <NA> c
# 4 4 d <NA> d
transform(merge(df1, df2, by = "x", all = TRUE), y = ifelse(is.na(y.y), y.x, y.y), y.x = NULL, y.y = NULL)
# x y
# 1 1 f
# 2 2 g
# 3 3 c
# 4 4 d
Dplyr:
library(dplyr)
full_join(df1, df2, by = "x") %>%
mutate(y = coalesce(y.y, y.x)) %>%
select(-y.x, -y.y)
# x y
# 1 1 f
# 2 2 g
# 3 3 c
# 4 4 d
A join option with data.table: we join on the 'x' column and assign the values of 'y' in the second dataset (i.y) to the first one with :=
library(data.table)
setDT(df1)[df2, y := i.y, on = .(x)]
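Since := updates df1 by reference, df1 itself now holds the replaced values; the expected result (assuming the original df1 and df2 from the question) is:
df1
#    x y
# 1: 1 f
# 2: 2 g
# 3: 3 c
# 4: 4 d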
NOTE: It is better to use stringsAsFactors = FALSE (in R 4.0.0 it is the default anyway), or else we need all the levels to be common to both datasets.

Multiple functions on multiple columns by group, and create informative column names

How can a data.table manipulation be adjusted so that, besides the sum per category of several columns, it also calculates other functions at the same time, such as mean and counts (.N), and automatically creates column names: "sum c1", "sum c2", "sum c4", "mean c1", "mean c2", "mean c4", and preferably also one column "counts"?
My old solution was to write out
mean col1 = ....
mean col2 = ....
etc., inside the data.table command.
That worked, but it is horribly inefficient I think, and it can no longer be pre-coded now that, in the new app version, the calculations depend on the user's choices in an R Shiny app about what to calculate for which columns.
I've read my way through a bunch of posts and blog articles but haven't quite figured out how best to do this. I read that in some cases the manipulation can become quite slow on large data tables depending on which approach you use (.SDcols, get, lapply, and/or by =). Therefore I added a 'sizeable' dummy data set:
My real data is around 100k rows by 100 columns and 1-100 groups roughly.
library(data.table)
n = 100000
dt = data.table(index=1:100000,
category = sample(letters[1:25], n, replace = T),
c1=rnorm(n,10000),
c2=rnorm(n,1000),
c3=rnorm(n,100),
c4 = rnorm(n,10)
)
# add more columns to test for big data tables
lapply(c(paste('c', 5:100, sep ='')),
function(addcol) dt[[addcol]] <<- rnorm(n,1000) )
# Simulate columns selected by shiny app user
Colchoice <- c("c1", "c4")
FunChoice <- c(".N", "mean", "sum")
# attempt which now does just one function and doesn't add names
dt[, lapply(.SD, sum, na.rm=TRUE), by=category, .SDcols=Colchoice ]
Expected output is a row per group and a column for each function per each selected column.
Category Mean c1 Sum c1 Mean c4 ...
A
B
C
D
E
......
Possibly a duplicate but I haven't found the exact answer that I need
If I understand correctly, this question consists of two parts:
How to group and aggregate with multiple functions over a list of columns and generate new column names automatically.
How to pass the names of the functions as a character vector.
For part 1, this is nearly a duplicate of Apply multiple functions to multiple columns in data.table but with the additional requirement that the results should be grouped using by =.
Therefore, eddi's answer has to be modified by adding the parameter recursive = FALSE in the call to unlist():
my.summary = function(x) list(N = length(x), mean = mean(x), median = median(x))
dt[, unlist(lapply(.SD, my.summary), recursive = FALSE),
.SDcols = ColChoice, by = category]
category c1.N c1.mean c1.median c4.N c4.mean c4.median
1: f 3974 9999.987 9999.989 3974 9.994220 9.974125
2: w 4033 10000.008 9999.991 4033 10.004261 9.986771
3: n 4025 9999.981 10000.000 4025 10.003686 9.998259
4: x 3975 10000.035 10000.019 3975 10.010448 9.995268
5: k 3957 10000.019 10000.017 3957 9.991886 10.007873
6: j 4027 10000.026 10000.023 4027 10.015663 9.998103
...
For part 2, we need to create my.summary() from a character vector of function names. This can be achieved by "programming on the language", i.e., by assembling the expression as a character string and finally parsing and evaluating it (note that this requires FunChoice to contain actual function names, e.g., "length" rather than ".N"):
library(magrittr)   # for the %>% pipe used below
my.summary <-
  sapply(FunChoice, function(f) paste0(f, "(x)")) %>%
  paste(collapse = ", ") %>%
  sprintf("function(x) setNames(list(%s), FunChoice)", .) %>%
  parse(text = .) %>%
  eval()
my.summary
function(x) setNames(list(length(x), mean(x), sum(x)), FunChoice)
<environment: 0xe376640>
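As a quick sanity check of the generated function (assuming FunChoice <- c("length", "mean", "sum"), as in the benchmark below):
my.summary(1:4)
# $length
# [1] 4
#
# $mean
# [1] 2.5
#
# $sum
# [1] 10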
Alternatively, we can loop over the categories and rbind() the results afterwards:
library(magrittr) # used only to improve readability
lapply(dt[, unique(category)],
function(x) dt[category == x,
c(.(category = x), unlist(lapply(.SD, my.summary))),
.SDcols = ColChoice]) %>%
rbindlist()
Benchmark
So far, four data.table solutions and one dplyr solution have been posted. At least one of the answers claims to be "superfast". So I wanted to verify this with a benchmark with a varying number of rows:
library(data.table)
library(magrittr)
bm <- bench::press(
n = 10L^(2:6),
{
set.seed(12212018)
dt <- data.table(
index = 1:n,
category = sample(letters[1:25], n, replace = T),
c1 = rnorm(n, 10000),
c2 = rnorm(n, 1000),
c3 = rnorm(n, 100),
c4 = rnorm(n, 10)
)
# use set() instead of <<- for appending additional columns
for (i in 5:100) set(dt, , paste0("c", i), rnorm(n, 1000))
tables()
ColChoice <- c("c1", "c4")
FunChoice <- c("length", "mean", "sum")
my.summary <- function(x) list(length = length(x), mean = mean(x), sum = sum(x))
bench::mark(
unlist = {
dt[, unlist(lapply(.SD, my.summary), recursive = FALSE),
.SDcols = ColChoice, by = category]
},
loop_category = {
lapply(dt[, unique(category)],
function(x) dt[category == x,
c(.(category = x), unlist(lapply(.SD, my.summary))),
.SDcols = ColChoice]) %>%
rbindlist()
},
dcast = {
dcast(dt, category ~ 1, fun = list(length, mean, sum), value.var = ColChoice)
},
loop_col = {
lapply(ColChoice, function(col)
dt[, setNames(lapply(FunChoice, function(f) get(f)(get(col))),
paste0(col, "_", FunChoice)),
by=category]
) %>%
Reduce(function(x, y) merge(x, y, by="category"), .)
},
dplyr = {
dt %>%
dplyr::group_by(category) %>%
dplyr::summarise_at(dplyr::vars(ColChoice), .funs = setNames(FunChoice, FunChoice))
},
check = function(x, y)
all.equal(setDT(x)[order(category)],
setDT(y)[order(category)] %>%
setnames(stringr::str_replace(names(.), "_", ".")),
ignore.col.order = TRUE,
check.attributes = FALSE
)
)
}
)
The results are easier to compare when plotted:
library(ggplot2)
autoplot(bm)
Please note the logarithmic time scale.
For this test case, the unlist approach is always the fastest method, followed by dcast. dplyr is catching up for larger problem sizes n. Both lapply/loop approaches are less performant. In particular, Parfait's approach of looping over the columns and merging subresults afterwards seems to be rather sensitive to problem size n.
Edit: 2nd benchmark
As suggested by jangorecki, I have repeated the benchmark with many more rows and also with a varying number of groups.
Due to memory limitations, the largest problem size is 10 M rows times 102 columns which takes 7.7 GBytes of memory.
So, the first part of the benchmark code is modified to
bm <- bench::press(
n_grp = 10^(1:3),
n_row = 10L^seq(3, 7, by = 2),
{
set.seed(12212018)
dt <- data.table(
index = 1:n_row,
category = sample(n_grp, n_row, replace = TRUE),
c1 = rnorm(n_row),
c2 = rnorm(n_row),
c3 = rnorm(n_row),
c4 = rnorm(n_row, 10)
)
for (i in 5:100) set(dt, , paste0("c", i), rnorm(n_row, 1000))
tables()
...
As expected by jangorecki, some solutions are more sensitive to the number of groups than others. In particular, performance of loop_category degrades much more strongly with the number of groups, while dcast seems to be less affected. For fewer groups, the unlist approach is always faster than dcast, while for many groups dcast is faster. However, for larger problem sizes unlist seems to be ahead of dcast.
Edit 2019-03-12: Computing on the language, 3rd benchmark
Inspired by this follow-up question, I have added a computing-on-the-language approach where the whole expression is created as a character string, then parsed and evaluated.
The expression is created by
library(magrittr)
ColChoice <- c("c1", "c4")
FunChoice <- c("length", "mean", "sum")
my.expression <- CJ(ColChoice, FunChoice, sorted = FALSE)[
, sprintf("%s.%s = %s(%s)", V1, V2, V2, V1)] %>%
paste(collapse = ", ") %>%
sprintf("dt[, .(%s), by = category]", .) %>%
parse(text = .)
my.expression
expression(dt[, .(c1.length = length(c1), c1.mean = mean(c1), c1.sum = sum(c1),
c4.length = length(c4), c4.mean = mean(c4), c4.sum = sum(c4)), by = category])
This is then evaluated by
eval(my.expression)
which yields
category c1.length c1.mean c1.sum c4.length c4.mean c4.sum
1: f 3974 9999.987 39739947 3974 9.994220 39717.03
2: w 4033 10000.008 40330032 4033 10.004261 40347.19
3: n 4025 9999.981 40249924 4025 10.003686 40264.84
4: x 3975 10000.035 39750141 3975 10.010448 39791.53
5: k 3957 10000.019 39570074 3957 9.991886 39537.89
6: j 4027 10000.026 40270106 4027 10.015663 40333.07
...
I have modified the code of the 2nd benchmark to include this approach but had to reduce the additional columns from 100 to 25 in order to cope with the memory limitations of a much smaller PC. The chart shows that the "eval" approach is almost always the fastest or second fastest.
Here's a data.table answer:
funs_list <- lapply(FunChoice, as.symbol)
dcast(dt, category~1, fun=eval(funs_list), value.var = Colchoice)
It's super fast and does what you want.
Consider building a list of data tables where you iterate through every ColChoice element and apply each function of FunChoice (setting names accordingly). Then, to merge all data tables together, run merge in a Reduce call. Also, use get to retrieve environment objects (functions/columns).
Note: Colchoice was renamed to ColChoice for camel case, and the length function replaces .N to give a functional form for the count:
set.seed(12212018) # RUN BEFORE data.table() BUILD TO REPRODUCE OUTPUT
...
ColChoice <- c("c1", "c4")
FunChoice <- c("length", "mean", "sum")
output <- lapply(ColChoice, function(col)
dt[, setNames(lapply(FunChoice, function(f) get(f)(get(col))),
paste0(col, "_", FunChoice)),
by=category]
)
final_dt <- Reduce(function(x, y) merge(x, y, by="category"), output)
head(final_dt)
# category c1_length c1_mean c1_sum c4_length c4_mean c4_sum
# 1: a 3893 10000.001 38930003 3893 9.990517 38893.08
# 2: b 4021 10000.028 40210113 4021 9.977178 40118.23
# 3: c 3931 10000.008 39310030 3931 9.996538 39296.39
# 4: d 3954 10000.010 39540038 3954 10.004578 39558.10
# 5: e 4016 9999.998 40159992 4016 10.002131 40168.56
# 6: f 3974 9999.987 39739947 3974 9.994220 39717.03
It seems that there's not a straightforward answer using data.table since no one has answered this yet. So I'll propose a dplyr-based answer that should do what you want. I use the built-in iris data set for the example:
library(dplyr)
iris %>%
group_by(Species) %>%
summarise_at(vars(Sepal.Length, Sepal.Width), .funs = c(sum=sum,mean= mean), na.rm=TRUE)
## A tibble: 3 x 5
# Species Sepal.Length_sum Sepal.Width_sum Sepal.Length_mean Sepal.Width_mean
# <fct> <dbl> <dbl> <dbl> <dbl>
#1 setosa 245. 171. 5.00 3.43
#2 versicolor 297. 138. 5.94 2.77
#3 virginica 323. 149. 6.60 2.97
or using character vector input for the columns and functions:
Colchoice <- c("Sepal.Length", "Sepal.Width")
FunChoice <- c("mean", "sum")
iris %>%
group_by(Species) %>%
summarise_at(vars(Colchoice), .funs = setNames(FunChoice, FunChoice), na.rm=TRUE)
## A tibble: 3 x 5
# Species Sepal.Length_mean Sepal.Width_mean Sepal.Length_sum Sepal.Width_sum
# <fct> <dbl> <dbl> <dbl> <dbl>
#1 setosa 5.00 3.43 245. 171.
#2 versicolor 5.94 2.77 297. 138.
#3 virginica 6.60 2.97 323. 149.
If the summary statistics you need to compute are things like mean, .N, and (perhaps) median, which data.table optimizes into C code across the by groups, you may get faster performance by converting the table into long form, so that the computations are done in a way that data.table can optimize:
> library(data.table)
> n = 100000
> dt = data.table(index=1:100000,
category = sample(letters[1:25], n, replace = T),
c1=rnorm(n,10000),
c2=rnorm(n,1000),
c3=rnorm(n,100),
c4 = rnorm(n,10)
)
> {lapply(c(paste('c', 5:100, sep ='')), function(addcol) dt[[addcol]] <<- rnorm(n,1000) ); dt}
> Colchoice <- c("c1", "c4")
> dt[, .SD
][, c('index', 'category', Colchoice), with=F
][, melt(.SD, id.vars=c('index', 'category'))
][, mean := mean(value), .(category, variable)
][, median := median(value), .(category, variable)
][, N := .N, .(category, variable)
][, value := NULL
][, index := NULL
][, unique(.SD)
][, dcast(.SD, category ~ variable, value.var=c('mean', 'median', 'N'))
]
category mean_c1 mean_c4 median_c1 median_c4 N_c1 N_c4
1: a 10000 10.021 10000 10.041 4128 4128
2: b 10000 10.012 10000 10.003 3942 3942
3: c 10000 10.005 10000 9.999 3926 3926
4: d 10000 10.002 10000 10.007 4046 4046
5: e 10000 9.974 10000 9.993 4037 4037
6: f 10000 10.025 10000 10.015 4009 4009
7: g 10000 9.994 10000 9.998 4012 4012
8: h 10000 10.007 10000 9.986 3950 3950
...

group_by() and summarise() with all combinations (including non-existing combinations)

Basically, I want to have all unique combinations of columns i and j, expecting NA in column k where it is not available (similar to doing a group_by & summarise, but with all possible unique combinations).
Is there any existing function from the tidyverse or elsewhere that does the same thing as what I wrote below?
library(tidyverse)
df <- tibble(
i = c("a", "a", "b"),
j = c("x", "y", "x"),
k = c(100, 300, 20)
)
# I often write this chunk below after group_by & summarise
df %>%
spread(j, k) %>%
gather(j, k, -i)
You can use the complete() function from tidyr (part of the tidyverse):
http://tidyr.tidyverse.org/reference/complete.html
df %>% complete(i,j)
This gives you all the combinations of columns i and j.
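For the example data this should return (every i/j pair, with NA where k is missing):
# # A tibble: 4 x 3
#   i     j         k
#   <chr> <chr> <dbl>
# 1 a     x       100
# 2 a     y       300
# 3 b     x        20
# 4 b     y        NA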
As it seems that cross_join() is still a tidyverse feature request on GitHub, expand.grid(unique(df$i), unique(df$j), stringsAsFactors = FALSE) needs to be used to create a data.frame with all possible unique combinations:
df %>%
right_join(expand.grid(unique(df$i), unique(df$j), stringsAsFactors = FALSE),
by = c("i" = "Var1", "j" = "Var2"))
i j k
1 a x 100
2 b x 20
3 a y 300
4 b y NA
So, OP's approach using spread() and gather() appears to be more concise.
Personally, I do prefer the CJ() function from the data.table package:
library(data.table)
setDT(df)[CJ(i = i, j = j, unique = TRUE), on = .(i, j)]
i j k
1: a x 100
2: a y 300
3: b x 20
4: b y NA
CJ() can be used as a replacement for expand.grid() together with right_join():
df %>%
right_join(data.table::CJ(i = .$i, j = .$j, unique = TRUE))
i j k
1 a x 100
2 a y 300
3 b x 20
4 b y NA

Aggregating data frame rows using an input vector

I have this toy data.frame:
df = data.frame(id = c("a","b","c","d"), value = c(2,3,6,5))
and I'd like to aggregate its rows according to this toy vector:
collapsed.ids = c("a,b","c","d")
where the aggregated data.frame should keep max(df$value) of its aggregated rows.
So for this toy example the output would be:
> aggregated.df
id value
1 a,b 3
2 c 6
3 d 5
I should note that my real data.frame is ~150,000 rows
I would use data.table for this.
Something like the following should work:
library(data.table)
DT <- data.table(df, key = "id") # Main data.table
Key <- data.table(ind = collapsed.ids) # your "Key" table
## We need your "Key" table in a long form
Key <- Key[, list(id = unlist(strsplit(ind, ",", fixed = TRUE))), by = ind]
setkey(Key, id) # Set the key to facilitate a merge
## Merge and aggregate in one step
DT[Key][, list(value = max(value)), by = ind]
# ind value
# 1: a,b 3
# 2: c 6
# 3: d 5
You don't need data.table; you can just use base R.
split.ids <- strsplit(collapsed.ids, ",")
split.df <- data.frame(id = unlist(split.ids),
                       joinid = rep(collapsed.ids, sapply(split.ids, length)))
aggregated.df <- aggregate(value ~ joinid, data = merge(df, split.df), max)
Result:
#   joinid value
# 1    a,b     3
# 2      c     6
# 3      d     5
Benchmark
df <- df[rep(1:4, 50000), ] # Make a big data.frame
system.time(...) # of the above code
# user system elapsed
# 1.700 0.154 1.947
EDIT: Apparently Ananda's code runs in 0.039, so I'm eating crow. But either is acceptable for this size.

reshape wide to long with character suffixes instead of numeric suffixes

Inspired by a comment from @gsk3 on a question about reshaping data, I started doing a little bit of experimentation with reshaping data where the variable names have character suffixes instead of numeric suffixes.
As an example, I'll load the dadmomw dataset from one of the UCLA ATS Stata learning webpages (see "Example 4" on the webpage).
Here's what the dataset looks like:
library(foreign)
dadmom <- read.dta("https://stats.idre.ucla.edu/stat/stata/modules/dadmomw.dat")
dadmom
# famid named incd namem incm
# 1 1 Bill 30000 Bess 15000
# 2 2 Art 22000 Amy 18000
# 3 3 Paul 25000 Pat 50000
When trying to reshape from this wide format to long, I run into a problem. Here's what I do to reshape the data.
reshape(dadmom, direction="long", idvar=1, varying=2:5,
sep="", v.names=c("name", "inc"), timevar="dadmom",
times=c("d", "m"))
# famid dadmom name inc
# 1.d 1 d 30000 Bill
# 2.d 2 d 22000 Art
# 3.d 3 d 25000 Paul
# 1.m 1 m 15000 Bess
# 2.m 2 m 18000 Amy
# 3.m 3 m 50000 Pat
Note the swapped column names for "name" and "inc"; changing v.names to c("inc", "name") doesn't solve the problem.
reshape seems very picky about wanting the columns to be named in a fairly standard way. For example, I can reshape the data correctly (and easily) if I first rename the columns:
dadmom2 <- dadmom # Just so we can continue experimenting with the original data
# Change the names of the last four variables to include a "."
names(dadmom2)[2:5] <- gsub("(d$|m$)", "\\.\\1", names(dadmom2)[2:5])
reshape(dadmom2, direction="long", idvar=1, varying=2:5,
timevar="dadmom")
# famid dadmom name inc
# 1.d 1 d Bill 30000
# 2.d 2 d Art 22000
# 3.d 3 d Paul 25000
# 1.m 1 m Bess 15000
# 2.m 2 m Amy 18000
# 3.m 3 m Pat 50000
My questions are:
Why is R swapping the columns in the example I've provided?
Can I get to this result with base R reshape without changing the variable names before reshaping?
Are there other approaches that could be considered instead of reshape?
This works (specifying to varying which columns go together):
reshape(dadmom, direction="long", varying=list(c(2, 4), c(3, 5)),
sep="", v.names=c("name", "inc"), timevar="dadmom",
times=c("d", "m"))
So you actually have nested repeated measures here; both name and inc for mom and dad. Because you have more than one series of repeated measures you have to supply a list to varying that tells reshape which group gets stacked on the other group.
So the two approaches to this problem are to provide a list as I did or to rename the columns the way the R beast likes them as you did.
See my recent blogs on base reshape for more on this (particularly the second link deals with this):
reshape (part I)
reshape (part II)
Though this question was specifically about base R, it is useful to know other approaches that help you to achieve the same type of outcome.
One alternative to reshape or merged.stack would be to use a combination of "dplyr" and "tidyr", like this:
dadmom %>%
gather(variable, value, -famid) %>% ## Make the entire dataset long
separate(variable, into = c("var", "time"), ## Split "variable" column into two...
sep = "(?<=name|inc)", perl = TRUE) %>% ## ... using regex to split the values
spread(var, value, convert = TRUE) ## Make result wide, converting type
# famid time inc name
# 1 1 d 30000 Bill
# 2 1 m 15000 Bess
# 3 2 d 22000 Art
# 4 2 m 18000 Amy
# 5 3 d 25000 Paul
# 6 3 m 50000 Pat
Another alternative would be to use melt from "data.table", like this:
library(data.table)
melt(as.data.table(dadmom), ## melt here requires a data.table
measure = patterns("name", "inc"), ## identify columns by patterns
value.name = c("name", "inc"))[ ## specify the resulting variable names
## melt creates a numeric "variable" value. Replace with factored labels
, variable := factor(variable, labels = c("d", "m"))][]
# famid variable name inc
# 1: 1 d Bill 30000
# 2: 2 d Art 22000
# 3: 3 d Paul 25000
# 4: 1 m Bess 15000
# 5: 2 m Amy 18000
# 6: 3 m Pat 50000
How do these approaches compare with merged.stack?
Both packages are much better supported. They update and test their code more extensively than I do.
melt is blazing fast.
The Hadleyverse approach is actually slower (in many of my tests, even slower than base R's reshape) probably because of having to make the data long, then wide, then performing type conversion. However, some users like its step-by-step approach.
The Hadleyverse approach might have some unintended consequences because of the requirement of making the data long before making it wide. That forces all of the measure columns to be coerced to the same type (usually "character") if they are of different types to begin with.
Neither have the same convenience of merged.stack. Just look at the code required to get the result ;-)
merged.stack, however, can probably benefit from a simplified update, something along the lines of this function
ReshapeLong_ <- function(indt, stubs, sep = NULL) {
if (!is.data.table(indt)) indt <- as.data.table(indt)
mv <- lapply(stubs, function(y) grep(sprintf("^%s", y), names(indt)))
levs <- unique(gsub(paste(stubs, collapse="|"), "", names(indt)[unlist(mv)]))
if (!is.null(sep)) levs <- gsub(sprintf("^%s", sep), "", levs, fixed = TRUE)
melt(indt, measure = mv, value.name = stubs)[
, variable := factor(variable, labels = levs)][]
}
Which can then be used as:
ReshapeLong_(dadmom, stubs = c("name", "inc"))
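For the dadmom data this should return the same long layout as the melt() approach above:
#    famid variable name   inc
# 1:     1        d Bill 30000
# 2:     2        d  Art 22000
# 3:     3        d Paul 25000
# 4:     1        m Bess 15000
# 5:     2        m  Amy 18000
# 6:     3        m  Pat 50000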
How do these approaches compare with base R's reshape?
The main difference is that reshape is not able to handle unbalanced panel datasets. See, for example, "mydf2" as opposed to "mydf" in the tests below.
Test cases
Here's some sample data. "mydf" is balanced. "mydf2" is not balanced.
set.seed(1)
x <- 10000
mydf <- mydf2 <- data.frame(
id_1 = 1:x, id_2 = c("A", "B"), varAa = sample(letters, x, TRUE),
varAb = sample(letters, x, TRUE), varAc = sample(letters, x, TRUE),
varBa = sample(10, x, TRUE), varBb = sample(10, x, TRUE),
varBc = sample(10, x, TRUE), varCa = rnorm(x), varCb = rnorm(x),
varCc = rnorm(x), varDa = rnorm(x), varDb = rnorm(x), varDc = rnorm(x))
mydf2 <- mydf2[-c(9, 14)] ## Make data unbalanced
Here are some functions to test:
f1 <- function(mydf) {
mydf %>%
gather(variable, value, starts_with("var")) %>%
separate(variable, into = c("var", "time"),
sep = "(?<=varA|varB|varC|varD)", perl = TRUE) %>%
spread(var, value, convert = TRUE)
}
f2 <- function(mydf) {
melt(as.data.table(mydf),
measure = patterns(paste0("var", c("A", "B", "C", "D"))),
value.name = paste0("var", c("A", "B", "C", "D")))[
, variable := factor(variable, labels = c("a", "b", "c"))][]
}
f3 <- function(mydf) {
merged.stack(mydf, var.stubs = paste0("var", c("A", "B", "C", "D")), sep = "var.stubs")
}
## Won't run with "mydf2". Should run with "mydf"
f4 <- function(mydf) {
reshape(mydf, direction = "long",
varying = lapply(c("varA", "varB", "varC", "varD"),
function(x) grep(x, names(mydf))),
sep = "", v.names = paste0("var", c("A", "B", "C", "D")),
timevar="time", times = c("a", "b", "c"))
}
Test performance:
library(microbenchmark)
microbenchmark(f1(mydf), f2(mydf), f3(mydf), f4(mydf))
# Unit: milliseconds
# expr min lq mean median uq max neval
# f1(mydf) 463.006547 492.073086 528.533319 514.189548 538.910756 867.93356 100
# f2(mydf) 3.737321 4.108376 6.674066 4.332391 4.761681 47.71142 100
# f3(mydf) 60.211254 64.766770 86.812077 87.040087 92.841747 262.89409 100
# f4(mydf) 40.596455 43.753431 61.006337 48.963145 69.983623 230.48449 100
Observations:
Base R's reshape would not be able to handle reshaping "mydf2".
The "dplyr" + "tidyr" approach would mangle the results in the resulting "varB", "varC", and "varD" because values would be coerced to character.
As the benchmarks show, reshape gives reasonable performance.
Note: Because of the difference in time between posting my last answer and the differences in approach, I thought I would share this as a new answer.
merged.stack from my "splitstackshape" package handles this by utilizing the sep = "var.stubs" construct:
library(splitstackshape)
merged.stack(dadmom, var.stubs = c("inc", "name"), sep = "var.stubs")
# famid .time_1 inc name
# 1: 1 d 30000 Bill
# 2: 1 m 15000 Bess
# 3: 2 d 22000 Art
# 4: 2 m 18000 Amy
# 5: 3 d 25000 Paul
# 6: 3 m 50000 Pat
Notice that since there is no real separator in the variables that are being stacked, we can just strip out the var.stubs from the names to create the "time" variables. Using sep = "var.stubs" is equivalent to doing sep = "inc|name".
This works because ".time_1" is created by stripping out what is left after removing the "var.stubs" from the column names.

Resources