Creating wide data that has only 1 ID column [duplicate]

This question already has answers here:
How to reshape data from long to wide format
(14 answers)
Closed 3 years ago.
I have a data frame that looks like this:
ID Code_Type Code date
 1        10    4    1
 1         9    5    2
 2        10    6    3
 2         9    7    4
and I would like it to look like this:
ID date.1 date.2 9 10
 1      1      2 5  4
 2      3      4 7  6
where the different dates get their own columns on the same row.
My current code is this:
# Example df
df <- data.frame(ID = c(1, 1, 2, 2),
                 Code_Type = c(10, 9, 10, 9),
                 Code = c(4, 5, 6, 7),
                 date = c(1, 2, 3, 4))

library(tidyr)
spread(df, Code_Type, Code)
This outputs:
ID date  9 10
 1    1 NA  4
 1    2  5 NA
 2    3 NA  6
 2    4  7 NA
which is similar to what I want; I just have no idea how to turn the date column into multiple columns. Any help or extra reading is appreciated.
To clarify, this is my expected output data frame:
ID date.1 date.2 9 10
 1      1      2 5  4
 2      3      4 7  6

You could use reshape from base R.
reshape(dat, idvar=c("ID"), timevar="Code_Type", direction="wide")
# ID Code.10 date.10 Code.9 date.9
# 1 1 4 1 5 2
# 3 2 6 3 7 4
Data
dat <- structure(list(ID = c(1, 1, 2, 2), Code_Type = c(10, 9, 10, 9),
                      Code = c(4, 5, 6, 7), date = c(1, 2, 3, 4)),
                 class = "data.frame", row.names = c(NA, -4L))
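If you need the exact column names from the question, a positional rename of the reshape output is one option (a sketch, assuming the column order shown above):
out <- reshape(dat, idvar = "ID", timevar = "Code_Type", direction = "wide")
# rename positionally: Code.10 -> 10, date.10 -> date.1, Code.9 -> 9, date.9 -> date.2
names(out) <- c("ID", "10", "date.1", "9", "date.2")
out[, c("ID", "date.1", "date.2", "9", "10")]
#   ID date.1 date.2 9 10
# 1  1      1      2 5  4
# 3  2      3      4 7  6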

Here's a dplyr / tidyr alternative:
library(dplyr)
library(tidyr)

df %>%
  mutate(date.1 = date %% 2 * date) %>%
  mutate(date.2 = -(date %% 2 - 1) * date) %>%
  select(-date) %>%
  spread(Code_Type, Code) %>%
  group_by(ID) %>%
  summarise_all(list(~ sum(.[!is.na(.)])))
# A tibble: 2 x 5
#      ID date.1 date.2   `9`  `10`
#   <dbl>  <dbl>  <dbl> <dbl> <dbl>
# 1     1      1      2     5     4
# 2     2      3      4     7     6
The idea is to split the date column into two columns according to whether date is odd or even. This is done with the modulo operator (%%) and some additional number crunching: date.1 = date %% 2 * date catches the odd values of date and is 0 for all others; date.2 = -(date %% 2 - 1) * date catches the even values and is 0 for all others.
Afterwards it's straightforward: select all columns but date, spread to wide format and, a bit tricky again, summarise by ID while dropping the NAs (group_by(ID) %>% summarise_all(list(~ sum(.[!is.na(.)])))).
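With spread()'s successor pivot_wider() (tidyr >= 1.0.0) you can avoid the modulo trick: widen the codes and the dates separately and join the two pieces. A sketch (the obs helper column is my own addition, and the code columns may come out in a different order than the expected output):
library(dplyr)
library(tidyr)

# one row per ID holding the codes, named by Code_Type
codes <- pivot_wider(df, id_cols = ID, names_from = Code_Type, values_from = Code)

# number the observations within each ID, then widen the dates
dates <- df %>%
  group_by(ID) %>%
  mutate(obs = row_number()) %>%
  ungroup() %>%
  pivot_wider(id_cols = ID, names_from = obs, values_from = date,
              names_prefix = "date.")

left_join(dates, codes, by = "ID")
# A tibble: 2 x 5
#      ID date.1 date.2  `10`   `9`
#   <dbl>  <dbl>  <dbl> <dbl> <dbl>
# 1     1      1      2     4     5
# 2     2      3      4     6     7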


Cast multiple values in R [duplicate]

This question already has answers here:
Convert data from long format to wide format with multiple measure columns
(6 answers)
Closed 1 year ago.
Is there a way to cast multiple values in R?
asd <- data.frame(week = c(1,1,2,2), year = c("2019","2020","2019","2020"), val = c(1,2,3,4), cap = c(3,4,6,7))
Expected output
week 2019_val 2020_val 2019_cap 2020_cap
   1        1        2        3        4
   2        3        4        6        7
If you want to do this in base R, you can use reshape:
reshape(asd, direction = "wide", idvar = "week", timevar = "year", sep = "_")
#> week val_2019 cap_2019 val_2020 cap_2020
#> 1 1 1 3 2 4
#> 3 2 3 6 4 7
Note that it is best not to start your new column names with the year, since variable names beginning with numbers are not syntactic in R and therefore always need to be quoted. It becomes quite tiresome to write asd$`2020_val` rather than asd$val_2020, and it can easily lead to errors when one forgets the backticks.
With tidyr::pivot_wider you could do:
asd <- data.frame(week = c(1,1,2,2), year = c("2019","2020","2019","2020"), val = c(1,2,3,4), cap = c(3,4,6,7))
tidyr::pivot_wider(asd, names_from = year, values_from = c(val, cap),
                   names_glue = "{year}_{.value}")
#> # A tibble: 2 × 5
#> week `2019_val` `2020_val` `2019_cap` `2020_cap`
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1 2 3 4
#> 2 2 3 4 6 7
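Following the naming advice above: if you simply drop names_glue, pivot_wider's default naming already puts the value name first, giving syntactic column names:
tidyr::pivot_wider(asd, names_from = year, values_from = c(val, cap))
#> # A tibble: 2 × 5
#>    week val_2019 val_2020 cap_2019 cap_2020
#>   <dbl>    <dbl>    <dbl>    <dbl>    <dbl>
#> 1     1        1        2        3        4
#> 2     2        3        4        6        7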
For completeness, here is a data.table option:
library(data.table)
dcast(setDT(asd), week~year, value.var = c('val', 'cap'))
# week val_2019 val_2020 cap_2019 cap_2020
#1: 1 1 2 3 4
#2: 2 3 4 6 7
Slightly different approach using pivot_longer and pivot_wider together:
library(tidyr)
library(dplyr)
asd %>%
  pivot_longer(cols = -c(week, year)) %>%
  pivot_wider(names_from = c(year, name))
week `2019_val` `2019_cap` `2020_val` `2020_cap`
<dbl> <dbl> <dbl> <dbl> <dbl>
1 1 1 3 2 4
2 2 3 6 4 7

Increase the value in the next row by 1 if two values are equal

I have data like this:
df <- structure(list(id = c(1, 1, 1, 2, 2, 2), time = c(1, 2, 2, 5, 6, 6)),
                class = "data.frame", row.names = c(NA, -6L))
If, for the same ID, the value in the next row is equal to the value in the previous row, I want to increase the duplicate by 1, to get this:
structure(list(id2 = c(1, 1, 1, 2, 2, 2), time2 = c(1, 2, 3, 5, 6, 7)),
          class = "data.frame", row.names = c(NA, -6L))
Using base R:
ave(df$time, df$id, FUN = function(z) z + cumsum(duplicated(z)))
# [1] 1 2 3 5 6 7
(This can be reassigned back into time.)
This handles 2 or more duplicates, meaning that if we append another copy of the 6th row,
df <- rbind(df, df[6, ])
df$time2 <- ave(df$time, df$id, FUN = function(z) z + cumsum(duplicated(z)))
df
# id time time2
# 1 1 1 1
# 2 1 2 2
# 3 1 2 3
# 4 2 5 5
# 5 2 6 6
# 6 2 6 7
# 61 2 6 8
You could use accumulate from purrr (loaded with the tidyverse):
library(tidyverse)
df %>%
  group_by(id) %>%
  mutate(time2 = accumulate(time, ~ if (.x >= .y) .x + 1 else .y))
# A tibble: 6 x 3
# Groups: id [2]
id time time2
<dbl> <dbl> <dbl>
1 1 1 1
2 1 2 2
3 1 2 3
4 2 5 5
5 2 6 6
6 2 6 7
This works even if the group is repeated more than twice.
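For reference, the same accumulation can be written in base R by combining ave() with Reduce(accumulate = TRUE) (a direct translation of the logic above):
df$time2 <- ave(df$time, df$id, FUN = function(z) {
  # carry the running value forward, bumping it whenever the next value
  # is not strictly larger than the current one
  Reduce(function(x, y) if (x >= y) x + 1 else y, z, accumulate = TRUE)
})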
If the first data.frame is named df, this gives you what you need:
idx <- duplicated(df$id) & duplicated(df$time)
df$time[idx] <- df$time[idx] + 1
df
id time
1 1 1
2 1 2
3 1 3
4 2 5
5 2 6
6 2 7
It finds the rows where both id and time are duplicated, and adds 1 to time in those rows. Note that this only handles a single duplicate per value; for runs of three or more equal times, use one of the cumulative approaches above.
You can use dplyr's mutate with cumsum and duplicated:
df %>%
  group_by(id) %>%
  mutate(time = time + cumsum(duplicated(time))) %>%
  ungroup()
# A tibble: 6 x 2
id time
<dbl> <dbl>
1 1 1
2 1 2
3 1 3
4 2 5
5 2 6
6 2 7

how to separate 2 numbers within a column in R

In R, I want to separate numbers that are in the same column. My data appear like this:
id time
1 1,2
2 3,4
3 4,5,6
I want it to appear like this:
1 1
1 2
2 3
2 4
3 4
3 5
3 6
Though not shown above, the number of time values varies from id to id. For example:
4 1,6,7
5 1,3,6
6 1,4,5
7 1,3,5
8 2,3,4
There are 100 ids, and the time column contains a varying number of comma-separated values, as shown above.
Does anyone have advice on how to do this?
An option is separate_rows. Note that this uses the numeric version of the data below (e.g. time = 12), splitting between every pair of characters with a lookaround regex:
library(dplyr)
library(tidyr)
df %>%
  separate_rows(time, sep = "(?<=.)(?=.)", convert = TRUE)
# A tibble: 4 x 2
# id time
# <dbl> <int>
#1 1 1
#2 1 2
#3 2 3
#4 2 4
data
df <- structure(list(id = c(1, 2), time = c(12, 34)),
                class = "data.frame", row.names = c(NA, -2L))
Using the tidyverse you could try the following. Make sure time is of character type, and use strsplit to split each entry on the commas.
library(tidyverse)
df %>%
  mutate(time = strsplit(as.character(time), ",")) %>%
  unnest(cols = time)
Or you can just use separate_rows and indicate comma as separator:
df %>%
separate_rows(time, sep = ',')
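If time should end up numeric rather than character, separate_rows() can also convert the pieces for you:
df %>%
  separate_rows(time, sep = ',', convert = TRUE)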
Or in base R you could try this:
s <- strsplit(df$time, ',', fixed = TRUE)
data.frame(id = rep(df$id, lengths(s)), time = unlist(s))
Output
# A tibble: 10 x 2
id time
<int> <chr>
1 1 1
2 1 2
3 2 3
4 2 4
5 3 4
6 3 5
7 3 6
8 4 1
9 4 6
10 4 7
Data
df <- structure(list(id = 1:4, time = c("1,2", "3,4", "4,5,6", "1,6,7")),
                class = "data.frame", row.names = c(NA, -4L))

R: Producing frequency table by selecting certain rows

I have a minimal example of a data set D that looks something like:
score person freq
10 1 3
10 2 5
10 3 4
8 1 3
7 2 2
6 4 1
Now, I want to be able to plot frequency of score=10 against person.
However, if I do:
# My bad: my original attempt D[which(D[,1] == 10)] was missing the comma
# needed for row subsetting; D[which(D[,1] == 10), ] would also work.
D = subset(D, score == 10)
then I get:
score person freq
10 1 3
10 2 5
10 3 4
However, this is what I would like to get:
score person freq
10 1 3
10 2 5
10 3 4
10 4 0
Is there any quick and painless way for me to do this in R?
Here's a base R approach:
subset(as.data.frame(xtabs(freq ~ score + person, D)), score == 10)
# score person Freq
#4 10 1 3
#8 10 2 5
#12 10 3 4
#16 10 4 0
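Another base R route along the same lines: build every score/person combination with expand.grid(), merge, and fill the gaps (a sketch):
full <- expand.grid(score = unique(D$score), person = unique(D$person))
out <- merge(full, D, all.x = TRUE)   # missing combinations get freq = NA
out$freq[is.na(out$freq)] <- 0
subset(out, score == 10)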
You can use complete() from the tidyr package to create the missing rows and then you can simply subset:
library(tidyr)
D2 <- complete(D, score, person, fill = list(freq = 0))
D2[D2$score == 10, ]
## Source: local data frame [4 x 3]
##
## score person freq
## (int) (int) (dbl)
## 1 10 1 3
## 2 10 2 5
## 3 10 3 4
## 4 10 4 0
complete() takes as the first argument the data frame that it should work with. Then follow the names of the columns that should be completed. The argument fill is a list that gives for each of the remaining columns (which is only freq here) the value they should be filled with.
As suggested by docendo-discimus, this can be further simplified by also using the dplyr package:
library(tidyr)
library(dplyr)
complete(D, score, person, fill = list(freq = 0)) %>% filter(score == 10)
Here is a dplyr approach:
D %>%
  mutate(freq = ifelse(score == 10, freq, 0),
         score = 10) %>%
  group_by(score, person) %>%
  summarise(freq = max(freq))
Source: local data frame [4 x 3]
Groups: score [?]
score person freq
(dbl) (int) (dbl)
1 10 1 3
2 10 2 5
3 10 3 4
4 10 4 0

dplyr: grouping and summarizing/mutating data with rolling time windows

I have irregular timeseries data representing a certain type of transaction for users. Each line of data is timestamped and represents a transaction at that time. Given the irregular nature of the data, some users might have 100 rows in a day while other users might have 0 or 1 transactions in a day.
The data might look something like this:
data.frame(
id = c(1, 1, 1, 1, 1, 2, 2, 3, 4),
date = c("2015-01-01",
"2015-01-01",
"2015-01-05",
"2015-01-25",
"2015-02-15",
"2015-05-05",
"2015-01-01",
"2015-08-01",
"2015-01-01"),
n_widgets = c(1,2,3,4,4,5,2,4,5)
)
id date n_widgets
1 1 2015-01-01 1
2 1 2015-01-01 2
3 1 2015-01-05 3
4 1 2015-01-25 4
5 1 2015-02-15 4
6 2 2015-05-05 5
7 2 2015-01-01 2
8 3 2015-08-01 4
9 4 2015-01-01 5
Often I'd like to know some rolling statistics about users. For example: for this user on a certain day, how many transactions occurred in the previous 30 days, how many widgets were sold in the previous 30 days etc.
Corresponding to the above example, the data should look like:
id date n_widgets n_trans_30 total_widgets_30
1 1 2015-01-01 1 1 1
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
If the time window is daily then the solution is simple: data %>% group_by(id, date) %>% summarize(...)
Similarly if the time window is monthly this is also relatively simple with lubridate: data %>% group_by(id, year(date), month(date)) %>% summarize(...)
However, the challenge I'm having is how to set up a time window for an arbitrary period: 5 days, 10 days, etc.
There's also the RcppRoll library, but both RcppRoll and the rolling functions in zoo seem more set up for regular time series. As far as I can tell these window functions work on a fixed number of rows rather than a specified time period -- the key difference being that a given time period can contain a different number of rows depending on the date and the user.
For example, it's possible for user 1, that the number of transactions in the 5 days previous of 2015-01-01 is equal to 100 transactions and for the same user the number of transactions in the 5 days previous of 2015-02-01 is equal to 5 transactions. Thus looking back a set number of rows will simply not work.
Additionally, there is another SO thread discussing rolling dates for irregular time series type data (Create new column based on condition that exists within a rolling date) however the accepted solution was using data.table and I'm specifically looking for a dplyr way of achieving this.
At the heart of it, I suppose this problem can be solved by answering one question: how can I group_by arbitrary time periods in dplyr? Alternatively, if there's a different dplyr way to achieve the above without a complicated group_by, what is it?
EDIT: updated the example to make the nature of the rolling window clearer.
This can be done using SQL:
library(sqldf)
dd <- transform(data, date = as.Date(date))
sqldf("select a.*, count(*) n_trans30, sum(b.n_widgets) 'total_widgets30'
from dd a
left join dd b on b.date between a.date - 30 and a.date
and b.id = a.id
and b.rowid <= a.rowid
group by a.rowid")
giving:
  id       date n_widgets n_trans30 total_widgets30
1  1 2015-01-01         1         1               1
2  1 2015-01-01         2         2               3
3  1 2015-01-05         3         3               6
4  1 2015-01-25         4         4              10
5  1 2015-02-15         4         2               8
6  2 2015-05-05         5         1               5
7  2 2015-01-01         2         1               2
8  3 2015-08-01         4         1               4
9  4 2015-01-01         5         1               5
Another approach is to expand your dataset to contain all possible days (using tidyr::complete), then use a rolling function (RcppRoll::roll_sum)
The fact that you have multiple observations per day is probably creating an issue though...
library(dplyr)
library(tidyr)
library(RcppRoll)

df2 <- df %>%
  mutate(date = as.Date(date))

## create the full dataset with all possible dates (going 30 days back for the first observation)
df_full <- df2 %>%
  complete(id,
           date = seq(from = min(.$date) - 30, to = max(.$date), by = 1),
           fill = list(n_widgets = 0))

## now use the rolling function, and keep only the original rows (right join)
df_roll <- df_full %>%
  group_by(id) %>%
  mutate(n_trans_30 = roll_sum(x = n_widgets != 0, n = 30, fill = 0, align = "right"),
         total_widgets_30 = roll_sum(x = n_widgets, n = 30, fill = 0, align = "right")) %>%
  ungroup() %>%
  right_join(df2, by = c("date", "id", "n_widgets"))
The result is the same as yours (by chance)
id date n_widgets n_trans_30 total_widgets_30
<dbl> <date> <dbl> <dbl> <dbl>
1 1 2015-01-01 1 1 1
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
But as noted, this will fail whenever an id has more than one row per day, since roll_sum counts the last 30 observations, not the last 30 days. So you might want to first summarise the information by day, then apply the above (see the sketch below).
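That per-day pre-aggregation could look like this, reusing df2 from above (a sketch; the .groups argument assumes dplyr >= 1.0):
daily <- df2 %>%
  group_by(id, date) %>%
  summarise(n_trans = n(),                 # transactions that day
            n_widgets = sum(n_widgets),    # widgets that day
            .groups = "drop")
Running the complete()/roll_sum() steps on daily (rolling over n_trans for the transaction count) then gives a true 30-day window, since each id has at most one row per day.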
You can try something like this for up to 5 days:
df %>%
  arrange(id, date) %>%
  group_by(id) %>%
  filter(as.numeric(difftime(Sys.Date(), date, units = 'days')) <= 5) %>%
  summarise(n_total_widgets = sum(n_widgets))
In this case, no dates fall within five days of the current date, so it returns an empty result.
To get last five days for each ID, you can do something like this:
df %>%
  arrange(id, date) %>%
  group_by(id) %>%
  filter(as.numeric(difftime(max(date), date, units = 'days')) <= 5) %>%
  summarise(n_total_widgets = sum(n_widgets))
Resulting output will be:
Source: local data frame [4 x 2]
id n_total_widgets
(dbl) (dbl)
1 1 4
2 2 5
3 3 4
4 4 5
I found a way to do this while working on this question
df <- data.frame(
id = c(1, 1, 1, 1, 1, 2, 2, 3, 4),
date = c("2015-01-01",
"2015-01-01",
"2015-01-05",
"2015-01-25",
"2015-02-15",
"2015-05-05",
"2015-01-01",
"2015-08-01",
"2015-01-01"),
n_widgets = c(1,2,3,4,4,5,2,4,5)
)
library(dplyr)
library(lubridate)

count_window <- function(df, date2, w, id2) {
  min_date <- date2 - w
  df2 <- df %>% filter(id == id2, date >= min_date, date <= date2)
  length(df2$date)
}
v_count_window <- Vectorize(count_window, vectorize.args = c("date2", "id2"))

sum_window <- function(df, date2, w, id2) {
  min_date <- date2 - w
  df2 <- df %>% filter(id == id2, date >= min_date, date <= date2)
  sum(df2$n_widgets)
}
v_sum_window <- Vectorize(sum_window, vectorize.args = c("date2", "id2"))

res <- df %>%
  mutate(date = ymd(date)) %>%
  mutate(min_date = date - 30,
         n_trans = v_count_window(., date, 30, id),
         total_widgets = v_sum_window(., date, 30, id)) %>%
  select(id, date, n_widgets, n_trans, total_widgets)
res
id date n_widgets n_trans total_widgets
1 1 2015-01-01 1 2 3
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
This version is fairly case specific (note that the two same-day rows for id 1 both get the full day's totals, unlike the row-wise expected output above), but you could probably make a more general version of the functions.
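One way to generalise is to collapse the two helpers into a single function that takes the summary as an argument (a sketch along the same lines; f is passed through Vectorize untouched):
window_stat <- function(df, date2, w, id2, f) {
  # subset to this id's rows inside the window, then apply the summary
  df2 <- df %>% filter(id == id2, date >= date2 - w, date <= date2)
  f(df2)
}
v_window_stat <- Vectorize(window_stat, vectorize.args = c("date2", "id2"))

# e.g. f = nrow for the transaction count,
#      f = function(d) sum(d$n_widgets) for the widget total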
For simplicity I recommend the runner package, which handles sliding-window operations. In the OP's request the window size is k = 30 and the windows depend on the date (idx = date). You can use the runner() function, which applies any R function over a given window, or a built-in like sum_run():
library(runner)
library(dplyr)
df %>%
  mutate(date = as.Date(date)) %>%
  group_by(id) %>%
  arrange(date, .by_group = TRUE) %>%
  mutate(
    n_trans30 = runner(n_widgets, k = 30, idx = date, function(x) length(x)),
    n_widgets30 = sum_run(n_widgets, k = 30, idx = date)
  )
# id date n_widgets n_trans30 n_widgets30
#<dbl> <date> <dbl> <dbl> <dbl>
# 1 2015-01-01 1 1 1
# 1 2015-01-01 2 2 3
# 1 2015-01-05 3 3 6
# 1 2015-01-25 4 4 10
# 1 2015-02-15 4 2 8
# 2 2015-01-01 2 1 2
# 2 2015-05-05 5 1 5
# 3 2015-08-01 4 1 4
# 4 2015-01-01 5 1 5
Important: idx = date should be in ascending order.
For more, see the package documentation and vignettes.
