Gather ragged data frame into key-value columns - r

I recently discovered how to create ragged data frames using the I function, but I am having a hard time integrating them with tidyr, ggplot2 and the rest of the Hadleyverse. More specifically, how do you gather a column containing named vectors into key-value columns?
Suppose I create a data frame like this
make.vector <- function(length.out){
  x <- sample(9, length.out)
  names(x) <- switch(length.out,
                     "Alice",
                     c("Bob", "Charlie"),
                     c("Dave", "Erin", "Frank"),
                     c("Gwen", "Harold", "Inez", "James"))
  x
}
mydf <- data.frame(Game = gl(3, 3, labels = LETTERS[1:3]),
                   Set = rep(1:3, 3),
                   Score = I(lapply(rep(2:4, each = 3), make.vector)))
producing
> print(mydf)
Game Set Score
1 A 1 8, 3
2 A 2 2, 8
3 A 3 3, 8
4 B 1 1, 5, 4
5 B 2 2, 3, 5
6 B 3 2, 8, 5
7 C 1 7, 2, 3, 4
8 C 2 1, 6, 3, 7
9 C 3 6, 9, 3, 7
The data frame can be manipulated with dplyr and tidyr in a straightforward manner as long as the results are of the expected length.
mydf %>%
  mutate(nPlayers = sapply(Score, length))

mydf %>%
  group_by(Game) %>%
  summarize(TotalScore = list(Reduce("+", Score)))
However, I cannot figure out how to create multiple result rows for each original row. Suppose I want to create the following data frame by manipulating mydf:
Game Set Player Score
1 A 1 Bob 8
2 A 1 Charlie 3
3 A 2 Bob 2
4 A 2 Charlie 8
5 A 3 Bob 3
6 A 3 Charlie 8
7 B 1 Dave 1
8 B 1 Erin 5
9 B 1 Frank 4
10 B 2 Dave 2
...
The only tool I know for doing so would be the gather function of the tidyr package, but it doesn't seem to play very well with non-atomic data.
mydf %>%
  mutate(Player = lapply(Score, names)) %>%
  gather(P = Player, S = Score)
I guess I could hack together a solution (as done in similar previous questions [1][2]),
cbind(
  mydf[rep(1:nrow(mydf), sapply(mydf$Score, length)),
       c("Game", "Set")],
  data.frame(
    Player = unlist(lapply(mydf$Score, names)),
    Score = unlist(mydf$Score)
  )
)
but I have a feeling I will have a hard time digesting it if I look back at the code next week. Is there an "official" or at least smarter way to do this? Otherwise I'll make a general function for it and add it to my personal library.
Update
In light of David's answer below, I figured out that the same result can be achieved with dplyr too.
mydf %>%
  group_by(Game, Set) %>%
  do(with(., data.frame(Player = names(unlist(Score)),
                        Score = unlist(Score))))
# Game Set Player Score
# 1 A 1 Bob 8
# 2 A 1 Charlie 6
# 3 A 2 Bob 7
# 4 A 2 Charlie 6
# 5 A 3 Bob 5
# 6 A 3 Charlie 8
# 7 B 1 Dave 1
# 8 B 1 Erin 9
# 9 B 1 Frank 3
# 10 B 2 Dave 8
# .. ... ... ... ...
# Warning message:
# In rbind_all(out[[1]]) : Unequal factor levels: coercing to character

I would try unlisting by group using data.table. That way you unlist Score only once per group, storing the result in a temporary variable inside the j expression using curly braces (as you would within a function):
library(data.table)
setDT(mydf)[, {
  temp <- unlist(Score)
  .(Player = names(temp), Score = temp)
}, by = .(Game, Set)]
# Game Set Player Score
# 1: A 1 Bob 2
# 2: A 1 Charlie 9
# 3: A 2 Bob 6
# 4: A 2 Charlie 3
# 5: A 3 Bob 2
# 6: A 3 Charlie 8
# 7: B 1 Dave 1
# 8: B 1 Erin 6
# 9: B 1 Frank 5
# 10: B 2 Dave 3
#...
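For reference, newer versions of tidyr (>= 1.0) handle this directly with unnest_longer(), which expands a list column into one row per element and can put the element names into their own column. A sketch, assuming the mydf defined above (the as.list() call strips the AsIs wrapper first):

library(dplyr)
library(tidyr)

mydf %>%
  mutate(Score = as.list(Score)) %>%               # drop the AsIs class from the list column
  unnest_longer(Score, indices_to = "Player") %>%  # vector names become the Player column
  select(Game, Set, Player, Score)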

Related

Combining elements of one column into two columns by group in R

Given a two-column data.frame, with one column containing group labels and a second containing integer values ordered from smallest to largest, how can the data be expanded to create pairs of combinations of the integer column?
I'm not sure of the best way to state this. I'm not interested in all possible combinations, but instead all unique combinations starting from the lowest value.
In R, the combn function gives the desired output, not considering groups; for example:
t(combn(seq(1:4),2))
[,1] [,2]
[1,] 1 2
[2,] 1 3
[3,] 1 4
[4,] 2 3
[5,] 2 4
[6,] 3 4
Since the first value is 1, we get the unique combination (1,2) and not the additional combination (2,1), which I don't need. How would one then apply a similar method by group?
For example, given a data.frame
test <- data.frame(Group = rep(c("A","B"), each = 4),
                   Val = c(1,3,6,8,2,4,5,7))
test
Group Val
1 A 1
2 A 3
3 A 6
4 A 8
5 B 2
6 B 4
7 B 5
8 B 7
I was able to come up with this solution that gives the desired output:
library(dplyr)  # for filter()

test <- data.frame(Group = rep(c("A","B"), each = 4),
                   Val = c(1,3,6,8,2,4,5,7))
j <- 1
for(i in unique(test$Group)){
  if(j == 1){
    one <- filter(test, i == Group)
    two <- data.frame(t(combn(one$Val, 2)))
    test1 <- data.frame(Group = i, Val1 = two$X1, Val2 = two$X2)
    j <- j + 1
  } else {
    one <- filter(test, i == Group)
    two <- data.frame(t(combn(one$Val, 2)))
    test2 <- data.frame(Group = i, Val1 = two$X1, Val2 = two$X2)
    test1 <- rbind(test1, test2)
  }
}
test1
test1
Group Val1 Val2
1 A 1 3
2 A 1 6
3 A 1 8
4 A 3 6
5 A 3 8
6 A 6 8
7 B 2 4
8 B 2 5
9 B 2 7
10 B 4 5
11 B 4 7
12 B 5 7
However, this is not elegant and is really slow as the number of groups and the length of each group become large. It seems like there should be a more elegant and efficient solution, but so far I have not come across anything on SO.
I would appreciate any ideas!
Here is a data.table approach:
library(data.table)

# make test a data.table
setDT(test)

# split by group
L <- split(test, by = "Group")

# get unique combinations of 2 Vals
L2 <- lapply(L, function(x) {
  as.data.table(t(combn(x$Val, m = 2, simplify = TRUE)))
})

# merge them back together
data.table::rbindlist(L2, idcol = "Group")
# Group V1 V2
# 1: A 1 3
# 2: A 1 6
# 3: A 1 8
# 4: A 3 6
# 5: A 3 8
# 6: A 6 8
# 7: B 2 4
# 8: B 2 5
# 9: B 2 7
#10: B 4 5
#11: B 4 7
#12: B 5 7
You can set simplify = F in combn() and then use unnest_wider() from tidyr.
library(dplyr)
library(tidyr)
test %>%
  group_by(Group) %>%
  summarise(Val = combn(Val, 2, simplify = F)) %>%
  unnest_wider(Val, names_sep = "_")
# Group Val_1 Val_2
# <chr> <dbl> <dbl>
# 1 A 1 3
# 2 A 1 6
# 3 A 1 8
# 4 A 3 6
# 5 A 3 8
# 6 A 6 8
# 7 B 2 4
# 8 B 2 5
# 9 B 2 7
# 10 B 4 5
# 11 B 4 7
# 12 B 5 7
A tidyverse alternative with gtools::combinations:
library(tidyverse)

df2 <- split(test$Val, test$Group) %>%
  map(~ gtools::combinations(n = length(.x), r = 2, v = .x)) %>%  # n is the group size
  map(~ as_tibble(.x, .name_repair = "unique")) %>%
  bind_rows(.id = "Group")
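For completeness, here is a minimal base-R sketch of the same split/apply/combine idea, assuming the test data frame defined above:

# split by group, take all unique pairs within each group, bind back together
do.call(rbind, lapply(split(test, test$Group), function(g) {
  data.frame(Group = g$Group[1], t(combn(g$Val, 2)))
}))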

Pair-wise manipulating rows in data.frame

I have data on several thousand US basketball players over multiple years.
Each basketball player has a unique ID. For a given year, it is known which team they play for and at which position, much like the mock data df below:
df <- data.frame(id = c(rep(1:4, times = 2), 1),
                 year = c(1, 1, 2, 2, 3, 4, 4, 4, 5),
                 team = c(1, 2, 3, 4, 2, 2, 4, 4, 2),
                 position = c(1, 2, 3, 4, 1, 1, 4, 4, 4))
> df
id year team position
1 1 1 1 1
2 2 1 2 2
3 3 2 3 3
4 4 2 4 4
5 1 3 2 1
6 2 4 2 1
7 3 4 4 4
8 4 4 4 4
9 1 5 2 4
What is an efficient way to manipulate df into new_df below?
> new_df
id move time position.1 position.2 year.1 year.2
1 1 0 2 1 1 1 3
2 2 1 3 2 1 1 4
3 3 0 2 3 4 2 4
4 4 1 2 4 4 2 4
5 1 0 2 1 4 3 5
In new_df, the first occurrence of each basketball player is compared to the second occurrence, and it is recorded whether the player switched teams and how long it took the player to make the switch.
Note:
In the real data some basketball players occur more than twice and can play for multiple teams and on multiple positions.
In such a case a new row in new_df is added that compares each additional occurrence of a player with only the previous occurrence.
Edit: I don't think this is a simple reshape exercise, for the reasons mentioned in the previous two sentences. To clarify this, I've added an additional occurrence of player ID 1 to the mock data.
Any help is most welcome and appreciated!
# occurrence index per player; reshape() picks up the "time" column as its timevar
df$time <- ave(df$id, df$id, FUN = seq_along)
df1 <- reshape(df, idvar = "id", dir = "wide")
transform(df1, move = +(team.1 == team.2), time = year.2 - year.1)
id year.1 team.1 position.1 year.2 team.2 position.2 move time
1 1 1 1 1 3 2 1 0 2
2 2 1 2 2 4 2 1 1 3
3 3 2 3 3 4 4 4 0 2
4 4 2 4 4 4 4 4 1 2
The code below should get you to the point where the data is reshaped to wide format; you'll have to create the move and time variables yourself.
df <- data.frame(id = rep(1:4, times = 2),
                 year = c(1, 1, 2, 2, 3, 4, 4, 4),
                 team = c(1, 2, 3, 4, 2, 2, 4, 4),
                 position = c(1, 2, 3, 4, 1, 1, 4, 4))

library(reshape2)
library(data.table)
setDT(df)  # convert to data.table

# gives the occurrence index of each row within its id
df[, rno := rank(year, ties.method = "min"), by = .(id)]

# creating the transposed dataset
Dcast_DT <- dcast(df, id ~ rno, value.var = c("year", "team", "position"))
This piece of code did the trick, using data.table
# transform to data.table
dtt <- as.data.table(df)

# sort on year
setorder(dtt, year, na.last = TRUE)

# indicate the names of the new columns
new_cols <- c("time", "move", "prev_team", "prev_year", "prev_position")

# set up the new variables
dtt[, (new_cols) := list(year - shift(year),
                         team != shift(team),
                         shift(team),
                         shift(year),
                         shift(position)), by = id]

# select only repeating occurrences
dtt <- dtt[!is.na(time)]

# outcome
dtt
id year team position time move prev_team prev_year prev_position
1: 1 3 2 1 2 TRUE 1 1 1
2: 2 4 2 1 3 FALSE 2 1 2
3: 3 4 4 4 2 TRUE 3 2 3
4: 4 4 4 4 2 FALSE 4 2 4
5: 1 5 2 4 2 FALSE 2 3 1
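The same shift() logic can also be written with dplyr's lag(), if you prefer that dialect. A sketch, assuming the df from the question:

library(dplyr)

df %>%
  arrange(id, year) %>%
  group_by(id) %>%
  mutate(time = year - lag(year),        # years between consecutive occurrences
         move = team != lag(team),       # did the player switch teams?
         prev_team = lag(team),
         prev_year = lag(year),
         prev_position = lag(position)) %>%
  filter(!is.na(time)) %>%               # keep only repeat occurrences
  ungroup()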

Adding NA's where data is missing [duplicate]

This question already has an answer here:
Insert missing time rows into a dataframe
(1 answer)
Closed 5 years ago.
I have a dataset that looks like the following
id = c(1,1,1,2,2,2,3,3,4)
cycle = c(1,2,3,1,2,3,1,3,2)
value = 1:9
data.frame(id,cycle,value)
> data.frame(id,cycle,value)
id cycle value
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 4
5 2 2 5
6 2 3 6
7 3 1 7
8 3 3 8
9 4 2 9
So basically there is a variable called id that identifies the sample, a variable called cycle that identifies the timepoint, and a variable called value that gives the value at that timepoint.
As you can see, sample 3 does not have cycle 2 data, and sample 4 is missing cycle 1 and 3 data. What I want to know is whether there is a way, without a loop, to get the data to place NA's where there is no data. So I would like my dataset to look like the following:
> data.frame(id,cycle,value)
id cycle value
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 4
5 2 2 5
6 2 3 6
7 3 1 7
8 3 2 NA
9 3 3 8
10 4 1 NA
11 4 2 9
12 4 3 NA
I am able to solve this problem with a lot of loops and if statements, but the code is extremely long and cumbersome (I have many more columns in my real dataset).
Also, the number of samples I have is very large, so I need something that generalizes.
Using merge and expand.grid, we can come up with a solution. expand.grid creates a data.frame with all combinations of the supplied vectors (so you'd supply it with the id and cycle variables). By merging to your original data (and using all.x = T, which is like a left join in SQL), we can fill in those rows with missing data in dat with NA.
id = c(1,1,1,2,2,2,3,3,4)
cycle = c(1,2,3,1,2,3,1,3,2)
value = 1:9
dat <- data.frame(id,cycle,value)
grid_dat <- expand.grid(id = 1:4,
                        cycle = 1:3)
# or you could do (HT #jogo):
# grid_dat <- expand.grid(id = unique(dat$id),
#                         cycle = unique(dat$cycle))
merge(x = grid_dat, y = dat, by = c('id','cycle'), all.x = T)
id cycle value
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 4
5 2 2 5
6 2 3 6
7 3 1 7
8 3 2 NA
9 3 3 8
10 4 1 NA
11 4 2 9
12 4 3 NA
A solution based on the tidyverse package.
library(tidyverse)
# Create example data frame
id <- c(1, 1, 1, 2, 2, 2, 3, 3, 4)
cycle <- c(1, 2, 3, 1, 2, 3, 1, 3, 2)
value <- 1:9
dt <- data.frame(id, cycle, value)
# Complete the combination between id and cycle
dt2 <- dt %>% complete(id, cycle)
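As an aside, complete() also takes a fill argument in case you ever want the gaps filled with something other than NA; a sketch:

dt2 <- dt %>% complete(id, cycle, fill = list(value = 0))  # fill gaps with 0 instead of NA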
Here is a solution with data.table doing a cross join:
library("data.table")
d <- data.table(id = c(1,1,1,2,2,2,3,3,4), cycle = c(1,2,3,1,2,3,1,3,2), value = 1:9)
d[CJ(id=id, cycle=cycle, unique=TRUE), on=.(id,cycle)]

R data.table not preserving factor when applying function by group [duplicate]

The data comes from another question I was playing around with:
dt <- data.table(user = c(rep(3, 5), rep(4, 5)),
                 country = c(rep(1, 4), rep(2, 6)),
                 event = 1:10, key = "user")
# user country event
#1: 3 1 1
#2: 3 1 2
#3: 3 1 3
#4: 3 1 4
#5: 3 2 5
#6: 4 2 6
#7: 4 2 7
#8: 4 2 8
#9: 4 2 9
#10: 4 2 10
And here's the surprising behavior:
dt[user == 3, as.data.frame(table(country))]
# country Freq
#1 1 4
#2 2 1
dt[user == 4, as.data.frame(table(country))]
# country Freq
#1 2 5
dt[, as.data.frame(table(country)), by = user]
# user country Freq
#1: 3 1 4
#2: 3 2 1
#3: 4 1 5
# ^^^ - why is this 1 instead of 2?!
Thanks mnel and Victor K. The natural follow-up is: shouldn't it be 2, i.e. is this a bug? I expected
dt[, blah, by = user]
to return identical result to
rbind(dt[user == 3, blah], dt[user == 4, blah])
Is that expectation incorrect?
The idiomatic data.table approach is to use .N
dt[ , .N, by = list(user, country)]
This will be far quicker and it will also retain country as the same class as in the original.
As mnel noted in comments, as.data.frame(table(...)) produces a data frame where the first variable is a factor. For user == 4, there is only one level in the factor, which is stored internally as 1.
What you want is factor levels, but what you get is how factors are stored internally (as integers, starting from 1). The following provides the expected result:
> dt[, lapply(as.data.frame(table(country)), as.character), by = user]
user country Freq
1: 3 1 4
2: 3 2 1
3: 4 2 5
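Alternatively, since as.data.frame() on a table has a stringsAsFactors argument, you can avoid creating the factor in the first place; a sketch:

dt[, as.data.frame(table(country), stringsAsFactors = FALSE), by = user]

This keeps country as character, so the per-group level mismatch never arises.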
Update. Regarding your second question: no, I think the data.table behaviour is correct. The same thing happens in plain R when you concatenate two factors with different levels (at least in R versions before 4.1.0, which changed c() on factors to combine the levels):
> a <- factor(3:5)
> b <- factor(6:8)
> a
[1] 3 4 5
Levels: 3 4 5
> b
[1] 6 7 8
Levels: 6 7 8
> c(a,b)
[1] 1 2 3 1 2 3

create new column based on values on previous rows

I hope somebody can help me.
I have data like this:
subject choice
1 3
2 3
3 1
4 4
5 3
6 2
7 2
8 3
Now I want to create a new column based on the value of the choice column. If the value in the choice column is new (has never occurred before), the value in the new column will be 'No'; otherwise, if the value has already occurred in a previous row, the value in the new column will be 'Soc'. The new table will look like this:
subject choice newcolumn
1 3 No
2 3 Soc
3 1 No
4 4 No
5 3 Soc
6 2 No
7 2 Soc
8 3 Soc
Can somebody help me? Thanks in advance!
Using example data
DF <- data.frame(subject = 1:8, choice = c(3, 3, 1, 4, 3, 2, 2, 3))
I would index into c("No","Soc") with duplicated(choice) + 1, which is 1 for first occurrences and 2 for repeats:
DF <- transform(DF, newcolumn = c("No","Soc")[duplicated(choice) + 1])
giving
subject choice newcolumn
1 1 3 No
2 2 3 Soc
3 3 1 No
4 4 4 No
5 5 3 Soc
6 6 2 No
7 7 2 Soc
8 8 3 Soc
Without transform() this would be
DF$newcolumn <- c("No","Soc")[duplicated(DF$choice) + 1]
Another option using duplicated and ifelse:
transform(DF, newcolumn = ifelse(!duplicated(choice),'No','Soc'))
## subject choice newcolumn
## 1 1 3 No
## 2 2 3 Soc
## 3 3 1 No
## 4 4 4 No
## 5 5 3 Soc
## 6 6 2 No
## 7 7 2 Soc
## 8 8 3 Soc
There are a bunch of ways to do this, but using bracket subsetting will teach you some useful things about R:
# Make your example reproducible
subject <- 1:8
choice <- c(3, 3, 1, 4, 3, 2, 2, 3)
d <- data.frame(subject, choice)
# Create a new column, set all the values to "No"
d$newColumn <- "No"
# Set those values for which choice is duplicated to "Soc"
d$newColumn[duplicated(d$choice)] <- "Soc"
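The same duplicated() trick carries over to dplyr, if you prefer that style; a sketch using the d data frame above:

library(dplyr)
d %>% mutate(newColumn = if_else(duplicated(choice), "Soc", "No"))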
