Long to short with data manipulation in R with 2 ID pieces

In R, with a data set like the one below, I want to create a variable that is prior minus post. I'll need to do some calculations by ID and later by group, so I want to keep both.
Original
ID group time value
1 A prior 8
1 A post 5
2 A prior 4
2 A post 7
3 B prior 3
3 B post 10
4 B prior 5
4 B post 6
Desired data
ID group new_value
1 A -3
2 A 3
3 B 7
4 B 1
I think to get there I need to make my data like this
ID group value_prior value_post
1 A 8 5
2 A 4 7
3 B 3 10
4 B 5 6
But I'm not sure how to get there while preserving ID and group.

Assuming your data is already sorted, you could use:
aggregate(value ~ ID + group, df, diff)
ID group value
1 1 A -3
2 2 A 3
3 3 B 7
4 4 B 1
Or:
library(dplyr)
df %>%
  group_by(ID, group) %>%
  summarise(new_value = diff(value))
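If you also want the intermediate wide layout from the question (value_prior and value_post while keeping ID and group), a possible sketch with tidyr's pivot_wider, assuming the df shown above:
library(tidyr)
library(dplyr)
wide <- df %>%
  pivot_wider(id_cols = c(ID, group),
              names_from = time,
              values_from = value,
              names_prefix = "value_")
# the difference is then an ordinary column calculation
wide %>% mutate(new_value = value_post - value_prior)
This reproduces the -3, 3, 7, 1 values shown in the desired data.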

Related

Flagging an id based on whether another column has different values in R

I have a flagging rule I need to apply.
Here is what my dataset looks like:
df <- data.frame(id = c(1,1,1,1, 2,2,2,2, 3,3,3,3),
key = c("a","a","b","c", "a","b","c","d", "a","b","c","c"),
form = c("A","B","A","A", "A","A","A","A", "B","B","B","A"))
> df
id key form
1 1 a A
2 1 a B
3 1 b A
4 1 c A
5 2 a A
6 2 b A
7 2 c A
8 2 d A
9 3 a B
10 3 b B
11 3 c B
12 3 c A
I would like to flag ids where the key column has duplicate values and a third column, form, shows different forms for the same key. The idea is to understand whether an id has taken any items from multiple forms. I need to add a flagging column as below:
> df.1
id key form type
1 1 a A multiple
2 1 a B multiple
3 1 b A multiple
4 1 c A multiple
5 2 a A single
6 2 b A single
7 2 c A single
8 2 d A single
9 3 a B multiple
10 3 b B multiple
11 3 c B multiple
12 3 c A multiple
And eventually I need to get rid of the extra duplicated row that has a different form. To decide which of the duplicates to drop, I keep whichever form has more items.
In a final separate dataset, I would like to have something like below:
> df.2
id key form type
1 1 a A multiple
3 1 b A multiple
4 1 c A multiple
5 2 a A single
6 2 b A single
7 2 c A single
8 2 d A single
9 3 a B multiple
10 3 b B multiple
11 3 c B multiple
So the first id has form A dominant, so A is kept, and the third id has form B dominant, so B is kept.
Any ideas?
Thanks!
We can check the number of distinct elements to create the new column by group and then filter based on the highest-frequency form (Mode):
library(dplyr)
df.2 <- df %>%
  group_by(id) %>%
  mutate(type = if (n_distinct(form) > 1) 'multiple' else 'single') %>%
  filter(form == Mode(form)) %>%
  ungroup()
-output
> df.2
# A tibble: 10 × 4
id key form type
<dbl> <chr> <chr> <chr>
1 1 a A multiple
2 1 b A multiple
3 1 c A multiple
4 2 a A single
5 2 b A single
6 2 c A single
7 2 d A single
8 3 a B multiple
9 3 b B multiple
10 3 c B multiple
where
Mode <- function(x) {
  ux <- unique(x)
  ux[which.max(tabulate(match(x, ux)))]
}
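For reference, a base R sketch of the same idea (assuming the df and the Mode function above, and that form is a character column, i.e. R >= 4.0 or stringsAsFactors = FALSE):
# flag ids whose keys come from more than one form
df$type <- ave(df$form, df$id,
               FUN = function(x) if (length(unique(x)) > 1) "multiple" else "single")
# keep only rows whose form is the most frequent form within the id
df.2 <- df[df$form == ave(df$form, df$id, FUN = Mode), ]
df.2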

R: merge two data sets by column a or column b

I have two data frames looking like this
view
id object date maxdate
1 a 8 9
1 b 8 9
2 a 8 9
3 b 7 8
purchase
id date object purchased
1 9 a 1
2 8 a 1
3 8 b 1
One is a table of when a product was viewed, and the other records if and when the product was purchased; after it is viewed, it can be purchased within 24 hours. I want to merge them on columns id, date and object OR id, maxdate=date and object. What is the best way to implement that OR condition within full_join (dplyr)? Below is the code for the data frames and the output I am looking for:
id object date maxdate purchased
1 a 8 9 1
1 b 8 9 NA
2 a 8 9 1
3 b 7 8 1
id=c(1,1,2,3)
object=c("a","b","a","b")
date=c(8,8,8,7)
maxdate=c(9,9,9,8)
view=data.frame(id,object,date,maxdate)
id=c(1,2,3)
date=c(9,8,8)
object=c("a","a","b")
purchased=c(1,1,1)
purchase=data.frame(id,date,object,purchased)
So far I have tried something like this, but it is very inefficient and confusing to clean up when it is a large dataset:
a=merge(view,purchase, by="id")
a$ind=ifelse(a$object.x==a$object.y & (a$date.x==a$date.y | a$maxdate==a$date.y),1,"NA")
Are you trying to do something like this?
a=merge(view[,-4],purchase, by=c("id", "object"))
names(a) = c("id", "object", "date.viewed", "date.purchased", "purchased")
> a
id object date.viewed date.purchased purchased
1 1 a 8 9 1
2 2 a 8 8 1
3 3 b 7 8 1
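To express the OR condition from the question (match either on date or on maxdate), one possible dplyr sketch, assuming the view and purchase frames defined above, is to join twice and take whichever join matched:
library(dplyr)
# join once on the viewing date and once on maxdate
by_date    <- left_join(view, purchase, by = c("id", "object", "date"))
by_maxdate <- left_join(view, purchase, by = c("id", "object", "maxdate" = "date"))
# purchased comes from whichever join found a match
view$purchased <- coalesce(by_date$purchased, by_maxdate$purchased)
view
This gives the desired output above, with NA for id 1's view of object b, which was never purchased.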

gather() per grouped variables in R for specific columns

I have a long data frame with the decisions of players who worked in groups.
I need to convert the data in such a way that each row (individual observation) would contain all group members decisions (so we basically can see whether they are interdependent).
Let's say the generating code is:
group_id <- c(rep(1, 3), rep(2, 3))
player_id <- c(rep(seq(1, 3), 2))
player_decision <- seq(10,60,10)
player_contribution <- seq(6,1,-1)
df <-
data.frame(group_id, player_id, player_decision, player_contribution)
So the initial data looks like:
group_id player_id player_decision player_contribution
1 1 1 10 6
2 1 2 20 5
3 1 3 30 4
4 2 1 40 3
5 2 2 50 2
6 2 3 60 1
But I need to convert it to wide per group, but only for some of these variables (in this example specifically for player_contribution), in such a way that the rest of the data remains. So the head of the converted data would be:
data.frame(group_id=c(1,1),
player_id=c(1,2),
player_decision=c(10,20),
player_1_contribution=c(6,6),
player_2_contribution=c(5,5),
player_3_contribution=c(4,4)
)
group_id player_id player_decision player_1_contribution player_2_contribution player_3_contribution
1 1 1 10 6 5 4
2 1 2 20 6 5 4
I suspect I need to group_by in dplyr and then somehow gather per group but only for player_contribution (or a vector of variables). But I really have no clue how to approach it. Any hints would be welcome!
Here is a solution using tidyr and dplyr.
Make a data frame with the columns for the players' contributions, then join it back onto the columns of interest from the original data frame.
library(tidyr)
library(dplyr)
wide <- pivot_wider(df, id_cols = -player_decision,
                    names_from = player_id,
                    values_from = player_contribution,
                    names_prefix = "player_contribution_")
answer <- left_join(df[, c("group_id", "player_id", "player_decision")], wide)
answer
group_id player_id player_decision player_contribution_1 player_contribution_2 player_contribution_3
1 1 1 10 6 5 4
2 1 2 20 6 5 4
3 1 3 30 6 5 4
4 2 1 40 3 2 1
5 2 2 50 3 2 1
6 2 3 60 3 2 1
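An alternative sketch without the join, assuming (as in the generating code) that every group contains players 1 to 3, is to copy each player's contribution into its own column inside a grouped mutate:
library(dplyr)
df %>%
  group_by(group_id) %>%
  mutate(player_1_contribution = player_contribution[player_id == 1],
         player_2_contribution = player_contribution[player_id == 2],
         player_3_contribution = player_contribution[player_id == 3]) %>%
  ungroup() %>%
  select(-player_contribution)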

Pulling Specific Row Values based on Another Column

Simple question here -
if I have a dataframe such as:
> dat
typeID ID modelOption
1 2 1 good
2 2 2 avg
3 2 3 bad
4 2 4 marginCost
5 1 5 year1Premium
6 1 6 good
7 1 7 avg
8 1 8 bad
and I wanted to pull only the modelOption values for a given typeID. I know I can subset out all rows corresponding to the typeID, but I just want the modelOption values in this case.
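A minimal sketch of both the base R and dplyr approaches, assuming the dat frame shown above and typeID 2 as the example:
# base R: index the column directly instead of subsetting whole rows
dat$modelOption[dat$typeID == 2]
# dplyr equivalent
library(dplyr)
dat %>% filter(typeID == 2) %>% pull(modelOption)
Either returns just the vector of modelOption values ("good", "avg", "bad", "marginCost") rather than a subsetted data frame.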

Calculating the occurrences of numbers in the subsets of a data.frame

I have a data frame in R which is similar to the following. Actually my real 'df' data frame is much bigger than this one here, but I really do not want to confuse anybody, so I have tried to simplify things as much as possible.
So here's the data frame.
id <-c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3)
a <-c(3,1,3,3,1,3,3,3,3,1,3,2,1,2,1,3,3,2,1,1,1,3,1,3,3,3,2,1,1,3)
b <-c(3,2,1,1,1,1,1,1,1,1,1,2,1,3,2,1,1,1,2,1,3,1,2,2,1,3,3,2,3,2)
c <-c(1,3,2,3,2,1,2,3,3,2,2,3,1,2,3,3,3,1,1,2,3,3,1,2,2,3,2,2,3,2)
d <-c(3,3,3,1,3,2,2,1,2,3,2,2,2,1,3,1,2,2,3,2,3,2,3,2,1,1,1,1,1,2)
e <-c(2,3,1,2,1,2,3,3,1,1,2,1,1,3,3,2,1,1,3,3,2,2,3,3,3,2,3,2,1,3)
df <-data.frame(id,a,b,c,d,e)
df
Basically what I would like to do is to get the occurrences of numbers for each column (a, b, c, d, e) and for each id group (1, 2, 3) (for this latter grouping see my column 'id').
So, for column 'a' and for id number '1' (for the latter see column 'id') the code would be something like this:
as.numeric(table(df[1:10,2]))
##The results are:
[1] 3 7
Just to briefly explain my results: in column 'a' (and regarding only those records which have number '1' in column 'id') we can say that number '1' occurred 3 times and number '3' occurred 7 times.
Again, just to show you another example, for column 'a' and for id number '2' (for the latter grouping see again column 'id'):
as.numeric(table(df[11:20,2]))
##After running the codes the results are:
[1] 4 3 3
Let me explain a little again: in column 'a' (and regarding only those observations which have number '2' in column 'id') we can say that number '1' occurred 4 times, number '2' occurred 3 times and number '3' occurred 3 times.
So this is what I would like to do: calculate the occurrences of numbers for each custom-defined subset (and then collect these values into a data frame). I know it is not a difficult task, but the PROBLEM is that I'm going to have to change the input 'df' data frame on a regular basis, and hence both the overall number of rows and columns might change over time…
What I have done so far is to separate the 'df' data frame by columns, like this:
for (z in (2:ncol(df))) assign(paste("df",z,sep="."),df[,z])
So df.2 will refer to df$a, df.3 will equal df$b, df.4 will equal df$c, etc. But I'm really stuck now and I don't know how to move forward…
Is there a proper, "automatic" way to solve this problem?
How about -
> library(reshape)
> dftab <- table(melt(df,'id'))
> dftab
, , value = 1
variable
id a b c d e
1 3 8 2 2 4
2 4 6 3 2 4
3 4 2 1 5 1
, , value = 2
variable
id a b c d e
1 0 1 4 3 3
2 3 3 3 6 2
3 1 4 5 3 4
, , value = 3
variable
id a b c d e
1 7 1 4 5 3
2 3 1 4 2 4
3 5 4 4 2 5
So to get the number of '1's in column 'a' for id group '3' (the table is indexed as [id, variable, value]), you could just do
> dftab[3,'a',1]
[1] 4
A combination of tapply and apply can create the data you want:
tapply(df$id, df$id, function(x) apply(df[df$id == x[1], -1], 2, table))
However, when a grouping doesn't have all the elements in it, as with column 'a' in id group 1 (which has no 2s), the result for that id group will be a list rather than a nice table (matrix).
$`1`
$`1`$a
1 3
3 7
$`1`$b
1 2 3
8 1 1
$`1`$c
1 2 3
2 4 4
$`1`$d
1 2 3
2 3 5
$`1`$e
1 2 3
4 3 3
$`2`
a b c d e
1 4 6 3 2 4
2 3 3 3 6 2
3 3 1 4 2 4
$`3`
a b c d e
1 4 2 1 5 1
2 1 4 5 3 4
3 5 4 4 2 5
I'm sure someone will have a more elegant solution than this, but you can cobble it together with a simple function and dlply from the plyr package.
library(plyr)
ColTables <- function(df) {
  counts <- list()
  for (a in names(df)[names(df) != "id"]) {
    counts[[a]] <- table(df[a])
  }
  return(counts)
}
results <- dlply(df, "id", ColTables)
This gets you back a list - the first "layer" of the list will be the id variable; the second the table results for each column for that id variable. For example:
> results[['2']]['a']
$a
1 2 3
4 3 3
For id variable = 2, column = a, per your above example.
A way to do it is using the aggregate function, but you have to add a column to your data frame:
> df$freq <- 0
> aggregate(freq~a+id,df,length)
a id freq
1 1 1 3
2 3 1 7
3 1 2 4
4 2 2 3
5 3 2 3
6 1 3 4
7 2 3 1
8 3 3 5
Of course you can write a function to do it, so it's easier to do it frequently, and you don't have to add a column to your actual data frame
> frequency <- function(df,groups) {
+ relevant <- df[,groups]
+ relevant$freq <- 0
+ aggregate(freq~.,relevant,length)
+ }
> frequency(df,c("b","id"))
b id freq
1 1 1 8
2 2 1 1
3 3 1 1
4 1 2 6
5 2 2 3
6 3 2 1
7 1 3 2
8 2 3 4
9 3 3 4
You didn't say how you'd like the data. The by function might give you the output you like.
by(df, df$id, function(x) lapply(x[,-1], table))
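For completeness, a tidyverse sketch of the same counting, assuming the df from the question; it gives one row per id and column, with one count column per value and zeros where a value does not occur:
library(dplyr)
library(tidyr)
df %>%
  pivot_longer(-id, names_to = "variable", values_to = "value") %>%
  count(id, variable, value) %>%
  pivot_wider(names_from = value, values_from = n, values_fill = 0)
Missing combinations (such as the 2s in column 'a' for id 1) show up as 0 instead of being dropped.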
