Creating a sequential ranking based on previous ratings - r

I have an issue with sequentially updating rankings, and no matter how I search for a solution - or try to come up with one myself - I fail.
I am trying to analyse the results of a sequential-choice experiment in which participants had to find the best possible option (the option with the highest rating). They were presented with one rating in every trial.
I have an ID, an order and a rating variable for every choice. ID identifies the participant, rating represents how good the option is (the higher the rating the better), and order is the trial number (in this example there were 4 trials).
ID rating order
1 4 1
1 3 2
1 5 3
1 2 4
2 3 1
2 5 2
2 2 3
2 1 4
I would like to create a new variable called "current_rank" which is basically the ranking of the rating of the current choice. This variable always needs to take into consideration all previous trials and ratings so e.g. for the participant with ID "1" this would be:
Trial 1: rating = 4, which means this is the best rating so far, current_rank = 1
Trial 2: rating = 3, which means this is the second best rating so far, current_rank = 2
Trial 3: rating = 5, which means this is the best rating so far, making it the new number 1 so, current_rank = 1
Trial 4: rating = 2, which is the lowest rating so far, current_rank = 4
If I could do this for all participants and all choices, my data frame should look like this:
ID rating order current_rank
1 4 1 1
1 3 2 2
1 5 3 1
1 2 4 4
2 3 1 1
2 5 2 1
2 2 3 3
2 1 4 4
I could successfully create an overall ranking variable like this:
db %>%
  arrange(ID, order) %>%
  group_by(ID) %>%
  mutate(ovr_rank = min_rank(desc(rating)))
But my goal is to create a variable that is something of a sequential ranking. This would make it possible to see what kind of opinion the participant may have formed about the current rating based on the previous ratings, without knowing what future ratings might be. I tried creating loops and using the apply functions, but couldn't come up with a solution yet.
Any and all ideas are greatly appreciated!
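A minimal loop-free sketch of the cumulative ranking described above (assuming the db data frame with the ID/rating/order columns from this question, and no ties in rating) could look like this:
library(dplyr)

db %>%
  arrange(ID, order) %>%
  group_by(ID) %>%
  mutate(current_rank = sapply(seq_along(rating), function(i) {
    # rank only the ratings seen up to and including trial i (highest rating = rank 1)
    rank(-rating[1:i])[i]
  })) %>%
  ungroup()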

Use runner to apply any R function in a cumulative window (or rolling window). Below I used runner, which rolls over rating and applies the rank function to the data "available" at that moment (a cumulative rank). Uncomment the print call to see what lands in function(x).
library(dplyr)
library(runner)

data %>%
  arrange(ID, order) %>%
  group_by(ID) %>%
  mutate(
    current_rank = runner(
      x = rating,
      f = function(x) {
        # print(x)
        rank_available_at_the_moment <- rank(-x, ties.method = "last")
        tail(rank_available_at_the_moment, 1)
      }
    )
  )
# # A tibble: 8 x 4
# # Groups: ID [2]
# ID rating order current_rank
# <int> <int> <int> <int>
# 1 1 4 1 1
# 2 1 3 2 2
# 3 1 5 3 1
# 4 1 2 4 4
# 5 2 3 1 1
# 6 2 5 2 1
# 7 2 2 3 3
# 8 2 1 4 4
data:
data <- read.table(text = "ID rating order
1 4 1
1 3 2
1 5 3
1 2 4
2 3 1
2 5 2
2 2 3
2 1 4", header = TRUE)

This chunk of code will work:
library(tibble)

df <- tibble(
  ID = c(1, 1, 1, 1, 2, 2, 2, 2),
  rating = c(4, 3, 5, 2, 3, 5, 2, 1),
  rank = c(1, 0, 0, 0, 0, 0, 0, 0)
)

for (i in 2:nrow(df)) {
  if (df$ID[i] != df$ID[i - 1]) {
    df$rank[i] <- 1
  } else {
    df$rank[i] <- which(sort(df[1:i, ]$rating[which(df$ID == df$ID[i])], decreasing = TRUE) == df$rating[i])
  }
}
Explanation:
Note that I assume your dataframe is already ordered by ID and order. In my df there is no order column, but that is mainly for simplicity (and it is not strictly needed in my solution, again assuming the rows are already ordered by ID and order).
The for loop checks whether the ID of a row differs from the ID of the row above; if so, that row automatically gets rank 1. Otherwise, it takes the subset of df from row 1 to row i, subsets again to rows with the same ID, sorts the ratings in that subset (including the current rating in question) in descending order, and takes the position of the current rating in that sorted vector as its rank value.
I hope this answers your question and gives you insight.

Here are 2 options using data.table:
1) a non-equi self-join to find all trials up to and including the current trial, rank the ratings, and extract the current rank:
DT[, cr := .SD[.SD, on=.(ID, trial<=trial), by=.EACHI, order(order(-rating))[.N]]$V1]
2) a non-equi self-join to count the ratings that are higher than the current rating among trials up to and including the current one:
DT[, cr2 := DT[DT, on=.(ID, trial<=trial, rating>rating), by=.EACHI, .N + 1L]$V1]
Note that there might be ties in ratings, and it would be good to specify how rating ties should be handled (see the small illustration after the data block below).
output:
ID rating trial cr cr2
1: 1 4 1 1 1
2: 1 3 2 2 2
3: 1 5 3 1 1
4: 1 2 4 4 4
5: 2 3 1 1 1
6: 2 5 2 1 1
7: 2 2 3 3 3
8: 2 1 4 4 4
data:
library(data.table)
DT <- fread("ID rating trial
1 4 1
1 3 2
1 5 3
1 2 4
2 3 1
2 5 2
2 2 3
2 1 4")
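To illustrate the ties point above, here is a small sketch (hypothetical tied ratings, base rank()) of how the choice of ties.method changes the resulting ranks:
x <- c(4, 4, 5)                    # two tied ratings followed by a higher one
rank(-x, ties.method = "min")      # 2 2 1 -> tied ratings share the same rank
rank(-x, ties.method = "first")    # 2 3 1 -> the earlier trial wins the tie
rank(-x, ties.method = "last")     # 3 2 1 -> the later trial wins the tie (as in the runner answer above)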

Related

R, dplyr: Is there a way to add order of groups when there are multiple rows per group without creating a new data frame? [duplicate]

This question already has answers here:
How to create a consecutive group number
(13 answers)
Closed 2 years ago.
I have data from an experiment that has multiple rows per item (each row has the reading time for one word of a sentence of n words), and multiple items per subject. Items can have varying numbers of rows. Items were presented in a random order, and their order in the data as initially read in reflects the sequence in which each subject saw the items. What I'd like to do is add a column that contains the order in which the subject saw that item (i.e., 1 for the first item, 2 for the second, etc.).
Here's an example of some input data that has the relevant properties:
d <- data.frame(Subject = c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2),
                Item = c(2, 2, 2, 1, 1, 1, 1, 2, 2, 2))
Subject Item
1 2
1 2
1 2
1 1
1 1
2 1
2 1
2 2
2 2
2 2
And here's the output I want:
Subject Item order
1 2 1
1 2 1
1 2 1
1 1 2
1 1 2
2 1 1
2 1 1
2 2 2
2 2 2
2 2 2
I know I can do this by setting up a temp data frame that filters d to unique combinations of Subject and Item, adding order to that as something like 1:n() or row_number(), and then using a join function to put it back together with the main data frame. What I'd like to know is whether there's a way to do this without having to create a new data frame just to store the order---can this be done inside dplyr's mutate somehow if I group by Subject and Item, for instance?
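A sketch of the two-step workaround described above (assumed dplyr code; the answers below avoid the intermediate data frame) might look like this:
library(dplyr)

ord <- d %>%
  distinct(Subject, Item) %>%
  group_by(Subject) %>%
  mutate(order = row_number()) %>%
  ungroup()

d %>% left_join(ord, by = c("Subject", "Item"))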
Here's one way:
d %>%
  group_by(Subject) %>%
  mutate(order = match(Item, unique(Item))) %>%
  ungroup()
# # A tibble: 10 x 3
# Subject Item order
# <dbl> <dbl> <int>
# 1 1 2 1
# 2 1 2 1
# 3 1 2 1
# 4 1 1 2
# 5 1 1 2
# 6 2 1 1
# 7 2 1 1
# 8 2 2 2
# 9 2 2 2
# 10 2 2 2
Here is a base R option
transform(d,
          order = ave(Item, Subject, FUN = function(x) as.integer(factor(x, levels = unique(x))))
)
or
transform(d,
          order = ave(Item, Subject, FUN = function(x) match(x, unique(x)))
)
both giving
Subject Item order
1 1 2 1
2 1 2 1
3 1 2 1
4 1 1 2
5 1 1 2
6 2 1 1
7 2 1 1
8 2 2 2
9 2 2 2
10 2 2 2

R: Matching and repeating occurence [duplicate]

This question already has answers here:
Complete dataframe with missing combinations of values
(2 answers)
Closed 2 years ago.
(sample code below) I have two data sets. One is a library of products; the other has customer id, date, the viewed product, and one more detail. I want to get a merge where, for each id AND date, I see the full library of products as well as where the match was. I have tried using full_join, merge, and right and left joins, but they do not repeat the rows. Below is a sample of what I am trying to achieve.
id=c(1,1,1,1,2,2)
date=c(1,1,2,2,1,3)
offer=c('a','x','y','x','y','a')
section=c('general','kitchen','general','general','general','kitchen')
t=data.frame(id,date,offer,section)
offer=c('a','x','y','z')
library=data.frame(offer)
######
t table
id date offer section
1 1 1 a general
2 1 1 x kitchen
3 1 2 y general
4 1 2 x general
5 2 1 y general
6 2 3 a kitchen
library table
offer
1 a
2 x
3 y
4 z
and I want to get this:
id date offer section
1 1 1 a general
2 1 1 x kitchen
3 1 1 y NA
4 1 1 z general
...
(there would have to be 6*4 observations)
I realize because I match by offer it is not going to repeat the values like so, but what is another option to do that? Thanks a lot!!
You can use complete to get all combinations of library$offer for each id and date.
tidyr::complete(t, id, date, offer = library$offer)
# A tibble: 24 x 4
# id date offer section
# <dbl> <dbl> <chr> <chr>
# 1 1 1 a general
# 2 1 1 x kitchen
# 3 1 1 y NA
# 4 1 1 z NA
# 5 1 2 a NA
# 6 1 2 x general
# 7 1 2 y general
# 8 1 2 z NA
# 9 1 3 a NA
#10 1 3 x NA
# … with 14 more rows
You can use tidyr and dplyr to get the data. The crossing() function will create all combinations of the variables you pass in.
library(dplyr)
library(tidyr)

t %>%
  select(id, date) %>%
  {crossing(id = .$id, date = .$date, library)} %>%
  left_join(t)

In a dataframe, find the index of the next smaller value for each element of a column

Question:
In a dataframe, I want to create a new column as the indices of the next smaller value of an existing column.
For example, the data looks like this. It is already arranged in item, day.
item day val
1 1 2 3
2 1 4 2
3 1 5 1
4 2 1 1
5 2 3 2
6 2 5 3
First I would like to use group_by(item) in dplyr to select the sub-dataframe of each item.
Then for row 1, I look down the rows and find that row 2 has a smaller val. This is what I want, so I record the day corresponding to that row. Similar for row 2.
Note that for row 3 and 6, they are the last rows of corresponding sub-dataframes, so there is no next smaller value. For row 4 and 5, there is no smaller val when I look down the rows.
The dataframe with the new column should look like this.
item day val next.smaller.day
1 1 2 3 4
2 1 4 2 5
3 1 5 1 -1
4 2 1 1 -1
5 2 3 2 -1
6 2 5 3 -1
I wonder if there is any way of implementing this with dplyr, or any R code other than a for loop.
I found a thread asking about the algorithm for this problem: Given an array, find out the next smaller element for each element.
It is relevant, and the proposed algorithm beats mine in terms of time complexity, but I still find it hard to implement in my scenario.
Thank you!
Update:
Here is another example to re-illustrate what I'm looking for.
item day val next.smaller.day
1 1 2 2 5
2 1 4 3 5
3 1 5 1 -1
4 2 1 3 3
5 2 3 1 -1
6 2 5 2 -1
You can group your data by item, calculate the difference between rows using the diff function, and check whether it is smaller than zero; this gives a logical vector that you can use to pick up the next day. Since you are picking up the next day, you need the lead function to shift the day column forward so that it lines up with the rows where you want to place the values.
Side note: since the diff function creates a vector one element shorter than the original, and the last row per group is always left out, we can pad the diff result with a FALSE.
library(dplyr)

df %>%
  group_by(item) %>%
  mutate(smaller = c(diff(val) < 0, FALSE),
         next.smaller.day = ifelse(smaller, lead(day), -1)) %>%
  select(-smaller)
# Source: local data frame [6 x 4]
# Groups: item [2]
# item day val next.smaller.day
# <int> <int> <int> <dbl>
# 1 1 2 3 4
# 2 1 4 2 5
# 3 1 5 1 -1
# 4 2 1 1 -1
# 5 2 3 2 -1
# 6 2 5 3 -1
Update:
find.next.smaller <- function(ini = 1, vec) {
  if (length(vec) == 1) NA
  else c(ini + min(which(vec[1] > vec[-1])),
         find.next.smaller(ini + 1, vec[-1]))
}
# the recursive function will go element by element through the vector and find out
# the index of the next smaller value.
df %>%
  group_by(item) %>%
  mutate(next.smaller.day = day[find.next.smaller(1, val)],
         next.smaller.day = replace(next.smaller.day, is.na(next.smaller.day), -1))
# Source: local data frame [6 x 4]
# Groups: item [2]
#
# item day val next.smaller.day
# <int> <int> <dbl> <dbl>
# 1 1 2 2 5
# 2 1 4 3 5
# 3 1 5 1 -1
# 4 2 1 1 -1
# 5 2 3 2 -1
# 6 2 5 3 -1
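For larger groups, a linear-time, stack-based sketch of the "next smaller element" idea referenced in the question could look like this (next_smaller_day is an assumed helper name; it expects the df from the examples above, sorted by item and day):
library(dplyr)

next_smaller_day <- function(val, day) {
  out <- rep(-1, length(val))
  stack <- integer(0)                      # indices still waiting for a smaller value
  for (i in seq_along(val)) {
    while (length(stack) > 0 && val[i] < val[stack[length(stack)]]) {
      out[stack[length(stack)]] <- day[i]  # day i carries the next smaller value for that index
      stack <- stack[-length(stack)]
    }
    stack <- c(stack, i)
  }
  out
}

df %>%
  group_by(item) %>%
  mutate(next.smaller.day = next_smaller_day(val, day))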

for loop & if function in R

I was writing a loop with if function in R. The table is like below:
ID category
1 a
1 b
1 c
2 a
2 b
3 a
3 b
4 a
5 a
I want to use a for loop with an if function to add another column that counts the rows within each ID, like the Count column below:
ID category Count
1 a 1
1 b 2
1 c 3
2 a 1
2 b 2
3 a 1
3 b 2
4 a 1
5 a 1
My code is (output is the table name):
for (i in 2:nrow(output1)){
  if (output1[i, 1] == output[i - 1, 1]){
    output1[i, "rn"] <- output1[i - 1, "rn"] + 1
  }
  else{
    output1[i, "rn"] <- 1
  }
}
But in the result, all the Count column values are "1".
ID category Count
1 a 1
1 b 1
1 c 1
2 a 1
2 b 1
3 a 1
3 b 1
4 a 1
5 a 1
Please help me out... Thanks
There are packages and vectorized ways to do this task, but if you are practicing with loops try:
output1$rn <- 1

for (i in 2:nrow(output1)){
  if (output1[i, 1] == output1[i - 1, 1]){
    output1[i, "rn"] <- output1[i - 1, "rn"] + 1
  } else {
    output1[i, "rn"] <- 1
  }
}
With your original code, when you made the call output1[i-1,"rn"] + 1 in the third line of your loop, you were referencing a column ("rn") that didn't exist yet on the first pass. By first creating the column and filling it with the value 1, you give the loop something explicit to refer to. (Note also that your condition referenced output where it should be output1.)
output1
# ID category rn
# 1 1 a 1
# 2 1 b 2
# 3 1 c 3
# 4 2 a 1
# 5 2 b 2
# 6 3 a 1
# 7 3 b 2
# 8 4 a 1
# 9 5 a 1
With the package dplyr you can accomplish it quickly with:
library(dplyr)
output1 %>% group_by(ID) %>% mutate(rn = 1:n())
Or with data.table:
library(data.table)
setDT(output1)[,rn := 1:.N, by=ID]
With base R you can also use:
output1$rn <- with(output1, ave(as.character(category), ID, FUN=seq))
There are vignettes and tutorials for the two packages mentioned, and you can run ?ave in the R console for details on the last approach.
A looping solution will be painfully slow for bigger data. Here is a one-line solution using data.table:
require(data.table)
a <- data.table(ID = c(1, 1, 1, 2, 2, 3, 3, 4, 5), category = c('a', 'b', 'c', 'a', 'b', 'a', 'b', 'a', 'a'))
a[, ':='(category_count = 1:.N), by = .(ID)]
What you want is actually a column of factor levels. Do this:
df$count <- as.numeric(df$category)
This will give output as:
ID category count
1 1 a 1
2 1 b 2
3 1 c 3
4 2 a 1
5 2 b 2
6 3 a 1
7 3 b 2
8 4 a 1
9 5 a 1
This works provided your category is already a factor; if not, first convert it to a factor:
df$category <- as.factor(df$category)
df$count <- as.numeric(df$category)

Conditionally dropping duplicates from a data.frame

I am trying to figure out how to subset my dataset according to the repeated values of the variable s, also taking into account the id associated with each row.
Suppose my dataset is:
dat <- read.table(text = "
id s
1 2
1 2
1 1
1 3
1 3
1 3
2 3
2 3
3 2
3 2",
header=TRUE)
What I would like to do is, for each id, to keep only the first row for which s = 3. The result with dat would be:
id s
1 2
1 2
1 1
1 3
2 3
3 2
3 2
I have tried to use both duplicated() and which() so that I could then use subset(), but I am not getting anywhere. The main problem is that it is not sufficient to isolate the first row of the s = 3 "blocks", because in some cases (as here between id = 1 and id = 2) the 3's overlap between one id and the next. Which strategy would you adopt?
Like this:
subset(dat, s != 3 | s == 3 & !duplicated(dat))
# id s
# 1 1 2
# 2 1 2
# 3 1 1
# 4 1 3
# 7 2 3
# 9 3 2
# 10 3 2
Note that subset can be dangerous to work with (see Why is `[` better than `subset`?), so the longer but safer version would be:
dat[dat$s != 3 | dat$s == 3 & !duplicated(dat), ]
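A dplyr sketch of the same idea (keep every row with s != 3, plus only the first s == 3 row within each id; the cumsum trick here is a swapped-in alternative, not part of the original answer):
library(dplyr)

dat %>%
  group_by(id) %>%
  filter(s != 3 | cumsum(s == 3) == 1) %>%
  ungroup()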
