Find the favorite and analyse sequences in R

We have a daily meeting where participants nominate each other to speak. The first person is chosen randomly.
I have a dataframe that consists of names and the order of speech for every day.
The columns are day1, day2, day3, etc.
The values in the rows are numbers giving the order of speech on that particular day.
NA means that the person did not participate on that day.
Name day1 day2 day3 day4 ...
Albert 1 3 1 ...
Josh 2 2 NA
Veronica 3 5 3
Tim 4 1 2
Stew 5 4 4
...
I want to do two analyses. First, I want to create a dataframe of who has chosen whom the most times. (I know the result depends on whether a participant was already nominated earlier that day and therefore cannot be nominated again; I will handle that later, but for now this is enough.)
It should look like this:
Name Favorite
Albert Stew
Josh Veronica
Veronica Tim
Tim Stew
...
My questions (feel free to answer only one if you can):
1. What code should I use for this without having to manually put the names in a different dataframe?
2. How should I handle a tie, for example if Josh chose Veronica and Tim first the same number of times? Later I want to visualise the results, and I have no idea how to handle ties.
I would also like to analyse the results to visualise strong connections,
for example to show that there are people who usually choose each other.
Is there a good package that specialises in this? Or how should I go about it?
I do not need DNA-sequence analysis, only simple sequences like these, but I have not found a suitable package yet.
Thanks for your help!

If I am not misunderstanding your problem, here is some code to count how often each person chose each other person as the next speaker. I added a fourth day so that some counts are greater than 1. There are ties in the result; choosing the first row of each speaker ('who') group may be a solution (see the sketch after the result below):
library(dplyr)   # lead(), filter(), group_by(), summarise(), arrange(), %>%
library(purrr)   # map()

df <- read.table(textConnection(
"Name,day1,day2,day3,day4
Albert,1,3,1,3
Josh,2,2,,2
Veronica,3,5,3,1
Tim,4,1,2,4
Stew,5,4,4,5"), header = TRUE, sep = ",", stringsAsFactors = FALSE)

# For each day: order the names by speaking position (dropping NAs),
# pair each speaker with the person who spoke next, then count the pairs.
purrr::map(colnames(df)[-1],
           function(x) {
             who <- df$Name[order(df[[x]], na.last = NA)]
             data.frame(who, lead(who), stringsAsFactors = FALSE)
           }) %>%
  replyr::replyr_bind_rows() %>%   # dplyr::bind_rows() works here as well
  filter(!is.na(lead.who.)) %>%
  group_by(who, lead.who.) %>%
  summarise(n = n()) %>%
  arrange(who, desc(n))
Input:
Name day1 day2 day3 day4
1 Albert 1 3 1 3
2 Josh 2 2 NA 2
3 Veronica 3 5 3 1
4 Tim 4 1 2 4
5 Stew 5 4 4 5
Result:
# A tibble: 12 x 3
# Groups: who [5]
who lead.who. n
<chr> <chr> <int>
1 Albert Tim 2
2 Albert Josh 1
3 Albert Stew 1
4 Josh Albert 2
5 Josh Veronica 1
6 Stew Veronica 1
7 Tim Stew 2
8 Tim Josh 1
9 Tim Veronica 1
10 Veronica Josh 1
11 Veronica Stew 1
12 Veronica Tim 1
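Regarding question 2 and the network visualisation: one option is to keep all tied top choices per speaker, and to feed the counts into a general network package such as igraph. Below is a minimal sketch, assuming the summarised result above has been stored in a variable I'll call choices (a name introduced here, not part of the answer):
library(dplyr)
library(igraph)   # general-purpose network analysis and plotting

# `choices` is assumed to hold the summarised tibble above (who, lead.who., n).
# Keep every top choice per speaker; ties simply remain as multiple rows.
favorites <- choices %>%
  group_by(who) %>%
  filter(n == max(n)) %>%
  ungroup()

# Directed, weighted graph: the first two columns form the edge list,
# the remaining column (n) becomes an edge attribute.
g <- graph_from_data_frame(choices, directed = TRUE)
plot(g, edge.width = E(g)$n, edge.arrow.size = 0.5)
Edge width then reflects how often one person chose the other, which makes pairs who usually choose each other stand out.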


Reshaping a dataset of patients with different numbers of diagnoses from long to wide [duplicate]

This question already has answers here: How to reshape data from long to wide format.
I am a beginner, confronted with a big task, and all the typical long-to-wide reshaping tools I found using the search function did not really do the job for me. I would be glad if someone could help me.
I am trying to achieve the following:
I have patient data in which every patient has a unique patient number, but multiple hospital stays have led to multiple cases per person. I want to work with these cases. The problem is, I have all the diagnoses per case, but not everybody has the same number of diagnoses, and I don't know how to tell R to create a new diagnosis (and date of diagnosis) variable each time there is already a diagnosis. Any help is highly appreciated!
So, I have a huge dataset that looks roughly like that:
Patient Case Diagnosis DateOfDiagnosis
1 John Doe 1 A 2010-10-10
2 John Doe 1 B 2010-10-10
3 John Doe 1 C 2010-10-10
4 Peter Griffin 2 D 2010-10-11
5 Peter Griffin 2 E 2010-10-11
6 Homer Simpson 3 F 2010-10-12
7 Homer Simpson 4 G 2010-10-13
I need one row per case, and I need all the diagnoses and their dates in separate variables. This would be no problem, but there is no pattern in the cases or diagnoses: some patients have only one case, others five, and some cases have one diagnosis, others five, each with its respective date.
So what I need looks like this:
Patient Case Diag1 DateOfDiag1 Diag2 DateOfDiag2 Diag3 DateOfDiag3 ....
1 John Doe 1 A 2010-10-10 B 2010-10-10 C 2010-10-10
2 Peter Grif 2 D 2010-10-11 E 2010-10-11 NA NA
3 Homer Simp 3 F 2010-10-12 NA NA NA NA
4 Homer Simp 4 G 2010-10-13 NA NA NA NA
The code for my example is:
Patient <- c('John Doe','John Doe','John Doe', 'Peter Griffin','Peter Griffin', 'Homer Simpson', 'Homer Simpson')
Case <- c(1,1,1,2,2,3,4)
Diagnosis <- c('A','B','C','D','E','F','G')
DateOfDiagnosis <- as.Date(c('2010-10-10','2010-10-10','2010-10-10','2010-10-11','2010-10-11','2010-10-12','2010-10-13'))
df<-data.frame(Patient, Case, Diagnosis, DateOfDiagnosis)
Any help is highly appreciated!
Kind regards,
Jan
You could use pivot_wider, after creating a unique column.
library(dplyr)
library(tidyr)

df %>%
  group_by(Patient, Case) %>%
  mutate(row = row_number()) %>%
  pivot_wider(values_from = c(Diagnosis, DateOfDiagnosis), names_from = row)
# Patient Case Diagnosis_1 Diagnosis_2 Diagnosis_3 DateOfDiagnosis_1 DateOfDiagnosis_2 DateOfDiagnosis_3
# <fct> <dbl> <fct> <fct> <fct> <date> <date> <date>
#1 John Doe 1 A B C 2010-10-10 2010-10-10 2010-10-10
#2 Peter Griffin 2 D E NA 2010-10-11 2010-10-11 NA
#3 Homer Simpson 3 F NA NA 2010-10-12 NA NA
#4 Homer Simpson 4 G NA NA 2010-10-13 NA NA
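If you want column names closer to the Diag1/DateOfDiag1 style shown in the question, pivot_wider's names_glue argument lets you control how the new names are built. A sketch with the same data (this yields Diagnosis1, DateOfDiagnosis1, etc.; shortening the names further would be a separate renaming step):
library(dplyr)
library(tidyr)

df %>%
  group_by(Patient, Case) %>%
  mutate(row = row_number()) %>%
  pivot_wider(values_from = c(Diagnosis, DateOfDiagnosis),
              names_from  = row,
              names_glue  = "{.value}{row}")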

Complex merging in R with duplicate matching values in y set producing problems

So I'm trying to merge two dataframes. Dataframe x looks something like:
Name ParentID
Steve 1
Kevin 1
Stacy 1
Paula 4
Evan 7
Dataframe y looks like:
ParentID OtherStuff
1 things
2 stuff
3 item
4 ideas
5 short
6 help
7 me
The dataframe I want would look like:
Name ParentID OtherStuff
Steve 1 things
Kevin 1 things
Stacy 1 things
Paula 4 ideas
Evan 7 me
Using a left merge gives me substantially more observations than I want, with many duplicates. Any idea how to merge so that the matching row of y is simply repeated wherever it matches x?
I'm working with datasets set up similarly to the example: x has 5013 observations, while y has 6432. Using the merge function as described by Joel and thelatemail gives me 1627727 observations.
We can use match from base R
df1$OtherStuff <- with(df1, df2$OtherStuff[match(ParentID, df2$ParentID)])
df1
# Name ParentID OtherStuff
#1 Steve 1 things
#2 Kevin 1 things
#3 Stacy 1 things
#4 Paula 4 ideas
#5 Evan 7 me
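If you would rather keep a join, deduplicating y on ParentID first should avoid the row explosion. A sketch using dplyr, with x and y named df1 and df2 as in the answer above (this assumes the duplicated ParentID rows in y carry the same OtherStuff, so keeping the first one is acceptable):
library(dplyr)

df1 %>%
  left_join(distinct(df2, ParentID, .keep_all = TRUE),  # one row per ParentID in y
            by = "ParentID")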

Classification according to unique values

I have a data frame named Records with two columns, Rank and Name:
Rank Name
1 Ashish
1 Ashish
2 Ashish
3 Mark
4 Mark
1 Mark
3 Spencer
2 Spencer
1 Spencer
2 Mary
4 Joseph
I want every name to be assigned a tag of 1, 2, 3 or 4 depending on its occurrence and uniqueness.
I want to create a new vector named Tagging.
The output should be:
Rank 1 has three unique elements, Mark, Spencer and Ashish, so the tag is 1 for all three.
Rank 2 has one unique record, Mary, since Ashish has already been assigned tag 1, so Mary is tagged as 2.
Rank 3 has no unique records, since Spencer and Mark have already been assigned 1, so I cannot tag 3 to anybody.
Rank 4 has one unique record, Joseph, so he gets tagged as 4.
Let me know which function can help me do this.
I do not want to use looping, as this is a 1,000,000-row database.
The solution below follows the principle that a person's best (lowest-numbered) Rank is going to be that person's tag.
tbl <- read.table(header=TRUE, text='
Rank Name
1 Ashish
1 Ashish
2 Ashish
3 Mark
4 Mark
1 Mark
3 Spencer
2 Spencer
1 Spencer
2 Mary
4 Joseph
')
Ordering the 'tbl' dataframe by Rank
tbl_ord <- tbl[with(tbl,order(Rank)),]
Removing multiple occurrence of name within same Rank
> name_ord<- tbl_ord[duplicated(tbl_ord$Rank),]
> name_ord
Rank Name
2 1 Ashish
6 1 Mark
9 1 Spencer
8 2 Spencer
10 2 Mary
7 3 Spencer
11 4 Joseph
Displaying unique Names
#name_ord[unique(name_ord$Name),] #this will work too
> name_ord[!duplicated(name_ord$Name),]
Rank Name
2 1 Ashish
6 1 Mark
9 1 Spencer
10 2 Mary
11 4 Joseph
Using the setkey function of the data.table package together with unique():
library(data.table)
dt<-data.table(Rank=c(1,1,2,3,4,1,3,2,1,2,4), Name=c(rep("Ashish", 3), rep("Mark", 3), rep("Spencer", 3), "Mary", "Joseph"))
setkey(dt, Rank, Name)
dt<-unique(dt)
setkey(dt, Name)
dt<-unique(dt) # works because of the above setkey call which sorted it
setkey(dt, Rank) # if you want to order them by Rank again
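If the rule really is that each name gets its best (lowest-numbered) Rank as its tag, a vectorised dplyr sketch (no explicit loops, so it should scale to a million rows) would be:
library(dplyr)

tbl %>%
  group_by(Name) %>%
  summarise(Tag = min(Rank))   # Ashish 1, Joseph 4, Mark 1, Mary 2, Spencer 1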

Erasing duplicates with NA values

I have a data frame like this:
names <- c('Mike','Mike','Mike','John','John','John','David','David','David','David')
dates <- c('04-26','04-26','04-27','04-28','04-27','04-26','04-01','04-02','04-02','04-03')
values <- c(NA,1,2,4,5,6,1,2,NA,NA)
test <- data.frame(names,dates,values)
Which is:
names dates values
1 Mike 04-26 NA
2 Mike 04-26 1
3 Mike 04-27 2
4 John 04-28 4
5 John 04-27 5
6 John 04-26 6
7 David 04-01 1
8 David 04-02 2
9 David 04-02 NA
10 David 04-03 NA
I'd like to get rid of duplicates with NA values. So, in this case, I have a valid observation from Mike on 04-26 and also have a valid observation from David on 04-02, so rows 1 and 9 should be erased and I will end up with:
names dates values
1 Mike 04-26 1
2 Mike 04-27 2
3 John 04-28 4
4 John 04-27 5
5 John 04-26 6
6 David 04-01 1
7 David 04-02 2
8 David 04-03 NA
I tried to use the duplicated function, something like this:
test[!duplicated(test[,c('names','dates')]),]
But that does not work since some NA values come before the valid value. Do you have any suggestions without trying things like merge or making another data frame?
Update: I'd like to keep rows with NA that are not duplicates.
What about this way?
library(dplyr)
test %>% group_by(names, dates) %>% filter((n()>=2 & !is.na(values)) | n()==1)
Source: local data frame [8 x 3]
Groups: names, dates [8]
names dates values
(fctr) (fctr) (dbl)
1 Mike 04-26 1
2 Mike 04-27 2
3 John 04-28 4
4 John 04-27 5
5 John 04-26 6
6 David 04-01 1
7 David 04-02 2
8 David 04-03 NA
Here is an attempt in data.table:
# set up
library(data.table)
setDT(test)
# construct condition
test[, dupes := max(duplicated(.SD)), .SDcols=c("names", "dates"), by=c("names", "dates")]
# print out result
test[dupes == 0 | !is.na(values),]
Here is a similar method using base R, except that the dupes variable is kept separately from the data.frame:
dupes <- duplicated(test[c("names", "dates")])
# this generates warnings, but works nonetheless
dupes <- ave(dupes, test$names, test$dates, FUN=max)
# print out result
test[dupes == 0 | !is.na(test$values),]
If there are duplicated rows where the values variable is NA, and these duplicates add nothing to the data, then you can drop them prior to running the code above:
testNoNADupes <- test[!(duplicated(test) & is.na(test$values)),]
This should work based on your sample.
test <- test[order(test$values),]
test <- test[!(duplicated(test$names) & duplicated(test$dates) & is.na(test$values)),]
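For completeness, the same idea as a dplyr sketch: push the rows with NA values to the bottom, then keep only the first row of each names/dates pair.
library(dplyr)

test %>%
  arrange(is.na(values)) %>%                 # rows with NA values go last
  distinct(names, dates, .keep_all = TRUE)   # keep the first row per names/dates pair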

Find the minimum sum of K rows of an MxN matrix, where you need to have 1 item in each column

Let's say the table below describes the time it takes 5 people to finish 5 jobs.
Job1 Job2 Job3 Job4 Job5
Bob 1 2 3 4 5
Joe 2 1 3 4 5
Tom 2 3 1 4 5
May 2 3 1 2 5
Sue 2 3 4 5 1
If the company has enough money to hire all 5 people, then I can see that the minimum time to complete everything is
Bob does Job1
Joe does Job2
Tom does Job3
May does Job4
Sue does Job5
1 + 1 + 1 + 2 + 1 = 6 units
I found that the fastest way to solve this is to use the Hungarian algorithm
(referenced: find the minimum sum of an n x n matrix selecting only one element in each row and column).
Also, if the company can only hire 1 person, May will be hired, because everyone else takes 15 units to finish everything, whereas May takes only 13 units.
However, if the company has enough budget to hire 3 people, what's the fastest algorithm to find out which 3 people I should hire?
In the above example, it should be Bob (or Joe) + May + Sue, and their total will be
Bob (or Joe) does Job 1
Bob (or Joe) does Job 2
May does Job 3
May does Job 4
Sue does Job 5
1 + 2 + 1 + 2 + 1 = 7 units
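I don't know a specialised fast algorithm for the hire-k variant, but note that once a hired person may take several jobs, each job simply goes to the fastest of the hired people, so the cost of any chosen set of rows is the sum of the column-wise minima over those rows. For instances this small, a brute force over all combinations is easy to check; here is a sketch in R (the matrix below just re-enters the table from the question):
# Cost of hiring a set of people = sum over jobs of the fastest hired person's time.
times <- matrix(c(1, 2, 3, 4, 5,
                  2, 1, 3, 4, 5,
                  2, 3, 1, 4, 5,
                  2, 3, 1, 2, 5,
                  2, 3, 4, 5, 1),
                nrow = 5, byrow = TRUE,
                dimnames = list(c("Bob", "Joe", "Tom", "May", "Sue"),
                                paste0("Job", 1:5)))

k <- 3
combos <- combn(rownames(times), k)   # all ways to choose k people
costs  <- apply(combos, 2, function(people) sum(apply(times[people, ], 2, min)))

combos[, which.min(costs)]   # "Bob" "May" "Sue" (Joe/May/Sue ties at the same cost)
min(costs)                   # 7 units, matching the worked example above
This enumerates choose(M, k) subsets, so it is only practical for small M; as far as I know, the Hungarian algorithm covers the one-job-per-person case, not this variant.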
