Assigning objects' IDs by a common value in different variables - R

I have a dataset with self-ratings and peer-ratings. The dataset is in long format. Before reshaping the dataset into wide-format, I want to give self-ratings and peer-ratings a common ID so that I can later match the peer-ratings to the self-ratings by that ID. The data look like this:
| questionnaire | ID | REF | SERIAL | x | y |
|---------------|----|------|--------|----|----|
| self | 1 | 1234 | NA | 4 | NA |
| self | 2 | 2345 | NA | 6 | NA |
| peer | NA | NA | 1234 | NA | 8 |
| peer | NA | NA | 2345 | NA | 4 |
The self-ratings have a reference variable ("REF") which refers to a peer-rating; the matching peer-rating carries the same value in the variable "SERIAL".
I'm now trying to give each peer-rating the same ID as the self-rating whose REF matches its SERIAL value. The table should then look like this:
| questionnaire | ID | REF | SERIAL | x | y |
|---------------|----|------|--------|----|----|
| self | 1 | 1234 | NA | 4 | NA |
| self | 2 | 2345 | NA | 6 | NA |
| peer | 1 | NA | 1234 | NA | 8 |
| peer | 2 | NA | 2345 | NA | 4 |
What would be the best way to do this?

Maybe I can help you. I would change the ID values directly in the data frame. For that, I would loop over the rows, pick up the ID from the matching counterpart, and fill it into the NAs. In the following code example I look up the correct ID for every peer row; map_df() does the looping and returns the result as a data frame.
library(purrr)
library(dplyr)

marcel.data <- data.frame(questionnaire = c("self", "self", "peer", "peer"),
                          ID = c(1, 2, NA, NA),
                          REF = c("1234", "2345", NA, NA),
                          SERIAL = c(NA, NA, "1234", "2345"),
                          x = c(4, 6, NA, NA),
                          y = c(NA, NA, 8, 4))

new_id_data <- map_df(1:nrow(marcel.data), function(i) {
  if (marcel.data[i, "questionnaire"] == "peer") {  # matching peer to self
    serial <- marcel.data[i, "SERIAL"]
    new.id <- marcel.data %>%                       # getting the needed ID
      filter(REF == serial & questionnaire == "self") %>%
      pull(ID)
    marcel.data[i, "ID"] <- new.id                  # adding the new ID to the old data
    marcel.data[i, ]
  } else {
    marcel.data[i, ]
  }
})
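An alternative without the explicit row loop, for comparison: build a lookup from the self rows and match on it. Just a sketch, assuming the same marcel.data as above:

library(dplyr)

# REF -> ID lookup taken from the self rows
lookup <- marcel.data %>%
  filter(questionnaire == "self") %>%
  select(REF, ID)

# fill the peer IDs by matching SERIAL against the self REF values
marcel.data$ID <- ifelse(marcel.data$questionnaire == "peer",
                         lookup$ID[match(marcel.data$SERIAL, lookup$REF)],
                         marcel.data$ID)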

Merging two data frames without duplicating metric values

I have two data frames and I want to merge them by leader values, so that I can see the total runs and walks for each group. Each leader can have multiple members in their team, but the problem I'm having is that when I merge them, the metrics get duplicated over to the newly added rows.
Here is an example of the two data sets that I have:
Data set 1:
+-------------+-----------+------------+-------------+
| leader name | leader id | total runs | total walks |
+-------------+-----------+------------+-------------+
| ab | 11 | 4 | 9 |
| tg | 47 | 8 | 3 |
+-------------+-----------+------------+-------------+
Data set 2:
+-------------+-----------+--------------+-----------+
| leader name | leader id | member name | member id |
+-------------+-----------+--------------+-----------+
| ab | 11 | gfh | 589 |
| ab | 11 | tyu | 739 |
| tg | 47 | rtf | 745 |
| tg | 47 | jke | 996 |
+-------------+-----------+--------------+-----------+
I want to merge the two datasets so that they become like this:
+-------------+-----------+--------------+------------+------------+-------------+
| leader name | leader id | member name | member id | total runs | total walks |
+-------------+-----------+--------------+------------+------------+-------------+
| ab | 11 | gfh | 589 | 4 | 9 |
| ab | 11 | tyu | 739 | | |
| tg | 47 | rtf | 745 | 8 | 3 |
| tg | 47 | jke | 996 | | |
+-------------+-----------+--------------+------------+------------+-------------+
But right now I keep getting:
+-------------+-----------+--------------+------------+------------+-------------+
| leader name | leader id | member name | member id | total runs | total walks |
+-------------+-----------+--------------+------------+------------+-------------+
| ab | 11 | gfh | 589 | 4 | 9 |
| ab | 11 | tyu | 739 | 4 | 9 |
| tg | 47 | rtf | 745 | 8 | 3 |
| tg | 47 | jke | 996 | 8 | 3 |
+-------------+-----------+--------------+------------+------------+-------------+
It doesn't matter if they're blank, NAs, or 0s, as long as the values aren't duplicated. Is there a way to achieve this?
We can do a replace on those 'total' columns after a left_join:
library(dplyr)
left_join(df2, df1) %>%
  group_by(leadername) %>%
  mutate_at(vars(starts_with('total')), ~ replace(., row_number() > 1, NA))
# A tibble: 4 x 6
# Groups: leadername [2]
# leadername leaderid membername memberid totalruns totalwalks
# <chr> <dbl> <chr> <dbl> <dbl> <dbl>
#1 ab 11 gfh 589 4 9
#2 ab 11 tyu 739 NA NA
#3 tg 47 rtf 745 8 3
#4 tg 47 jke 996 NA NA
Or without using the group_by
left_join(df2, df1) %>%
  mutate_at(vars(starts_with('total')), ~ replace(., duplicated(leadername), NA))
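On dplyr >= 1.0, mutate_at() is superseded by across(); as a sketch, the same replacement in the newer style would be:

left_join(df2, df1) %>%
  group_by(leadername) %>%
  mutate(across(starts_with('total'), ~ replace(.x, row_number() > 1, NA))) %>%
  ungroup()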
Or a base R option is
out <- merge(df2, df1, all.x = TRUE)
i1 <- duplicated(out$leadername)
out[i1, c("totalruns", "totalwalks")] <- NA
out
# leadername leaderid membername memberid totalruns totalwalks
#1 ab 11 gfh 589 4 9
#2 ab 11 tyu 739 NA NA
#3 tg 47 rtf 745 8 3
#4 tg 47 jke 996 NA NA
data
df1 <- structure(list(leadername = c("ab", "tg"), leaderid = c(11, 47
), totalruns = c(4, 8), totalwalks = c(9, 3)), class = "data.frame", row.names = c(NA,
-2L))
df2 <- structure(list(leadername = c("ab", "ab", "tg", "tg"), leaderid = c(11,
11, 47, 47), membername = c("gfh", "tyu", "rtf", "jke"), memberid = c(589,
739, 745, 996)), class = "data.frame", row.names = c(NA, -4L))

Grouping data based on repetitive records using R

I have a dataset which contains repeated/common records. It looks something like this:
| Vendor | Buyer | Amount |
|--------|:-----:|-------:|
| A | P | 100 |
| B | P | 150 |
| C | Q | 300 |
| A | P | 290 |
I need to group similar records together, but I do not want to summarize my amount; I want each amount value to be represented individually. The output should look something like this:
| Vendor | Buyer | Amount |
|--------|:-----:|-------:|
| A | P | 100 |
| A | P | 290 |
| | | |
| B | P | 150 |
| | | |
| C | Q | 300 |
I thought of using split(), but since my original data has too many records, split() creates too many lists and it becomes tedious to create new datasets from them. How can I achieve the output stated above with another method?
EDIT:
Let us assume that we have an additional column called date and the dataset now looks like this:
| Vendor | Buyer | Amount | Date |
|--------|:-----:|-------:|-----------|
| A | P | 100 | 3/6/2019 |
| B | P | 150 | 7/6/2018 |
| C | Q | 300 | 4/21/2018 |
| A | P | 290 | 6/5/2018 |
Once each vendor and buyer are grouped together, I need to arrange the dates in ascending order within each vendor/buyer group, so that it looks something like this:
| Vendor | Buyer | Amount | Date |
|--------|:-----:|-------:|-----------|
| A | P | 290 | 6/5/2018 |
| A | P | 100 | 3/6/2019 |
| | | | |
| B | P | 150 | 7/6/2018 |
| | | | |
| C | Q | 300 | 4/21/2018 |
and then remove the single transactions to get the final table containing only
| Vendor | Buyer | Amount | Date |
|--------|:-----:|-------:|----------|
| A | P | 290 | 6/5/2018 |
| A | P | 100 | 3/6/2019 |
In the following we sort the data frame and add a group column, which allows easy subsequent processing of individual groups. For example, to process the groups without creating a large split of DF:
for(g in unique(DFout$group)) {
  DFsub <- subset(DFout, group == g)
  # ... process DFsub ...
}
1) Base R Sort the data and then assign the group column using cumsum on the non-duplicated elements.
o <- with(DF, order(Vendor, Buyer))
DFo <- DF[o, ]
DFout <- transform(DFo, group = cumsum(!duplicated(data.frame(Vendor, Buyer))))
DFout
giving:
Vendor Buyer Amount group
1 A P 100 1
4 A P 290 1
2 B P 150 2
3 C Q 300 3
I am not sure this is such a good idea to do in the first place but if you really want to add a row of NAs after each group:
# repeat each group's values with a trailing NA, then map the non-NA slots
# back to row numbers so that DFout[ix, ] yields an NA row after each group
ix <- unname(unlist(tapply(DFout$group, DFout$group, function(x) c(x, NA))))
ix[!is.na(ix)] <- seq_len(nrow(DFout))
DFout[ix, ]
2) data.table Convert to data.table, set the key (which sorts it) and use rleid to assign the group number.
library(data.table)
DT <- data.table(DF)
setkey(DT, Vendor, Buyer)
DT[, group := rleid(Vendor, Buyer)]
3) sqldf Another approach is to use SQL. This requires an RSQLite build with window-function support (SQLite >= 3.25); at the time of writing that meant the development version of RSQLite on GitHub. Here dense_rank acts similarly to rleid above.
library(sqldf)
sqldf("select *, dense_rank() over (order by Vendor, Buyer) as [group]
from DF
order by Vendor, Buyer")
giving:
Vendor Buyer Amount group
1 A P 100 1
2 A P 290 1
3 B P 150 2
4 C Q 300 3
Note
DF <- structure(list(Vendor = structure(c(1L, 2L, 3L, 1L), .Label = c("A",
"B", "C"), class = "factor"), Buyer = structure(c(1L, 1L, 2L,
1L), .Label = c("P", "Q"), class = "factor"), Amount = c(100L,
150L, 300L, 290L)), class = "data.frame", row.names = c(NA, -4L
))
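Regarding the edit: a minimal base R sketch for ordering by date within each vendor/buyer group and then dropping single transactions, assuming DF additionally carries the Date column from the edit (in m/d/Y format):

DF$Date <- as.Date(DF$Date, format = "%m/%d/%Y")
DFo <- DF[with(DF, order(Vendor, Buyer, Date)), ]
DFo$group <- cumsum(!duplicated(DFo[c("Vendor", "Buyer")]))
# keep only vendor/buyer groups with more than one transaction
DFo[ave(DFo$group, DFo$group, FUN = length) > 1, ]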

Data imputation for empty subsetted dataframes in R

I'm trying to build a function in R with which I can subset my raw dataframe according to some specifications and thereafter convert the subsetted dataframe into a proportion table.
Unfortunately, some of these subsets yield an empty dataframe, since for some combinations I have no data, and hence no proportion table can be calculated. What I would like to do in those cases is take the closest time step for which I have a non-empty subset and use it as input for the empty one.
Here are some insights into my dataframe and function:
My raw dataframe looks +/- as follows:
| year | quarter | area | time_comb | no_individuals | lenCls | age |
|------|---------|------|-----------|----------------|--------|-----|
| 2005 | 1 | 24 | 2005.1.24 | 8 | 380 | 3 |
| 2005 | 2 | 24 | 2005.2.24 | 4 | 490 | 2 |
| 2005 | 1 | 24 | 2005.1.24 | 3 | 460 | 6 |
| 2005 | 1 | 21 | 2005.1.21 | 25 | 400 | 2 |
| 2005 | 2 | 24 | 2005.2.24 | 1 | 680 | 6 |
| 2005 | 2 | 21 | 2005.2.21 | 2 | 620 | 5 |
| 2005 | 3 | 21 | 2005.3.21 | NA | NA | NA |
| 2005 | 1 | 21 | 2005.1.21 | 1 | 510 | 5 |
| 2005 | 1 | 24 | 2005.1.24 | 1 | 670 | 4 |
| 2006 | 1 | 22 | 2006.1.22 | 2 | 750 | 4 |
| 2006 | 4 | 24 | 2006.4.24 | 1 | 660 | 8 |
| 2006 | 2 | 24 | 2006.2.24 | 8 | 540 | 3 |
| 2006 | 2 | 24 | 2006.2.24 | 4 | 560 | 3 |
| 2006 | 1 | 22 | 2006.1.22 | 2 | 250 | 2 |
| 2006 | 3 | 22 | 2006.3.22 | 1 | 520 | 2 |
| 2006 | 2 | 24 | 2006.2.24 | 1 | 500 | 2 |
| 2006 | 2 | 22 | 2006.2.22 | NA | NA | NA |
| 2006 | 2 | 21 | 2006.2.21 | 3 | 480 | 2 |
| 2006 | 1 | 24 | 2006.1.24 | 1 | 640 | 5 |
| 2007 | 4 | 21 | 2007.4.21 | 2 | 620 | 3 |
| 2007 | 2 | 21 | 2007.2.21 | 1 | 430 | 3 |
| 2007 | 4 | 22 | 2007.4.22 | 14 | 410 | 2 |
| 2007 | 1 | 24 | 2007.1.24 | NA | NA | NA |
| 2007 | 2 | 24 | 2007.2.24 | NA | NA | NA |
| 2007 | 3 | 24 | 2007.3.22 | NA | NA | NA |
| 2007 | 4 | 24 | 2007.4.24 | NA | NA | NA |
| 2007 | 3 | 21 | 2007.3.21 | 1 | 560 | 4 |
| 2007 | 1 | 21 | 2007.1.21 | 7 | 300 | 3 |
| 2007 | 3 | 23 | 2007.3.23 | 1 | 640 | 5 |
Here year, quarter and area refer to a particular time (year and quarter) and area for which a number of individuals were measured (no_individuals). For example, from the first row we get that in the first quarter of 2005, in area 24, I had 8 individuals belonging to a length class (lenCls) of 380 mm with age = 3. It is worth mentioning that for a particular year, quarter and area combination I can have different length classes and ages (thus, multiple rows)!
So what I want to do is basically to subset the raw dataframe for a particular year, quarter and area combination, and from that combination calculate a proportion table based on the number of individuals in each length class.
So far my basic function looks as follows:
LAK <- function(df, Year="2005", Quarter="1", Area="22", alkplot=T){
  require(FSA)
  # subset alk by year, quarter and area
  sALK <- subset(df, year==Year & quarter==Quarter & area==Area)
  dfexp <- sALK[rep(seq(nrow(sALK)), sALK$no_individuals), 1:ncol(sALK)]
  raw <- t(table(dfexp$lenCls, dfexp$age))
  key <- round(prop.table(raw, margin=1), 3)
  if(alkplot==TRUE){
    alkPlot(key, "area", xlab="Age")
  }
  key
}
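For reference, a call would then look like this (assuming the raw dataframe shown above is stored as rawdat):
LAK(rawdat, Year = "2005", Quarter = "1", Area = "24")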
From the dataset example above, one can notice that for year=2005 & quarter=3 & area=21 I do not have any measured individuals. Yet, for the same area and year I have data for quarters 1 and 2. The most reasonable assumption would be to take the subsetted dataframe from the closest time step (here quarter 2 with the same area and year) and fill the NAs in the columns "no_individuals", "lenCls" and "age" accordingly.
Note also that in some cases I do not have data for a particular year at all! In the example above, one can see this by looking at area 24 in year 2007. In this case I cannot borrow the information from the nearest quarter and would need to borrow from the previous year instead. This would mean that for year=2007 & area=24 & quarter=1 I would borrow the information from year=2006 & area=24 & quarter=1, and so on and so forth.
I have tried to include this in my function by specifying some extra rules, but due to my poor programming skills I didn't make any progress.
So, any help here will be very much appreciated.
Here is my LAK function which I'm trying to update:
LAK <- function(df, Year="2005", Quarter="1", Area="22", alkplot=T){
  require(FSA)
  # subset alk by year, quarter and area
  sALK <- subset(df, year==Year & quarter==Quarter & area==Area)
  # In case of an empty or all-NA subset
  #if(is.data.frame(sALK) && nrow(sALK)==0){
  if(nrow(sALK) == 0 || any(rowSums(is.na(sALK)) > 0)){
    warning("Empty subset combination; data will be subsetted based on the nearest timestep combination")
    # FIXME: INCLUDE IMPUTATION RULES HERE
  }
  dfexp <- sALK[rep(seq(nrow(sALK)), sALK$no_individuals), 1:ncol(sALK)]
  raw <- t(table(dfexp$lenCls, dfexp$age))
  key <- round(prop.table(raw, margin=1), 3)
  if(alkplot==TRUE){
    alkPlot(key, "area", xlab="Age")
  }
  key
}
So, I finally came up with a partial solution to my problem and will include my function here in case it is of interest to someone:
LAK <- function(df, Year="2005", Quarter="1", Area="22", alkplot=T){
  require(FSA)
  # subset alk by year, quarter, area and species
  sALK <- subset(df, year==Year & quarter==Quarter & area==Area)
  print(sALK)
  if(nrow(sALK)==1){  # only a single (all-NA) row: borrow the nearest quarter
    warning("Empty subset combination; data has been subsetted to the nearest input combination")
    syear <- unique(as.numeric(as.character(sALK$year)))
    sarea <- unique(as.numeric(as.character(sALK$area)))
    sALK2 <- subset(df, year==syear & area==sarea)
    vals <- as.data.frame(table(sALK2$comb_index))
    colnames(vals)[1] <- "comb_index"
    idx <- which(vals$Freq > 1)
    quarterId <- as.numeric(as.character(vals[idx, "comb_index"]))
    imput <- subset(df, year==syear & area==sarea & comb_index==quarterId)
    dfexp2 <- imput[rep(seq(nrow(imput)), imput$no_at_length_age), 1:ncol(imput)]
    raw2 <- t(table(dfexp2$lenCls, dfexp2$age))
    key2 <- round(prop.table(raw2, margin=1), 3)
    print(key2)
    if(alkplot==TRUE){
      alkPlot(key2, "area", xlab="Age")
    }
  } else {
    dfexp <- sALK[rep(seq(nrow(sALK)), sALK$no_at_length_age), 1:ncol(sALK)]
    raw <- t(table(dfexp$lenCls, dfexp$age))
    key <- round(prop.table(raw, margin=1), 3)
    print(key)
    if(alkplot==TRUE){
      alkPlot(key, "area", xlab="Age")
    }
  }
}
This solves my problem when I have data for at least one quarter of a particular year & area combination. Yet, I'm still struggling to figure out how to handle the case where I do not have data for a particular year & area combination at all. There I need to borrow data from the closest year that contains data for the same area.
For the example above, this would mean that for year=2007 & area=24 & quarter=1 I would borrow the information from year=2006 & area=24 & quarter=1, and so on and so forth.
I don't know if you have ever encountered mice, but it is a pretty cool and comprehensive tool for variable imputation. It also allows you to see how the imputed data are distributed, so that you can choose the method most suited to your problem. Check the brief explanation and the original package description.
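As a minimal sketch of what that could look like here, assuming the raw dataframe above is stored as rawdat (the column selection and method are illustrative assumptions, not recommendations):

library(mice)
# impute the three measurement columns; "pmm" (predictive mean matching)
# is mice's default method for numeric data
imp <- mice(rawdat[, c("no_individuals", "lenCls", "age")],
            m = 5, method = "pmm", seed = 1)
completed <- complete(imp, 1)  # first of the 5 imputed datasets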

Calculate a mean and deal with NA

I got a dataset (df) that looks like this:
| LETTER | VALUE |
|--------|-------|
| A | 2 |
| A | 3 |
| B | 4 |
| B | NA |
| B | 6 |
| B | NA |
| C | NA |
| C | NA |
I'm looking for a way to create a second dataset (new_df) based on the mean of VALUE for each LETTER. But I need to know which letters have only NAs.
new_df should look like this:
| LETTER | VALUE |
|--------|-------|
| A | 2.5 |
| B | 5 |
| C | NA |
Here is the code I tried :
new_df <- aggregate(as.numeric(VALUE) ~ LETTER, df, mean)
The issue with it is that it omits NA rows and only returns this:
| LETTER | VALUE |
|--------|-------|
| A | 2.5 |
| B | 5 |
Can you please help?
You may just change the defaults of aggregate():
aggregate(as.numeric(VALUE) ~ LETTER, df, function(x) mean(x, na.rm=TRUE),
          na.action = na.pass)
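For comparison, a dplyr sketch of the same idea, assuming the df above; with na.rm=TRUE an all-NA group such as C yields NaN, which is mapped back to NA:

library(dplyr)
df %>%
  group_by(LETTER) %>%
  summarise(VALUE = mean(as.numeric(VALUE), na.rm = TRUE)) %>%
  mutate(VALUE = ifelse(is.nan(VALUE), NA, VALUE))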

Why won't my column name change work in R?

This is part of a script I'm writing to merge the columns more fully after using merge().
If both datasets have a column with the same name, merge() gives you columns column.x and column.y. I have written a script to put this data together and to drop the unneeded columns (which would be column.y and column.x_error, a column I've added to give warnings in case dat$column.x != dat$column.y). I also want to rename column.x to column, to reduce unneeded manual actions on my dataset. I have not managed to rename column.x to column; see the code for more info.
dat is obtained by doing dat = merge(data1, data2, by = "ID", all.x = TRUE).
#obtain a list of double columns
dubbelkol = cbind()
sorted = sort(names(dat))
for(i in as.numeric(1:length(names(dat)))) {
  if(grepl(".x", sorted[i])){
    if (grepl(".y", sorted[i+1]) && (sub(".x","",sorted[i])==sub(".y","",sorted[i+1]))){
      dubbelkol = cbind(dubbelkol, sorted[i], sorted[i+1])
    }
  }
}
#Check data, fill in NA in column.x from column.y if possible
temp = cbind()
for (p in as.numeric(1:(length(dubbelkol)-1))){
  if(grepl(".x", dubbelkol[p])){
    dat[dubbelkol[p]][is.na(dat[dubbelkol[p]])] = dat[dubbelkol[p+1]][is.na(dat[dubbelkol[p]])]
    temp = (dat[dubbelkol[p]] != dat[dubbelkol[p+1]])
    colnames(temp) = (paste(dubbelkol[p], "_error", sep=""))
    dat[colnames(temp)] = temp
  }
}
#If every value in "column.x_error" is TRUE or NA, delete "column.y" and "column.x_error"
#Rename "column.x" to "column"
#from here until the next comment everything works
droplist = c()
for (k in as.numeric(1:length(names(dat)))) {
  if (grepl(".x_error", colnames(dat[k]))) {
    if (all(dat[k]==FALSE, na.rm = TRUE)) {
      droplist = c(droplist, colnames(dat[k]), sub(".x_error",".y",colnames(dat[k])))
      #the next line doesn't work; it's supposed to strip the .x suffix before the .y and .x_error columns are dropped
      colnames(dat[sub(".x_error",".x",colnames(dat[k]))]) = paste(sub(".x_error","",colnames(dat[k])))
    }
  }
}
dat = dat[,!names(dat) %in% droplist]
paste(sub(".x_error","",colnames(dat[k]))) will give me "BNR" just fine, but the colnames(...) = ... won't change the column name in dat.
Any idea what's going wrong?
data1
+----+-------+
| ID | BNR |
+----+-------+
| 1 | 123 |
| 2 | 234 |
| 3 | NA |
| 4 | 456 |
| 5 | 677 |
| 6 | NA |
+----+-------+
data2
+----+-------+
| ID | BNR |
+----+-------+
| 1 | 123 |
| 2 | 234 |
| 3 | 345 |
| 4 | 456 |
| 5 | 677 |
| 6 | NA |
+----+-------+
dat
+----+-------+-------+-----------+
| ID | BNR.x | BNR.y |BNR.x_error|
+----+-------+-------+-----------+
| 1 | 123 | NA |FALSE |
| 2 | 234 | 234 |FALSE |
| 3 | NA | 345 |FALSE |
| 4 | 456 | 456 |FALSE |
| 5 | 677 | 677 |FALSE |
| 6 | NA | NA |NA |
+----+-------+-------+-----------+
desired output
+----+-------+
| ID | BNR |
+----+-------+
| 1 | 123 |
| 2 | 234 |
| 3 | 345 |
| 4 | 456 |
| 5 | 677 |
| 6 | NA |
+----+-------+
I suggest replacing:
sub(".x_error", ".x", colnames(dat[k]))
with:
sub("\\.x_error", "\\.x", colnames(dat[k]))
if you wish to match an actual ".": you have to escape "." as "\\.", because in a regex a "." matches any character.
Even better, since you are replacing "." with ".", why not just say:
sub("x_error", "x", colnames(dat[k]))
(or) if there is no _error other than x_error, simply:
sub("_error", "", colnames(dat[k]))
Edit: The problem seems to be that your data format loads additional columns on the left and the right. You can select the columns you want first and then merge.
d1 <- read.table(textConnection("| ID | BNR |
| 1 | 123 |
| 2 | 234 |
| 3 | NA |
| 4 | 456 |
| 5 | 677 |
| 6 | NA |"), sep = "|", header = TRUE, stringsAsFactors = FALSE)[,2:3]
d1$BNR <- as.numeric(d1$BNR)
d2 <- read.table(textConnection("| 1 | 123 |
| 2 | 234 |
| 3 | 345 |
| 4 | 456 |
| 5 | 677 |
| 6 | NA |"), header = FALSE, sep = "|", stringsAsFactors = FALSE)[,2:3]
names(d2) <- c("ID", "BNR")
d2$BNR <- as.numeric(d2$BNR)
# > d1
# ID BNR
# 1 1 123
# 2 2 234
# 3 3 NA
# 4 4 456
# 5 5 677
# 6 6 NA
# > d2
# ID BNR
# 1 1 123
# 2 2 234
# 3 3 345
# 4 4 456
# 5 5 677
# 6 6 NA
dat <- merge(d1, d2, by = "ID", all = TRUE)
dat
# ID BNR.x BNR.y
# 1 1 123 123
# 2 2 234 234
# 3 3 NA 345
# 4 4 456 456
# 5 5 677 677
# 6 6 NA NA
# replace all NA values in x from y
dat$BNR.x <- ifelse(is.na(dat$BNR.x), dat$BNR.y, dat$BNR.x)
# now remove y and rename x to get the desired output
dat$BNR.y <- NULL
names(dat)[names(dat) == "BNR.x"] <- "BNR"
