Row-wise Boolean comparison of data in R

I have grouped my data by the appropriate grouping variables, and I need to check that the "x" and "y" values equal each other for each unique combination of Group1 and Group2. In other words, what code could I use to cycle through this dataset and ensure that A1x == A1y, A2x == A2y, and so on?
"Group1","Group2","group3","values"
"A" "1" x 10
"A" "1" y 10
"A" "2" x 15
"A" "2" y 15
To help make the answer easier, here is the data.frame from the example
d <- data.frame(Group1 = c("A", "A", "A", "A"),
                Group2 = c("1", "1", "2", "2"),
                group3 = c("x", "y", "x", "y"),
                values = c(10, 10, 15, 15))

With dplyr, you can do:
library(dplyr)
d %>%
  group_by(Group1, Group2) %>%
  mutate(cond = all(values == first(values)))
Group1 Group2 group3 values cond
<fct> <fct> <fct> <dbl> <lgl>
1 A 1 x 10 TRUE
2 A 1 y 10 TRUE
3 A 2 x 15 TRUE
4 A 2 y 15 TRUE
Or:
d %>%
  group_by(Group1, Group2) %>%
  mutate(cond = n_distinct(values) == 1)
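If you only need to surface the offending groups, the same grouped logic works inside filter() (a usage sketch, my addition, not part of the original answers):
d %>%
  group_by(Group1, Group2) %>%
  filter(n_distinct(values) > 1)  # rows of groups whose x and y values disagree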

You can also do this with pivot_wider:
tidyr::pivot_wider(d, names_from = 'group3', values_from = 'values') %>%
  dplyr::mutate(eq = x == y)

I think you went further than needed in turning your data into long format; the wide format may be easier to work with here:
library(dplyr)
library(tidyr)
d %>%
  pivot_wider(names_from = group3, values_from = values) %>%
  mutate(is_equal = x == y)

Here is a base R solution using ave():
d <- within(d, isequal <- as.logical(ave(values, Group1, Group2,
                                         # all(...) returns one logical per group, which ave() recycles
                                         FUN = function(v) all(v == v[1]))))
such that
> d
Group1 Group2 group3 values isequal
1 A 1 x 10 TRUE
2 A 1 y 10 TRUE
3 A 2 x 15 TRUE
4 A 2 y 15 TRUE

Another option, if the data is sorted so that each group occupies two consecutive rows:
d$check <- rep(d$values[seq(1L, nrow(d), 2L)] == d$values[seq(2L, nrow(d), 2L)], each = 2L)
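This relies entirely on row order. A slightly more defensive sketch (my addition) sorts first, so the x and y rows of each group are guaranteed to be adjacent:
# Sort so each (Group1, Group2) pair occupies two consecutive rows
d2 <- d[order(d$Group1, d$Group2, d$group3), ]
odd  <- seq(1L, nrow(d2), 2L)  # the x rows
even <- seq(2L, nrow(d2), 2L)  # the y rows
d2$check <- rep(d2$values[odd] == d2$values[even], each = 2L)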

A simple way would be to merge the sub-tables for group x and group y and compare the values.
> d[d$group3=="y",]
# Group1 Group2 group3 values
# 2 A 1 y 10
# 4 A 2 y 15
> merge(d[d$group3=="y",],d[d$group3=="x",],by=c("Group1","Group2"))
# Group1 Group2 group3.x values.x group3.y values.y
# 1 A 1 y 10 x 10
# 2 A 2 y 15 x 15
with(merge(d[d$group3=="y",], d[d$group3=="x",],
           by=c("Group1","Group2")),
     values.x == values.y)
## [1] TRUE TRUE
Of course there are fancier ways of doing it, but it is not bad to start simple.

Related

Identify rows with a value greater than a threshold, but only the first one above per group

Suppose we have a dataset with a grouping variable, a value, and a threshold that is unique per group. Say I want to flag values that are greater than the threshold, but only the first such value in each group.
test <- data.frame(
  grp = c("A", "A", "A", "B", "B", "B"),
  value = c(1, 3, 5, 1, 3, 5),
  threshold = c(4, 4, 4, 2, 2, 2)
)
want <- data.frame(
  grp = c("A", "A", "A", "B", "B", "B"),
  value = c(1, 3, 5, 1, 3, 5),
  threshold = c(4, 4, 4, 2, 2, 2),
  want = c(NA, NA, "yes", NA, "yes", NA)
)
In the table above, group A has a threshold of 4, and only the value 5 is higher. In group B the threshold is 2, and both 3 and 5 are higher; however, only the row with value 3 is marked.
I was able to do this by identifying which rows had a value greater than the threshold, then removing the repeated flags:
library(dplyr)
test %>%
  group_by(grp) %>%
  mutate(want = if_else(value > threshold, "yes", NA_character_)) %>%
  mutate(across(want, ~replace(.x, duplicated(.x), NA)))
I was wondering if there was a direct way to do this using a single logical statement rather than this two-step method, something along the lines of:
test %>%
  group_by(grp) %>%
  mutate(want = if_else(???, "yes", NA_character_))
The answer doesn't have to be on R either. Just a logical step explanation would suffice as well. Perhaps using a rank?
Thank you!
library(dplyr)
test %>%
  group_by(grp) %>%
  mutate(want = value > threshold,
         # default = FALSE guards the first row of each group against NA
         want = want & !lag(cumany(want), default = FALSE)) %>%
  ungroup()
# # A tibble: 6 × 4
# grp value threshold want
# <chr> <dbl> <dbl> <lgl>
# 1 A 1 4 FALSE
# 2 A 3 4 FALSE
# 3 A 5 4 TRUE
# 4 B 1 2 FALSE
# 5 B 3 2 TRUE
# 6 B 5 2 FALSE
If you really want strings, you can if_else after this.
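A sketch of that conversion (my addition; res is just a name for the result of the pipeline above):
res <- test %>%
  group_by(grp) %>%
  mutate(want = value > threshold,
         want = want & !lag(cumany(want), default = FALSE)) %>%
  ungroup()

# Turn the logical flag into "yes"/NA strings
res %>% mutate(want = if_else(want, "yes", NA_character_))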
Here is a more direct way.
The essential part: min(which(value > threshold)) gives the position of the first value above the threshold, and ifelse() compares that position to the row number:
library(dplyr)
test %>%
  group_by(grp) %>%
  # note: min(which(...)) warns if a group has no value above its threshold
  mutate(want = ifelse(row_number() == min(which(value > threshold)),
                       "yes", NA_character_))
grp value threshold want
<chr> <dbl> <dbl> <chr>
1 A 1 4 NA
2 A 3 4 NA
3 A 5 4 yes
4 B 1 2 NA
5 B 3 2 yes
6 B 5 2 NA
This is a perfect chance for a data.table answer using its non-equi matching and multiple match handling capabilities:
library(data.table)
setDT(test)
test[test, on=.(grp, value>threshold), mult="first", flag := TRUE]
test
# grp value threshold flag
# <char> <num> <num> <lgcl>
#1: A 1 4 NA
#2: A 3 4 NA
#3: A 5 4 TRUE
#4: B 1 2 NA
#5: B 3 2 TRUE
#6: B 5 2 NA
Find the "first" matching value in each group that is greater than (>) the threshold and set (:=) it to TRUE.

Find point in dataframe where (col_1[i], col_2[i]) = (col_1[j], -col_2[j])

There might be an obvious solution to this that I have missed but here goes:
Consider the data frame below. I wish to create a column with TRUE/FALSE values, where the value is TRUE whenever the condition (col_1[i], col_2[i]) = (col_1[j], -col_2[j]) is fulfilled. Note that a simple sum per group does not work here, since there might be a third value (e.g. group y below sums to 3 + (-3) + 4 = 4, not 0).
To elaborate; what I have is:
col_1 <- c("x", "x", "y", "y", "y", "z", "z")
col_2 <- c(-1, 1, 3, -3, 4, 7, 3)
df <- data.frame(col_1, col_2)
What I want is a logical check column that is TRUE exactly for the rows that form such a pair (see the expected output in the first answer below).
I think the answer must be something with df %>% group_by(col_1), but I can't think of the complete solution.
Here is my attempt. As you said, grouping the data is necessary. I defined groups by col_1 and foo, where foo contains the absolute values of col_2. If a group has more than one observation and the number of distinct col_2 values in it equals 2, you have the pairs you are searching for.
library(dplyr)
group_by(df, col_1, foo = abs(col_2)) %>%
  mutate(check = n() > 1 & n_distinct(col_2) == 2) %>%
  ungroup() %>%
  select(-foo)
col_1 col_2 check
<fct> <dbl> <lgl>
1 x -1 TRUE
2 x 1 TRUE
3 y 3 TRUE
4 y -3 TRUE
5 y 4 FALSE
6 z 7 FALSE
7 z 3 FALSE
As Ronak previously mentioned, there may be cases like this.
col_1 <- c("x", "x", "y", "y", "y", "z", "z")
col_2 <- c(1, 1, 3, -3, 4, 7, 3)
df2 <- data.frame(col_1, col_2)
col_1 col_2
1 x 1
2 x 1
3 y 3
4 y -3
5 y 4
6 z 7
7 z 3
group_by(df2, col_1, foo = abs(col_2)) %>%
  mutate(check = n() > 1 & n_distinct(col_2) == 2) %>%
  ungroup() %>%
  select(-foo)
col_1 col_2 check
<fct> <dbl> <lgl>
1 x 1 FALSE
2 x 1 FALSE
3 y 3 TRUE
4 y -3 TRUE
5 y 4 FALSE
6 z 7 FALSE
7 z 3 FALSE
You can try the following base R code, where a custom function f is defined to find the index pairs whose values sum to zero:
f <- function(v) {
  # indices of all pairs (i, j) with v[i] + v[j] == 0
  unique(c(combn(seq_along(v), 2)[, combn(v, 2, sum) == 0]))
}
dfout <- Reduce(rbind,
                lapply(split(df, df$col_1),
                       function(v) {
                         v$col_3 <- FALSE
                         v$col_3[f(v$col_2)] <- TRUE
                         v
                       }))
dfout <- dfout[order(as.numeric(rownames(dfout))),]
such that
> dfout
col_1 col_2 col_3
1 x -1 TRUE
2 x 1 TRUE
3 y 3 TRUE
4 y -3 TRUE
5 y 4 FALSE
6 z 7 FALSE
7 z 3 FALSE
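One caveat (my note, not part of the original answer): combn() errors when a group contains a single row, so a guarded variant of f may be safer:
f <- function(v) {
  if (length(v) < 2) return(integer(0))  # combn() needs at least one pair
  unique(c(combn(seq_along(v), 2)[, combn(v, 2, sum) == 0]))
}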

Group data by factor level, then transform to data frame with column names being the levels?

Here is my problem, which I can't solve:
Data:
df <- data.frame(f1 = c("a", "a", "b", "b", "c", "c", "c"),
                 v1 = c(10, 11, 4, 5, 0, 1, 2))
In this data.frame, f1 is a factor:
f1 v1
a 10
a 11
b 4
b 5
c 0
c 1
c 2
# What I want is (for example): fetch the data for levels that have exactly 2 elements, then turn it into a data.frame with one column per level:
a b
10 4
11 5
Thanks in advance!
I might be missing something simple here, but the approach below using dplyr works.
library(dplyr)
library(tidyr)
nlevels <- 2
df1 <- df %>%
  add_count(f1) %>%
  filter(n == nlevels) %>%
  select(-n) %>%
  mutate(rn = row_number()) %>%
  spread(f1, v1) %>%
  select(-rn)
This gives
# a b
# <int> <int>
#1 10 NA
#2 11 NA
#3 NA 4
#4 NA 5
Now, if you want to remove the NAs, we can do:
do.call("cbind.data.frame", lapply(df1, function(x) x[!is.na(x)]))
# a b
#1 10 4
#2 11 5
As we have filtered the data frame down to levels with exactly nlevels observations, we will have the same number of rows for each column in the final data frame.
split might be useful here to split df$v1 into parts corresponding to df$f1. Since you are always extracting equal length chunks, it can then simply be combined back to a data.frame:
spl <- split(df$v1, df$f1)
data.frame(spl[lengths(spl)==2])
# a b
#1 10 4
#2 11 5
Or do it all in one call by combining this with Filter:
data.frame(Filter(function(x) length(x)==2, split(df$v1, df$f1)))
# a b
#1 10 4
#2 11 5
Here is a solution using unstack:
unstack(
  droplevels(df[ave(df$v1, df$f1, FUN = function(x) length(x) == 2) == 1, ]),
  v1 ~ f1)
# a b
# 1 10 4
# 2 11 5
A variant, similar to @thelatemail's solution:
data.frame(Filter(function(x) length(x) == 2, unstack(df,v1 ~ f1)))
My tidyverse solution would be:
library(tidyverse)
df %>%
  group_by(f1) %>%
  filter(n() == 2) %>%
  mutate(i = row_number()) %>%
  spread(f1, v1) %>%
  select(-i)
# # A tibble: 2 x 2
# a b
# * <dbl> <dbl>
# 1 10 4
# 2 11 5
Or, mixing approaches:
as_tibble(keep(unstack(df,v1 ~ f1), ~length(.x) == 2))
Using all base functions (but you should use the tidyverse):
# Work on a copy of df
x <- df
# Add a count of instances per level
x$len <- ave(x$v1, x$f1, FUN = length)
# Filter, then drop the count
x <- x[x$len == 2, c('f1', 'v1')]
# Hacky pivot: one column per remaining level
result <- data.frame(
  lapply(unique(x$f1), FUN = function(y) x$v1[x$f1 == y])
)
colnames(result) <- unique(x$f1)
> result
a b
1 10 4
2 11 5
I'd code it like this; maybe it helps you:
library(reshape2)
library(dplyr)
# Note: relative to the question, the column names are swapped here
# (v1 holds the letters, f1 the numbers)
aa <- data.frame(v1 = c('a','a','b','b','c','c','c'), f1 = c(10,11,4,5,0,1,2))
cc <- aa %>% group_by(v1) %>% summarise(id = length(v1))  # count per level
dd <- merge(aa, cc)               # attach the counts
ee <- dd[dd$id == 2, ]            # keep levels with exactly 2 elements
ee$id <- rep(c(1, 2), nrow(ee)/2) # reset index like (1,2,1,2)
dcast(ee, id ~ v1, value.var = 'f1')
all done!

How to rank column accordingly using case_when?

I want to create another column (called delayGrade) where the top 10% of values (closest to 0) from another column (averageDelay) get assigned the letter 'A', the next 25% 'B', and the remaining 'C'. I figured I could use case_when() to do so, but I'm not sure how to go about it. Any ideas?
Here is a toy data frame and a solution:
library(tidyverse)
df <- tibble(
  averageDelay = rnorm(10)
)
df %>%
  mutate(
    delayGrade = case_when(
      averageDelay < quantile(averageDelay, .1) ~ "A",
      averageDelay < quantile(averageDelay, .35) ~ "B",
      TRUE ~ "C"
    )
  ) %>%
  arrange(averageDelay) # Not necessary, but improves readability
# A tibble: 10 x 2
averageDelay delayGrade
<dbl> <chr>
1 -1.57878473 A
2 -1.00129022 B
3 -0.34245100 B
4 -0.08652020 B
5 -0.05240453 C
6 0.15732711 C
7 0.21509389 C
8 0.34202367 C
9 0.90296373 C
10 0.90820894 C
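Since the toy data is drawn with rnorm() and no seed, the exact values differ between runs; adding a set.seed() call (my addition) makes the example reproducible:
set.seed(1)  # any fixed seed works; results are then stable across runs
df <- tibble(averageDelay = rnorm(10))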

Bind data frames on longer identifiers R

I've got two data frames in which the unique identifiers common to both frames differ in their number of observations. I would like to create a data frame from both in which, for each common identifier, the observations are taken from whichever frame has more of them. For example:
f1 <- data.frame(x = c("a", "a", "b", "c", "c", "c"), y = c(1,1,2,3,3,3))
f2 <- data.frame(x = c("a","b", "b", "c", "c"), y = c(4,5,5,6,6))
I would like this to generate a merge based on the longer x such that it produces:
x y
a 1
a 1
b 5
b 5
c 3
c 3
c 3
Any and all thoughts would be great.
Here's a solution using split:
dd <- rbind(cbind(f1, s = "f1"), cbind(f2, s = "f2"))
keep <- unsplit(lapply(split(dd$s, dd$x), FUN = function(x) {
  y <- table(x)
  x == names(y[which.max(y)])
}), dd$x)
dd <- dd[keep, ]
Normally I'd prefer to use the ave function here, but because I'm changing data types from a factor to a logical it wasn't as appropriate, so I basically copied the idea that ave uses and used split.
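For reference, a sketch of what the ave route might look like (my illustration, run on the freshly combined dd before the split-based filtering; ave() writes the logical result back through the character vector, hence the explicit conversions):
keep <- as.logical(ave(as.character(dd$s), dd$x,
                       FUN = function(s) s == names(which.max(table(s)))))
dd <- dd[keep, ]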
dplyr solution
library(dplyr)
First we combine the data with rbind() and introduce a new variable called ref to record where each observation came from:
both <- rbind( f1, f2 )
both$ref <- rep( c( "f1", "f2" ) , c( nrow(f1), nrow(f2) ) )
Then we count the observations: make another new variable that contains the number of observations for each ref and x combination:
both_with_counts <- both %>%
group_by( ref ,x ) %>%
mutate( counts = n() )
Then filter for the largest count:
both_with_counts %>% group_by( x ) %>% filter( counts==max(counts) )
Note: you could also keep only the x and y columns with select(x, y).
this gives:
## Source: local data frame [7 x 4]
## Groups: x
##
## x y ref counts
## 1 a 1 f1 2
## 2 a 1 f1 2
## 3 c 3 f1 3
## 4 c 3 f1 3
## 5 c 3 f1 3
## 6 b 5 f2 2
## 7 b 5 f2 2
Altogether now...
what_I_want <-
  rbind(cbind(f1, ref = "f1"), cbind(f2, ref = "f2")) %>%
  group_by(ref, x) %>%
  mutate(counts = n()) %>%
  group_by(x) %>%
  filter(counts == max(counts)) %>%
  select(x, y)
and thus:
> what_I_want
# Source: local data frame [7 x 2]
# Groups: x
#
# x y
# 1 a 1
# 2 a 1
# 3 c 3
# 4 c 3
# 5 c 3
# 6 b 5
# 7 b 5
Not an elegant answer, but it still gives the desired result. Hope this helps.
# Count observations per identifier in each frame
f1table <- data.frame(table(f1$x))
colnames(f1table) <- c("x","freq")
f1new <- merge(f1, f1table)
f2table <- data.frame(table(f2$x))
colnames(f2table) <- c("x","freq")
f2new <- merge(f2, f2table)
# For each x, keep the row with the larger count
table <- rbind(f1table, f2table)
table <- table[with(table, order(x, -freq)), ]
table <- table[!duplicated(table$x), ]
# Merge back to recover the corresponding y values
data <- rbind(f1new, f2new)
merge(data, table, by=c("x","freq"))[, c(1,3)]
x y
1 a 1
2 a 1
3 b 5
4 b 5
5 c 3
6 c 3
7 c 3
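For comparison, a more compact sketch along the lines of the dplyr answer above (my variant, using bind_rows() and add_count()):
library(dplyr)

bind_rows(f1 = f1, f2 = f2, .id = "ref") %>% # stack the frames, tagging the source
  add_count(ref, x, name = "counts") %>%     # observations per (frame, identifier)
  group_by(x) %>%
  filter(counts == max(counts)) %>%          # keep the better-represented frame
  ungroup() %>%
  select(x, y)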
