Determine R Data Frame Row Values from Column Value Comparison

I'm trying to compare the values in column C pairwise and return the rows associated with them. For example, compare the first two values in column C: if the first value is greater than the second, return the first two rows of the data frame; if not, skip to the next pair and check whether the third value in column C is greater than the fourth. If it is, return rows 3 and 4; if not, move on to the next pair, and so on.
I've been wrangling with the filter function from dplyr, but with no luck.
Below is an example data frame:
set.seed(99)
DF <- data.frame(abs(rnorm(10)), abs(rnorm(10)), abs(rnorm(10)))
colnames(DF) <-c("A", "B", "C")
DF
Any help would be appreciated.

You can use rollapply from the zoo package:
library(zoo)
ind <- rep(rollapply(DF$C, 2, by = 2, which.max) == 1, each = 2)
DF[ind,]
# A B C
#1 1.52984334 2.0127251 1.70922539
#2 1.96454540 0.2887642 0.52301701
#5 1.15765833 0.2866493 1.72702076
#6 0.80379719 1.0945894 0.72269558
#7 1.52239099 0.5296913 2.04080511
#8 0.01663749 0.3593682 0.88601771
#9 0.12672258 0.4110257 0.19165526
#10 0.27740770 0.1950477 0.01378397
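To see how this works, you can run the pieces separately (a breakdown of the same code; it assumes an even number of rows, since rep(..., each = 2) pairs them off):
library(zoo)
pair_winner <- rollapply(DF$C, 2, by = 2, which.max) # position (1 or 2) of the larger value in each non-overlapping pair
keep_pair <- pair_winner == 1                        # TRUE for pairs where the odd row wins
ind <- rep(keep_pair, each = 2)                      # stretch back out to one logical per row
DF[ind, ]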

Here is a base R solution you can try, where you find the index for every two rows based on the condition and then subset the data frame:
ind <- which(DF$C[c(T, F)] > DF$C[c(F, T)]) # check whether the odd rows are larger than
# the even rows and find out the index
DF[c(2*ind-1, 2*ind), ] # subset the data frame based on index for every two rows
# A B C
# 1 1.6866933 0.6886403 1.1231086
# 9 0.8781335 2.1689560 1.3686023
# 2 0.8377870 0.5539177 0.4028848
# 10 0.8215811 1.2079620 0.2257710
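Note that c(2*ind - 1, 2*ind) lists all the odd row numbers before the even ones, which is why the output above interleaves rows 1, 9, 2, 10. If you would rather keep the original row order, sort the indices first:
DF[sort(c(2*ind - 1, 2*ind)), ] # rows 1, 2, 9, 10 in their original order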

Related

How to find the longest sequence of non-NA rows in R?

I have an ordered dataframe with many variables, and am looking to extract the data from all columns associated with the longest sequence of non-NA rows for one particular column. Is there an easy way to do this? I have tried the na.contiguous() function but my data is not formatted as a time series.
My intuition is to create a running counter that tracks whether each row has an NA, counting the number of consecutive rows without one and restarting every time an NA is encountered. I could then collect the lengths of every non-NA sequence into a data frame and use it to find the longest such run. This seems very inefficient, so I'm wondering if there is a better way!
If I understand this phrase correctly:
[I] am looking to extract the data from all columns associated with the longest sequence of non-NA rows for one particular column
You have a column of interest, call it WANT, and are looking to isolate all columns from the single row of data with the highest count of consecutive non-NA values in WANT.
Example data
df <- data.frame(A = LETTERS[1:10],
                 B = LETTERS[1:10],
                 C = LETTERS[1:10],
                 WANT = LETTERS[1:10],
                 E = LETTERS[1:10])
set.seed(123)
df[sample(1:nrow(df), 2), 4] <- NA
# A B C WANT E
#1 A A A A A
#2 B B B B B
#3 C C C <NA> C
#4 D D D D D
#5 E E E E E
#6 F F F F F
#7 G G G G G
#8 H H H H H
#9 I I I I I # want to isolate this row (#9) since it ends the longest run of non-NAs in WANT
#10 J J J <NA> J
Here you would want all the I values, as row 9 is where the running count of non-NA values in WANT is highest.
If my interpretation of your question is correct, we can extend the excellent answer found here to your situation. This creates a data frame with a running tally of consecutive non-NA values for each column.
The benefit of this approach is that it counts consecutive non-NA runs across all columns (of any type, i.e. character or numeric); you can then index on whichever column you want using which.max().
# from @jay.sf at https://stackoverflow.com/questions/61841400/count-consecutive-non-na-items
res <- as.data.frame(lapply(lapply(df, is.na), function(x) {
  r <- rle(x)                                # runs of NA / non-NA per column
  s <- sapply(r$lengths, seq_len)            # running counter within each run
  s[r$values] <- lapply(s[r$values], `*`, 0) # zero out the NA runs
  unlist(s)
}))
# index using which.max()
want_data <- df[which.max(res$WANT), ]
#> want_data
# A B C WANT E
#9 I I I I I
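If you instead wanted every row of that longest stretch (rows 4 through 9 in this example) rather than just its final row, a small rle()-based sketch in the same spirit would be:
r <- rle(!is.na(df$WANT))                               # runs of non-NA (TRUE) and NA (FALSE)
best <- which(r$values)[which.max(r$lengths[r$values])] # index of the longest TRUE run
end <- cumsum(r$lengths)[best]                          # last row of that run
start <- end - r$lengths[best] + 1                      # first row of that run
df[start:end, ]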
If this isn't correct, please edit your question for clarity.

Filtering/subsetting R dataframe based on each row's nth position value

I have a 'df' with 2 columns:
Combinations <- c(0011111111, 0011113111, 0013113112, 0022223114)
Values <- c(1,2,3,4)
df <- cbind.data.frame(Combinations, Values)
I am trying to find a way to subset or filter the data frame where the Combinations column's 7th, 8th, and 9th digits equal 311. For the example given, I would expect the Combinations 0011113111, 0013113112, and 0022223114.
There are also instances where I would need to find different combinations, in different nth positions.
I know substring() can find these values for single rows but I'm not sure how to apply it to an entire dataframe.
substring will work with vectors as well:
subset(df, substring(Combinations, 7, 9) == 311)
# Combinations Values
#2 0011113111 2
#3 0013113112 3
#4 0022223114 4
data
Combinations <- c("0011111111", "0011113111", "0013113112", "0022223114")
Values <- c(1,2,3,4)
df <- data.frame(Combinations, Values)
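Since you mention needing different combinations at different nth positions, you could wrap this in a small helper (match_at is a hypothetical name, not an existing function):
# Hypothetical helper: keep rows whose digits at positions from..to equal pattern
match_at <- function(df, col, from, to, pattern) {
  subset(df, substring(df[[col]], from, to) == pattern)
}
match_at(df, "Combinations", 7, 9, "311")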
Another base R idea:
Combinations <- c("0011111111", "0011113111", "0013113112", "0022223114")
Values <- c(1,2,3,4)
df <- data.frame(Combinations, Values)
df[grep(pattern = "^[0-9]{6}311.$", df$Combinations), ]
Output:
Combinations Values
2 0011113111 2
3 0013113112 3
4 0022223114 4
As a tip, if you want to know more about regular expressions, this website helps me a lot: https://regexr.com/3elkd
Would this work?
library(dplyr)
library(stringr)
df %>% filter(str_sub(Combinations, 7,9) == 311)
Combinations Values
1 0011113111 2
2 0013113112 3
3 0022223114 4
Not pretty but works:
df[which(lapply(strsplit(df$Combinations, ""), function(x) which(x[7]==3 & x[8]==1 & x[9]==1))==1),]
Combinations Values
2 0011113111 2
3 0013113112 3
4 0022223114 4
Data:
Combinations <- c("0011111111", "0011113111", "0013113112", "0022223114")
Values <- c(1,2,3,4)
df <- cbind.data.frame(Combinations, Values)

Is there any way to delete the rows of data which don't have all numeric values?

I have data that has two columns. Each column mostly holds numerical values, but some entries are not numeric. I want to remove the rows in which not all values are numerical. In reality the data has 1000 rows, but for simplicity I made the data file smaller here. Thanks!
a <- c(1, 2, 3, 4, "--")
b <- c("--", 2, 3, "--", 5)
data <- data.frame(a, b)
One base R option could be:
data[!is.na(Reduce(`+`, lapply(data, as.numeric))), ]
a b
2 2 2
3 3 3
And for importing the data, use stringsAsFactors = FALSE.
Or using sapply():
data[!is.na(rowSums(sapply(data, as.numeric))), ]
An easier option is to check for NA after converting with as.numeric. If an element is not numeric, the conversion returns NA; that can be detected with is.na and used in filter_all to remove the rows:
library(dplyr)
data %>%
  filter_all(all_vars(!is.na(as.numeric(.))))
# a b
#1 2 2
#2 3 3
If we don't want the warnings, an option is to detect number-only elements with a regex, checking for one or more digits or dots ([0-9.]+) from the start (^) to the end ($) of the string with str_detect:
library(stringr)
data %>%
  filter_all(all_vars(str_detect(., "^[0-9.]+$")))
# a b
#1 2 2
#2 3 3
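As an aside, filter_all()/all_vars() are superseded in dplyr 1.0+; a roughly equivalent sketch with if_all() (assuming the columns are character, as in the data block below) would be:
data %>%
  filter(if_all(everything(), ~ !is.na(suppressWarnings(as.numeric(.x)))))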
If the only non-numeric marker is --, removal is even easier: data == "--" gives a logical matrix, rowSums() counts the -- entries per row, and ! keeps only the rows where that count is zero:
data[!rowSums(data == "--"),]
# a b
#2 2 2
#3 3 3
data
data <- data.frame(a, b, stringsAsFactors = FALSE)

Removing rows in data.frame having columns subsumed in others

I am trying to achieve something similar to unique in a data.frame where each element of a column in a row is a vector. What I want to do is: if the vector in one row is a subset of, or equal to, the vector in another row, remove the row with the smaller number of elements. I can achieve this with a nested for loop, but since the data contains 400,000 rows the program is very inefficient.
Sample data
# Set the seed for reproducibility
set.seed(42)
# Create a random data frame
mydf <- data.frame(items = rep(letters[1:4], length.out = 20),
                   grps = sample(1:5, 20, replace = TRUE),
                   supergrp = sample(LETTERS[1:4], replace = TRUE))
# Aggregate items into a single column
temp <- aggregate(items ~ grps + supergrp, mydf, unique)
# Arrange by number of items for each grp and supergroup
indx <- order(lengths(temp$items), decreasing = T)
temp <- temp[indx, , drop = FALSE]
temp looks like:
grps supergrp items
1 4 D a, c, d
2 3 D c, d
3 5 D a, d
4 1 A b
5 2 A b
6 3 A b
7 4 A b
8 5 A b
9 1 D d
10 2 D c
Now you can see that the combination of supergrp and items in the second and third rows is contained in the first row, so I want to delete the second and third rows from the result. Similarly, rows 5 to 8 are contained in row 4. Finally, rows 9 and 10 are also contained in the first row, so I want to delete them as well.
Hence, my result would look like:
grps supergrp items
1 4 D a, c, d
4 1 A b
My implementation is as follows:
# Initialise the result data frame with the first row of the old data frame
newdf <- temp[1, ]
# For all rows in the original data
for (i in 1:nrow(temp)) {
  # Flag to check whether all the items are found
  indx <- TRUE
  # Check if the items in the original data appear in the new data
  for (j in 1:nrow(newdf)) {
    if (all(c(temp$supergrp[[i]], temp$items[[i]]) %in%
            c(newdf$supergrp[[j]], newdf$items[[j]]))) {
      # Set indx to FALSE if a row with the same items and supergroup
      # as the old data is found in the new data
      indx <- FALSE
    }
  }
  # If no row in the new data contains the items and supergroup, append this row
  if (indx) {
    newdf <- rbind(newdf, temp[i, ])
  }
}
I believe there is an efficient way to implement this in R, maybe using the tidy framework and dplyr chains, but I am missing the trick. Apologies for a longish question; any input would be highly appreciated.
I would try to get the items out of the list column and store them in a longer data frame. Here is my somewhat hacky solution (note it needs purrr, dplyr, and tidyr loaded as well, not just stringr):
library(dplyr)
library(purrr)
library(stringr)
library(tidyr)
items <- temp$items %>%
  map(~str_split(., ",")) %>%
  map_df(~data.frame(.))
out <- bind_cols(temp[, c("grps", "supergrp")], items)
out %>%
  gather(item_name, item, -grps, -supergrp) %>%
  select(-item_name, -grps) %>%
  unique() %>%
  filter(!is.na(item))
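If you want to go all the way to the deduplicated result, here is a hedged O(n^2) base R sketch operating directly on temp's list column; it is fine for the aggregated table, though it would not scale to 400,000 raw rows:
# Drop row i if some other row j in the same supergrp has an item set that
# contains it, preferring the earlier row on exact ties
n <- nrow(temp)
subsumed <- vapply(seq_len(n), function(i) {
  any(vapply(seq_len(n), function(j) {
    j != i &&
      temp$supergrp[j] == temp$supergrp[i] &&
      all(temp$items[[i]] %in% temp$items[[j]]) &&
      (length(temp$items[[j]]) > length(temp$items[[i]]) || j < i)
  }, logical(1)))
}, logical(1))
temp[!subsumed, ] # rows 1 and 4 in the example above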

How do you test if a pair of elements is in a data frame?

Let's say I have this data frame A :
A = data.frame(first=c("a", "b","c", "d"), second=c(1, 2, 3, 4))
first second
1 a 1
2 b 2
3 c 3
4 d 4
And I have this data frame B :
B = data.frame(first=c("x", "a", "c"), second=c(1, 4, 3))
first second
1 x 1
2 a 4
3 c 3
I want to count the number of times a pair of the data frame B (B$first, B$second) is in the data frame A. The counting part is not the problem, I just can't find the function to determine whether a pair is in a data frame.
The result would be that only c("c", 3) is an element of A, so the count should be 1. Both "a" and 4 appear in data frame A, but the pair c("a", 4) does not, so I don't want to count it; I want exact matches only.
I'm looking for a function like %in% that could work for pairs.
Thanks for your help
Maybe something like this
apply(B, 1, function(r, A){ sum(A$first==r[1] & A$second==r[2]) }, A)
Basically, what it does is the following: for every row of B it applies a function that checks which rows of A agree with row r from B (the A$first == r[1] & A$second == r[2] part) and then sums the resulting logicals to get the number of rows in A that match row r.
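So to get the total count of B's pairs found in A, you would just sum that vector (stored here as tmp, which the grouping step below also uses):
tmp <- apply(B, 1, function(r, A){ sum(A$first==r[1] & A$second==r[2]) }, A)
tmp      # 0 0 1, one entry per row of B
sum(tmp) # 1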
If you also want grouping, it can easily be done with dplyr like this
cbind(B, tmp) %>% group_by(first, second) %>% summarise(n = max(tmp))
where tmp is the variable holding the result of the aforementioned apply
Here's an alternative: rbind your data.frames together and use duplicated.
AB <- do.call(rbind, mget(c("A", "B")))
AB$ind <- as.numeric(duplicated(AB))
AB[grep("^B", rownames(AB)), ]
# first second ind
# B.1 x 1 0
# B.2 a 4 0
# B.3 c 3 1
You can also probably try to use "digest" to generate a hash for each row, but I'm not sure how efficient this would be:
library(digest)
Reduce(function(x, y) y %in% x,
       lapply(mget(c("A", "B")), function(x)
         apply(x, 1, digest)))
# [1] FALSE FALSE TRUE
An alternative is to merge by row, e.g. mB <- apply(B, 1, function(j) paste0(j[1], "_", j[2])) and similarly for A, at which point you can loop over mB[j] %in% mA[k].
Not that I would really recommend doing this :-)
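That paste idea can also be vectorised without any loop (a small sketch; the default space separator of paste is fine here because neither column contains spaces):
keyA <- paste(A$first, A$second) # one key per row of A
keyB <- paste(B$first, B$second) # one key per row of B
keyB %in% keyA                   # FALSE FALSE TRUE
sum(keyB %in% keyA)              # 1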
