dplyr filter by the first column - r
Is it possible to filter in dplyr by the position of a column?
I know how to do it without dplyr
iris[iris[,1]>6,]
But how can I do it in dplyr?
Thanks!
Besides the suggestion by @thelatemail, you can also use filter_at and pass the column number to the vars parameter:
iris %>% filter_at(1, all_vars(. > 6))
all(iris %>% filter_at(1, all_vars(. > 6)) == iris[iris[,1] > 6, ])
# [1] TRUE
No magic, just use the column number, as above, rather than the variable (column) name:
library("dplyr")
iris %>%
  filter(iris[, 1] > 6)
Which, as @eipi10 commented, is better written as:
iris %>%
  filter(.[[1]] > 6)
dplyr >= 1.0.0
Scoped verbs (_if, _at, _all) and, by extension, all_vars() and any_vars() have been superseded by across(). In the case of filter(), the functions if_any() and if_all() were created to combine logic across multiple columns to aid in subsetting (these verbs are available in dplyr >= 1.0.4):
if_any() and if_all() are used to apply the same predicate function to a selection of columns and combine the results into a single logical vector.
The first argument to across(), if_any(), and if_all() is still tidy-select syntax for column selection, which includes selection by column position.
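As a quick illustration of position-based tidy-select (a minimal sketch using the built-in iris data; nothing here beyond base dplyr):

```r
library(dplyr)

# Selecting the first column by position ...
by_position <- iris %>% filter(if_any(1, ~ . > 6))

# ... or by name: both describe the same tidy-select column selection.
by_name <- iris %>% filter(if_any(Sepal.Length, ~ . > 6))

identical(by_position, by_name)  # TRUE
```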
Single Column
In your single-column case you could do any of the following, all with the same result:
iris %>%
  filter(across(1, ~ . > 6))

iris %>%
  filter(if_any(1, ~ . > 6))

iris %>%
  filter(if_all(1, ~ . > 6))
Multiple Columns
If you're applying a predicate function or formula across multiple columns, then across() might give unexpected results; in this case you should use if_any() or if_all():
iris %>%
  filter(if_all(c(2, 4), ~ . > 2.3))  # by column position
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 6.3 3.3 6.0 2.5 virginica
2 7.2 3.6 6.1 2.5 virginica
3 5.8 2.8 5.1 2.4 virginica
4 6.3 3.4 5.6 2.4 virginica
5 6.7 3.1 5.6 2.4 virginica
6 6.7 3.3 5.7 2.5 virginica
Notice this returns rows where all selected columns have a value greater than 2.3, which is a subset of the rows where any of the selected columns meets the condition:
iris %>%
  filter(if_any(ends_with("Width"), ~ . > 2.3))  # same columns as above, selected by name
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
5 5.0 3.6 1.4 0.2 setosa
6 6.7 3.3 5.7 2.5 virginica
7 6.7 3.0 5.2 2.3 virginica
8 6.3 2.5 5.0 1.9 virginica
9 6.5 3.0 5.2 2.0 virginica
10 6.2 3.4 5.4 2.3 virginica
11 5.9 3.0 5.1 1.8 virginica
The output above was shortened for compactness in this example.
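The subset relationship between the two verbs can be checked directly; a minimal sketch with the same column selection as above:

```r
library(dplyr)

strict <- iris %>% filter(if_all(ends_with("Width"), ~ . > 2.3))  # every selected column passes
loose  <- iris %>% filter(if_any(ends_with("Width"), ~ . > 2.3))  # at least one passes

# Every row kept by if_all() is necessarily kept by if_any().
nrow(strict) <= nrow(loose)  # TRUE
```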
Related
Function to filter data equal to or greater than a certain value
I have a dataframe containing thousands of rows and columns. The rows contain the names of genes and the columns the names of samples. I only want to keep the rows that contain a value equal to or greater than 5 in more than 3 samples. I tried this so far but I can't figure out how to set multiple conditions:

data.frame1 %>% filter_all(all_vars(. >= 5))

I hope I have stated this question correctly.
The way I do it in my gene expression filtering pre-differential-gene-expression pipeline is as follows:

data.frame1[rowSums(data.frame1 >= 5) > 3, ] -> filtered.counts

And if your first column is your gene identifier, with all the other columns being numeric, you can have the evaluation skip the first column as follows:

data.frame1[rowSums(data.frame1[-1] >= 5) > 3, ] -> filtered.counts
The way to do this in dplyr 1.0.0 is:

iris %>% filter(rowSums(across(where(is.numeric)) > 6) > 1)

  Sepal.Length Sepal.Width Petal.Length Petal.Width   Species
1          7.6         3.0          6.6         2.1 virginica
2          7.3         2.9          6.3         1.8 virginica
3          7.2         3.6          6.1         2.5 virginica
4          7.7         3.8          6.7         2.2 virginica
5          7.7         2.6          6.9         2.3 virginica
6          7.7         2.8          6.7         2.0 virginica
7          7.4         2.8          6.1         1.9 virginica
etc

For your case:

data.frame1 %>% filter(rowSums(across(where(is.numeric)) >= 5) > 3)
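As a sanity check that the base-R rowSums() subset and the dplyr across() filter select the same rows, here is a sketch using iris as a stand-in for the expression data (thresholds lowered to 2 so that iris actually contains matching rows):

```r
library(dplyr)

# Keep rows where more than 2 of the numeric columns exceed 2.
base_way  <- iris[rowSums(iris[sapply(iris, is.numeric)] > 2) > 2, ]
dplyr_way <- iris %>% filter(rowSums(across(where(is.numeric)) > 2) > 2)

# Same rows either way; only the row names differ.
nrow(base_way) == nrow(dplyr_way)  # TRUE
```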
How do I identify duplicates except for one column, and replace that column with max [duplicate]
This question already has answers here: Select the row with the maximum value in each group (19 answers) Closed 3 years ago.

I am trying to find data where three out of four columns are duplicated, and then to remove duplicates but keep the row with the largest number for the otherwise identical data. I found a very helpful article on StackOverflow which I think gets me about halfway there. I will base my question off the example in that question. (The example has more columns than what I am working on, but I don't think that really matters.)

require(tidyverse)
x = iris %>% select(-Petal.Width)
dups = x[x %>% duplicated(), ]
answer = iris %>% semi_join(dups)

> answer
   Sepal.Length Sepal.Width Petal.Length Petal.Width   Species
1           5.1         3.5          1.4         0.2    setosa
2           4.9         3.1          1.5         0.1    setosa
3           4.8         3.0          1.4         0.1    setosa
4           5.1         3.5          1.4         0.3    setosa
5           4.9         3.1          1.5         0.2    setosa
6           4.8         3.0          1.4         0.3    setosa
7           5.8         2.7          5.1         1.9 virginica
8           6.7         3.3          5.7         2.1 virginica
9           6.4         2.8          5.6         2.1 virginica
10          6.4         2.8          5.6         2.2 virginica
11          5.8         2.7          5.1         1.9 virginica
12          6.7         3.3          5.7         2.5 virginica

That article introduced me to code that will identify all rows where everything is equal except Petal.Width:

iris[duplicated(iris[,-4]) | duplicated(iris[,-4], fromLast = TRUE), ]

This is great, but I don't know how to progress from here. I would like rows 2 and 5 to collapse into a single row that is equal to row 5. Similarly, 9 & 10 should become just 10, and 8 & 12 become just 12. The data set I have has more than 2 rows in some sets of duplicates, so I haven't had any luck using arrange functions to order them and delete the smallest row.
This should do what you want:

iris %>%
  group_by(Sepal.Length, Sepal.Width, Petal.Length, Species) %>%
  filter(Petal.Width == max(Petal.Width)) %>%
  filter(row_number() == 1) %>%
  ungroup()

The second filter() is to get rid of duplicates if the Petal.Width is also identical for two entries. Does this work for you?
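If you're on dplyr >= 1.0.0, the same deduplication can be sketched with slice_max() instead of the two filter() calls (an alternative, not the answer above; with_ties = FALSE keeps a single row even when Petal.Width is tied within a group):

```r
library(dplyr)

# One row per combination of the other columns, keeping the largest Petal.Width.
deduped <- iris %>%
  group_by(Sepal.Length, Sepal.Width, Petal.Length, Species) %>%
  slice_max(Petal.Width, n = 1, with_ties = FALSE) %>%
  ungroup()
```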
Write a tidyeval function to rename a factor level in a dplyr
I'm trying to write a tidyeval function that takes a numeric column, replaces values above a certain limit with the value of limit, turns that column into a factor, and then replaces the factor level equal to limit with a level called "limit+". For example, I'm trying to replace any value above 3 in Sepal.Width with 3 and then rename that factor level to 3+. Here's how I'm trying to make it work with the iris dataset. The fct_recode() function is not renaming the factor level properly, though.

plot_hist <- function(x, col, limit) {
  col_enq <- enquo(col)
  x %>%
    mutate(var = factor(ifelse(!!col_enq > limit, limit, !!col_enq)),
           var = fct_recode(var, assign(paste(limit, "+", sep = ""), paste(limit))))
}

plot_hist(iris, Sepal.Width, 3)
To fix the last line, we can use the special symbol :=, since we need to set the value on the left-hand side of the expression. For the RHS we need to coerce to character, since fct_recode expects a character vector on the right.

library(tidyverse)

plot_hist <- function(x, col, limit) {
  col_enq <- enquo(col)
  x %>%
    mutate(var = factor(ifelse(!!col_enq > limit, limit, !!col_enq)),
           var = fct_recode(var, !!paste0(limit, "+") := as.character(limit)))
}

plot_hist(iris, Sepal.Width, 3) %>% sample_n(10)
#>     Sepal.Length Sepal.Width Petal.Length Petal.Width    Species  var
#> 40           5.1         3.4          1.5         0.2     setosa   3+
#> 98           6.2         2.9          4.3         1.3 versicolor  2.9
#> 7            4.6         3.4          1.4         0.3     setosa   3+
#> 99           5.1         2.5          3.0         1.1 versicolor  2.5
#> 76           6.6         3.0          4.4         1.4 versicolor   3+
#> 77           6.8         2.8          4.8         1.4 versicolor  2.8
#> 85           5.4         3.0          4.5         1.5 versicolor   3+
#> 119          7.7         2.6          6.9         2.3  virginica  2.6
#> 110          7.2         3.6          6.1         2.5  virginica   3+
#> 103          7.1         3.0          5.9         2.1  virginica   3+
create a dummy variable (using mutate) based on a pattern in a character string
I'm trying to figure out how to create a dummy variable based on a pattern in a character string. The point is to end up with a simple way to make certain aspects of my ggplot (color, linetype, etc.) the same for samples that have something in common (such as different types of mutations of the same gene -- each sample name contains the name of the gene, plus some other characters).

As an example with the iris dataset, let's say I want to add a column (my dummy variable) that will have one value for species whose names contain the letter "v", and another value for species that don't. (In the real dataset, I have many more possible categories.)

I've been trying to use mutate and recode, str_detect, or if_else, but can't seem to get the syntax right. For instance,

mutate(iris, anyV = ifelse(str_detect('Species', "v"), "withV", "noV"))

doesn't throw any errors, but it doesn't detect that any of the species names contain a v, either. Which I think has to do with my inability to figure out how to get str_detect to work:

iris %>% select(Species) %>% str_detect("setosa")

just returns [1] FALSE, and

iris %>% filter(str_detect('Species', "setosa"))

doesn't work, either. (I've also tried things like a mutate/recode solution, based on an example in 7 Most Practically Useful Operations When Wrangling Text Data in R, but can't get that to work, either.)

What am I doing wrong? And how do I fix it?
This works:

library(stringr)
iris %>%
  mutate(anyV = ifelse(str_detect(Species, "v"), "withV", "noV"))

   Sepal.Length Sepal.Width Petal.Length Petal.Width    Species  anyV
1           5.1         3.5          1.4         0.2     setosa   noV
2           4.9         3.0          1.4         0.2     setosa   noV
3           4.7         3.2          1.3         0.2     setosa   noV
4           4.6         3.1          1.5         0.2     setosa   noV
5           5.0         3.6          1.4         0.2     setosa   noV
...
52          6.4         3.2          4.5         1.5 versicolor withV
53          6.9         3.1          4.9         1.5 versicolor withV
54          5.5         2.3          4.0         1.3 versicolor withV
55          6.5         2.8          4.6         1.5 versicolor withV
56          5.7         2.8          4.5         1.3 versicolor withV
57          6.3         3.3          4.7         1.6 versicolor withV
58          4.9         2.4          3.3         1.0 versicolor withV
59          6.6         2.9          4.6         1.3 versicolor withV

An alternative to nested ifelse statements:

iris %>%
  mutate(newVar = case_when(
    str_detect(.$Species, "se") ~ "group1",
    str_detect(.$Species, "ve") ~ "group2",
    str_detect(.$Species, "vi") ~ "group3",
    TRUE ~ as.character(.$Species)))
Smart spreadsheet parsing (managing group sub-header and sum rows, etc)
Say you have a set of spreadsheets formatted like so:

Is there an established method/library to parse this into R without having to individually edit the source spreadsheets? The aim is to parse header rows and dispense with sum rows so the output is the raw data, like so:

   Sepal.Length Sepal.Width Petal.Length Petal.Width    Species
1           5.1         3.5          1.4         0.2     setosa
2           4.9         3.0          1.4         0.2     setosa
3           4.7         3.2          1.3         0.2     setosa
4           7.0         3.2          4.7         1.4 versicolor
5           6.4         3.2          4.5         1.5 versicolor
6           6.9         3.1          4.9         1.5 versicolor
7           5.7         2.8          4.1         1.3 versicolor
8           6.3         3.3          6.0         2.5  virginica
9           5.8         2.7          5.1         1.9  virginica
10          7.1         3.0          5.9         2.1  virginica

I can certainly hack a tailored solution to this, but am wondering if there is something a bit more developed/elegant than read.csv and a load of logic. Here's a reproducible demo csv dataset (can't assume an equal number of lines per group), although I'm hoping the solution can transpose to *.xlsx:

,Sepal.Length,Sepal.Width,Petal.Length,Petal.Width
,,,,
Setosa,,,,
1,5.1,3.5,1.4,0.2
2,4.9,3,1.4,0.2
3,4.7,3.2,1.3,0.2
Mean,4.9,3.23,1.37,0.2
,,,,
Versicolor,,,,
1,7,3.2,4.7,1.4
2,6.4,3.2,4.5,1.5
3,6.9,3.1,4.9,1.5
Mean,6.77,3.17,4.7,1.47
,,,,
Virginica,,,,
1,6.3,3.3,6,2.5
2,5.8,2.7,5.1,1.9
3,7.1,3,5.9,2.1
Mean,6.4,3,5.67,2.17
There is a variety of ways to present spreadsheets, so it would be hard to have a consistent methodology for all presentations. However, it is possible to transform the data once it is loaded in R. Here's an example with your data. It uses the function na.locf from package zoo.

x <- read.csv(text=",Sepal.Length,Sepal.Width,Petal.Length,Petal.Width
,,,,
Setosa,,,,
1,5.1,3.5,1.4,0.2
2,4.9,3,1.4,0.2
3,4.7,3.2,1.3,0.2
Mean,4.9,3.23,1.37,0.2
,,,,
Versicolor,,,,
1,7,3.2,4.7,1.4
2,6.4,3.2,4.5,1.5
3,6.9,3.1,4.9,1.5
Mean,6.77,3.17,4.7,1.47
,,,,
Virginica,,,,
1,6.3,3.3,6,2.5
2,5.8,2.7,5.1,1.9
3,7.1,3,5.9,2.1
Mean,6.4,3,5.67,2.17", header=TRUE, stringsAsFactors=FALSE)

library(zoo)
x <- x[x$X != "Mean", ]                     # remove Mean lines
x$Species <- x$X                            # create Species column
x$Species[grepl("[0-9]", x$Species)] <- NA  # put NA if Species contains numbers
x$Species <- na.locf(x$Species)             # carry last observation forward over NA
x <- x[!rowSums(is.na(x)) > 0, ]            # remove lines with NA

    X Sepal.Length Sepal.Width Petal.Length Petal.Width    Species
3   1          5.1         3.5          1.4         0.2     Setosa
4   2          4.9         3.0          1.4         0.2     Setosa
5   3          4.7         3.2          1.3         0.2     Setosa
9   1          7.0         3.2          4.7         1.4 Versicolor
10  2          6.4         3.2          4.5         1.5 Versicolor
11  3          6.9         3.1          4.9         1.5 Versicolor
15  1          6.3         3.3          6.0         2.5  Virginica
16  2          5.8         2.7          5.1         1.9  Virginica
17  3          7.1         3.0          5.9         2.1  Virginica
I just recently did something similar. Here was my solution:

iris <- read.csv(text=",Sepal.Length,Sepal.Width,Petal.Length,Petal.Width
,,,,
Setosa,,,,
1,5.1,3.5,1.4,0.2
2,4.9,3,1.4,0.2
3,4.7,3.2,1.3,0.2
Mean,4.9,3.23,1.37,0.2
,,,,
Versicolor,,,,
1,7,3.2,4.7,1.4
2,6.4,3.2,4.5,1.5
3,6.9,3.1,4.9,1.5
Mean,6.77,3.17,4.7,1.47
,,,,
Virginica,,,,
1,6.3,3.3,6,2.5
2,5.8,2.7,5.1,1.9
3,7.1,3,5.9,2.1
Mean,6.4,3,5.67,2.17", header=TRUE, stringsAsFactors=FALSE)

First I used a function which splits a data frame at an index:

split_at <- function(x, index) {
  N <- NROW(x)
  s <- cumsum(seq_len(N) %in% index)
  unname(split(x, s))
}

Then you define that index using:

iris[,1] <- stringr::str_trim(iris[,1])
index <- which(iris[,1] %in% c("Virginica", "Versicolor", "Setosa"))

The rest is just using purrr::map_df to perform actions on each data.frame in the list that's returned. You can add some additional flexibility for removing unwanted rows if needed.

split_at(iris, index) %>%
  .[2:length(.)] %>%
  purrr::map_df(function(x) {
    Species <- x[1, 1]
    x <- x[-c(1, NROW(x) - 1, NROW(x)), ]
    data.frame(x, Species = Species)
  })