I have a dataset with 18 columns from which I need to return the column names with the highest value(s) for each observation; a simple example is below. I came across this answer, and it almost does what I need, but in some cases I need to combine the names (like ab in maxcol below). How should I do this?
Any suggestions would be greatly appreciated! If possible, a tidyverse-based solution would be easier for me to understand, as I'm more familiar with that than with base R.
Edit: I forgot to mention that some of the columns in my data have NAs.
library(dplyr, warn.conflicts = FALSE)
#turn this
Df <- tibble(a = 4:2, b = 4:6, c = 3:5)
#into this
Df <- tibble(a = 4:2, b = 4:6, c = 3:5, maxcol = c("ab", "b", "b"))
Created on 2018-10-30 by the reprex package (v0.2.1)
Continuing from the answer in the linked post, we can do
Df$maxcol <- apply(Df, 1, function(x) paste0(names(Df)[x == max(x)], collapse = ""))
Df
# a b c maxcol
# <int> <int> <int> <chr>
#1 4 4 3 ab
#2 3 5 4 b
#3 2 6 5 b
For every row, we check which positions hold the maximum value and paste the names at those positions together.
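Since the edit mentions NAs, here is a minimal variant of the same apply call that simply ignores them when locating the row maximum (a sketch assuming Df still holds only the numeric columns a, b and c, and that an NA cell should never be reported as the maximum):

# As above, but NA-safe: max(..., na.rm = TRUE) skips missing values and
# which() silently drops the NA comparisons, so NA cells are never pasted.
Df$maxcol <- apply(Df[c("a", "b", "c")], 1, function(x) {
  paste0(names(x)[which(x == max(x, na.rm = TRUE))], collapse = "")
})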
If you prefer the tidyverse approach
library(tidyverse)
Df %>%
  mutate(row = row_number()) %>%
  gather(key, value, -row) %>%
  group_by(row) %>%
  mutate(maxcol = paste0(key[value == max(value)], collapse = "")) %>%
  spread(key, value) %>%
  ungroup() %>%
  select(-row)
# maxcol a b c
# <chr> <int> <int> <int>
#1 ab 4 4 3
#2 b 3 5 4
#3 b 2 6 5
We first convert the data frame from wide to long using gather, then, grouping by each row, we paste together the column names whose value equals the row maximum, and finally spread the long data frame back to wide.
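If you are on tidyr 1.0.0 or later, where gather/spread are superseded, a rough sketch of the same pipeline using pivot_longer/pivot_wider (same logic, only the reshaping verbs change):

library(tidyverse)
Df %>%
  mutate(row = row_number()) %>%
  # reshape to long: one row per (row, column name, value)
  pivot_longer(-row, names_to = "name", values_to = "value") %>%
  group_by(row) %>%
  mutate(maxcol = paste0(name[value == max(value)], collapse = "")) %>%
  # back to wide, keeping the new maxcol column
  pivot_wider(names_from = name, values_from = value) %>%
  ungroup() %>%
  select(-row)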
Here's a solution I found that loops through the column names, in case you find it hard to wrap your head around spread/gather (pivot_wider/pivot_longer):
out_df <- Df %>%
  # calculate the rowwise maximum
  rowwise() %>%
  mutate(rowmax = max(c_across(everything()))) %>%
  # create an empty maxcol column
  mutate(maxcol = "")

# loop through the column names
for (colname in colnames(Df)) {
  out_df <- out_df %>%
    # if the value in this column is the row maximum, append the name to maxcol
    mutate(maxcol = ifelse(.data[[colname]] == rowmax, paste0(maxcol, colname), maxcol))
}

# remove the rowmax column if no longer needed
out_df <- out_df %>%
  select(-rowmax)
I have a data frame similar to this one.
df <- data.frame(id=c(1,2,3), tot_1=runif(3, 0, 100), tot_2=runif(3, 0, 100), tot_3=runif(3, 0, 100), tot_4=runif(3, 0, 100))
I want to select, or perform an operation on, only the columns with suffixes lower than 3.
#select
df <- df %>% select(id, tot_1, tot_2)
#or sum
df <- df %>% mutate(sumVar = rowSums(across(c(tot_1, tot_2))))
However, in my real data there are many more variables, and they are not in order. So how could I select them without doing it manually?
We may use matches
df %>%
  mutate(sumVar = rowSums(across(matches('tot_[1-2]$'))))
If we need to be more flexible, extract the digit part from the column names that start with 'tot', subset based on the condition, and use those new names
library(stringr)
nm1 <- str_subset(names(df), 'tot')
nm2 <- nm1[readr::parse_number(nm1) <3]
df %>%
  mutate(sumVar = rowSums(across(all_of(nm2))))
Solution with num_range
This is the rare case for the often forgotten num_range selection helper from dplyr, which extracts the numbers from the names in a single step, then selects a range:
Determine the threshold:
suffix_threshold <- 3
select()
library(dplyr)
df %>% select(id, num_range(prefix = 'tot_',
                            range = seq_len(suffix_threshold - 1)))
id tot_1 tot_2
1 1 26.75082 26.89506
2 2 21.86453 18.11683
3 3 51.67968 51.85761
mutate() with rowSums()
library(dplyr)
df %>% mutate(sumVar = across(num_range(prefix = 'tot_',
                                        range = seq_len(suffix_threshold - 1))) %>%
                       rowSums())
id tot_1 tot_2 tot_3 tot_4 sumVar
1 1 26.75082 26.89506 56.27829 71.79353 53.64588
2 2 21.86453 18.11683 12.91569 96.14099 39.98136
3 3 51.67968 51.85761 25.63676 10.01408 103.53730
Here is a base R way -
cols <- grep('tot_', names(df), value = TRUE)
#Select
df[c('id', cols[as.numeric(sub('tot_', '',cols)) < 3])]
# id tot_1 tot_2
#1 1 75.409112 30.59338
#2 2 9.613496 44.96151
#3 3 58.589574 64.90672
#Rowsums
df$sumVar <- rowSums(df[cols[as.numeric(sub('tot_', '',cols)) < 3]])
df
# id tot_1 tot_2 tot_3 tot_4 sumVar
#1 1 75.409112 30.59338 59.82815 50.495758 106.00250
#2 2 9.613496 44.96151 84.19916 2.189482 54.57501
#3 3 58.589574 64.90672 18.17310 71.390459 123.49629
I have 2 dataframes like the following:
df1
colA
A
B
C
D
df2
one two
x A
y A;B
z A;D;C
p E
q F
I want to filter df2 for entries contained in df1, i.e. rows where "two" contains values of colA, so that my output will be
one two
x A
y A;B
z A;D;C
I tried all these options, but they didn't work:
df2filtered = df2 %>% filter(two %in% df1$colA)
df2filtered = df2 %>% filter(two %in% str_detect(df1$colA))
df2filtered = df2 %>% select(two, contains(df1$colA))
str_detect works when given a plain character pattern, but not when the patterns come from a data frame column as above. What is the right solution?
Here's one way to obtain the desired output: use map to create an extra column, and then apply the filter on it.
library(tidyverse)
df2 %>%
  # Use map to check if any string in df1$colA is found in
  # df2$two; then use any to check if any entry is T
  mutate(stay = map(two, function(x){
    any(str_detect(x, df1$colA))
  })) %>%
  # Filter
  filter(stay == T) %>%
  # Remove extra column
  select(-c(stay))
# one two
#1 x A
#2 y A;B
#3 z A;D;C
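A small hedged variant of the same idea: purrr::map_lgl returns a plain logical vector, so the helper column (and the later clean-up) can be skipped:

library(tidyverse)
# map_lgl gives one TRUE/FALSE per row of df2, which filter can use directly
df2 %>%
  filter(map_lgl(two, ~ any(str_detect(.x, df1$colA))))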
Your data is not "tidy". I'd reshape it into a long format. Then, filtering becomes easy.
Below an approach which makes use of an non-exported function of the eye package in order to split the column into an unknown number of columns. (disclaimer: I am the author of this package. The function was inspired and modified from this answer). Then pivot the result longer and filter by the presence in df1$colA. I'd leave the result in a tidy format, but you can of course melt it back to your rather messy shape.
library(tidyverse)
df1 <- read.table(text = "colA
A
B
C
D", header = TRUE)
df2 <- read.table(text = "one two
x A
y A;B
z A;D;C
p E
q F ", header = TRUE)
#install.packages("eye")
eye:::split_mult(df2, "two", pattern = ";") %>%
  pivot_longer(cols = starts_with("var"), names_to = "var", values_to = "val") %>%
  drop_na(val) %>%
  select(-var) %>%
  group_by(one) %>%
  filter(any(val %in% df1$colA))
#> # A tibble: 6 x 2
#> # Groups: one [3]
#> one val
#> <chr> <chr>
#> 1 x A
#> 2 y A
#> 3 y B
#> 4 z A
#> 5 z D
#> 6 z C
Created on 2021-07-14 by the reprex package (v2.0.0)
Because this function might change in the future, here it is for future reference:
split_mult <- function(x, col, pattern = "_", into = NULL, prefix = "var",
                       sep = "") {
  cols <- stringr::str_split_fixed(x[[col]], pattern, n = Inf)
  cols[which(cols == "")] <- NA_character_
  m <- dim(cols)[2]
  if (length(into) == m) {
    colnames(cols) <- into
  } else {
    colnames(cols) <- paste(prefix, 1:m, sep = sep)
  }
  cbind(cols, x[names(x) != col])
}
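If you would rather avoid the internal helper entirely, here is a minimal sketch of the same tidy-then-filter idea using only tidyr's separate_rows (assuming the values in two are always ";"-separated as in the example):

library(tidyverse)
# One row per value in `two`, then keep the groups containing at least one value from df1$colA
df2 %>%
  separate_rows(two, sep = ";") %>%
  group_by(one) %>%
  filter(any(two %in% df1$colA)) %>%
  ungroup()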
Another option using str_detect. You can collapse df1$colA so that str_detect searches for A or B or C or D. e.g. "A|B|C|D".
library(tidyverse)
df2 %>% filter(str_detect(two, paste(df1$colA, collapse = '|')))
#> one two
#> 1 x A
#> 2 y A;B
#> 3 z A;D;C
I have a dataframe as following:
data.frame("id" = 1:2, "tag" = c("a,b,c","a,d"))
id tag
1 a,b,c
2 a,d
In tag, wherever there is an a or a b it is considered lan, and "d" = "c" = "con" means that d and c are considered con. We then want to count the number of lan and con in each row, in separate columns, like the table below.
I want to create two columns which are the aggregation of a, b, c, shown as follows:
id tag lan_count con_count
1 a,b,c 2 1
2 a,d 1 1
Could you please give me advice on how to do this?
You can also use the following code:
library(dplyr)
library(tidyr)
df <- data.frame("id" = 1:2, "tag" = c("a,b,c","a,d"))
df %>%
  separate_rows(tag, sep = ",") %>%
  group_by(id) %>%
  add_count(tag) %>%
  pivot_wider(id, names_from = tag, values_from = n) %>%
  rowwise() %>%
  mutate(lan_count = sum(c_across(a:b), na.rm = TRUE),
         con_count = sum(c_across(c:d), na.rm = TRUE)) %>%
  select(-c(a:d))
# A tibble: 2 x 3
# Rowwise: id
id lan_count con_count
<int> <int> <int>
1 1 2 1
2 2 1 1
The main issue here is that your data is untidy. So my solution is in two parts: first, tidy the data and then summarise it. Once the data is tidy, the summary is trivial.
library(tidyverse)
# Adjust to suit your real data
maxCols <- 10
d <- data.frame(id = 1:2, tag = c("a,b,c","a,d"))
d %>%
  separate(
    tag,
    sep = ",",
    into = paste0("Element", 1:maxCols),
    extra = "drop",
    fill = "right",
    remove = FALSE
  ) %>%
  pivot_longer(
    cols = starts_with("Element"),
    values_to = "Value",
    names_prefix = "Element"
  ) %>%
  select(-name) %>%
  # Remove unused Values
  filter(!is.na(Value)) %>%
  # At this point the data frame is tidy
  group_by(tag) %>%
  # Translate tags into "categories". Add more if required, or write a function
  mutate(
    lan = Value %in% c("a", "b"),
    con = Value %in% c("c", "d")
  ) %>%
  # Adjust the column specification if more categories are added,
  # or use a factor instead of binary indicators
  summarise(across(lan:con, sum))
# A tibble: 2 x 3
tag lan con
* <fct> <int> <int>
1 a,b,c 2 1
2 a,d 1 1
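If the mapping really is just a/b = lan and c/d = con, a more compact hedged sketch counts the letters directly with stringr::str_count, with no reshaping at all (this assumes the tags are always single letters separated by commas):

library(dplyr)
library(stringr)

df <- data.frame(id = 1:2, tag = c("a,b,c", "a,d"))
df %>%
  mutate(lan_count = str_count(tag, "a|b"),   # a and b count as lan
         con_count = str_count(tag, "c|d"))   # c and d count as con
# id 1: lan_count 2, con_count 1; id 2: lan_count 1, con_count 1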
I have a data frame with two list columns. Each observation in the lists contains a different number of elements. For example, the 4th element of the variable "accession" contains one element but the 7th contains two.
current dataframe
I want to make a new data frame combining the two lists together, which looks like:
final dataframe I want
Thanks for helping me!
This is the data frame I currently have.
library(rentrez)
search <- entrez_search(db="gds", term=paste0("disease", " AND gse[ETYP]") , retMax = 15)
id <- unlist(search$ids)
UID <- c(sapply(id, paste0, collapse=""))
pub.summary <- entrez_summary(db = "gds", id = UID,
                              always_return_list = TRUE)
summary <- extract_from_esummary(esummaries = pub.summary,
                                 elements = c("samples"),
                                 simplify = T)
df <- data.frame(summary)
df <-data.frame(t(df))
df <- df %>% mutate()
df
This is the data frame result I wish to have
# accession title
#1 GSM3955152 Cancer3
GSM3955155 Adjacent3
GSM3955757 SW480 cells, HES1-binding RNAs/LncRNAs
GSM3955153 Adjacent1
GSM3955150 Cancer1
GSM3955151 Cancer2
#2 GSM33026213 his4wk_sensitized_uti_1
GSM3302681 3his4wk_resolved_pbs_2
GSM3302624 c57bl6j_pbs_9
.
.
.
.
#4 GSM3955757 SW480 cells, HES1-binding RNAs/LncRNAs
.
.
.
.
#15 GSM3934992 control rep4 [N_0039]
GSM3935006 control rep15 [W_010]
GSM3935012 control rep17 [W_023]
GSM3934989 control rep1 [N_0026]
END
Update
Based on the OP's updates, an option is to specify simplify = FALSE in extract_from_esummary to return a list, then extract the first list element from each list and rbind them to create a single dataframe
summary <- extract_from_esummary(esummaries = pub.summary,
                                 elements = "samples",
                                 simplify = FALSE)
out <- do.call(rbind, lapply(summary, `[[`, 1))
row.names(out) <- NULL
head(out)
# accession title
#1 GSM3955152 Cancer3
#2 GSM3955155 Adjacent3
#3 GSM3955757 SW480 cells, HES1-binding RNAs/LncRNAs
#4 GSM3955153 Adjacent1
#5 GSM3955150 Cancer1
#6 GSM3955151 Cancer2
An option would be to pad the list elements with NA to keep the lengths the same in both columns (if one is of a different length) and then unnest
library(dplyr)
library(purrr)
library(tidyr)
df1 %>%
  mutate(n = pmax(lengths(accession), lengths(title))) %>%
  mutate_at(vars(accession, title), ~
              map2(., n, ~ `length<-`(.x, .y))) %>%
  select(-n) %>%
  unnest(cols = c(accession, title))
# A tibble: 12 x 2
# accession title
# <chr> <chr>
# 1 A a
# 2 B b
# 3 C c
# 4 <NA> d
# 5 <NA> e
# 6 A a
# 7 B b
# 8 C c
# 9 D <NA>
#10 E <NA>
#11 A d
#12 B <NA>
Or an option is to gather into 'long' format, then unnest the 'val' column and spread it back to 'wide' format
library(tidyr)
df1 %>%
  mutate(rn = row_number()) %>%
  gather(key, val, -rn) %>%
  unnest(val) %>%
  group_by(rn, key) %>%
  mutate(i1 = row_number()) %>%
  spread(key, val) %>%
  ungroup() %>%
  select(-rn, -i1)
data
df1 <- tibble(accession = list(LETTERS[1:3], LETTERS[1:5], LETTERS[1:2]),
              title = list(letters[1:5], letters[1:3], letters[4]))
Suppose I have the tibble dat below. What I would like to do is calculate the maximum of (x 2, x 3) and then subtract x 1, where x can be either a or b. In my real data I have more than 3 columns, so something like 2:n (e.g., 2:3) would be great. I have tried many things, but nothing works as I wanted it to; I am still struggling with the string vs. column name thing.
dat <- tibble(`a 1` = c(0, 0, 0), `a 2` = 1:3, `a 3` = 3:1,
              `b 1` = rep(1, 3), `b 2` = 4:6, `b 3` = 6:4)
foo <- function(x = 'a')
{
???
}
end result:
if x == `a`
c(3, 2, 3)
if x == `b`
c(5, 4, 5)
Solution 1
This solution uses only base R. The idea is to define a function (max_minus_first) to calculate the answer. The max_minus_first function has two arguments: the first, dat, is a data frame for analysis with the same format as the OP provided; the second, group, is the name of the group for analysis. The end product is a vector with the answer.
max_minus_first <- function(dat, group){
  # Get all column names starting with the string in `group`
  col_names <- colnames(dat)
  dat2 <- dat[, col_names[grepl(paste0("^", group), col_names)]]
  # Get the maximum values from all columns except the first column
  max_value <- apply(dat2[, -1], 1, max, na.rm = TRUE)
  # Calculate max_value minus the values from the first column
  final_value <- max_value - unlist(dat2[, 1], use.names = FALSE)
  return(final_value)
}
max_minus_first(dat, "a")
# [1] 3 2 3
max_minus_first(dat, "b")
# [1] 5 4 5
Solution 2
A solution using the tidyverse. The end product (dat2) is a tibble with the output from each group (a, b, ...)
library(tidyverse)
dat2 <- dat %>%
  rowid_to_column() %>%
  gather(Column, Value, -rowid, -ends_with(" 1")) %>%
  separate(Column, into = c("Group", "Column_Number")) %>%
  gather(Column_1, Value_1, ends_with(" 1")) %>%
  separate(Column_1, into = c("Group_1", "Column_Number_1")) %>%
  filter(Group == Group_1) %>%
  group_by(rowid, Group, Value_1) %>%
  summarise(Value = max(Value, na.rm = TRUE)) %>%
  mutate(Final = Value - Value_1) %>%
  ungroup() %>%
  select(-starts_with("Value")) %>%
  spread(Group, Final)
dat2
# # A tibble: 3 x 3
# rowid a b
# * <int> <dbl> <dbl>
# 1 1 3 5
# 2 2 2 4
# 3 3 3 5
Explanation
rowid_to_column() is from the tibble package, a way to create a new column based on row ID.
gather is from the tidyr package; it converts the data frame from wide format to long format. I used gather twice because the first column of each group is different from the other columns in the same group. ends_with(" 1") is a select helper function from dplyr which selects the columns whose names end in " 1". Notice that the space in " 1" is important, because "1" alone could also match other columns such as a 11 if they exist.
separate is from the tidyr package; it separates a column into two columns. I used it to split the Group name and the column number in each Group.
filter(Group == Group_1) keeps only the rows where the value column and the first column belong to the same Group.
group_by(rowid, Group, Value_1) and then summarise(Value = max(Value, na.rm = TRUE)) make sure the maximum from each Group is calculated.
mutate(Final = Value - Value_1) is to calculate the difference between maximum from each Group and the value from the first column. The results are stored in the Final column.
select(-starts_with("Value")) removes any columns with a name beginning with "Value".
spread from the tidyr package converts the data frame from long format to wide format.
Solution 3
Another tidyverse solution, similar to Solution 2. It uses do to carry out the operation for each Group, which makes the code more concise.
dat2 <- dat %>%
  rowid_to_column() %>%
  gather(Column, Value, -rowid) %>%
  separate(Column, into = c("Group", "Column_Number")) %>%
  group_by(rowid, Group) %>%
  do(data_frame(Max = max(.$Value[.$Column_Number != 1]),
                First = .$Value[.$Column_Number == 1])) %>%
  mutate(Final = Max - First) %>%
  select(-Max, -First) %>%
  spread(Group, Final) %>%
  ungroup()
dat2
# # A tibble: 3 x 3
# rowid a b
# * <int> <dbl> <dbl>
# 1 1 3 5
# 2 2 2 4
# 3 3 3 5
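For completeness, on a recent dplyr (>= 1.0.0) the same result can be sketched more compactly with rowwise() and c_across(). This is not one of the solutions above, just an alternative under the same assumptions about the column naming (`a 1`, `a 2`, ...), and max_minus_first_rowwise is a hypothetical name:

library(dplyr)

# Max of all "<group> k" columns with k >= 2, minus the "<group> 1" column, per row
max_minus_first_rowwise <- function(dat, group) {
  cols <- grep(paste0("^", group, " "), names(dat), value = TRUE)
  dat %>%
    rowwise() %>%
    mutate(res = max(c_across(all_of(cols[-1])), na.rm = TRUE) - .data[[cols[1]]]) %>%
    ungroup() %>%
    pull(res)
}

max_minus_first_rowwise(dat, "a")
# [1] 3 2 3
max_minus_first_rowwise(dat, "b")
# [1] 5 4 5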