Dynamically adding columns in R

I need some help with finding a good way to dynamically add columns with counts for different categories that I need to extract from a string.
In my data, I have a column that contains names of categories and counts thereof. The fields can be empty or contain any combination of categories one can think of. Here are some examples:
themes:firstcategory_1;secondcategory_33;thirdcategory_5
themes:secondcategory_33;fourthcategory_2
themes:fifthcategory_1
What I need is a column for each category (should have the category's name) and the count extracted from the strings above. The list of categories is dynamic, so I don't know beforehand which ones exist.
How do I approach this?

The code below produces a column for each category, with the counts for each row.
library(dplyr)
library(tidyr)
library(stringr)
# Create test dataframe
df <- data.frame(themes = c("firstcategory_1;secondcategory_33;thirdcategory_5", "secondcategory_33;fourthcategory_2","fifthcategory_1"), stringsAsFactors = FALSE)
# Get the number of columns to split values into
cols <- max(str_count(df$themes,";")) + 1
# Get vector of temporary column names
cols <- paste0("col",c(1:cols))
df <- df %>%
  # Add an ID column based on row number
  mutate(ID = row_number()) %>%
  # Separate multiple categories by semicolon
  separate(col = themes, into = cols, sep = ";", fill = "right") %>%
  # Gather categories into a single column
  gather_("Column", "Value", cols) %>%
  # Drop temporary column
  select(-Column) %>%
  # Filter out NA values
  filter(!is.na(Value)) %>%
  # Separate categories from their counts by underscore
  separate(col = Value, into = c("Category", "Count"), sep = "_", fill = "right") %>%
  # Spread categories to create a column for each category, with the count for each ID in that category
  spread(Category, Count)
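For comparison, a shorter sketch of the same result with more recent tidyr verbs (my addition, not part of the original answer; it assumes tidyr >= 1.0 for pivot_wider() and starts from the test data frame df as first created, before the pipeline above overwrites it):
library(dplyr)
library(tidyr)

df %>%
  # Add an ID column based on row number
  mutate(ID = row_number()) %>%
  # One row per "category_count" token
  separate_rows(themes, sep = ";") %>%
  # Split each token into the category name and its count
  separate(themes, into = c("Category", "Count"), sep = "_") %>%
  # One column per category, filled with the counts (NA where a row has no such category)
  pivot_wider(names_from = Category, values_from = Count)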

Related

Keeping the first column after rotate_df

I'm using rotate_df from sjmisc to change columns to rows and vice versa:
library(sjmisc)
library(dplyr)
library(janitor)  # for clean_names()

ss557r <- as.data.frame(myfiles557r) %>%
  rename(variable = 1) %>%              # change first column name
  select(-ends_with("Player.Name")) %>% # remove duplicated columns
  rotate_df(rn = NULL, cn = TRUE) %>%
  clean_names()
However, in the data frame this produces, the first column is "date", while my intended first column would be what appears on the left - the row names.
What changes can I make to the code to make that happen?
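The thread shows no answer here, but one possible sketch (my suggestion, under the assumption that the rotated data frame keeps the original column names as its row names) is to copy those row names into a column with tibble::rownames_to_column() before clean_names(); sjmisc::rotate_df() also has an rn argument documented to add them as the first column:
library(sjmisc)
library(dplyr)
library(janitor)  # clean_names()
library(tibble)   # rownames_to_column()

ss557r <- as.data.frame(myfiles557r) %>%
  rename(variable = 1) %>%
  select(-ends_with("Player.Name")) %>%
  rotate_df(rn = NULL, cn = TRUE) %>%
  # Keep the rotated row names (the original column names) as the first column
  rownames_to_column(var = "variable") %>%
  clean_names()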

Separating into separate columns based on 2 delimiters

I'm an R beginner and have a basic question I can't seem to figure out. I have row values that need to be separated into different columns, but there is more than one delimiter involved. The Expression_level column contains an expression value paired with its Ensembl gene ID, in the form expression value:ensembl ID, and some rows contain two such pairs separated by ;. I want a column for the Ensembl ID and a column for the gene expression value, but I'm not sure how to separate them while keeping each value mapped to the correct ID. This is the type of data I am working with: rna_seq, and this is what I am trying to get out: org_rna. TYIA
rna_seq <- cbind("Final_gene" = c("KLHL15", "CPXCR1", "MAP7D3", "WDR78"),
                 "Expression_level" = c("1.62760683812965:ENSG00000174010",
                                        "-9.96578428466209:ENSG00000147183",
                                        "-4.32192809488736:ENSG00000129680",
                                        "-1.39592867633114:ENSG00000152763;-9.96578428466209:ENSG00000231080"))
org_rna <- cbind("Final_gene" = c("KLHL15", "CPXCR1", "MAP7D3", "WDR78", "WDR78"),
                 "Ensembl" = c("ENSG00000174010", "ENSG00000147183", "ENSG00000129680", "ENSG00000152763", "ENSG00000231080"),
                 "Expression" = c("1.62760683812965", "-9.96578428466209", "-4.32192809488736", "-1.39592867633114", "-9.96578428466209"))
library(tidyr)
library(dplyr)
rna_seq %>%
  as.data.frame() %>%
  # separate cells containing multiple values into multiple rows
  separate_rows(Expression_level, sep = ";") %>%
  # extract pairs
  extract(col = Expression_level,
          into = c("Expression", "Ensembl"),
          regex = "(.*):(.*)")
# A tibble: 5 x 3
# Final_gene Expression Ensembl
# <chr> <chr> <chr>
# KLHL15 1.62760683812965 ENSG00000174010
# CPXCR1 -9.96578428466209 ENSG00000147183
# MAP7D3 -4.32192809488736 ENSG00000129680
# WDR78 -1.39592867633114 ENSG00000152763
# WDR78 -9.96578428466209 ENSG00000231080
Another (less elegant) solution using separate():
library(tidyr)
library(dplyr)
rna_seq |>
  as.data.frame() |>
  # Separate any second IDs
  separate(Expression_level, sep = ";", into = c("ID1", "ID2")) |>
  # Reshape to longer (columns to rows)
  pivot_longer(cols = starts_with("ID")) |>
  # Separate Expression from Ensembl
  separate(value, sep = ":", into = c("Expression", "Ensembl")) |>
  filter(!is.na(Expression)) |>
  select(Final_gene, Ensembl, Expression)
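For what it's worth, newer tidyr releases (1.3.0 and later, which is an assumption about your installation) fold both steps into separate_longer_delim() and separate_wider_delim(); a minimal sketch:
library(dplyr)
library(tidyr)  # >= 1.3.0 for the *_delim() verbs

rna_seq |>
  as.data.frame() |>
  # One row per "value:ID" token
  separate_longer_delim(Expression_level, delim = ";") |>
  # Split each token into the expression value and the Ensembl ID
  separate_wider_delim(Expression_level, delim = ":",
                       names = c("Expression", "Ensembl"))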

Determine the size of a string in a particular cell of a data frame: R

In a data frame, I have a column (type: chr) that contains answers separated by a comma. I want to create another column based on the size of the string and award points. For example, some of the entries in a column are:
Column1
word1,word2,word3
word1,word2
word1
Now, for the first cell, I want the size to be evaluated as 3 (as it contains three distinct words and there are no duplicates among the cell values). I'm not sure how to achieve this.
An option is to split the column with strsplit into a list of vectors, get the unique elements by looping over the list with lapply, and get the lengths:
df1$Size <- lengths(lapply(strsplit(df1$Column1, ",\\s*"), unique))
Another option is separate_rows from tidyr
library(dplyr)
library(tidyr)
df1 %>%
  mutate(rn = row_number()) %>%
  separate_rows(Column1) %>%
  group_by(rn) %>%
  summarise(Size = n_distinct(Column1), .groups = 'drop') %>%
  select(Size) %>%
  bind_cols(df1, .)
-output
#             Column1 Size
#1 word1,word2,word3    3
#2       word1,word2    2
#3             word1    1
data
df1 <- data.frame(Column1 = c('word1,word2,word3', 'word1,word2', 'word1'))
Original Answer:
Another option:
library(dplyr)
library(stringr)
df1 %>%
  mutate(Lengths = str_count(Column1, ",") + 1)
Edit:
I hadn't read the OP's requirement about duplicates properly. As #Onyambu pointed out in the comments, this only works if there are no duplicated words in the data.
It simply counts how many words there are.
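If duplicates can occur inside a cell, a distinct count per row can still be computed in one mutate(); a sketch using purrr and stringr (my addition, assuming the df1 defined above):
library(dplyr)
library(purrr)
library(stringr)

df1 %>%
  # Split each cell on commas and count the distinct words per row
  mutate(Size = map_int(str_split(Column1, ",\\s*"), n_distinct))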

Drop a column that was used as 'by' argument in join

I have the following query:
library(dplyr)
FinalQueryDplyr <- PostsWithFavorite %>%
  inner_join(Users, by = c("OwnerUserId" = "Id"), keep = FALSE) %>%
  select(DisplayName, Age, Location, FavoriteTotal, MostFavoriteQuestion, MostFavoriteQuestionLikes) %>%
  select(-c(OwnerUserId)) %>%
  arrange(desc(FavoriteTotal))
As you can see, I use the OwnerUserId column as the joining column between 2 data frames.
I want the result data frame to only have other columns, without the OwnerUserId column visible.
Even though I 'deselect' the OwnerUserId column twice in this query:
once by not including it in the first select clause,
once by explicitly deselecting it with select(-c(OwnerUserId)),
it is still visible in the result:
OwnerUserId DisplayName Age Location FavoriteTotal MostFavoriteQuestion MostFavoriteQuestionLikes
How can I get rid of the column that was used as a joining column in dplyr?
One option is to remove the attribute by converting to a data.frame:
library(dplyr)
PostsWithFavorite %>%
  inner_join(Users, by = c("OwnerUserId" = "Id"), keep = FALSE) %>%
  select(DisplayName, Age, Location, FavoriteTotal,
         MostFavoriteQuestion, MostFavoriteQuestionLikes) %>%
  as.data.frame %>%
  select(-c(OwnerUserId)) %>%
  arrange(desc(FavoriteTotal))
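If the column survives because PostsWithFavorite is a grouped tibble (an assumption on my part; dplyr keeps grouping variables through select()), dropping the grouping before selecting also works:
library(dplyr)

PostsWithFavorite %>%
  inner_join(Users, by = c("OwnerUserId" = "Id"), keep = FALSE) %>%
  # Grouping variables survive select(), so drop the grouping first
  ungroup() %>%
  select(DisplayName, Age, Location, FavoriteTotal,
         MostFavoriteQuestion, MostFavoriteQuestionLikes) %>%
  arrange(desc(FavoriteTotal))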

select highest pairs from complex table

I want to make a new data frame from a selection of rows in a complex table of pairwise comparisons. I want to select the rows such that the 2 highest values of each pairwise comparison are selected.
Below is an example dataset:
dataframe <- data.frame(X1 = c("OP2413iiia","OP2413iiib","OP2413iiic","OP2645ii_a","OP2645ii_b","OP2645ii_c","OP2645ii_d","OP2645ii_e","OP3088i__a","OP5043___a","OP5043___b","OP5044___a","OP5044___b","OP5044___c","OP5046___a","OP5046___b","OP5046___c","OP5046___d","OP5046___e","OP5047___a","OP5047___b","OP5048___b","OP5048___c","OP5048___d","OP5048___e","OP5048___f","OP5048___g","OP5048___h","OP5049___a","OP5049___b","OP5051DNAa","OP5051DNAb","OP5051DNAc","OP5052DNAa","OP5053DNAa"),
gr1 = c("2","2","2","3","3","3","3","3","3","4","4","4","3","4","2","3","3","3","4","2","4","3","3","3","4","2","4","2","3","3","3","4","2","4","2"),
X2 = c("OP2413iiib","OP2413iiic","OP5046___a","OP2645ii_a","OP2645ii_a","OP2645ii_a","OP2645ii_b","OP2645ii_b","OP5046___a","OP2645ii_b","OP2645ii_c","OP2645ii_c","OP2645ii_c","OP2645ii_c","OP5048___e","OP2645ii_d","OP5046___a","OP2645ii_d","OP2645ii_d","OP2645ii_d","OP2645ii_d","OP2645ii_e","OP5048___e","OP2645ii_e","OP2645ii_e","OP2645ii_e","OP2645ii_e","OP2645ii_e","OP3088i__a","OP3088i__a","OP3088i__a","OP3088i__a","OP3088i__a","OP3088i__a","OP3088i__a"),
gr2 = c("3","3","3","4","4","4","2","2","2","2","4","4","4","4","4","2","2","2","2","2","2","4","4","4","4","4","4","4","3","3","3","3","3","3","3"),
value = c("1.610613e+00","1.609732e+00","8.829263e-04","1.080257e+01","1.111006e+01","1.110978e+01","4.048302e+00","5.610458e+00","5.609584e+00","9.911490e+00","1.078518e+01","1.133728e+01","1.133686e+01","1.738092e+00","9.247411e+00","5.170646e+00","6.074909e+00","6.074287e+00","6.212711e+00","3.769029e+00","5.793390e+00","1.124045e+01","1.163326e+01","1.163293e+01","7.752766e-01","1.008434e+01","1.222854e+00","6.469443e+00","1.610828e+00","1.784774e+00","1.784235e+00","9.434803e+00","4.512563e+00","9.582847e+00","4.309312e+00"))
expected_output_dataframe <- rbind(dataframe[10,],dataframe[34,],dataframe[32,],dataframe[15,],dataframe[3,],dataframe[17,])
Many thanks in advance
Cheers
This works with dplyr. I created an extra column, gr_pair, to identify the pairwise groups.
library(dplyr)
library(magrittr)
dataframe %>%
  filter(gr1 != gr2) %>% # This case is excluded from your expected output
  mutate(gr_pair = paste(pmin(gr1, gr2), pmax(gr1, gr2), sep = ",")) %>%
  group_by(gr_pair) %>%
  top_n(2, value) # Keep the top two rows in each group, sorted by value
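top_n() is superseded in current dplyr; the same selection can be sketched with slice_max() (note that value is stored as character in this example, so both versions order the strings lexically):
library(dplyr)

dataframe %>%
  filter(gr1 != gr2) %>%
  mutate(gr_pair = paste(pmin(gr1, gr2), pmax(gr1, gr2), sep = ",")) %>%
  group_by(gr_pair) %>%
  # Keep the two largest values per pair (ties are kept, as with top_n)
  slice_max(order_by = value, n = 2) %>%
  ungroup()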
