Conditional Insert of Rows in R
I have a unique dataset, a portion of which can be reproduced using:
data <- textConnection("SNP_Pres,Chr_N,BP_A1F,A1_Beta,A2_SE,ForSortSNP,SortOrder
rs122,13,100461219,C,T,rs122,6
1,16362,0.8701,-0.0048,0.0056,rs122,7
1,19509,0.546015137607046,-0.0033,0.0035,rs122,8
1,17218,0.1539,-0.004,0.013,rs122,9
rs142,13,61952115,G,T,rs142,6
1,16387,0.1295,0.0044,0.0057,rs142,7
1,17218,0.8454,0.006,0.013,rs142,9
rs160,13,100950452,C,T,rs160,6
1,16387,0.549,-0.0021,0.0035,rs160,7
1,19509,0.519102731537216,0.003,0.0027,rs160,8
rs298,13,66664221,C,G,rs298,6
1,19509,0.308290808358246,-0.0032,0.0033,rs298,8
1,17218,0.7227,0.022,0.01,rs298,9")
mydata <- read.csv(data, header = T, sep = ",", stringsAsFactors=FALSE)
It is formatted for use in a program that requires placeholder rows for missing data entries. In this case, a missing entry is indicated by a numeric skip in the SortOrder column. An entry is complete if the SortOrder column runs 6 - 7 - 8 - 9, with a new entry beginning again at 6.
I need a way to read through the data file, and insert a row of zeros for each missing entry, so that the file looks like this:
data <- textConnection("SNP_Pres,Chr_N,BP_A1F,A1_Beta,A2_SE,ForSortSNP,SortOrder
rs122,13,100461219,C,T,rs122,6
1,16362,0.8701,-0.0048,0.0056,rs122,7
1,19509,0.546015137607046,-0.0033,0.0035,rs122,8
1,17218,0.1539,-0.004,0.013,rs122,9
rs142,13,61952115,G,T,rs142,6
1,16387,0.1295,0.0044,0.0057,rs142,7
0,0,0,0,0,rs142,8
1,17218,0.8454,0.006,0.013,rs142,9
rs160,13,100950452,C,T,rs160,6
1,16387,0.549,-0.0021,0.0035,rs160,7
1,19509,0.519102731537216,0.003,0.0027,rs160,8
0,0,0,0,0,rs160,9
rs298,13,66664221,C,G,rs298,6
0,0,0,0,0,rs298,7
1,19509,0.308290808358246,-0.0032,0.0033,rs298,8
1,17218,0.7227,0.022,0.01,rs298,9")
mydata <- read.csv(data, header = T, sep = ",", stringsAsFactors=FALSE)
Ultimately, the last two columns, ForSortSNP and SortOrder, will be deleted from the data file, but they are included here for convenience.
Any suggestions are greatly appreciated.
Here is a solution using the expand.grid and merge functions.
grid <- with(mydata, expand.grid(ForSortSNP = unique(ForSortSNP), SortOrder = unique(SortOrder))) # every SNP / SortOrder combination
complete <- merge(mydata, grid, all = TRUE, sort = FALSE) # combinations missing from mydata come back as NA rows
complete[is.na(complete)] <- 0 # replace NAs with 0's
complete <- complete[order(complete$ForSortSNP, complete$SortOrder), ] # re-sort
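Since the question notes that ForSortSNP and SortOrder will eventually be removed, a possible follow-up (the output file name here is just an example) is to drop the two helper columns and write the padded data back out:
# drop the helper columns and write the padded file
final <- complete[, setdiff(names(complete), c("ForSortSNP", "SortOrder"))]
write.csv(final, "mydata_padded.csv", row.names = FALSE, quote = FALSE)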
Related
How can I process my StringTie data so that I can run DEseq2 using R?
I have StringTie data for a parental cell line and a KO cell line (which I'll refer to as B10). I am interested in comparing the parental and B10 cell lines. The issue seems to be that my StringTie files are separate, meaning I have one for the parental cell line and one for B10. I've included the code I have written to date for context, along with the error messages I received and the troubleshooting steps I have already tried. I have no idea where to go from here and I'd appreciate all the help I could get. This isn't something that anyone in my lab has done before, so I'm struggling to do this without any guidance. Thank you all in advance!

# My code to go from StringTie to count data (I copy-pasted this so all my notes are included; I'm new to R so they're really just for me, and you all likely know much more than I do)

# Open Data
# List StringTie output files for all samples
# All files should be in same directory
files_B10 <- list.files("C:/Users/kimbe/OneDrive/Documents/Lab/RNAseq/StringTie/data/B10", recursive = TRUE, full.names = TRUE)
files_parental <- list.files("C:/Users/kimbe/OneDrive/Documents/Lab/RNAseq/StringTie/data/parental", recursive = TRUE, full.names = TRUE)

tmp_B10 <- read_tsv(files_B10[1])
tx2gene_B10 <- tmp_B10[, c("t_name", "gene_name")]
txi_B10 <- tximport(files_B10, type = "stringtie", tx2gene = tx2gene_B10)

tmp_parental <- read_tsv(files_parental[1])
tx2gene_parental <- tmp_parental[, c("t_name", "gene_name")]
txi_parental <- tximport(files_parental, type = "stringtie", tx2gene = tx2gene_parental)

# Create a filter (vector) showing which rows have at least two columns with 5 or more counts
txi_B10.filter <- apply(txi_B10$counts, 1, function(x) length(x[x > 5]) >= 2)
txi_parental.filter <- apply(txi_parental$counts, 1, function(x) length(x[x > 5]) >= 2)
head(txi_parental.filter)
sum(txi_B10.filter)

# Now filter the txi object to keep only the rows of $counts, $abundance, and $length where the filter value is TRUE
txi_B10$counts <- txi_B10$counts[txi_B10.filter,]
txi_B10$abundance <- txi_B10$abundance[txi_B10.filter,]
txi_B10$length <- txi_B10$length[txi_B10.filter,]
txi_parental$counts <- txi_parental$counts[txi_parental.filter,]
txi_parental$abundance <- txi_parental$abundance[txi_parental.filter,]
txi_parental$length <- txi_parental$length[txi_parental.filter,]

# Save count data as csv files
write.csv(txi_B10$counts, "txi_B10.counts.csv")
write.csv(txi_parental$counts, "txi_parental.counts.csv")

# Open count data
# Do this in the order the files are organized in the file manager
txi_B10_counts <- read_csv("txi_B10.counts.csv")
txi_parental_counts <- read_csv("txi_parental.counts.csv")

# Set column names
colnames(txi_B10_counts) = c("Gene_name", "B10_n1", "B10_n2")
View(txi_B10_counts)
colnames(txi_parental_counts) = c("Gene_name", "parental_n1", "parental_n2")
View(txi_parental_counts)

## R is case sensitive, so make sure everything is in the same case
## Convert gene names (column [[1]]) to lowercase
txi_parental_counts[[1]] <- tolower(txi_parental_counts[[1]])
View(txi_parental_counts)
txi_B10_counts[[1]] <- tolower(txi_B10_counts[[1]])
View(txi_B10_counts)

## Capitalize the first letter of each gene name
capFirst <- function(s) { paste(toupper(substring(s, 1, 1)), substring(s, 2), sep = "") }
txi_parental_counts$Gene_name <- capFirst(txi_parental_counts$Gene_name)
View(txi_parental_counts)
txi_B10_counts$Gene_name <- capFirst(txi_B10_counts$Gene_name)
View(txi_B10_counts)

# Merge PL and KO into one table
# full_join takes all counts from PL and KO even if the gene names are missing
# If a value is missing it writes it as NA
# This site explains different types of merging: https://remiller1450.github.io/s230s19/Merging_and_Joining.html
mergedCounts <- full_join(x = txi_parental_counts, y = txi_B10_counts, by = "Gene_name")
view(mergedCounts)

# Replace NA with value = 0
mergedCounts[is.na(mergedCounts)] = 0
view(mergedCounts)

# Save file for merged counts
write.csv(mergedCounts, "MergedCounts.csv")

## --------------------------------------------------------------------------------
# My code to go from count data to DESeq2
# Import data
# I added my metadata in case the issue is how I set up the columns
# metaData is a file with your sample names and Comparison
# Your second column in metadata must be called Comparison, otherwise you'll get an error in the dds line
metaData <- read.csv('metadata.csv', header = TRUE, sep = ",")
countData <- read.csv('MergedCounts.csv', header = TRUE, sep = ",")

# Assign "Gene Names" as row names
# Notice how there's suddenly an extra row (x)?
# R automatically created and assigned column x as row names
# If you don't fix this the # of columns won't add up
rownames(countData) <- countData[,1]
countData <- countData[,-1]

# Create DESeq2 object
# !!!!!!! Here is where I get stuck !!!!!!!
dds <- DESeqDataSetFromMatrix(countData = countData, colData = metaData, design = ~ Comparison, tidy = TRUE)
# I can't run this line
# It says: Error in DESeqDataSet(se, design = design, ignoreRank) : some values in assay are not integers

## --------------------------------------------------------------------------------
# How I tried to fix this:
# 1) I saw something here that suggested this might be an issue with having zeros in the count data
# I viewed the countData files to make sure there were no zeros and there weren't any
# I thought that would be the case since I replaced NA with value = 0 earlier using this bit of code
mergedCounts[is.na(mergedCounts)] = 0
view(mergedCounts)

# 2) I was then informed that StringTie outputs non-integer values
# It was recommended that I try DESeqDataSetFromTximport instead
dds <- DESeqDataSetFromTximport(countData, colData = metaData, design = ~ Comparison, tidy = TRUE)
# I can't run this line either
# It says: Error in DESeqDataSetFromTximport(countData, colData = metaData, design = ~Comparison, : is(txi, "list") is not TRUE
# I think this might be because merging the parental and B10 counts led to a file that's no longer a txi or accessible through tximport
# It seems like this should be done with the original StringTie files from the very beginning of the code
# My concern with doing that is that the files for parental and B10 are separate so I don't see how I could end up comparing the two
# I think this approach would work if I was interested in comparing n1 versus n2 for each cell line, but that is not of interest to me
Read list of files with inconsistent delimiter/fixed width
I am trying to find a more efficient way to import a list of data files with a somewhat awkward structure. The files are generated by a software program that looks like it was intended to be printed and viewed rather than exported and used. Each file contains a list of "Compounds" and some associated data. Following a line reading "Compound X: XXXX", there are lines of tab-delimited data. Within each file the number of rows for each compound remains constant, but the number of rows may change between files. Here is some example data:

# Generate two data files to be imported
cat("Quantify Compound Summary Report\n",
    "\nPrinted Mon March 28 14:54:39 2022\n",
    "\nCompound 1: One\n",
    "\tName\tID\tResult",
    "\n1\tA1234\tQC\t25.2",
    "\n2\tA4567\tQC\t26.8\n",
    "\nCompound 2: Two\n",
    "\tName\tID\tResult",
    "\n1\tA1234\tQC\t51.1",
    "\n2\tA4567\tQC\t48.6\n",
    file = "test1.txt")

cat("Quantify Compound Summary Report\n",
    "\nPrinted Mon March 28 14:54:39 2022\n",
    "\nCompound 1: One\n",
    "\tName\tID\tResult",
    "\n1\tC1234\tQC\t25.2",
    "\n2\tC4567\tQC\t26.8",
    "\n3\tC8910\tQC\t25.4\n",
    "\nCompound 2: Two\n",
    "\tName\tID\tResult",
    "\n1\tC1234\tQC\t51.1",
    "\n2\tC4567\tQC\t48.6",
    "\n3\tC8910\tQC\t45.6\n",
    file = "test2.txt")

What I want in the end is a list of data frames, one for each "Compound", containing all rows of data associated with that compound. To get there, I have a fairly convoluted approach of smashed-together functions which gives me what I want, but in a very unruly fashion.

library(tidyverse)

## Step 1: ID list of data files
data.files <- list.files(path = ".", pattern = ".txt", full.names = TRUE)

## Step 2: Read in the data files
data.list.raw <- lapply(data.files, read_lines, skip = 4)

## Step 3: Identify the "compounds" in the data file output
Hdr.dat <- lapply(data.list.raw, function(x) grepl("Compound", x))
# Scan the file and find the different compounds within it (this can be applied to any Waters output)
grp.dat <- Map(function(x, y) {x[y][cumsum(y)]}, data.list.raw, Hdr.dat)

## Step 4: Unpack the tab-delimited parts of the export file, then generate a list of dataframes within a list of imported files
Read <- function(x) read.table(text = x, sep = "\t", fill = TRUE, stringsAsFactors = FALSE)
raw.dat <- Map(function(x, y) {Map(Read, split(x, y))}, data.list.raw, grp.dat)

## Step 5: Curate the list of compounds - remove "Compound X: "
cmpd.list <- lapply(raw.dat, function(x) trimws(substring(names(x), 13)))

## Step 6: Rename the headers for the dataframes, remove the blank rows and recentre
NameCols <- function(z) lapply(names(z), function(i){
  x <- z[[ i ]]
  colnames(x) <- x[2,]
  x[c(-1,-2),]
})
data.list <- Map(function(x, y){setNames(NameCols(x), y)}, raw.dat, cmpd.list)

## Step 7: rbind the data based on the compound
cmpd_names <- unique(unlist(sapply(data.list, names)))
result <- list()
for (n in cmpd_names) {
  result[[n]] <- map(data.list, n)
}
list.merged <- map(result, dplyr::bind_rows)
list.merged <- lapply(list.merged, function(x) x %>% filter(Name != ""))

The challenge here is script efficiency in terms of time (I can import hundreds or thousands of data files with hundreds of lines of data, which can take quite a while) as well as general "cleanliness", which is why I included tidyverse as a tag here. I also want this to be highly generalizable, as the "Compounds" may change over time. If someone can come up with a clean and efficient way to do all of this I would be forever in your debt.
See one approach below. The whole pipeline might be intimidating at first glance; you can insert a head (or tail) call after each step (%>%) to display the current stage of the data transformation. There's a bit of cleanup with regular expressions going on in the gsubs: modify as desired.

intermediate_result <-
  data.frame(file_name = c('test1.txt', 'test2.txt')) %>%
  rowwise %>%
  ## read file content into a raw string:
  mutate(raw = read_file(file_name)) %>%
  ## separate raw file contents into rows
  ## using newline and carriage return as row delimiters:
  separate_rows(raw, sep = '[\\n\\r]') %>%
  ## provide a compound column for later grouping
  ## by extracting the 'Compound' string from column raw
  ## or setting the compound column to NA otherwise:
  mutate(compound = ifelse(grepl('^Compound', raw),
                           gsub('.*(Compound .*):.*', '\\1', raw),
                           NA)
         ) %>%
  ## remove rows with empty raw text:
  filter(raw != '') %>%
  ## filling missing compound values (NAs) with last non-NA compound string:
  fill(compound, .direction = 'down') %>%
  ## keep only rows with tab-separated raw string
  ## indicating tabular data:
  filter(grepl('\\t', raw)) %>%
  ## insert a column header 'Index' because
  ## original format has four data columns but only three header cols:
  mutate(raw = gsub(' *\\tName', 'Index\tName', raw))

The above steps result in a dataframe with a column 'raw' containing the cleaned-up data as a string suited for conversion into tabular data (tab-delimited, linefeeds). From there on, we can either proceed by keeping the future single tables inside the parent table as a so-called list column (Variant A), or proceed by splitting column 'raw' and mapping over it (Variant B, credits to #Dorton).

Variant A produces a column of dataframes inside the dataframe:

intermediate_result %>%
  group_by(compound) %>%
  ## the nifty piece: you can store dataframes inside a dataframe:
  mutate(tables = list(read.table(text = raw, header = TRUE, sep = '\t')))

Variant B produces a list of dataframes named with the corresponding compound:

intermediate_result %>%
  split(f = as.factor(.$compound)) %>%
  lapply(function(x) x %>%
           separate(raw,
                    into = unlist(str_split(x$raw[1], pattern = "\t"))
                    )
         )
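A small usage sketch for Variant B (the object name compound_tables is mine, and I've added an explicit sep = "\t" so that the decimal point in the Result column is not treated as a delimiter by separate()):

compound_tables <- intermediate_result %>%
  split(f = as.factor(.$compound)) %>%
  lapply(function(x) x %>%
           separate(raw,
                    into = unlist(str_split(x$raw[1], pattern = "\t")),
                    sep = "\t"))

# one data frame per compound, pooling rows from every input file;
# the repeated header lines can be dropped afterwards:
compound_tables[["Compound 1"]] %>% filter(Name != "Name")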
Ordering columns of data in R
I have a CSV file with 141 rows and several columns. I want the data to be ordered in ascending order by the first two columns, i.e. 'label' and 'index'. Here is my code:

final_data <- read.csv("./features.csv", header = FALSE,
                       col.names = c('label', 'index', 'nr_pix', 'rows_with_1', 'cols_with_1',
                                     'rows_with_3p', 'cols_with_3p', 'aspect_ratio', 'neigh_1',
                                     'no_neigh_above', 'no_neigh_below', 'no_neigh_left',
                                     'no_neigh_right', 'no_neigh_horiz', 'no_neigh_vert',
                                     'connected_areas', 'eyes', 'custom'))
sorted_data_by_label <- final_data[order(label),]
sorted_data_by_index <- sorted_data_by_label[order(index),]
write.table(sorted_data_by_index, file = "./features.csv", append = FALSE, sep = ',', row.names = FALSE)

I read from a CSV and use write.table because my code needs to overwrite the CSV while keeping the column names. Since I added a comma after order(label) and order(index), shouldn't the sorted data still include all the other rows and columns? After running this code, I only get the first row out of 141 rows. Is there a way to fix this problem?
As #akrun has mentioned briefly, what you need to do is to change

sorted_data_by_label <- final_data[order(label),]

to

sorted_data_by_label <- final_data[order(final_data$label),]

and to change

sorted_data_by_index <- sorted_data_by_label[order(index),]

to

sorted_data_by_index <- sorted_data_by_label[order(sorted_data_by_label$index),]

This is because when you write label, R will try to find the label object in the global environment, not within the final_data data frame. If you intended to use the index column of final_data, you need to write final_data$index explicitly.

Other options:

You can use with:

sorted_data_by_label <- with(final_data, final_data[order(label),])
sorted_data_by_index <- with(sorted_data_by_label, sorted_data_by_label[order(index),])

In dplyr you can use:

sorted_data_by_label <- final_data %>% arrange(label)
sorted_data_by_index <- sorted_data_by_label %>% arrange(index)
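A side note: if the intent is to sort by label first and break ties by index (swap the arguments if it is the other way around), both keys can go into a single order() or arrange() call, so no intermediate object is needed:

# base R: primary key label, secondary key index
sorted_data <- final_data[order(final_data$label, final_data$index), ]

# dplyr equivalent
sorted_data <- final_data %>% arrange(label, index)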
Filtering process not fetching full data? Using dplyr filter and grep
I have a log file that has about 1200 characters (max) on a line. What I want to do is read this in first and then extract certain portions of the file into new columns. I want to extract rows that contain the text "[DF_API: input string]". When I read the file and then filter based on the rows I am interested in, it almost seems like I am losing data. I tried this using dplyr's filter and using standard grep, with the same result. Not sure why this is the case. I appreciate your help with this. The code and the data are at the following link. Satish

Code is given below:

library(dplyr)
setwd("C:/Users/satis/Documents/VF/df_issue_dec01")

sec1 <- read.delim(file = "secondary1_aa_small.log")
head(sec1)
names(sec1) <- c("V1")

sec1_test <- filter(sec1, str_detect(V1, "DF_API: input string") == TRUE)
head(sec1_test)

sec1_test2 = sec1[grep("DF_API: input string", sec1$V1, perl = TRUE),]
head(sec1_test2)

write.csv(sec1_test, file = "test_out.txt", row.names = F, quote = F)
write.csv(sec1_test2, file = "test2_out.txt", row.names = F, quote = F)

Data (and code) is given at the link below. Sorry, I should have used dput.
https://spaces.hightail.com/space/arJlYkgIev
Try the code below, which gives you a dataframe of lines from your file filtered on a matching condition.

# to read your file
sec1 <- readLines("secondary1_aa_small.log")

# build a dataframe by extracting the required lines from the file above
new_sec1 <- data.frame(grep("DF_API: input string", sec1, value = T))
names(new_sec1) <- c("V1")

Edit: a simple way to split the above column into multiple columns:

# extract the substring between < and >
new_sec1$V1 <- gsub(".*[<\t]([^>]+)[>].*", "\\1", new_sec1$V1)
# replace commas (,) with whitespace
new_sec1$V1 <- gsub("[,]+", " ", new_sec1$V1)
# split into separate columns
new_sec1 <- strsplit(new_sec1$V1, " ")
new_sec1 <- lapply(new_sec1, function(x) x[x != ""])
new_sec1 <- do.call(rbind, new_sec1)
new_sec1 <- data.frame(new_sec1)

Change the column names as needed for your analysis.
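A guess as to why the original read.delim() approach seemed to drop rows (not verified against the linked log): read.delim() quotes on " by default and treats the first line as a header, so an unmatched quote character in a log line can swallow the lines that follow it. If you prefer to stay with read.delim(), disabling quoting should behave much like readLines():

# read every log line, with quoting turned off so stray quote characters cannot merge lines
sec1 <- read.delim("secondary1_aa_small.log", header = FALSE,
                   quote = "", stringsAsFactors = FALSE)
sec1_test <- sec1[grep("DF_API: input string", sec1$V1), , drop = FALSE]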
R: Conditional Formatting across excel files
I am trying to highlight rows of an Excel file based on a match against the columns of a separate Excel file. Essentially, I want to highlight a row in file1 if a cell in that row matches a cell in file2. I saw that the "conditionalFormatting" function (from the openxlsx package) has some of this functionality, but I cannot figure out how to use it. The pseudo-code I have in mind looks something like this:

file1 <- read_excel("file1")
file2 <- read_excel("file2")

conditionalFormatting(file1, sheet = 1, cols = 1:end, rows = 1:22,
                      rule = "number in file1 is found in a specific column of file 2")

Please let me know if this makes sense or if I need to clarify something. Thanks!
The conditionalFormatting() function embeds active conditional formatting into the Excel document, but it is likely more complicated than you need for a one-time highlight. I'd suggest loading each file into a dataframe, determining which rows contain a matching cell, creating a highlight style (yellow background), loading the file as a workbook object, setting the appropriate rows to the highlight style, and saving the updated workbook object.

The following function is used to determine which rows have a match. The magrittr package provides the %>% pipes and the data.table package provides the transpose() function.

find_matched_rows <- function(df1, df2) {
  require(magrittr)
  require(data.table)

  # the dataframe object treats each column as a list, making it much easier and
  # faster to search via column than row. Transpose the original file1 dataframe
  # to treat the rows as columns.
  df1_transposed <- data.table::transpose(df1)

  # assuming that the location of the match in the second file is irrelevant,
  # unlist the file2 dataframe so that each value in file1 can be searched in a
  # vector
  df2_as_vector <- unlist(df2)

  # determine which columns contain a match. If one or more matches are found,
  # attribute the row as 'TRUE' in the output vector to be used to subset the
  # row numbers
  match_map <- lapply(df1_transposed, FUN = `%in%`, df2_as_vector) %>%
    as.data.frame(stringsAsFactors = FALSE) %>%
    sapply(function(x) sum(x) > 0)

  # make a vector of row numbers using the logical match_map vector to subset
  matched_rows <- seq(1:nrow(df1))[match_map]

  return(matched_rows)
}

The following code loads the data, finds the matched rows, applies the highlight, and saves over the original file1.xlsx. The second tst_df1 and tst_df2 provide an easy way of testing the find_matched_rows() function. As expected, it finds that the 1st and 3rd rows of the first dataframe contain a cell that matches a cell in the second dataframe.

# used to ensure that the correct rows are highlighted. the dataframe does not
# include the header as an independent row, unlike excel.
file1_header_row <- 1
file2_header_row <- 1

tst_df1 <- openxlsx::read.xlsx("./file1.xlsx", startRow = file1_header_row)
tst_df2 <- openxlsx::read.xlsx("./file2.xlsx", startRow = file2_header_row)

# example data for testing
tst_df1 <- data.frame(fname = c("John", "Bob", "Bill"),
                      lname = c("Smith", "Johnson", "Samson"),
                      wage = c(10, 15.23, 137.38),
                      stringsAsFactors = FALSE)

tst_df2 <- data.frame(a = c(10, 34, 284.2),
                      b = c("Billy", "Bill", "Billy-Bob"),
                      c = c("Samson", "Johansson", NA),
                      stringsAsFactors = FALSE)

df_matched_rows <- find_matched_rows(tst_df1, tst_df2)

# any color found in colours() can be used here, or a hex color beginning with "#"
highlight_style <- openxlsx::createStyle(fgFill = "yellow")

file1_wb <- openxlsx::loadWorkbook(file = "./file1.xlsx")

openxlsx::addStyle(wb = file1_wb,
                   sheet = 1,
                   style = highlight_style,
                   rows = file1_header_row + df_matched_rows,
                   cols = 1:ncol(tst_df1),
                   stack = TRUE,
                   gridExpand = TRUE)

openxlsx::saveWorkbook(wb = file1_wb, file = "./file1.xlsx", overwrite = TRUE)
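For completeness, a rough illustration of the openxlsx::conditionalFormatting() API mentioned above. This is only a sketch with a simple within-sheet expression rule (highlight rows whose third column exceeds 100 in the test data), not the cross-file match, since an embedded rule can only reference values available inside the workbook itself; the output file name is hypothetical:

# illustrative only: embed a live rule instead of a static style
openxlsx::conditionalFormatting(wb = file1_wb,
                                sheet = 1,
                                cols = 1:3,
                                rows = 2:4,
                                rule = "$C2>100",
                                style = highlight_style)
openxlsx::saveWorkbook(wb = file1_wb, file = "./file1_rule.xlsx", overwrite = TRUE)  # hypothetical output file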