Exporting multiple R data frames to a single Excel sheet - r

I would like to export multiple data frames from R to a single Excel sheet. By using the following code:
write.xlsx(DF1, file = "C:\\Users\\Desktop\\filename.xlsx", sheetName = "sheet1",
           col.names = TRUE, row.names = TRUE, append = FALSE)
write.xlsx(DF2, file = "C:\\Users\\Desktop\\filename.xlsx", sheetName = "sheet2",
           col.names = TRUE, row.names = TRUE, append = TRUE)
I can export two data frames to a single Excel workbook, but in two separate sheets. I would like to export them to a single sheet and, if possible, to control the specific cells where each data frame is placed.
Any suggestions more than welcome.

This is not a ready-to-use answer, but it should get you to your target; it would be a mess to write it into a comment.
1. Create the combined data frame with the tools of R.
2. Write that data frame to Excel.
A few notes on point 1:
Vertically offset the second data frame from the first, e.g. Reduce(rbind, c(list(mtcars), rep(list(NA), 3))) for a three-row offset.
rbind the column names onto your data frame: rbind(names(mtcars), mtcars).
Use numbers as column names so you will have no problem rbinding data frames with different variables: names(mtcars) <- seq_along(mtcars).
On point 2:
Since your column names are numbers now, make sure you write with col.names = FALSE.
Hope this helps and you can get your desired output.
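The steps above can be sketched in base R. This is only an illustration: the helper name combine_stacked, the toy mtcars/iris inputs, and the three-row gap are my own choices, not from the question.

```r
# Stack several data frames into one block, ready to be written to a single
# sheet with write.xlsx(..., col.names = FALSE, row.names = FALSE).
combine_stacked <- function(dfs, gap = 3) {
  width <- max(vapply(dfs, ncol, integer(1)))
  blocks <- lapply(dfs, function(df) {
    df[] <- lapply(df, as.character)   # one common type so blocks rbind cleanly
    df <- rbind(names(df), df)         # keep the original header as a data row
    if (ncol(df) < width) {
      df[(ncol(df) + 1):width] <- NA   # pad narrower blocks with empty columns
    }
    names(df) <- seq_len(width)        # numeric names so rbind() can match them
    df
  })
  spacer <- as.data.frame(matrix(NA, nrow = gap, ncol = width))
  names(spacer) <- seq_len(width)
  Reduce(function(a, b) rbind(a, spacer, b), blocks)
}

out <- combine_stacked(list(head(mtcars, 2), head(iris, 2)))
```

out is then a single data frame, so one write.xlsx(out, file, col.names = FALSE, row.names = FALSE) call puts both tables on one sheet, separated by three blank rows.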

Following most of your suggestions, I realized that using cbind.data.frame gives an output which is not optimal, but the amount of time I need to restructure the data in Excel is really insignificant. So I will proceed with this for the time being.
Thanks

I can't comment yet, so I'll provide my input here:
Using write.xlsx in R, how to write in a Specific Row or column in excel file
That link suggests organizing your data in a single data frame and then writing that into the Excel sheet. You should have a look at it.
As slackline suggested, this is quite easy if your columns or rows are the same, using the methods he suggests.
Edit: To add spaces in between, just insert empty columns before writing.
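For instance, two tables can be put side by side with blank spacer columns. The helper name side_by_side and the two-column gap below are illustrative choices of mine, assuming both tables have the same number of rows:

```r
# Join two equally tall data frames with `gap` empty columns between them,
# then write the result as a single block.
side_by_side <- function(a, b, gap = 2) {
  spacer <- as.data.frame(matrix(NA, nrow = nrow(a), ncol = gap))
  names(spacer) <- rep("", gap)   # blank header cells over the gap
  cbind(a, spacer, b)
}

out <- side_by_side(head(mtcars[1:3], 2), head(mtcars[4:6], 2))
```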

Related

Import excel (csv) data into R conducting bioinformatics task

I'm a newcomer exploring bioinformatics via R. Right now I've run into a problem. I imported my Excel data into R by converting it to csv format and using the read.csv command. There are 37 variables (columns), and the first column is supposed to be treated as a fixed factor. I would like to match the data with another matrix, which has only 36 variables, in the downstream processing. What should I do to reduce the number of variables by fixing the first column?
Many thanks in advance.
sure, I added str() properties of my data here.
If I am not mistaken, what you are looking for is setting the "Gene" column as metadata, indicating which gene the values in each row correspond to. You can then delete the word "Gene" (the first header cell) in the Excel file, because when you import it with the read.csv() function, the first column is used as row names by default whenever "there is a header and the first row contains one fewer field than the number of columns".
You can find more information about this function using ?read.csv
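A minimal demonstration of that behaviour; the toy gene table below is made up for illustration:

```r
# When the header row has one fewer field than the data rows, read.csv
# automatically uses the first column as row names.
csv <- tempfile(fileext = ".csv")
writeLines(c("s1,s2,s3",
             "geneA,1,2,3",
             "geneB,4,5,6"), csv)
d <- read.csv(csv)
# The gene IDs become row names, so d has only the 3 sample columns.
```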

Changing data type in data frame

I have some data tables in Excel spreadsheets which I am using in R. Some of the tables store numbers as text i.e. numeric values are stored as characters.
To clarify, the problem is not formatting but the numbers themselves: Excel (and R) sees such numbers as characters, like letters, rather than as numbers.
Since the problem is not one of formatting, the addStyle function in openxlsx did not work for me.
After some googling, I decided to try writing a for loop that checks each value individually. I wrote a nested for loop that checks each value and overwrites it if it is a number (code is below). This seems logically sound, but the values do not get overwritten, i.e. values that were stored as text are still there.
library(readxl)
library(openxlsx)

wb <- loadWorkbook(choose.files())
data0 <- as.data.frame(read_excel(choose.files(), sheet = 1, range = "B1:E1131"))
data <- data0
for (i in 1:ncol(data)) {
  for (j in 1:nrow(data)) {
    if (is.numeric(as.numeric(data[j, i])) && !is.na(as.numeric(data[j, i]))) {
      data[j, i] <- as.numeric(data[j, i])
    }
  }
}
Desired outcome:
I would like to change the data in the column "Expenses" (in the picture below) to the data in the column to its right, via R.
coming from my comment:
You can use the col_types argument of readxl::read_excel() to force reading of text/numeric/date/... data.
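As an aside on why the loop in the question appears to do nothing: assigning a number into a character column coerces it straight back to character. Converting whole columns avoids that. The sketch below uses toy data and my own helper logic, converting every column whose non-missing cells all parse as numbers:

```r
# Columns read as text can be converted wholesale; a cell-by-cell
# assignment into a character column is silently turned back into text.
data <- data.frame(a = c("1", "2.5"), b = c("x", "3"),
                   stringsAsFactors = FALSE)
fixed <- as.data.frame(
  lapply(data, function(col) {
    num <- suppressWarnings(as.numeric(col))
    # convert only if every non-NA cell parsed as a number
    if (all(!is.na(num) | is.na(col))) num else col
  }),
  stringsAsFactors = FALSE
)
```

With readxl, the cleaner fix is at read time, e.g. read_excel(path, sheet = 1, range = "B1:E1131", col_types = "numeric"), assuming all four columns really are numeric.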

How to merge a set of lists into a single data frame

I am new to R and coding in general, so please bear with me.
I have a spreadsheet that has 7 sheets, 6 of these sheets are formatted in the same way and I am skipping the one that is not formatted the same way.
The code I have is thus:
lst <- lapply(2:7,
              function(i) read_excel("CONFIDENTIAL Ratio 062018.xlsx", sheet = i))
This code was taken from this post: How to import multiple xlsx sheets in R
So far so good, the formula works and I have a large list with 6 sub lists that appears to represent all of my data.
It is at this point that I get stuck, being so new I do not understand lists yet, and really need the lists to be merged into one single data frame that looks and feels like the source data (so columns and rows).
I cannot work out how to get from a list to a single data frame. I've tried rbind and other suggestions from here, but they all seem to either fail or only partially work, and I end up with a data frame that still looks like a list.
If each sheet has the same number of columns (ncol) and the same column names (colnames), then this will work. It needs the dplyr package.
require(dplyr)
my_dataframe <- bind_rows(my_list)
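If dplyr is not available, a base-R equivalent works too. The sheet-name bookkeeping below is an optional extra of mine, not part of the original answer:

```r
# Bind a named list of same-shaped data frames into one, and record
# which sheet each row came from.
my_list <- list(sheet2 = head(mtcars, 2), sheet3 = head(mtcars, 3))
my_dataframe <- do.call(rbind, my_list)
my_dataframe$sheet <- rep(names(my_list), vapply(my_list, nrow, integer(1)))
```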

nested for loops in R to parse csv files?

Edit: I've corrected the typo in the coding (a copy-and-paste error). I can't add an example of the csv files, as they are too complex to model in a simple example (I tried..).
I've spent hours looking through similarly titled questions to solve a for-loop problem in R, and have tried a lot of different approaches, but I'm having no luck.
I have many different csv files, each of which has a set of 10 separate strings (variables) identifying a specific row (e.g., names = c("Delta values", "Scream factor", "nightmare mode")). Two rows below such a string, I need the max value of that row of data. I can create loops scanning single csv files for such a value using the following
test files-
test1.csv, test2.csv, test3.csv, test4.csv
names <- list.files(pattern = ".csv")
DF <- NULL
for (i in names) {
  dat <- read.csv(i, header = FALSE, stringsAsFactors = FALSE)
  index <- which(dat == "Delta values", arr.ind = TRUE)
  row <- as.numeric(rownames(dat)[index[1]])
  aver <- dat[row + 2, ]
  p <- max(na.omit(as.numeric(aver)))
  DF <- rbind(DF, p)
  colnames(DF) <- dat[index]
}
However, my problem comes in trying to generalize it. I want the returned data frame to indicate, as the row name, the file each value was retrieved from (not "p"), and to loop over the files so that I can retrieve the next several variables, appending to the same data frame. I should end up with a data frame listing in each row the filename the values were derived from, with each variable in a separate column.
I'm pretty sure I need a nested loop listing the values I want to retrieve as calculated by "p", but I can't find any good examples describing how to loop iteratively with such an approach and append the new variables to the growing data frame while keeping the row numbering consistent by file.
Please help!
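One way to sketch that generalization, with the caveat that the toy files, the markers vector, and the result layout below are my own illustration of the pattern (outer loop over files, inner loop over marker strings, one row per file, one column per marker):

```r
markers <- c("Delta values", "Scream factor")

# Two toy csv files standing in for test1.csv, test2.csv, ...
dir <- tempfile(); dir.create(dir)
for (f in c("test1.csv", "test2.csv")) {
  writeLines(c("Delta values,,", "x,y,z", "1,2,3",
               "Scream factor,,", "a,b,c", "4,5,6"),
             file.path(dir, f))
}

files <- list.files(dir, pattern = "\\.csv$", full.names = TRUE)
res <- data.frame(row.names = basename(files))
for (f in files) {
  dat <- read.csv(f, header = FALSE, stringsAsFactors = FALSE)
  for (m in markers) {
    idx <- which(dat == m, arr.ind = TRUE)          # locate the marker row
    vals <- suppressWarnings(as.numeric(unlist(dat[idx[1, "row"] + 2, ])))
    res[basename(f), m] <- max(vals, na.rm = TRUE)  # max of the row 2 below
  }
}
```

Each file contributes one row named after the file, and each marker one column, so nothing depends on the order in which files are visited.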

Organizing Messy Notepad data

I have some data in Notepad that is a mess. There is basically no space between the different columns, which hold different data, but I know the character positions of the data.
For example, columns 1-2 are X, columns 7-10 are Y....
How can I organize this? Can it be done in R? What is the best way to do this?
?read.fwf may be a good bet for this circumstance.
Set the path to the file:
temp <- "\\pathto\\file.txt"  # backslashes must be doubled in R strings, or use "/"
Then set the widths of the variables within the file, as demonstrated below.
# columns 1-2 = X, 3-10 = Y
widths <- c(2, 8)
Then set the names of the columns.
cols <- c("X","Y")
Finally, import the data into a new variable in your session:
dataset <- read.fwf(temp, widths, header = FALSE, col.names = cols)
Something I've done in the past to handle that kind of mess is to import it into Excel as fixed-width text, then save it as a CSV.
Just a suggestion for you. If it's a one-off project, then that should be fine: no coding at all. But if it's a repeat offender, then you might look at regular expressions,
i.e. ^(.{6})(.{7})(.{2})(.{5})$ for four fields of 6, 7, 2 and 5 characters' width, in order.
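In R terms the regex route might look like this; the toy line and the 2/8 widths (matching the X/Y example above) are my own:

```r
# Split one fixed-width line with a regex: 2 characters of X, then 8 of Y.
line <- "AB   12.34"
m <- regmatches(line, regexec("^(.{2})(.{8})$", line))[[1]]
x <- m[2]          # first capture group
y <- trimws(m[3])  # second group, with padding spaces stripped
```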
