I'm using the following code:
setwd("~/R/Test")
require(openxlsx)
file_list <- list.files(getwd())
for (file in file_list){
  file = read.xlsx(file)
  write.csv(file, file = file)
}
This opens each file in the directory, reads the Excel file, and saves it as a CSV. However, I'd like to get the original file name and save the CSV under that name. Is there a way to do this?
Thanks!
As pointed out in the comments, you're overwriting the variable file. I also recommend changing the extension of the file. Try this as your for loop:
for (file in file_list) {
  file.xl <- read.xlsx(file)
  write.csv(file.xl, file = sub("xlsx$", "csv", file))
}
Note that you'll need to change the "xlsx$" to "xls$" depending on what the extensions are of the files in your directory.
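If the directory mixes both extensions, one regex can produce the output name in either case (a small sketch using base R's sub(); note that openxlsx's read.xlsx itself only reads .xlsx, so .xls files would need a different reader):

```r
# "xlsx?$" matches either "xls" or "xlsx" at the end of the name,
# because the "?" makes the final "x" optional
sub("xlsx?$", "csv", "data.xlsx")  # "data.csv"
sub("xlsx?$", "csv", "data.xls")   # "data.csv"
```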
I want to save each element of a list as a .txt file and put all of the .txt files in a zip folder. I was able to create a function that saves each element of the list into a .txt file. A sample list could look like this:
I found that gzfile and readr allow compressing a single file. For example:
write_tsv(mtcars, file.path(dir, "mtcars.tsv.gz"))
#OR
write.csv(mtcars, file=gzfile("mtcars.csv.gz"))
Whereas, I want to be able to create a zip folder that contains data1.txt, data2.txt and data.txt.
Are there any packages that would allow this?
You can use zip() from base R to zip your text files. Something like this:
file_names <- paste0(names(your_list), ".txt")
for(i in seq_along(your_list)) {
  write_tsv(your_list[[i]], file.path(dir, file_names[i]))
}
zip(file.path(dir, "zipped_files.zip"), files = file.path(dir, file_names))
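A minimal end-to-end sketch with a hypothetical two-element list (the names and directory are assumptions; zip() shells out to the system's zip utility, and the "-j" flag stores bare file names in the archive instead of full paths):

```r
library(readr)

# hypothetical named list standing in for your_list
your_list <- list(data1 = head(mtcars), data2 = head(iris))
dir <- tempdir()

file_names <- paste0(names(your_list), ".txt")
for (i in seq_along(your_list)) {
  write_tsv(your_list[[i]], file.path(dir, file_names[i]))
}

# "-j" junks the directory part, so the zip contains data1.txt and data2.txt
zip(file.path(dir, "zipped_files.zip"),
    files = file.path(dir, file_names), flags = "-j")
```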
I have a compressed file like cat.txt.tar.gz that I just need to load into R and process, as follows:
zip <-("cat.txt.tar.gz")
data <- read.delim(file=(untar(zip,"cat.txt")),sep="\t")
but "data" is empty after running the code. Is there any way to read a file from a .tar.gz archive?
Are you sure your file is named correctly?
Usually compressed files are named cat.tar.gz, excluding the .txt.
Second, try the following code:
tarfile <- "cat.txt.tar.gz" # Or "cat.tar.gz" if that is right
untar(tarfile) # extracts the archive's contents into the working directory
data <- read.delim("cat.txt", sep = "\t")
Note that untar() returns a status code rather than a file name, so the archive has to be extracted first and the extracted file read in a second step; passing the result of untar() directly to read.delim() is what left data empty.
To read a particular csv or txt inside a .tar.gz archive without having to unzip it first, one can use the archive package:
library(archive)
library(readr)
read_tsv(archive_read("cat.txt.tar.gz", file = 1), col_types = cols())
should work. (readr's read_csv has no sep argument, and since the file is tab-separated, read_tsv is the matching reader.)
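Rather than guessing at file = 1, archive() can list the archive's contents first, and the member can then be picked by name. A sketch (it builds a tiny tab-separated cat.txt.tar.gz of its own so it runs standalone; the column names are made up):

```r
library(archive)
library(readr)

# build a small example archive so the sketch is self-contained
writeLines("a\tb\n1\t2\n3\t4", "cat.txt")
tar("cat.txt.tar.gz", "cat.txt", compression = "gzip")

# archive() returns a data frame describing each member (path, size, date)
archive("cat.txt.tar.gz")

# archive_read() also accepts the member's name instead of its index
data <- read_tsv(archive_read("cat.txt.tar.gz", file = "cat.txt"),
                 col_types = cols())
```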
I want to read files with extension .output with the function read.table.
I used pattern=".output" but that's not correct.
Any suggestions?
As an example, here's how you could read in files with the extension ".output" and create a list of tables:
list.filenames <- list.files(pattern="\\.output$")
trialsdata <- lapply(list.filenames,read.table,sep="\t")
or, if you just want to read them one at a time manually, just include the extension in the filename argument:
read.table("ACF.output",sep=...)
So finally, because I didn't find a solution (something was going wrong with my path), I made a text file listing all the .output files with ls *.output > data.txt.
After that, using:
files = read.table("./data.txt")
I make a data frame containing all my file names, and using
files[] <- lapply(files, as.character)
I convert them to characters. Finally, with test = read.table(files[i,],header=F,row.names=1)
we can read each file stored in row i (i = row number).
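The path problem that workaround sidesteps can usually be avoided by letting list.files() return full paths directly. A sketch (it creates a tiny example data/ directory so it runs on its own; the directory name and contents are assumptions):

```r
# set up a small example directory with one .output file
dir.create("data", showWarnings = FALSE)
write.table(data.frame(id = c("r1", "r2"), val = c(1, 2)),
            "data/ACF.output", sep = "\t",
            row.names = FALSE, col.names = FALSE)

# full.names = TRUE returns "data/ACF.output"-style paths, so read.table()
# finds the files regardless of the current working directory
list.filenames <- list.files(path = "data", pattern = "\\.output$",
                             full.names = TRUE)
trialsdata <- lapply(list.filenames, read.table, sep = "\t",
                     header = FALSE, row.names = 1)
names(trialsdata) <- basename(list.filenames)
```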
I have an R script which generates a csv file of nearly 80000 KB after calculations. I want to write this csv file to a folder, say D:/My_Work/Output, with the file name result.zip, as a zipped file. Please suggest a function or any other way I could achieve this.
Use the zip function:
zip(*path to zip*,*path to csv*)
edit: Unfortunately you cannot go from a data.frame straight to a zipped csv. You need to explicitly make the csv, but it wouldn't be hard to write a wrapper that deletes the csv so that you never know it's there, like so:
zipped.csv <- function(df, zippedfile) {
  # write the csv to a temporary file
  temp <- tempfile(fileext = ".csv")
  write.csv(df, file = temp)
  # zip it; "-j" stores just the file name rather than the full temp path
  zip(zippedfile, temp, flags = "-j")
  # delete the temporary csv
  unlink(temp)
}
If you just want to save some space on disk, then it is more convenient to use *.gz compression:
write.csv(iris, gzfile("iris.csv.gz"), row.names = FALSE)
iris2 = read.csv("iris.csv.gz")
I currently have a folder containing all Excel (.xlsx) files, and using R I would like to automatically convert all of these files to CSV files using the "openxlsx" package (or some variation). I currently have the following code to convert one of the files and place it in the same folder:
convert("team_order\\team_1.xlsx", "team_order\\team_1.csv")
I would like to automate the process so it does it to all the files in the folder, and also removes the current xlsx files, so only the csv files remain. Thanks!
You can try this using rio, since it seems like that's what you're already using:
library("rio")
xls <- dir(pattern = "xlsx")
created <- mapply(convert, xls, gsub("xlsx", "csv", xls))
unlink(xls) # delete xlsx files
library(readxl)
# Create a vector of Excel files to read
files.to.read = list.files(pattern="xlsx")
# Read each file and write it to csv
lapply(files.to.read, function(f) {
  df = read_excel(f, sheet=1)
  write.csv(df, gsub("xlsx", "csv", f), row.names=FALSE)
})
You can remove the files with the command below. However, this is dangerous to run automatically right after the previous code. If the previous code fails for some reason, the code below will still delete your Excel files.
lapply(files.to.read, file.remove)
You could wrap it in tryCatch() to be safe.
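A sketch of that pattern, deleting each .xlsx only after its .csv was written successfully (the message text is illustrative; with no .xlsx files present the loop simply does nothing):

```r
library(readxl)

files.to.read <- list.files(pattern = "xlsx")

results <- lapply(files.to.read, function(f) {
  tryCatch({
    df <- read_excel(f, sheet = 1)
    write.csv(df, gsub("xlsx", "csv", f), row.names = FALSE)
    file.remove(f)  # reached only if reading and writing both succeeded
  }, error = function(e) {
    # on failure, keep the original .xlsx and report why
    message("Skipping ", f, ": ", conditionMessage(e))
    FALSE
  })
})
```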