With the following
dir.create(paste0('hello ', Sys.Date()))
I created a directory named hello 2020-08-10 (today's date). How can I write a CSV file inside it? I could use the setwd() command, but it requires the path of the directory. Is there a way to get the path of a directory?
That directory is created as a subdirectory of your current working directory. Thus, you should be able to write your csv file with the relative path "hello 2020-08-10/file.csv".
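For example, a minimal sketch assuming your data frame is called df:
df <- data.frame(x = 1:3)  # placeholder data
write.csv(df, file = file.path(paste0("hello ", Sys.Date()), "file.csv"), row.names = FALSE)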
If you have data in a data frame to write out, you can do it this way. You can always get the path of the working directory with getwd():
dir <- paste0('hello ', Sys.Date())
dir.create(dir)                          # create the dated directory
yourdf <- read.csv("file.csv")           # the data you want to write out
wrtfile <- paste0(dir, "/filename.csv")  # relative path inside the new directory
write.csv(yourdf, file = wrtfile, row.names = FALSE)
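To answer the literal question, the absolute path of the newly created directory can be recovered with normalizePath():
normalizePath(dir)  # e.g. ".../hello 2020-08-10", resolved against getwd()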
I want to read the CSV file "mydata.csv" as input and create the output in the same directory using R. I have hard-coded the paths for the CSV input (Domain_test.csv) and the output (MyData.csv) as shown below. But I will have to share the script and the corresponding CSV files with another user so that they can execute it and get the results. I want the user to be able to select whatever path they want, and have the script run without hard-coding the input/output paths.
How should this be done in R?
# reading the CSV from the input/output directory
data <- read.csv("C:/Users/Desktop/input_output_directory/Domain_test.csv")
# generating the output in the same directory
write.csv(dataframe, "C:/Users/Desktop/input_output_directory/MyData.csv", row.names = FALSE)
You can use
wd <- choose.dir(default = "", caption = "Select folder")
setwd(wd)
Note that choose.dir() is Windows-only; on other platforms, tcltk::tk_choose.dir() or (inside RStudio) rstudioapi::selectDirectory() can stand in.
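With the selected folder in hand, the input/output paths no longer need to be hard-coded. A minimal sketch using the file names from the question; building paths with file.path() also avoids changing the working directory at all:
wd <- choose.dir(default = "", caption = "Select folder")
data <- read.csv(file.path(wd, "Domain_test.csv"))
write.csv(data, file.path(wd, "MyData.csv"), row.names = FALSE)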
I want to be able to read and edit spatial SQLite tables that are downloaded from a server. These come compressed.
These zip files contain a folder whose name encodes information about the model that was run, and as such the names can sometimes be quite long.
When this folder name gets too long, unzipping the folder fails. I ultimately don't need to unzip the file, but I seem to get the same error when I use unz() within readOGR().
I can't think of how to create a reproducible example, but I can give an example of a path that works and one that doesn't.
Works:
"S:\3_Projects\CRC00001\4699-12103\scenario_initialised model\performance_assessment.sqlite"
4699-12103 is the zip file name
and "scenario_initialised model" is the offending subfolder
Fails:
""S:\3_Projects\CRC00001\4699-12129\scenario_tree_canopy_7, number_of_trees_0, roads_False, compliance_75, year_2030, nrz_cover_0.6, green_roofs_0\performance_assessment.sqlite""
4699-12103 is the zip file name
and "scenario_tree_canopy_7, number_of_trees_0, roads_False, compliance_75, year_2030, nrz_cover_0.6, green_roofs_0" is the offending subfolder
The code would work in a similar fashion to this.
list_zips <- list.files(pattern = "\\.zip$", recursive = TRUE, include.dirs = TRUE)
zip_path <- file.path(getwd(), list_zips[i])
# extract into a folder named after the zip (the path minus its ".zip" extension)
unzip(zipfile = zip_path, exdir = tools::file_path_sans_ext(zip_path))
But I would prefer to load the spatial file directly, without unzipping. Such as:
sq_path <- unzip(list_zips[i], list = TRUE)[2, 1]
temp <- unz(file.path(getwd(), list_zips[i]), sq_path)
vectorImport <- readOGR(dsn = temp, layer = "micro_climate_grid")
Any help would be appreciated! Tim
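For reference, unz() returns a connection, while readOGR() expects dsn to be a path string, so the last line above cannot work as written. A possible workaround (a sketch, not a tested solution): extract only the sqlite member into tempdir() with junkpaths = TRUE, which drops the long subfolder name before reading:
sq_path <- unzip(list_zips[i], list = TRUE)$Name[2]  # assumes the sqlite file is the second entry, as above
unzip(list_zips[i], files = sq_path, exdir = tempdir(), junkpaths = TRUE)
vectorImport <- readOGR(dsn = file.path(tempdir(), basename(sq_path)), layer = "micro_climate_grid")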
Inside my working directory I have folders whose names end in "_txt", each containing files. I want to zip all the folders with their original names and the files inside them. Everything works, except that the .zip also contains the name of the directory, which I don't want: e.g. "1202_txt.zip\1202_txt\files" needs to be "1202_txt.zip\files".
dir.create("1202_txt") # creating folder inside working directory
array <- list.files( , "*_txt")
for (i in 1:length(array)){
name <- paste0(array[i],".zip")
#zip(name, files = paste0(d,paste0("/",array[i])))
zip(name, files = array[i])
}
The code above is from Creating zip file from folders in R.
Note: Empty folders can be skipped
Can you please try this? (using R 3.5.0, macOS High Sierra 10.13.6)
dir_array <- list.files(getwd(), "_txt$")
zip_files <- function(dir_name) {
  zip_name <- paste0(dir_name, ".zip")
  zip(zipfile = zip_name, files = dir_name)
}
Map(zip_files, dir_array)
This should zip all the folders inside the current working directory with the specified name. The zipped folders are also housed in the current working directory.
Here is the approach I used to achieve my desired results. It is tricky, but it works.
setwd("c:/test")
dir.create("1202_txt") # creating folder inside working directory and some CSV files in there
array <- list.files( , "*_txt")
for (i in 1:length(array)){
name <- paste0(array[i],".zip")
Zip_Files <- list.files(path = paste0(getwd(),"/", array[[i]]), pattern = ".csv$")
# Moving Working Directory
setwd(file.path("C:\\test\\",array[[i]]))
#zipping files inside the directory
zip::zip(zipfile = paste0(name[[i]]), files = Zip_Files)
# Moving zip File from Inside folder to outside
file.rename(name[i], paste0("C:\\test\\", name[i]))
print(name[i])
}
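A tidier variant that avoids setwd() altogether, assuming a version of the zip package recent enough to have the root argument (which changes into the given directory before archiving):
library(zip)
for (folder in list.files("c:/test", pattern = "_txt$")) {
  files <- list.files(file.path("c:/test", folder), pattern = "\\.csv$")
  # root makes the paths inside the archive relative to the folder,
  # so the zip stores bare file names rather than "folder/file"
  zip::zip(zipfile = file.path("c:/test", paste0(folder, ".zip")),
           files = files, root = file.path("c:/test", folder))
}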
I am trying to create a new folder with all the files I need. I have a datatable called newlist with a column called fullpath which has the file path for each file.
I have tried the code below but the error message says "'from' path too long" so I don't think it recognises the values as separate file paths.
file.copy(from=newlist[,"fullpath"], to=destination, overwrite=TRUE, recursive=TRUE)
I think I need to specify the files to copy first using the list.files() function but I am unsure how to do this with a column of files in a datatable.
Try this:
# use [[ ]] to extract the column as a plain character vector; a one-column
# data.table/data.frame subset is not recognised as a vector of paths
lapply(newlist[["fullpath"]], function(x) file.copy(from = x, to = destination, overwrite = TRUE, recursive = TRUE))
Edit:
If your paths do not contain extensions, you can use this code instead:
lapply(newlist[["fullpath"]], function(x) {
  # find the file in the given directory with that basename plus any extension
  f <- file.path(dirname(x), list.files(dirname(x), paste0(basename(x), ".*")))
  file.copy(from = f, to = destination, overwrite = TRUE, recursive = TRUE)
})
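As an aside, file.copy() is itself vectorised, so once the column is extracted as a plain character vector the lapply() wrapper is optional:
file.copy(from = newlist[["fullpath"]], to = destination, overwrite = TRUE, recursive = TRUE)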
I am trying to loop through many folders in a directory, looking for a particular XML file buried in one of the folders. I would then like to save the location of that file and run my code against it (I will not include that code here). What I am asking is how to loop through all the folders and then open the specific file.
For example:
My main folder would be: C:\Parsing
It has two folders named "folder1" and "folder2".
Each folder has an XML file that I am interested in; let's say it's called "needed.xml".
I would like to have a script that loops through the directory and finds those particular files.
Do you know how I could do that in R?
Using list.files and grepl you could look recursively through all sub-folders:
rootPath <- "C:/Parsing"
listFiles <- list.files(rootPath, recursive = TRUE)
searchFileName <- "needed.xml"
# logical vector marking which paths contain the file name
presentFile <- grepl(searchFileName, listFiles, fixed = TRUE)
if (any(presentFile)) cat("File", searchFileName, "is present at", listFiles[presentFile], "\n")
Is this what you're looking for?
require(XML)
fol <- list.files("C:/Parsing")
for (i in fol) {
  # file.path() inserts the separators that paste(..., sep = "") was missing
  dir <- file.path("C:/Parsing", i, "needed.xml")
  if (file.exists(dir)) {
    needed <- xmlToList(dir)
  }
}
This will locate your XML file and read it into R as a list. It wasn't clear from your question whether you wanted the output to be the data itself or just the location of your data, which could then be supplied to another function/script. If you just want the location, remove the xmlToList() call.
I would do something like this (replace the \\.xml$ pattern with your specific file name if you want):
list.files(path = "C:/Parsing", pattern = "\\.xml$", recursive = TRUE, full.names = TRUE)
This will recursively look for files with the extension .xml under C:/Parsing and return the full paths of the matched files.
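If the goal is to read every match rather than just locate it, the full paths can be fed straight into the XML reader from the answer above (a small sketch combining the two):
library(XML)
found <- list.files("C:/Parsing", pattern = "^needed\\.xml$", recursive = TRUE, full.names = TRUE)
needed_list <- lapply(found, xmlToList)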