I am using the function file.copy() and I have some issues. The idea is to copy files, based on a list of file names, from one folder to another. How can I set up the code to make this work?
list.files("C:/Test/PDF", recursive = TRUE)
t <- read.csv2(file = "C:/Test/tes.csv")  # the CSV with the file names
a <- as.character(t[1:nrow(t), ])
file.copy(a, "C:/Test/PDF verschoben")
This produces the message:
problem copying .\Eidesstattliche.pdf to C:\Test\PDF verschoben\Eidesstattliche.pdf: No such file or directory
problem copying .\Anmeldung_Defensio.pdf to C:\Test\PDF verschoben\Anmeldung_Defensio.pdf: No such file or directory
[1] TRUE FALSE TRUE TRUE FALSE
list.files("C:/Test/PDF", recursive = TRUE)
[1] "1/2014_Bewerbung Mietwohnung_Loe.pdf" "1/Anmeldung_Defensio.pdf" "1/Eidesstattliche.pdf" "2/Anhang_unterzeichnet.pdf"
[5] "2/Formular_unterschrieben.pdf"
a
[1] "Formular_unterschrieben.pdf" "Eidesstattliche.pdf" "Anhang_unterzeichnet.pdf" "2014_Bewerbung Mietwohnung_Loe.pdf"
Any ideas?
The question is not crystal clear, but the issue seems to be that list.files() does NOT return the full path, as you can see in the error message: copying .\Eidesstattliche.pdf.
I cannot see how the files you listed relate to what is in the tes.csv file. Show us the result of getwd() and try:
a <- list.files("C:/Test/PDF", recursive = TRUE)
file.copy(paste0("C:/Test/PDF/", a), "C:/Test/PDF verschoben" )
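Alternatively, list.files() can return complete paths directly via full.names = TRUE, which avoids the paste0() step. A minimal self-contained sketch of that idea, using temporary folders and dummy files in place of the asker's actual paths:

```r
# The temporary folders stand in for "C:/Test/PDF" and "C:/Test/PDF verschoben".
src <- file.path(tempdir(), "PDF")
dst <- file.path(tempdir(), "PDF_verschoben")
dir.create(src, showWarnings = FALSE)
dir.create(dst, showWarnings = FALSE)

# create two dummy files to copy
writeLines("dummy", file.path(src, "Eidesstattliche.pdf"))
writeLines("dummy", file.path(src, "Anmeldung_Defensio.pdf"))

# full.names = TRUE returns complete paths, so file.copy() can locate the files
files <- list.files(src, recursive = TRUE, full.names = TRUE)
file.copy(files, dst)
```

Note that with recursive = TRUE and files in subfolders, the copies land flat in the destination, since file.copy() does not recreate the subfolder structure.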
Related
I have a lot of R files in a folder structure that need to be modified. For example, in
printlog(lvl_info, paste("RAM in use at start of a function:", memory.size(max = FALSE), "MB"))
the call memory.size(max = FALSE) should be replaced by a custom function memorySize(). Is there any method to do this?
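One way to try this is a literal text substitution with gsub(..., fixed = TRUE) over every .R file in the tree. A sketch, assuming the call always appears with exactly the spacing shown above (variations in whitespace would need a regex instead); the helper name and the temp-file demo are invented for illustration:

```r
# Replace the literal call "memory.size(max = FALSE)" with "memorySize()"
# in every .R file under a directory.
replace_in_r_files <- function(dir) {
  r_files <- list.files(dir, pattern = "\\.R$", recursive = TRUE,
                        full.names = TRUE)
  for (f in r_files) {
    txt <- readLines(f, warn = FALSE)
    # fixed = TRUE treats the pattern as a literal string, not a regex
    txt <- gsub("memory.size(max = FALSE)", "memorySize()", txt, fixed = TRUE)
    writeLines(txt, f)
  }
}

# demo on a throwaway file in a temporary folder
d <- tempfile()
dir.create(d)
writeLines('printlog(lvl_info, paste("RAM:", memory.size(max = FALSE), "MB"))',
           file.path(d, "etl.R"))
replace_in_r_files(d)
```

Keeping the files under version control before running a bulk rewrite like this makes it easy to review and revert the changes.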
I found an old thread (How do you read a password protected excel file into r?) that recommended that I use the following code to read in a password protected file:
install.packages("excel.link")
library("excel.link")
dat <- xl.read.file("TestWorkbook.xlsx", password = "pass", write.res.password="pass")
dat
However, when I try to do this my R immediately crashes. I've tried removing the write.res.password argument, and that doesn't seem to be the issue. I have a hunch that excel.link might not work with the newest version of R, so if you know of any other ways to do this I'd appreciate the advice.
EDIT: Using read.xlsx generates this error:
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "newInstance", .jfindClass(class), :
org.apache.poi.poifs.filesystem.OfficeXmlFileException:
The supplied data appears to be in the Office 2007+ XML.
You are calling the part of POI that deals with OLE2 Office Documents.
You need to call a different part of POI to process this data (eg XSSF instead of HSSF)
You can remove the password from the Excel file without knowing it with the following function (an adapted version of the code available at https://www.r-bloggers.com/2018/05/remove-password-protection-from-excel-sheets-using-r/):
remove_Password_Protection_From_Excel_File <- function(dir, file, bool_XLSXM = FALSE)
{
initial_Dir <- getwd()
setwd(dir)
# file name and path after removing protection
if(bool_XLSXM)
{
file_unlocked <- stringr::str_replace(basename(file), "\\.xlsm$", "_unlocked.xlsm")
}else
{
file_unlocked <- stringr::str_replace(basename(file), "\\.xlsx$", "_unlocked.xlsx")
}
file_unlocked_path <- file.path(dir, file_unlocked)
# create temporary directory in project folder
# so we see what is going on
temp_dir <- "_tmp"
# remove and recreate _tmp folder in case it already exists
unlink(temp_dir, recursive = TRUE)
dir.create(temp_dir)
# unzip Excel file into temp folder
unzip(file, exdir = temp_dir)
# get full path to XML files for all worksheets
worksheet_paths <- list.files(paste0(temp_dir, "/xl/worksheets"), full.names = TRUE, pattern = "\\.xml$")
# remove the XML node which contains the sheet protection
# We might of course use e.g. xml2 to parse the XML file, but this simple approach will suffice here
for(ws in worksheet_paths)
{
file_Content <- readLines(ws, encoding = "UTF-8")  # the worksheet XML is UTF-8
# the "sheetProtection" node contains the hashed password "<sheetProtection SOME INFO />"
# we simply remove the whole node
out <- stringr::str_replace(file_Content, "<sheetProtection.*?/>", "")
writeLines(out, ws)
}
worksheet_Protection_Paths <- paste0(temp_dir, "/xl/workbook.xml")
file_Content <- readLines(worksheet_Protection_Paths, encoding = "UTF-8")
out <- stringr::str_replace(file_Content, "<workbookProtection.*?/>", "")
writeLines(out, worksheet_Protection_Paths)
# create a new zip, i.e. Excel file, containing the modified XML files
old_wd <- setwd(temp_dir)
files <- list.files(recursive = T, full.names = F, all.files = T, no.. = T)
# as the Excel file is a zip file, we can directly replace the .zip extension by .xlsx
zip::zip(file_unlocked_path, files = files) # utils::zip does not work for some reason
setwd(old_wd)
# clean up and remove temporary directory
unlink(temp_dir, recursive = T)
setwd(initial_Dir)
}
Once the password is removed, you can read the Excel file. This approach works for me.
I want to, programmatically, source all .R files contained within a given array retrieved with the Sys.glob() function.
This is the code I wrote:
# fetch the different ETL parts
parts <- Sys.glob("scratch/*.R")
if (length(parts) > 0) {
for (part in parts) {
# source the ETL part
source(part)
# rest of code goes here
# ...
}
} else {
stop("no ETL parts found (no data to process)")
}
The problem is that this doesn't work; I get the following error:
simpleError in source(part): scratch/foo.bar.com-https.R:4:151: unexpected string constant
I've tried different combinations for the source() function like the following:
source(sprintf("./%s", part))
source(toString(part))
source(file = part)
source(file = sprintf("./%s", part))
source(file = toString(part))
No luck. As I'm globbing the contents of a directory I need to tell R to source those files. As it's a custom-tailored ETL (extract, transform and load) script, I can manually write:
source("scratch/foo.bar.com-https.R")
source("scratch/bar.bar.com-https.R")
source("scratch/baz.bar.com-https.R")
But that's dirty and right now there are 3 different extraction patterns. They could be 8, 80 or even 2000 different patterns so writing it by hand is not an option.
How can I do this?
Note that the error message points at line 4, column 151 of scratch/foo.bar.com-https.R itself, i.e. a parse error inside that sourced file rather than a problem with how you call source(). Once that file parses, try getting the list of files with dir() and then using lapply():
For example, if your files are of the form t1.R, t2.R, etc., and are inside the path "StackOverflow" do:
d = dir(pattern = "^t\\d\\.R$", path = "StackOverflow/", recursive = T, full.names = T)
m = lapply(d, source)
The option recursive = T will search all subdirectories, and full.names = T will add the path to the filenames.
If you still want to use Sys.glob(), this works too:
d = Sys.glob(paths = "StackOverflow/t*.R")
m = lapply(d, source)
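A self-contained version of the same idea, using throwaway scripts written to a temporary folder (the file names and variables here are invented for the demo):

```r
# Write two small scripts into a temp "scratch" folder, then source
# everything the glob matches, as in the answer above.
scratch <- file.path(tempdir(), "scratch")
dir.create(scratch, showWarnings = FALSE)
writeLines("x_part1 <- 1", file.path(scratch, "part1.R"))
writeLines("x_part2 <- 2", file.path(scratch, "part2.R"))

d <- Sys.glob(file.path(scratch, "*.R"))
m <- lapply(d, source)
```

By default source() evaluates each script in the global environment, so the objects the scripts create (x_part1, x_part2 here) are available afterwards.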
I'm just trying to get a list of R Markdown files I have on my computer. I thought this would be simple but it doesn't appear to be.
I'd like a list of all markdown files on the whole computer.
I tried:
Setting the working directory to the saved search
setwd("C:/Users/USERNAME/Desktop/.rmd.search-ms")
Error in setwd("C:/Users/USERNAME/Desktop/.rmd.search-ms") :
cannot change working directory
Code below resulted in empty lists:
files <- list.files(pattern = "\\.rmd$")
files <- list.files(pattern = "\\.rmd$", ignore.case=TRUE)
list <- list.files("C:/Users/USERNAME/Desktop/.rmd.search-ms", pattern = NULL, full.names = FALSE)
This resulted in character(0)
Sys.glob(file.path("C:/Users/USERNAME/Desktop/.rmd.search-ms", "*.rmd"))
character(0)
Thank you in advance!
For all users, starting at the /Users/* path on Windows.
The process is:
Get all file paths under the home /Users/ directories
Recursively loop through each resolved file path
Find all .Rmd file types
Return the file paths that match
Edit for library clarification:
library(magrittr)
Map(list.files, Sys.glob("/Users/*"),
full.names = TRUE,
no.. = TRUE, recursive = TRUE,
pattern = "\\.rmd$",
ignore.case = TRUE,
USE.NAMES = FALSE
) %>% unlist()
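The core of this is a recursive, case-insensitive list.files() call, which can be tried on a small synthetic tree; the folder and file names below are made up for the demo:

```r
# Search a directory tree recursively for R Markdown files,
# matching the extension case-insensitively (.Rmd, .rmd, ...).
root <- file.path(tempdir(), "rmd_demo")
dir.create(file.path(root, "sub"), recursive = TRUE, showWarnings = FALSE)
writeLines("# note", file.path(root, "a.Rmd"))
writeLines("# note", file.path(root, "sub", "b.rmd"))
writeLines("x",      file.path(root, "sub", "c.txt"))

found <- list.files(root, pattern = "\\.rmd$", ignore.case = TRUE,
                    recursive = TRUE, full.names = TRUE)
```

The Map()/Sys.glob() wrapper in the answer simply runs this same search once per top-level user folder and flattens the results with unlist().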
I am using the command file.copy in R and it throws an error, but I can't spot the reason.
file.copy(from="Z:/Ongoing/Test", to = "C:/Users/Darius/Desktop", overwrite = TRUE, recursive = TRUE)
Warning message:
In file.copy(from = "Z:/Ongoing/Test",:
problem copying Z:/Ongoing/Test to C:/Users/Darius/Desktop/Test: No such file or directory
Can anyone spot the problem? The command doesn't work, even though it only raises a warning rather than an error.
Actually, I don't think there is any straightforward way to copy a directory. I have written a function which might help you.
This function takes two arguments:
from: The complete path of directory to be copied
to: The location to which the directory is to be copied
Assumption: from and to are paths of only one directory.
dir.copy <- function(from, to){
## check if from and to directories are valid
if (!dir.exists(from)){
cat('from: No such Directory\n')
return (FALSE)
}
else if (!dir.exists(to)){
cat('to: No such Directory\n')
return (FALSE)
}
## extract the directory name from 'from'
split_ans <- unlist(strsplit(from,'/'))
dir_name <- split_ans[length(split_ans)]
new_to <- paste(to,dir_name,sep='/')
## create the directory in 'to'
dir.create(new_to)
## copy all files in 'to'
file_inside <- list.files(from,full.names = T)
file.copy(from = file_inside,to=new_to)
## copy all subdirectories
dir_inside <- list.dirs(path=from,recursive = F)
if (length(dir_inside) > 0){
for (dir_name in dir_inside)
dir.copy(dir_name,new_to)
}
return (TRUE)
}
file.copy() doesn't create directories, so it only works when you're copying to folders that already exist.
I had a similar issue. This blog was helpful; I slightly modified its code by adding full.names = TRUE and overwrite = TRUE.
current.folder <- "E:/ProjectDirectory/Data/"
new.folder <- "E:/ProjectDirectory/NewData/"
list.of.files <- list.files(current.folder, full.names = T)
# copy the files to the new folder
file.copy(list.of.files, new.folder, overwrite = T)