I am brand new to R and I am trying to run some existing code that should clean up an input .csv, then save the cleaned data to a different location as a .RData file. This code ran fine for the previous owner.
The code seems to be pulling the .csv and cleaning it just fine. It also looks like the save is running (there are no errors), but there is no output in the specified location. I thought maybe R was having a difficult time finding the location, but it's pulling the input data okay and the destination is just a subfolder.
After a full day of extensive Googling, I can't find anything related to a save just not working.
Example code below:
save(data, file = "C:\\Users\\my_name\\Documents\\Project\\Data.RData", sep="")
Hard to believe you don't see any errors - unless something has switched errors off:
> data = 1:10
> save(data, file="output.RData", sep="")
Error in FUN(X[[i]], ...) : invalid first argument
It's a misleading error: the problem is the third argument, which doesn't do anything. Remove it and it works:
> save(data, file="output.RData")
>
sep is used as an argument when writing CSV files, to separate the columns. save writes binary data, which has no rows and columns to separate.
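For comparison, a minimal sketch of where sep does belong, reusing the data vector from above:
# sep applies to delimited-text writers such as write.table:
write.table(data, file = "output.csv", sep = ",", row.names = FALSE)
# save() serializes objects to a binary image, so there is no separator to set:
save(data, file = "output.RData")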
I'm having trouble reading in multiple .csv files from a directory. It's odd because I read in files from two other directories using the same code, with no issue, immediately prior to running this code chunk.
setwd("C:\\Users\\User\\Documents\\College\\MLMLMasters\\Thesis\\TaggingEffectsData\\DiveStat")
my_dive <- list.files(pattern="*.csv")
my_dive
head(my_dive)
if(!require(plyr)){install.packages("plyr")}
DB = do.call(rbind.fill, lapply(my_dive, function(x) read.csv(x, stringsAsFactors = FALSE)))
DB
detach("package:plyr") ### I run this after I have finished creating all the dataframes because I sometimes have issues with plyr and dplyr not playing nice
if(!require(dplyr)){install.packages("dplyr")}
Then it throws this error:
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
no lines available in input
Which doesn't make any sense because the list.files function works and when I run head(my_dive) I get this output:
head(my_dive)
[1] "2004001_R881_TV3.csv" "2004002_R 57_TV3.csv" "2004002_R57_TV3.csv" "2004003_W1095_TV3.csv"
[5] "2004004_99AB_TV3.csv" "2004005_O176_TV3.csv"
Plus the Environment clearly shows that my list is populated with all 614 files as I would expect it to be.
All of the csv file sets have identical file names but different data, so they have to be read in as separate data frames from separate directories (not my decision; that's just how this dataset was organized). For that reason I can't figure out why this set of files is giving me grief when the other two sets read in just fine with no issues. The only difference should be the working directory and the names of the lists and dataframes.
I thought it might be something within the actual directory, but I checked and there are only .csv files in the directory, and the list.files function works fine. I saw a previous question similar to mine, but that poster didn't initially use the (pattern = "*.csv") argument, and that was the cause of their error. I always use this argument, so that seems unlikely to be the cause here.
I'm not sure how to go about making this reproducible, but I appreciate any help offered.
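One quick diagnostic, assuming the usual cause of this error (a zero-length file somewhere in the directory), is to check the file sizes before reading:
# any empty file here will make read.csv() fail with "no lines available in input"
my_dive[file.size(my_dive) == 0]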
I am trying to load an .rda file in R which was a saved dataframe. I do not remember the name of it, though.
I have tried
a<-load("al.rda")
which then does not let me do anything with a. I get the error
Error: object 'a' not found
I have also tried to use the = sign.
How do I load this .rda file so I can use it?
I restarted R and ran load("al.rda"), and I now get the following error
Error: C stack usage is too close to the limit
Use 'attach' and then 'ls' with a name argument. Something like:
attach("al.rda")
ls("file:al.rda")
The data file is now on your search path in position 2, most likely. Do:
search()
ls(pos=2)
for enlightenment. Typing the name of any object saved in al.rda will now get it, unless something with the same name exists earlier in the search path; in that case R will usually print a message about one object masking another.
However, I now suspect you've saved nothing in your RData file, for two reasons:
You say you don't get an error message
load says there's nothing loaded
I can duplicate this situation. If you do save(file="foo.RData") then you'll get an empty RData file; what you probably meant to do was save.image(file="foo.RData"), which saves all your objects.
How big is this .rda file of yours? If it's under 100 bytes (my empty RData files are 42 bytes long) then I suspect that's what's happened.
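To see this for yourself, a minimal sketch (foo.RData is a throwaway file name):
save(file = "foo.RData")            # no objects listed, so an empty image is written
file.size("foo.RData")              # only a few dozen bytes
load("foo.RData", verbose = TRUE)   # verbose shows what gets loaded: here, nothing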
I had to reinstall R... somehow my installation was corrupt. After that, the simple command I expected to work,
load("al.rda")
finally worked.
I had a similar issue, and it was solved without reinstalling R. For example,
load("al.rda")
works fine; however, if you do
a <- load("al.rda")
it will not work.
The load function does return the names of the variables it loaded, as a character vector. I suspect you actually get an error when you load "al.rda". What exactly does R output when you load it?
Example of how it should work:
d <- data.frame(a=11:13, b=letters[1:3])
save(d, file='foo.rda')
a <- load('foo.rda')
a # prints "d"
Just to be sure, check that the load function you actually call is the original one:
find("load") # should print "package:base"
EDIT: Since you now get an error when you load the file, it is probably corrupt in some way. Try this and say what it prints:
file.info("al.rda")   # prints the file size etc.
readBin("al.rda", "raw", 50)   # reads the first 50 bytes from the file
Without having access to the file, it's hard to investigate more... Maybe you could share the file somehow (http://www.filedropper.com or similar)?
I usually use save to save only a single object, and I then use the following utility function to retrieve that object into a variable of my choosing. It calls load inside a temporary environment to avoid overwriting existing objects. Maybe it will be helpful for others as well:
load_first_object <- function(fname){
  # load into a fresh environment so nothing in the caller is overwritten
  e <- new.env(parent = parent.frame())
  load(fname, e)
  # return the first object that was loaded
  return(e[[ls(e)[1]]])
}
The method can of course be extended to also return named objects and lists of objects, but this simple version is for me the most useful.
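A quick usage sketch, reusing the foo.rda example from above:
d <- data.frame(a = 11:13, b = letters[1:3])
save(d, file = 'foo.rda')
d2 <- load_first_object('foo.rda')
identical(d, d2)   # TRUE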
I am new to coding and very new to this forum, so I hope my request makes sense.
I am trying to select images listed in a .csv file and to copy them to a new folder. The pictures and the .csv file are both in the folder GRA04. The .csv file contains only one column with the picture names.
I used the following code:
#set working directory
setwd("E:/2019/GRA04")
#create and identify a new folder in R
targetdir <- dir.create("GRA04_age")
#find the files you want to copy
filestocopy <- read.csv("age.csv", header=FALSE) #read csv as data table (only one column, each row being a file name)
filestocopy_v <- readLines(filestocopy)#convert data table in character vector
filestocopy_v #shows the character vector
#copy the files to the new folder
file.copy(filestocopy_v, targetdir, recursive = TRUE)
When reaching the line
filestocopy_v <- readLines(filestocopy)
I get this error message:
Error in readLines(filestocopy) : 'con' is not a connection
I looked online for solutions with no luck. I ran this code before (or something similar... I didn't back it up...) and it worked fine, so I am not sure what is happening...
Thanks!
Out of interest, would the following now do what you're trying to achieve?
filestocopy_v <- filestocopy[[1]]
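For completeness, a sketch of the corrected flow under the same assumptions (age.csv holds a single column of file names, and the working directory is E:/2019/GRA04 as above). Note that dir.create() returns TRUE/FALSE rather than the path, so the folder name is kept in its own variable:
targetdir <- "GRA04_age"
dir.create(targetdir)   # dir.create() returns a logical, not the path
filestocopy <- read.csv("age.csv", header = FALSE, stringsAsFactors = FALSE)
filestocopy_v <- filestocopy[[1]]   # first column as a character vector
file.copy(filestocopy_v, targetdir)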
I have 500+ .json files that I am trying to get a specific element out of. I cannot figure out why I cannot read more than one at a time.
This works:
library(jsonlite)
files <- list.files('~/JSON')
file1 <- fromJSON(readLines('~/JSON/file1.json'), flatten = TRUE)
result <- as.data.frame(file1$element$subdata$data)
However, regardless of using different json packages (eg RJSONIO), I cannot apply this to the entire contents of files. The error I continue to get is...
Attempt to run the same code over all the files in the list:
for (i in files) {
  fromJSON(readLines(i), flatten = TRUE)
  as.data.frame(i)$element$subdata$data
}
My goal is to loop through all 500+ files and extract the data and its contents. Specifically, if the file has the element 'subdata$data', I want to extract the list and put them all in a dataframe.
Note: files are being read as ASCII (Windows OS). This does not have a negative effect on single extractions, but for the loop I get 'invalid character bytes'.
Update 1/25/2019
Ran the following, but it returned errors...
files <- list.files('~/JSON')
out <- lapply(files, function (fn) {
  o <- fromJSON(file(i), flatten = TRUE)
  as.data.frame(i)$element$subdata$data
})
Error in file(i): object 'i' not found
Also updated the function; this time I get UTF-8 errors...
files <- list.files('~/JSON')
out <- lapply(files, function (i, fn) {
  o <- fromJSON(file(i), flatten = TRUE)
  as.data.frame(i)$element$subdata$data
})
Error in parse_con(txt,bigint_as_char):
lexical error: invalid bytes in UTF8 string. (right here)------^
Latest Update
Think I found a solution to the crazy 'bytes' problem. When I run readLines on the .json file, I can then apply fromJSON,
e.g.
json <- readLines('~/JSON/file1.json')
jsonread <- fromJSON(json)
jsondf <- as.data.frame(jsonread$element$subdata$data)
# returns a dataframe with the correct information
Problem is, I cannot apply readLines to all the files within the JSON folder (PATH). If I can get help with that, I think I can run...
files <- list.files('~/JSON')
for (i in files){
  a <- readLines(i)
  o <- fromJSON(file(a), flatten = TRUE)
  as.data.frame(i)$element$subdata
}
Needed Steps
1. Apply readLines to all 500 .json files in the JSON folder.
2. Apply fromJSON to the files from step 1.
3. Create a data.frame that returns entries if the list (from fromJSON) contains $element$subdata$data.
Thoughts?
Solution (Workaround?)
Unfortunately, fromJSON still runs into trouble with the .json files. My guess is that my GET method (httr) is unable to wait/delay and load the 'pretty print', and thus is grabbing the raw .json, which in turn gives odd characters and, as a result, the ubiquitous '------^' error. Nevertheless, I was able to put together a solution; please see below. I want to post it for future folks that may have the same problem with .json files not working nicely with any R json package.
# keeping the same 'files' variable as earlier
raw_data <- lapply(files, readLines)
dat <- do.call(rbind, raw_data)
dat2 <- as.data.frame(dat, stringsAsFactors = FALSE)
# check to see that the json contents were read in
dat2[1,1]
library(tidyr)
dat3 <- separate_rows(dat2, sep = '')
x <- unlist(raw_data)
x <- gsub('[[:punct:]]', ' ', x)
# Identify elements wanted in the original .json and apply regex
y <- regmatches(x, regexec('.*SubElement2 *(.*?) *Text.*', x))
for loops never return anything, so you must save all valuable data yourself.
You call as.data.frame(i), which creates a frame with exactly one element, the filename; probably not what you want to keep.
(Minor) Use fromJSON(file(i),...).
Since you want to capture these into one frame, I suggest something along the lines of:
out <- lapply(files, function(fn) {
  o <- fromJSON(file(fn), flatten = TRUE)
  as.data.frame(o)$element$subdata$data
})
allout <- do.call(rbind.data.frame, out)
### alternatives:
allout <- dplyr::bind_rows(out)
allout <- data.table::rbindlist(out)
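If some of the files lack the $element$subdata$data entry (step 3 in the question), a hedged variation of the same idea that simply skips those files (jsonlite loaded as above):
out <- lapply(files, function(fn) {
  o <- fromJSON(file(fn), flatten = TRUE)
  d <- o$element$subdata$data          # NULL when the file lacks this element
  if (is.null(d)) NULL else as.data.frame(d)
})
allout <- do.call(rbind.data.frame, out)   # NULL entries contribute no rows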
Sorry, everyone. First time using R. The company switched to it recently and I am trying to customize a script I was given.
The purpose of the script is to:
Open a CSV file
Filter the results by a code
Save the results as a new CSV file named as the code
Because of this, I currently have to provide the code three times and the location path twice. I am trying to streamline this so I only need to enter the code and the path once each, by assigning them to variables, and then have the script use those variables everywhere else.
Here's what I have so far, but I'm getting an error "Error in paste(FINAL) : object 'FINAL' not found"
library(readr)   # for read_csv()/write_csv()
library(dplyr)   # for %>% and filter()

CODE <- '1234'
LOC <- 'C:/Users/myname/Documents/Raw Files/'
FINAL <- paste0(LOC, CODE, '.csv')
RawFile <- read_csv(paste0(LOC, 'Raw File MERGED_Raw.csv'))
CODEofInterest <- RawFile %>% filter(ID == CODE)
write_csv(CODEofInterest, paste0(FINAL))
It was user error: I was not running the entire script, just the last line.