Here's the situation. My R code is supposed to check whether the existing RData files in the application's cache are up to date. I do that by saving each file under a name consisting of the base64-encoded name of a specific data element. However, the data for each of these elements is retrieved by submitting a particular SQL query per element, all specified in the data collection's configuration file. So, if the data for an element has already been retrieved, but I afterwards change that element's SQL query, the cached data is not updated.
In order to handle this situation, I decided to use R objects' attributes. I plan to save each data object's corresponding SQL query (request) - base64-encoded - as an attribute of the object:
# save a base64-encoded copy of the request's SQL query as the data object's attribute,
# so that we can detect when the configuration contains a modified query
attr(data, "SQL") <- base64(request)
Then, when I need to verify whether the SQL query has been changed, I'd like to simply retrieve the object's corresponding attribute and compare it with the base64-encoded value of the current SQL query. If they match, the query hasn't been changed and I skip processing this data request; if they don't match, the query has been changed and I go ahead with processing the request:
# check if the archive file has already been processed
if (DEBUG) {message("Processing request \"", request, "\" ...")}
if (file.exists(rdataFile)) {
  # now check if the request's SQL query hasn't been modified
  data <- load(rdataFile)
  if (identical(base64(request), attr(data, "SQL"))) {
    skipped <<- skipped + 1
    if (DEBUG) {message("Processing skipped: .Rdata file found.\n")}
    return (invisible())
  }
  rm(data)
}
My question is whether it's possible to read/access an object's attributes without fully loading the object from the file. In other words, can I avoid the load() and rm() calls in the code above?
Your advice is much appreciated!
UPDATE: An additional question: what's wrong with my code, given that it performs the processing even when it shouldn't, i.e. when all information is up to date (no changes in the cache or in the configuration file)?
UPDATE 2 (additional code per @MrFlick's answer):
# construct name from data source prefix and data ID (see config. file),
# so that corresponding data object (usually, data frame) will be saved
# later under that name via save()
dataName <- paste(dsPrefix, "data", indicator, sep = ".")
assign(dataName, srdaGetData())
data <- as.name(dataName)
# save a base64-encoded copy of the request's SQL query as the data object's attribute,
# so that we can detect when the configuration contains a modified query
attr(data, "SQL") <- base64(request)
# save current data frame to RData file
save(list = dataName, file = rdataFile)
# alternatively, use do.call() as in "getFLOSSmoleDataXML.R"
# clean up
rm(data)
You can't "really" do it, but you could modify the code in my cgwtools::lsdata function.
function (fnam = ".Rdata")
{
  x <- load(fnam, envir = environment())
  return(x)
}
This loads the file, thus taking time and briefly taking memory, and then the local environment disappears. So, add an argument for the items whose attributes you want to check, and add a line inside the function that does y <- attributes(your_items); return(list(x = x, y = y)), as sketched below.
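For example, a minimal sketch of that modification (the function name lsattr and the items argument are my own additions, not part of cgwtools):

# sketch: lsdata() extended to also return the attributes of selected objects;
# 'items' is a character vector of object names expected in the .RData file
lsattr <- function(fnam = ".Rdata", items) {
  e <- environment()
  x <- load(fnam, envir = e)   # names of everything restored
  y <- lapply(items, function(nm) attributes(get(nm, envir = e)))
  names(y) <- items
  list(x = x, y = y)           # the loaded objects vanish when the function returns
}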
And there is a problem with the way you are using load(). When you use save()/load(), you can "freeze-dry" multiple objects into an .RData file, and they "re-inflate" into the current environment. As a result, when you call load(), it does not return the object(s); it returns a character vector with the names of all the objects that it restored. Since you didn't supply your save() code, I'm not sure what's actually in your load file, but if it was a variable called data, then just call
load(rdataFile)
not
data <- load(rdataFile)
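Applied to your snippet, a sketch of the corrected check might look like this (it assumes the object was saved under the name held in dataName, as in your UPDATE 2 code):

if (file.exists(rdataFile)) {
  loaded <- load(rdataFile)               # character vector of restored names
  data <- get(dataName)                   # fetch the restored object by its saved name
  if (identical(base64(request), attr(data, "SQL"))) {
    skipped <<- skipped + 1
    if (DEBUG) {message("Processing skipped: .Rdata file found.\n")}
    return(invisible())
  }
  rm(list = loaded)
}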
Related
I am repeatedly applying a function to read and process a bunch of csv files. Each time it runs, the function creates a data frame (this.csv.data) and uses save() to write it to an .RData file with a unique name. The problem is that later, when I read these .RData files using load(), the loaded variable names are not unique, because each one loads with the name this.csv.data.
I'd like to save them with unique tags so that they come out properly named when I load() them. I've created the following code to illustrate.
this.csv.data = list(data=c(1:9), unique_tag = "some_unique_tag")
assign(this.csv.data$unique_tag,this.csv.data$data)
# I want to save the data,
# with variable name of <unique_tag>,
# at a file named <unique_tag>.RData
saved_file_name <- paste(this.csv.data$unique_tag,"RData",sep=".")
save(get(this.csv.data$unique_tag), saved_file_name)
but the last line returns:
"Error in save(get(this_unique_tag), file = data_tag) :
object ‘get(this_unique_tag)’ not found"
even though the following returns the data just fine:
get(this.csv.data$unique_tag)
Just name the arguments you use. With your code the following works fine:
save(list = this.csv.data$unique_tag, file=saved_file_name)
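A quick round trip (a sketch, reusing the objects from the question) shows the object coming back under its tag:

save(list = this.csv.data$unique_tag, file = saved_file_name)
rm(list = this.csv.data$unique_tag)
load(saved_file_name)
exists("some_unique_tag")   # TRUE: restored under its unique name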
My preference is to avoid relying on the name stored in the RData file on load:
obj = local(get(load('myfile.RData')))
This way you can load various RData files and name the objects whatever you want, or store them in a list etc.
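For example, a sketch that loads several files into one named list (the file names are made up):

# load several .RData files into one list, ignoring the names stored inside them
files <- c('myfile1.RData', 'myfile2.RData')
objs <- lapply(files, function(f) local(get(load(f))))
names(objs) <- c('first', 'second')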
You really should use saveRDS/readRDS to serialize your objects.
save and load are for saving whole environments.
saveRDS(this.csv.data, saved_file_name)
# later
mydata <- readRDS(saved_file_name)
You can use save.image(), which saves the entire workspace:
save.image("myfile.RData")
This worked for me:
env <- new.env()
env[[varname]] <- object_to_save
save(list=c(varname), envir=env, file='out.Rda')
You could probably do it without a new env (but I didn't try this):
.GlobalEnv[[varname]] <- object_to_save
save(list=c(varname), envir=.GlobalEnv, file='out.Rda')
You might even be able to drop the envir argument.
I am trying to merge multiple JSON files into one database and, despite trying all the approaches found on SO, it fails.
The files provide sensor data. The stages I've completed are:
1. Unzip the files - produces JSON files saved as '.txt' files
2. Remove the old zip files
3. Parse the '.txt' files to remove some bugs in the content - random 3-letter + comma combos at the start of some lines, e.g. 'prm,{...'
I've got code which will turn them into data frames individually:
library(jsonlite)

stream <- stream_in(file("1.txt"))
flat <- flatten(stream)
df_it <- as.data.frame(flat)
But when I put it into a function:
df_loop <- function(x) {
  stream <- stream_in(x)
  flat <- flatten(stream)
  df_it <- as.data.frame(flat)
  df_it
}
And then try to run through it:
df_all <- sapply(file.list, df_loop)
I get:
Error: Argument 'con' must be a connection.
Then I've tried to merge the JSON files with rbind.fill and merge, to no avail.
I'm not really sure where I'm going so terribly wrong, so I would appreciate any help.
You need a small change in your function. Change to -
stream <- stream_in(file(x))
Explanation
Start with analyzing your original implementation -
stream <- stream_in(file("1.txt"))
The 1.txt here is the file path, which is getting passed as an input parameter to the file() function. A quick ?file will tell you that it is a
Function to create, open and close connections, i.e., “generalized
files”, such as possibly compressed files, URLs, pipes, etc.
Now if you do a ?stream_in() you will find that it is a
function that implements line-by-line processing of JSON data over a
connection, such as a socket, url, file or pipe
Keyword here being socket, url, file or pipe.
Your file.list is just a list of file paths - character strings, to be specific. But in order for stream_in() to work, you need to pass in a connection object, which is the output of the file() function, which in turn takes the file path as a string input.
Chaining that together, you need to do stream_in(file("/path/to/file.txt")).
Once you do that, your sapply iterates over each path, creates the connection object, and passes it as input to stream_in().
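Putting it all together, a sketch of the fixed pipeline (binding with do.call(rbind, ...) is my assumption and requires identical columns across files; the rbind.fill you mention would tolerate mismatched ones):

library(jsonlite)

df_loop <- function(x) {
  stream <- stream_in(file(x))   # wrap the path in file() to create a connection
  flat <- flatten(stream)
  as.data.frame(flat)
}

# lapply keeps each data frame intact; sapply may try to simplify the result
df_list <- lapply(file.list, df_loop)
df_all <- do.call(rbind, df_list)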
Hope that helps!
Related
I've got a function that has a list output. Every time I run it, I want to export the results with save. After a couple of runs I want to read the files in and compare the results. I do this, because I don't know how many tasks there will be, and maybe I'll use different computers to calculate each task. So how should I name the archived objects, so later I can read them all in?
My best guess would be to dynamically name the variables before saving, and keep track of the object names, but I've read everywhere that this is a big no-no.
So how should I approach this problem?
You might want to use the saveRDS and readRDS functions instead of save and load. The RDS versions save and read single objects without an attached name. You would create your object and save it to a file (using paste0 or sprintf to create unique names); then, when processing the results, you can read in one object at a time, or read several into a list to work with them.
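A minimal sketch of that workflow (my_task_function, task_id, and the file-name pattern are hypothetical):

# save each run's result under a unique file name
res <- my_task_function()
saveRDS(res, sprintf("result_%03d.rds", task_id))

# later: read every saved result back into a list for comparison
files <- list.files(pattern = "^result_.*\\.rds$")
results <- lapply(files, readRDS)
names(results) <- files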
You can use scope to hide the retrieved name inside a function, so first you might save a list to a file:
mybiglist <- list(fred=1, john='dum di dum', mary=3)
save(mybiglist, file='mybiglist1.RData')
Then you can load it back in through a function and give it whatever name you like be it inside another list or just a plain object:
# Use the fact that load returns the name of the object loaded
# and that scope will hide this object
myspecialload <- function(RD.fnam) {
  return(eval(parse(text=load(RD.fnam))))
}
# now lets reload that file but put it in another object
mynewbiglist <- myspecialload('mybiglist1.RData')
mynewbiglist
$fred
[1] 1
$john
[1] "dum di dum"
$mary
[1] 3
Note that this is not really a generic 'use it anywhere' type function, as for an RData file with multiple objects it appears to return the last object saved... so best stick with one list object per file for now!
One time I was given several RData files, each containing only one variable called x. In order to read all of them into my workspace, I loaded each file sequentially into its own environment and used get() to read the variable's value.
tenv <- new.env()
load("file_1.RData", envir = tenv)
ls(tenv) # x
myvar1 <- get(ls(tenv), tenv)
rm(tenv)
....
This code can be repeated for each file.
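The same idea wrapped in a loop might look like this (a sketch; the file-name vector is made up):

# read one single-object .RData file per iteration into a fresh environment
files <- sprintf("file_%d.RData", 1:3)
myvars <- lapply(files, function(f) {
  tenv <- new.env()
  load(f, envir = tenv)
  get(ls(tenv), envir = tenv)   # assumes exactly one object per file
})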