R Shiny: "unsource" sourced files

One of the powers of R / Shiny is the possibility to "source" another R file from within your code. I am doing this dynamically, so in the end there are a lot of sourced files. So far so good.
FileToSource <- paste("Folder/", df$filename, ".R", sep = "")
source(FileToSource, chdir = TRUE)
unsource(......) ???
But at some point I want to clean up. I can delete variables etc., but can I "unsource" the previously "sourced" files?
I have been looking for a way to do this, but no luck so far.
You may wonder whether it is necessary to "unsource" files at all, but I like to clean up once in a while and this can be part of it. Less chance of conflicting code, etc.
Suggestions?
Thanks in advance; if I find a way I'll post it here too.

You might want to consider using a local environment. Let's say there is a file called ~/x.R that contains one line bb <- 10. You can create a new environment
envir <- new.env()
and then source the file in that environment by
source('~/x.R',local=envir)
Then, you will be able to obtain the value of bb as envir$bb, and you wouldn't see bb in your Global Environment. Afterwards, you can delete the environment envir by setting envir <- NULL or something like that.
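Putting the pieces together, here is a minimal sketch of the source-then-discard pattern (Folder/module.R and some_function are placeholders, not files or functions from the question):
envir <- new.env()
source("Folder/module.R", local = envir)  # everything defined in the file lands in envir
envir$some_function()                     # placeholder: call whatever the file defines
rm(envir)                                 # drop the only reference; the garbage collector reclaims it
Once the last reference to the environment is gone, its contents are no longer reachable, which is about as close to "unsourcing" as R gets.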

Great, I did this test to find out if/how it works:
A.R:
xx <- function() {
  print("A print")
}
yy <- 11
B.R:
xx <- function() {
  print("B print")
}
yy <- 99
Main.R:
(remove the # to see "Error: attempt to apply non-function")
A <- new.env()
B <- new.env()
source("A.R", local = A)
source("B.R", local = B)
A$xx()        # "A print"
print(A$yy)   # 11
B$xx()        # "B print"
print(B$yy)   # 99
A <- NULL     # drop A; its contents are no longer reachable
#A$xx()       # Error: attempt to apply non-function
#print(A$yy)
B$xx()        # still works: "B print"
print(B$yy)   # 99
B <- NULL     # drop B as well
#A$xx()
#print(A$yy)
#B$xx()
#print(B$yy)
So in the end Main.R is empty, clean and tidy, just what I wanted!
Thanks #Marat
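Mapping this back to the dynamic-sourcing case from the question, one could keep one environment per sourced file in a named list and drop entries to "unsource" them; a sketch (the list name sourced_envs is my own, not from the question):
sourced_envs <- list()
FileToSource <- paste("Folder/", df$filename, ".R", sep = "")
sourced_envs[[df$filename]] <- new.env()
source(FileToSource, local = sourced_envs[[df$filename]], chdir = TRUE)
# ... later, to "unsource" that one file:
sourced_envs[[df$filename]] <- NULL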

Related

Load multiple rda files and keep their names in the global environment in R [duplicate]

When you save a variable in an R data file using save, it is saved under whatever name it had in the session that saved it. When I later go to load it from another session, it is loaded with the same name, which the loading script cannot possibly know. This name could overwrite an existing variable of the same name in the loading session. Is there a way to safely load an object from a data file into a specified variable name without risk of clobbering existing variables?
Example:
Saving session:
x = 5
save(x, file="x.Rda")
Loading session:
x = 7
load("x.Rda")
print(x) # This will print 5. Oops.
How I want it to work:
x = 7
y = load_object_from_file("x.Rda")
print(x) # should print 7
print(y) # should print 5
If you're just saving a single object, don't use an .Rdata file, use an .RDS file:
x <- 5
saveRDS(x, "x.rds")
y <- readRDS("x.rds")
all.equal(x, y)
I use the following:
loadRData <- function(fileName){
  # loads an RData file, and returns it
  load(fileName)
  get(ls()[ls() != "fileName"])
}
d <- loadRData("~/blah/ricardo.RData")
You can create a new environment, load the .rda file into that environment, and retrieve the object from there. However, this does impose some restrictions: either you know what the original name for your object is, or there is only one object saved in the file.
This function returns an object loaded from a supplied .rda file. If there is more than one object in the file, an arbitrary one is returned.
load_obj <- function(f) {
  env <- new.env()
  nm <- load(f, env)[1]
  env[[nm]]
}
You could also try something like:
# Load the data, and store the name of the loaded object in x
x = load('data.Rsave')
# Get the object by its name
y = get(x)
# Remove the old object since you've stored it in y
rm(x)
Similar to the other solutions above, I load the variables into a separate environment. This way, if I load multiple variables from the .Rda file, they will not clutter my global environment.
load("x.Rda", dt <- new.env())
Demo:
x <- 2
y <- 1
save(x, y, file = "mydata.Rda")
rm(x, y)
x <- 123
# Load 'x' and 'y' into a new environment called 'dt'
load("mydata.Rda", dt <- new.env())
dt$x
#> [1] 2
x
#> [1] 123
For an Rdata file with one object:
assign('newname', get(load('~/oldname.Rdata')))
In case anyone is looking to do this with a plain source file, rather than a saved Rdata/RDS/Rda file, the solution is very similar to the one provided by #Hong Ooi:
load_obj <- function(fileName) {
  local_env <- new.env()
  source(file = fileName, local = local_env)
  return(local_env[[names(local_env)[1]]])
}
my_loaded_obj = load_obj(fileName = "TestSourceFile.R")
my_loaded_obj(7)
Prints:
[1] "Value of arg is 7"
And in the separate source file TestSourceFile.R
myTestFunction <- function(arg) {
  print(paste0("Value of arg is ", arg))
}
Again, this solution only works if the source file defines exactly one object; if it defines more, it will just return one of them (probably the first, but that is not guaranteed).
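If the source file may define several objects, a possible workaround (my own sketch, not part of the answer above; source_as_list is a hypothetical helper name) is to return the whole scratch environment as a named list:
source_as_list <- function(fileName) {
  local_env <- new.env()
  source(file = fileName, local = local_env)
  as.list(local_env)   # every object the file defined, keyed by its name
}
objs <- source_as_list("TestSourceFile.R")
objs$myTestFunction(7)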
I'm extending the answer from #ricardo to allow selection of a specific variable when the .Rdata file contains multiple variables (my reputation is too low to edit the answer). It adds a few lines that read user input after listing the variables contained in the .Rdata file.
loadRData <- function(fileName) {
  # loads an RData file and returns the object the user picks
  loaded <- load(fileName)   # character vector of the loaded object names
  print(loaded)
  n <- readline(prompt = "Which variable to load? \n")
  get(loaded[as.integer(n)])
}
select_var <- loadRData('Multiple_variables.Rdata')
Following on from #ricardo, another example of using (effectively) a separate environment:
load_rdata <- function(file_path) {
  res <- local({
    load(file_path)
    return(get(ls()))
  })
  return(res)
}
Similar caveat: this only works when the file contains a single object.
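If the .RData file may contain several objects, a variant of the same idea (my own sketch, not from the answers above; load_rdata_all is a hypothetical helper name) is to load into a scratch environment and return everything as a named list:
load_rdata_all <- function(file_path) {
  env <- new.env()
  nms <- load(file_path, envir = env)   # load() returns the names of what it loaded
  mget(nms, envir = env)                # named list of the loaded objects
}
objs <- load_rdata_all("mydata.Rda")    # "mydata.Rda" as in the demo above
objs$x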

Write a function to load a set of predefined paths or files

I have recently made my first R package with specific tools for processing a large set of data that I am working with. In this project, there are several paths and files that I have to call and access at various points.
Is it possible to write functions that, when called, will load a set of predefined paths or data into my global environment?
For example, the function
load_foo_paths()
would return
foo_path_1 <- "path/to/foo/1/"
foo_path_2 <- "path/to/foo/2/"
And the function
load_foo_data()
would return
foo_data_1 <- read.csv("foo_data_1.csv")
foo_data_2 <- read.csv("foo_data_2.csv")
How would I go about doing something like this?
Thanks
Maybe you can adapt the following to your use case:
loadHist <- function() {
  HistFile <- c(".Rhistory", ".Rsession")
  ruser    <- Sys.getenv("R_USER")        # C:\cygwin64\home\xxxx
  whome    <- Sys.getenv("HOME")          # C:\cygwin64\home\xxxx
  uprofile <- Sys.getenv("USERPROFILE")   # C:\Users\xxxx
  # Setting up history paths (to .Rhistory)
  hP1 <- c(getwd(), ruser, whome, uprofile)
  hP2 <- file.path(hP1, HistFile[1])
  fe  <- file.exists(hP2)   # file.exists(file.path(getwd(), HistFile[1]))
  # Load first find
  fen <- length(fe); i <- 1
  while (i <= fen) {
    if (fe[i]) {
      cat('\nLoaded history from:\n', hP2[i], '\n', sep = '')
      try(utils::loadhistory(file = hP2[i]))
      break
    }
    i <- i + 1
  }
  #cat('\nDone!\n')
}
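The question itself can also be answered more directly with assign() into .GlobalEnv; a minimal sketch using the placeholder paths and file names from the question:
load_foo_paths <- function() {
  # placeholders: replace with your real paths
  assign("foo_path_1", "path/to/foo/1/", envir = .GlobalEnv)
  assign("foo_path_2", "path/to/foo/2/", envir = .GlobalEnv)
  invisible(NULL)
}
load_foo_data <- function() {
  # placeholders: replace with your real files
  assign("foo_data_1", read.csv("foo_data_1.csv"), envir = .GlobalEnv)
  assign("foo_data_2", read.csv("foo_data_2.csv"), envir = .GlobalEnv)
  invisible(NULL)
}
Note that having package functions write into the global environment is generally discouraged; returning a named list (or using list2env() explicitly at the call site) is usually cleaner.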

Reproducible saveRDS with environments

I am building an R package and using data-raw and data to store a library of pre-defined RxODE models. This works very well.
However, the resulting .rda files change at every generation. Some models contain an R environment, and the serialization seems to contain a 'creation time' timestamp. This means every time the data/ directory is regenerated, all files have changed...
Is there some way to modify the serialization of an R environment to be reproducible?
storeFile <- function(file) {
  env <- new.env()
  fun <- function(x) { x + 3 }
  environment(fun) <- env
  save('fun', file = file, ascii = TRUE)
}
storeFile('fileA.rda')
storeFile('fileB.rda')
message("Files are identical? ", identical(readLines('fileA.rda'), readLines('fileB.rda')) )
Very interesting question. There is some odd behaviour here:
storeFile <- function(file) {
  env <- new.env()
  fun <- function(x) { x + 3 }
  environment(fun) <- env
  save.image(file = file, ascii = TRUE)
}
storeFile('fileA.rda')
storeFile('fileB.rda')
message("Files are identical? ", identical(readLines('fileA.rda'), readLines('fileB.rda')) )
storeFile('fileA.rda')
storeFile('fileB.rda')
message("Files are identical? ", identical(readLines('fileA.rda'), readLines('fileB.rda')) )
My output is FALSE for the first identical but TRUE for the second; I do not really know why.
Also, I'm using save.image instead of save, so I do not know whether it suits your case!
Best!

How to add variables to several functions in R and run them in the command line

I have a script that is composed of several functions. A summarised example of my script looks like this:
Test.R:
massive.process_1 <- function() {
  set.seed(123)
  x <- do_something()
  save(x, file = '/home/Result1.RData')
}
massive.process_2 <- function() {
  set.seed(4)
  x <- do_something()
  save(x, file = '/home/Result2.RData')
}
massive.process_1()
massive.process_2()
I have to execute this script, but instead of 2 massive.process functions I need to run 100 of them, changing the seed value and the name of the saved data at each step. I could do it manually by adding 100 massive.process functions, but I would like to know whether there is a way to script this and avoid typing 100 functions.
Many thanks
My bash file to run it is the following:
#!/bin/bash
echo Started analysis at: `date`
rfile="Test.R"
Rscript $rfile
echo Finished analysis at: `date`
Adding to Dennis's answer: to change the filename you can use paste().
massive.process <- function(i) {
  set.seed(i)
  x <- do_something()
  outname <- paste("/home/Result", i, ".RData", sep = "")
  save(x, file = outname)
  x
}
for (i in 1:100) {
  massive.process(i)
}
or
X = lapply(1:100, massive.process)
If you use the list approach, to access the i-th x, use X[[i]].
Another way to write the lapply loop is with an anonymous function, which might make it clearer what's going on:
X = lapply(1:100, function(i){
massive.process(i)
})
The two notations do the same thing; the earlier one is just more compact.
Why not add the seed as a parameter to the function?
massive.process <- function(seedValue) {...}
And it would probably be a good idea to implement the loop in R instead of in the shell script.
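A sketch of how those suggestions could fit together, with the run count passed on the command line via commandArgs() so the bash wrapper stays a single Rscript call (do_something() remains a placeholder from the question):
# Test.R (sketch)
massive.process <- function(seedValue) {
  set.seed(seedValue)
  x <- do_something()   # placeholder from the question
  save(x, file = paste0("/home/Result", seedValue, ".RData"))
  invisible(x)
}
args <- commandArgs(trailingOnly = TRUE)   # e.g. Rscript Test.R 100
n_runs <- if (length(args) >= 1) as.integer(args[1]) else 100
results <- lapply(seq_len(n_runs), massive.process)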
