Best way to check if temp logging file exists - R

I have a function in an R script file that, when called, initializes a temp log file locally using logger and writes specific things from the function to it.
function_name <- function(source, dimensions, metrics, filters){
  tmp_log_file <- init_logger()
  log_info('Currently in Function -function_name- from -myscript.R-')
  cat("Location of log file:", tmp_log_file)
  ... # Do more things in function while still logging certain things
  results <- second_function()
  return(results)
}
Now, the function where logger gets initialized is the following:
init_logger <- function(create_temp_file = TRUE) {
  if (!requireNamespace("logger", quietly = TRUE)) {
    stop("Package logger needed. Please install it.",
         call. = FALSE)
  }
  library(logger)
  tmp <- NULL
  if (isTRUE(create_temp_file)) {
    # Create a temp file in the temp dir; it gets deleted automatically
    # when the session is done.
    tmp <- tempfile(fileext = '.txt')
    log_appender(appender_file(tmp))
  }
  log_threshold(TRACE)
  log_formatter(formatter_paste)
  log_layout(layout_simple)
  return(tmp)
}
Now I want to expand this to other functions; some are called inside my first function function_name(), but others are not. If the user decides to use some of those other functions, I would like to append more logging information to the same original temp file; otherwise, if the file does not exist yet, I want to create/initialize a temp log file.
Is there a way to check if logger has been initialized and is currently saving logging information in a temp file?
Some general information:
R 4.1.3
Logger package 0.2.2
RStudio 2022.02.3+492
My current solution is to call logger::log_appender() and check whether the log entries are being appended to the console or to a file (the default is the console), but this does not seem like a great way to do it.
Perhaps even trying to do this is bad practice (I would like to know if that is the case, and why). I only need the log file if something goes wrong, in which case the user will send it to me. Otherwise, since it is a temp file created with tempfile(), it will be deleted automatically after the user is done with the session.
I don't have much experience with logging in general, so bear with me.
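One possible approach, if checking log_appender() feels fragile: record the temp file path yourself and have every function go through a small helper. This is only a sketch using base R options; the option name myscript.log_file and the helper name are arbitrary choices, not part of the logger API:
get_or_init_log_file <- function() {
  # Reuse the existing temp log file if one was already initialized
  tmp <- getOption("myscript.log_file")
  if (is.null(tmp) || !file.exists(tmp)) {
    tmp <- init_logger()                 # creates a new temp file
    options(myscript.log_file = tmp)     # remember it for later calls
  }
  tmp
}
Each function would then call get_or_init_log_file() instead of init_logger() directly, so the first caller creates the file and every later caller appends to it.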

Related

Test package function that writes to disk

I am trying to write a test for a package function in R.
Let's say we have a function that simply writes a string x to disk using writeLines():
exporting_function <- function(x, file) {
  writeLines(x, con = file)
  invisible(NULL)
}
One way of testing it would be to check whether the file exists: it should not exist at first, but it should after the exporting function has run. You might also want to test that the file size is greater than 0:
library(testthat)
test_that("file is written to disk", {
  file = 'output.txt'
  expect_false(file.exists(file))
  exporting_function("This is a test",
                     file = file)
  expect_true(file.exists(file))
  expect_gt(file.info('output.txt')$size, 0)
})
Is this a good way to test it? The CRAN Repository Policy states that "Packages should not write in the user's home filespace (including clipboards), nor anywhere else on the file system apart from the R session's temporary directory". Would this test violate that constraint?
There is an expect_output_file function. From the documentation and examples I am not sure whether this is a more appropriate expectation for testing the function. Among other things, it requires an object argument, which should be the object to test. What is the object to test in my case?
That looks as if it violates CRAN policy. Why not simply write to the temporary directory, using
file <- tempfile()
in place of file = 'output.txt'?
As to whether it is a good test: wouldn't it be better to try reading the file back in, and confirming that what was read matches what was written? That's easy in your toy example. It might be harder in the real one, but having an import function paired with your export function is always a good idea.
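Putting both suggestions together, a sketch of the test (assuming readLines() is an adequate inverse of your export function):
library(testthat)
test_that("file is written to disk and can be read back", {
  file <- tempfile(fileext = ".txt")     # stays inside the session's temp dir
  expect_false(file.exists(file))
  exporting_function("This is a test", file = file)
  expect_true(file.exists(file))
  # Reading the file back in confirms content, not just existence
  expect_identical(readLines(file), "This is a test")
  unlink(file)                           # tidy up after the test
})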

R extension write local data

I am creating a package and would like to store settings data locally, since the settings are unique for each user of the package, and so that they do not have to be set each time the package is loaded.
How can I do this in the best way?
You could keep your settings data in an object and save it using saveRDS()
whenever a change is made, or when the user is leaving or gives the command to save.
It saves the R object as-is under a file name in the specified path:
saveRDS(<obj>, "path/to/filename.rds")
And you can load it the next time the package starts using readRDS() (note: the reading counterpart of saveRDS() is readRDS(); there is no loadRDS()).
The good thing about readRDS() is that you can assign the object to a new name, so you don't have to remember its old object name. Unlike load(), it does not restore the object under its old name, so it will not pollute your namespace:
newly.assigned.name <- readRDS("path/to/filename.rds")
Where to store
Windows
Maybe here:
You can use the %systemdrive%%homepath% environment variables to accomplish this.
The two variables, when concatenated, give you the desired user's home directory path, as below:
Running echo %systemdrive% at the command prompt gives:
C:
Running echo %homepath% at the command prompt gives:
\Users\
When used together this becomes:
C:\Users\
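From within R, a sketch of reading the same information (HOMEDRIVE and HOMEPATH are the Windows environment variables for the drive and path parts; path.expand("~") is the portable alternative):
# On Windows, paste the two variables together to get the home directory
home <- paste0(Sys.getenv("HOMEDRIVE"), Sys.getenv("HOMEPATH"))
# More portable across platforms:
home <- path.expand("~")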
Linux/OS X
Either in the user's package location:
path.to.package <- find.package("name.of.your.package",
                                lib.loc = NULL, quiet = FALSE,
                                verbose = getOption("verbose"))
# then construct the path to the final destination with
destination.folder.path <- file.path(path.to.package,
                                     "subfoldername", "filename")
You should use file.path() to construct such paths, because it builds the path with the correct separator across Unix-derived systems (Linux/Mac OS X) and Windows.
Or use the user's $HOME directory and store the settings there in a file whose name begins with "." - the convention on Unix systems (Linux/Mac OS X) for files that save a program's configuration,
e.g. ".your-packages-name.rds".
If anybody has a better solution, please help!
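On recent R versions (>= 4.0.0) there is also a dedicated helper for per-user package files; a sketch, with "mypackage" and the settings object standing in for your own names:
# tools::R_user_dir() returns a platform-appropriate per-user directory
cfg_dir <- tools::R_user_dir("mypackage", which = "config")
dir.create(cfg_dir, recursive = TRUE, showWarnings = FALSE)
saveRDS(settings, file.path(cfg_dir, "settings.rds"))  # 'settings' is your settings object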

Copy script after executing

Is there a way to copy the executed lines - basically the script itself - into the working directory?
My normal scenario is this: I have stand-alone scripts which just need to be sourced within a working directory and they will do everything I need.
After a few months I make updates to these scripts, and I would love to have a snapshot of the script as it was when I sourced it...
So basically file.copy(ITSELF, '.') or something like this.
I think this is what you're looking for:
file.copy(sys.frame(1)$ofile,
          to = file.path(dirname(sys.frame(1)$ofile),
                         paste0(Sys.Date(), ".R")))
This will take the current file and copy it to a new file in the same directory named currentDate.R, for example 2015-07-14.R.
If you want to copy to the working directory instead of the original script directory, use
file.copy(sys.frame(1)$ofile,
          to = file.path(getwd(),
                         paste0(Sys.Date(), ".R")))
Just note that sys.frame(1)$ofile only works if a saved script is sourced; trying to run it in the terminal will fail. It is worth mentioning, though, that this might not be the best practice. Perhaps looking into a version control system would be better.
Explanation:
TBH, I might not be the best person to explain this (I copied the idea from somewhere and use it sometimes), but I'll try. Basically, in order to have information about the script file, R needs to be running it as a file inside an environment that carries that information, and when that environment comes from a source() call it contains the ofile entry. We use (1) to select the next environment (source()'s) after the global environment (which is 0). When you run this from the terminal, there is no frame/environment other than the global one (hence the error message below), since no file is being run - the commands are sent straight to the terminal.
To illustrate that, we can do a simple test:
> sys.frame(1)
Error in sys.frame(1) : not that many frames on the stack
But if we call that from another function:
> myf <- function() sys.frame(1)
> myf()
<environment: 0x0000000013ad7638>
Our function's environment doesn't have anything in it, so it exists but, in this case, does not have ofile:
> myf <- function() names(sys.frame(1))
> myf()
character(0)
I just wanted to add my solution, since I decided to use a try() around the copy command... because I have the feeling I'm missing some control otherwise...
try({
  script_name <- sys.frame(1)$ofile
  copy_script_name <-
    paste0(sub('\\.R', '', basename(script_name)),
           '_',
           format(Sys.time(), '%Y%m%d%H%M%S'),
           '.R')
  file.copy(script_name,
            copy_script_name)
})
This will copy the script into the current directory and also add a timestamp to the filename. In case something goes wrong, the rest of the script will still execute.
I originally posted this in another thread, and I think it addresses your problem: https://stackoverflow.com/a/62781925/9076267
In my case, I needed a way to copy the executing file to back up the original script together with its outputs. This is relatively important in research.
What worked for me while running my script on the command line was a mixture of other solutions presented here, and it looks like this:
library(scriptName)
file_dir <- gsub("\\", "/", fileSnapshot()$path, fixed = TRUE)
file.copy(from = file.path(file_dir, scriptName::current_filename()),
          to = file.path(getwd(), scriptName::current_filename()))
Alternatively, one can add the date and hour to the file name to help distinguish that file from the source, like this:
file.copy(from = file.path(file_dir, current_filename()),
          to = file.path(getwd(), subDir, paste0(current_filename(), "_", Sys.time(), ".R")))

R Logging display name of the script

This is a minimal example of my current issue:
At the moment I have a project containing several R scripts (all in the same directory, named DIR). I have a main script in DIR that sources all the R files and contains a basicConfig() call:
basicConfig()
Take two scripts in DIR, dog.r and cat.r. Currently I have only one function in each of these scripts. In dog.r:
feedDog <- function(){
  loginfo("The dog is happy to eat!", logger = "dog.r")
}
And in cat.r:
feedCat <- function(){
  loginfo("The cat is voracious", logger = "cat.r")
}
This works fine in the example. But in reality I have something like 20 scripts, with 20 possible error messages in each. So instead of writing:
loginfo("some message", logger="name of script")
I would like to write:
loginfo("some message", logger=logger)
And configure different loggers.
The issue is that if I declare a logger in each R script, only one will be taken into account when I source all the files from my main script... I don't know how to get around this.
PS: in Python it is possible to define a logger in each file taking automatically the name of the script like this:
logger = logging.getLogger(__name__)
But I am afraid this is not possible in R?
If you source() a file, the functions created in that file will have an attribute called srcref that stores the location in the sourced file that the function came from. If you have a name that points to such a function, you can use getSrcFilename() (from utils) to get the filename the function came from. For example, create a file that we can source:
# -- testthis.R --
testthis <- function() {
  loginfo("Hello")
}
Now if we enter R, we can run
loginfo <- function(msg) {
  fnname <- sys.call(-1)[[1]]
  fnfile <- getSrcFilename(eval(fnname))
  paste(msg, "from", deparse(fnname), "in", fnfile)
}
source("testthis.R")
testthis()
# [1] "Hello from testthis in testthis.R"
The function loginfo uses sys.call(-1) to see what function it was called from. Then it extracts the name from that call (with [[1]]), and we use eval() to turn that "name" object into the actual function. Once we have the function, we can get the source file name. This is the same as running
getSrcFilename(testthis)
if you already knew the name of the function. So it is possible, just a bit tricky. I believe this special attribute is only added to functions. Other than that, each sourced file doesn't get its own namespace or anything, so they can't each have their own logger.

R sometimes does not save my history

I have a program in R. Sometimes when I save the history, it is not written to my history file. I have lost my history a few times and this really drives me crazy.
Any recommendation on how to avoid this?
First check your working directory (getwd()). savehistory() saves the history in the current working directory. And to be honest, you had better specify the filename, as the default is .Rhistory. Say:
savehistory('C:/MyWorkingDir/MySession.RHistory')
which allows you to:
loadhistory('C:/MyWorkingDir/MySession.RHistory')
So the history is not lost; it's just in a place and under a name you weren't aware of. See also ?history.
To clarify: the history is no more than a text file containing all commands of the current session. So it's a nice log of what you've done, but I almost never use it. I construct my "analysis log" myself by using scripts, as hinted at in another answer.
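If you want this to happen automatically, a sketch (assuming a plain R console; front ends like RStudio manage history themselves) is to save the history on exit from .Last in your .Rprofile, with a filename of your choosing:
.Last <- function() {
  # Save the session history to a fixed, known location on exit;
  # try() keeps a failure here from aborting shutdown
  if (interactive()) try(savehistory("~/MySession.RHistory"))
}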
#Stedy has provided a workable solution to your immediate question. I would encourage you to learn how to use .R files and a proper text editor, or use an integrated development environment (see this SO page for suggestions). You can then source() in your .R file so that you can consistently replicate your analysis.
For even better replicability, invest the time into learning Sweave. You'll be glad you did.
Check the Rstudio_Desktop/history_database file - it stores every command for any working directory.
See here for more details: How to save the whole sequence of commands from a specific day to a file?
Logging your console regularly to *dated* files is handy. The TeachingDemos package has a great function for logging your console session, but it's written as a singleton, which is problematic for automatic logging: you wouldn't be able to use that function to create teaching demos if you were already using it for logging. I reused that function with a bit of meta-programming to make a copy of that functionality, which I include via the .First function in my local .Rprofile, as follows:
.Logger <- (function(){
  # copy local versions of txtStart, txtStop and the internal R2txt
  locStart <- TeachingDemos::txtStart
  locStop  <- TeachingDemos::txtStop
  locR2txt <- TeachingDemos:::R2txt
  # create a local environment and link it to each function
  .e. <- new.env()
  .e.$R2txt.vars <- new.env()
  environment(locStart) <- .e.
  environment(locStop)  <- .e.
  environment(locR2txt) <- .e.
  # reference the local functions in the calls to `addTaskCallback`
  # and `removeTaskCallback`
  body(locStart)[[length(body(locStart))-1]] <-
    substitute(addTaskCallback(locR2txt, name='locR2txt'))
  body(locStop)[[2]] <-
    substitute(removeTaskCallback('locR2txt'))
  list(start = function(logDir){
    op <- options()
    locStart(file.path(logDir, format(Sys.time(), "%Y_%m_%d_%H_%M_%S.txt")),
             results = FALSE)
    options(op)
  }, stop = function(){
    op <- options()
    locStop()
    options(op)
  })
})()
.First <- function(){
  if( interactive() ){
    # JUST FOR FUN
    cat("\nWelcome", Sys.info()['login'], "at", date(), "\n")
    if('fortunes' %in% utils::installed.packages()[,1] )
      print(fortunes::fortune())
    # CONSTANTS
    TIME <- Sys.time()
    logDir <- "~/temp/Rconsole.logfiles"
    # CREATE THE TEMP DIRECTORY IF IT DOES NOT ALREADY EXIST
    dir.create(logDir, showWarnings = FALSE, recursive = TRUE)
    # DELETE FILES OLDER THAN A WEEK
    for(fname in list.files(logDir))
      if(difftime(TIME,
                  file.info(file.path(logDir, fname))$mtime,
                  units = "days") > 7 )
        file.remove(file.path(logDir, fname))
    # sink() A COPY OF THE TERMINAL OUTPUT TO A DATED LOG FILE
    if('TeachingDemos' %in% utils::installed.packages()[,1] )
      .Logger$start(logDir)
    else
      cat('install package `TeachingDemos` to enable console logging')
  }
}
.Last <- function(){
  .Logger$stop()
}
This copies the terminal contents to a dated log file. The nice thing about dated files is that if you use multiple R sessions the log files won't conflict (unless you start multiple interactive sessions in the same second).
