I'm running R within an HPC and I'd like to be notified when a long-running process finishes (within the R terminal, not as an Rscript).
I know the way to do it is through triggers, but I have no idea how... any ideas, please?
It's simpler than I thought, but since no one answered, here is my (possible) solution:
Write a simple function called fin(): fin <- function(){print("fin")}
Then go to Preferences > Profiles > Advanced > Triggers and add a new trigger based on the function's output: Regular expression = \[1\] "fin", Action = Post Notification... et voilà!
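For completeness, a minimal sketch of how it fits together; long_running_analysis() is just a placeholder name for whatever the real long-running call is:
result <- long_running_analysis(data)  # placeholder for the long-running work
fin()  # prints [1] "fin", which the trigger's regular expression matches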
I'm having an issue where if I execute several lines of code at once and one of them has an error, the lines below don't get executed.
For example if I have:
table(data$AGE)
table(dataREGION)
table(date$SEXE)
I get the table for the first line, and then
Error in table(dataREGION) : object 'dataREGION' not found
>
And the last line does not execute.
Does anyone know why it does that and how to fix it?
(I work with R 4.2.2 and RStudio 2022.12.0+353 "Elsbeth Geranium" Release)
Thanks!
Have a nice day,
Cassandra
Fixed: in Global Options > Console, under "Execution", uncheck "Discard pending console input on error".
It seems like you want to use try().
try(table(data$AGE), silent = FALSE)  # silent = FALSE (the default) still prints the error message
try(table(dataREGION))                # also works without any extra arguments
try(table(date$SEXE))
You can also use tryCatch() if you want more control but it doesn't seem necessary for your purpose.
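For reference, a minimal sketch of the tryCatch() variant, assuming you simply want the table printed on success and a short note on failure:
tryCatch(
  print(table(dataREGION)),
  error = function(e) message("Skipped: ", conditionMessage(e))
)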
__
As for why your dataREGION line doesn't execute:
Hazarding a guess, it might be because you forgot the $ between data and REGION.
I'm trying to speed up my R code with the future package, using the multicore plan on Linux. In the future definition I'm creating a Java object and trying to pass it to .jcall(), but I'm getting a null value for the Java object in the future. Could anyone please help me resolve this? Below is sample code:
library("future")
plan(multicore)
library(rJava)
.jinit()
# preprocess is a user defined function
preprocess <- function(a) {
  # some preprocessing task here
  # time-consuming statistical analysis here
  return(lreturn)  # returns a list of 3 components
}
my_value <- preprocess(a = value)
obj <- .jnew("java.custom.class")
f <- future({
  .jcall(obj, "V", "CustomJavaMethod", my_value)
})
Basically I'm dealing with large streaming data. In the above code I'm sending a string of streaming data to a user-defined function for statistical analysis, which returns a list of 3 components. I then want to send this list to the custom Java class [ java.custom.class ] for further processing using the custom Java method [ CustomJavaMethod ].
Without future my code runs fine, but I receive 12 streaming records per minute and the processing cannot keep up; I observe a delay.
Currently I'm using Unix with 16 cores. With the future package my processing is fast enough, but when I traced through my code, something goes wrong in .jcall().
Hope this clarifies my pain.
(Author of the future package here:)
Unfortunately, there are certain types of objects in R that cannot be sent to another R process for further processing. To clarify, this is a limitation of those types of objects, not of the parallel framework used (here the future framework). The simplest example of such an object may be a file connection, e.g. con <- file("my-local-file.txt", open = "wb"). I've documented some examples in Section 'Non-exportable objects' of the 'Common Issues with Solutions' vignette (https://cran.r-project.org/web/packages/future/vignettes/future-4-issues.html).
As mentioned in the vignette, you can set an option (*) such that the future framework looks for these types of objects and gives an informative error before attempting to launch the future ("early stopping"). Here is your example with this check activated:
library("future")
plan(multisession)
## Assert that global objects can be sent back and forth between
## the main R process and background R processes ("workers")
options(future.globals.onReference = "error")
library("rJava")
.jinit()
end <- .jnew("java/lang/String", " World!")
f <- future({
  start <- .jnew("java/lang/String", "Hello")
  .jcall(start, "Ljava/lang/String;", "concat", end)
})
# Error in FALSE :
# Detected a non-exportable reference ('externalptr') in one of the
# globals ('end' of class 'jobjRef') used in the future expression
So, yes, your example actually works when using plan(multicore). The reason is that 'multicore' uses forked processes (available on Unix and macOS, but not Windows). However, I would try my best to avoid limiting your software to parallelizing only on "forkable" systems; if you can find an alternative approach, I would aim for that. That way your code will also work on, say, a huge cloud cluster.
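For example, one way to avoid exporting the Java reference at all is to create it inside the future. The sketch below is a variation on the toy example above (not a drop-in fix for your custom class) and should also work with plan(multisession), because only a plain R character string is returned to the main process:
library(future)
plan(multisession)
f <- future({
  library(rJava)  # load rJava in the worker process
  .jinit()        # start a JVM there
  start <- .jnew("java/lang/String", "Hello")
  end   <- .jnew("java/lang/String", " World!")
  .jcall(start, "Ljava/lang/String;", "concat", end)  # returns an R character string
})
value(f)
# [1] "Hello World!"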
(*) The reason these checks are not enabled by default is that (a) they are still in beta testing, and (b) they come with overhead, because we basically need to scan for non-supported objects among all the globals. Whether these checks will be enabled by default in the future will be discussed over at https://github.com/HenrikBengtsson/future.
The code in the question calls an unknown Method1 method, my_value is undefined, ... it is hard to know what you are really trying to achieve.
Take a look at the following example; maybe you can get inspiration from it:
library(future)
plan(multicore)
library(rJava)
.jinit()
end <- .jnew("java/lang/String", " World!")
f <- future({
  start <- .jnew("java/lang/String", "Hello")
  .jcall(start, "Ljava/lang/String;", "concat", end)
})
value(f)
[1] "Hello World!"
So I'm trying to write a function that prints a warning message, but only the first time the user calls it. If they open R, load the library, and call the function, it will print a warning message. If they call the function again, it will not print this warning. If they close R and repeat the process, it will print the warning on the first call and not on the second. I understand the basic warning() function in R, but I don't see anything in its help file for this kind of condition. Does anyone know of a function, or a condition that could be used with warning(), that would solve this? Thanks! I am working on a project where the professor in charge needs this for some sort of copyright thing, and he wants it to be this way.
One package that does this is quantmod. When you use the getSymbols function, it warns you about the upcoming change to the defaults. It does so using options.
"getSymbols" <- function(Symbols=NULL,...) {
if(getOption("getSymbols.warning4.0",TRUE)) {
# transition message for 0.4-0 to 0.5-0
message(paste(
' As of 0.4-0,',sQuote('getSymbols'),'uses env=parent.frame() and\n',
'auto.assign=TRUE by default.\n\n',
'This behavior will be phased out in 0.5-0 when the call will\n',
'default to use auto.assign=FALSE. getOption("getSymbols.env") and \n',
'getOptions("getSymbols.auto.assign") are now checked for alternate defaults\n\n',
'This message is shown once per session and may be disabled by setting \n',
'options("getSymbols.warning4.0"=FALSE). See ?getSymbol for more details'))
options("getSymbols.warning4.0"=FALSE)
}
#rest of function....
}
So they check for an option named "getSymbols.warning4.0", defaulting to TRUE if it is not set. If it is TRUE, they display a message (you may display a warning instead) and then set that option to FALSE so the message will not display again in that session.
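Applied to your situation, a minimal sketch of the same pattern; the option name "mypkg.copyright.notice" and the warning text are made up for illustration:
my_function <- function(...) {
  if (isTRUE(getOption("mypkg.copyright.notice", TRUE))) {
    warning("This function is subject to the project's copyright terms.")  # shown on the first call only
    options(mypkg.copyright.notice = FALSE)  # suppress it for the rest of the session
  }
  # rest of the function ...
}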
Lots of packages have messages that pop up when they are loaded, and since there is a mechanism for detecting repeated loading (further calls to library or require are silently ignored), such a message appears only once per session. They often use .onLoad(libname, pkgname). See
?.onLoad
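A hedged sketch of that approach; in a package this would typically live in R/zzz.R, and for user-visible text the related .onAttach() hook with packageStartupMessage() is the conventional choice:
.onAttach <- function(libname, pkgname) {
  # library()/require() attach a package only once per session, so this runs once
  packageStartupMessage("Note: this package is distributed under restrictive copyright terms.")
}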
As the title implies, I would like to be able to run a function asynchronously in a GUI created with RGtk2.
The function itself is an R wrapper for a system command, so the bulk of the processing time is spent in a system() call, and the processing time can range from 10 minutes to an hour. I would like the GUI to remain responsive during that period.
As it is now, the function is attached with gSignalConnect(GtkButton, 'clicked'), and the rest of the GUI is thus unresponsive until the 'clicked' handler returns.
Does anyone have an idea regarding whether this is possible?
best
Thomas
There might be a more direct way, but I think you can do this with gTimeoutAdd:
library(RGtk2)
w <- gtkWindow()
g <- gtkVBox(); w$add(g)
b1 <- gtkButton("Start timer"); g$packStart(b1)
b2 <- gtkButton("click me"); g$packStart(b2)
gSignalConnect(b1, "clicked", function(...) {
  id <- gTimeoutAdd(1, function(...) {
    Sys.sleep(5) # replace me
    message("Okay, I'm up")
    FALSE # one shot
  })
})
gSignalConnect(b2, "clicked", function(...) message('clicked me'))
What might work (although not tested, and I am not too familiar with RGtk, so no guarantees) is to use the wait = FALSE option in the system call. The system call is then executed asynchronously. In your GTK GUI you would then have to periodically check whether the system call has finished. I believe it is possible with RGtk to have a function that is called periodically (from the RGtk documentation this is probably gTimeoutAdd()).
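An untested sketch of that idea; the shell command and the flag file are placeholders:
library(RGtk2)
done_flag <- tempfile(fileext = ".done")
# Launch the long-running command without blocking R; touch a flag file when it finishes.
system(sprintf("long_running_command; touch %s", shQuote(done_flag)), wait = FALSE)
# Poll once per second; the GUI stays responsive between checks.
gTimeoutAdd(1000, function(...) {
  if (file.exists(done_flag)) {
    message("System call finished")
    FALSE  # stop the timer
  } else {
    TRUE   # keep polling
  }
})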
I have written a function that takes some time to run (due to a loop with 1000+ iterations over a huge dataset, combined with forecasting model testing).
To have some idea of the status while the function is running, I use the message command inside the for-loop in the function. The problem is that all the messages are shown in the console only after the function has finished, instead of immediately, so it doesn't help me :)
I tried to find a solution on Stack Overflow, but didn't find one. I looked, for instance, at the question "showing a status message in R". All the answers and example code in that topic still only give me text in the console after the function has finished, instead of immediately.
How can I solve this? Is there maybe a setting in R that prevents immediate printing of message text in the console?
Note: the examples I tried are below; they give the same result as my function, showing text only after the function has finished.
example1 (Joshua Ulrich):
for (i in 1:10) {
  Sys.sleep(0.2)
  # Dirk says using cat() like this is naughty ;-)
  # cat(i, "\r")
  # So you can use message() like this, thanks to Sharpie's
  # comment to use appendLF=FALSE.
  message(i, "\r", appendLF = FALSE)
  flush.console()
}
example2 (Tyler):
test.message <- function() {
  for (i in 1:9) {
    cat(i)
    Sys.sleep(1)
    cat("\b")
  }
}
Edit: the first example does work (flush.console() was the issue)... but when I tested it, I had commented out flush.console() for some reason :S
test.message <- function() {
  for (i in 1:9) {
    cat(paste(as.character(i), '\n'))
    flush.console()
    Sys.sleep(1)
  }
}
which is similar to the recommendation by fotNelton.
Edit: ttmaccer is most likely right. I've just tested on an Ubuntu server and the code works without flushing the console.
I seem to think this may be a Windows-specific problem. On Linux, or when running R in a Cygwin shell, the flush.console() may not be needed.
You may be interested in using one of the progress bar functions (winProgressBar, tkProgressBar, or txtProgressBar). The win version only works on Windows, but the win and tk versions have the advantage that they do not clutter your output; instead they open a small window and display the progress there.
The progress through a loop can be shown with the progress bar, but other detailed information can be updated and shown with the label argument.
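A small sketch with txtProgressBar() (which works on any platform); Sys.sleep() stands in for the real work. With winProgressBar() or tkProgressBar() you could additionally show updated text via the label argument of setWinProgressBar()/setTkProgressBar():
n <- 50
pb <- txtProgressBar(min = 0, max = n, style = 3)
for (i in seq_len(n)) {
  Sys.sleep(0.05)           # stand-in for the real per-iteration work
  setTxtProgressBar(pb, i)  # update the bar
}
close(pb)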