I am running a few scheduled jobs on RStudio Server I host on my home computer.
It seems like there is no straightforward way to set an email warning when something goes bust.
I am using package cronR for scheduling jobs and emayili for sending emails.
I can see that it is possible from within Linux, but I already have it all set up in R.
Any way I can combine these two and make this work from within R?
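One way to keep everything in R is to wrap the body of the scheduled script in tryCatch() and send an emayili message from the error handler, then register that wrapper script with cronR as usual. A minimal sketch, where the SMTP details, addresses, and job script name are placeholders:
library(emayili)

notify_failure <- function(err) {
  # Placeholder SMTP account details - replace with your own.
  smtp <- server(host = "smtp.example.com", port = 465,
                 username = "me@example.com", password = "secret")
  msg <- envelope()
  msg <- from(msg, "me@example.com")
  msg <- to(msg, "me@example.com")
  msg <- subject(msg, "Scheduled job failed")
  msg <- text(msg, paste("The job stopped with error:", conditionMessage(err)))
  smtp(msg)
}

# Run the real work; any error triggers the email before the script exits.
tryCatch(source("my_scheduled_job.R"), error = notify_failure)
The wrapper is then what you schedule, e.g. cron_add(cron_rscript("wrapper.R"), frequency = "daily", at = "03:00").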
1) Is it possible to start an R session on Linux (e.g. Rsession1) and submit multiple jobs in batch mode to that same R session (e.g. submit job1 to Rsession1 and then, later on, based on user action, submit job2 to Rsession1)?
This would be equivalent to opening an interactive R session, submitting job1, and then letting the user submit job2 in the same session (which remains available until the user closes the interactive R session).
2) Is it possible to start two R sessions on Linux (e.g. Rsession1 and Rsession2) and submit multiple jobs in batch mode but specify session-id during job submission?
This is equivalent to opening two interactive R sessions and submitting jobs to different R sessions by clicking on the appropriate window and manually submitting the job.
I'm not sure what your end goal is, but have you considered something like the future package, which allows R to send work off to another process to be completed? That way the work gets done without locking up the main R session while it is being completed. So, from the main R session, you could launch job1 and then, while that is still being worked on, launch job2.
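A minimal sketch of that pattern (the job bodies are placeholders):
library(future)
plan(multisession)  # each future runs in a separate background R process

job1 <- future({ Sys.sleep(60); "job1 done" })  # launch job1
job2 <- future({ sum(1:1e6) })                  # launch job2 while job1 is still running

value(job1)  # blocks only when the result is actually needed
value(job2)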
You can use save.image() at the end of each job to store the workspace and load() at the beginning of the following job to restore it.
By choosing different file names, the session id can be specified.
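For example, a sketch with two batch jobs sharing a workspace file (the file name and object are placeholders):
# job1.R - run first, e.g. via Rscript job1.R
x <- 42
save.image(file = "Rsession1.RData")  # the file name stands in for the session id

# job2.R - run later against the same "session"
load("Rsession1.RData")
print(x + 1)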
I am working on a Shiny app that connects to Comscore using their API. Any attempt at executing POST commands inside future/promises fails with the cryptic error:
Warning: Error in curl::curl_fetch_memory: Bulk data encryption algorithm failed in selected cipher suite.
This happens with any POST attempt, not only when/if I try to call Comscore's servers. As an example of a simple, harmless and uncomplicated POST request that fails, here is one:
library(httr)
library(future)
rubbish <- future(POST('https://appsilon.com/an-example-of-how-to-use-the-new-r-promises-package/'))
print(value(rubbish))
But everything works fine if I do not use futures/promises.
The problem I want to solve is that currently we have an app that works fine in a single user environment, but it must be upgraded for a multiuser scenario to be served by a dedicated Shiny Server machine. The app makes several such calls in a row (from a few dozen to some hundreds), taking from 5 to 15 minutes.
The code is running inside an observeEvent block, triggered by a button clicked by the user when he has configured the request to be submitted.
My actual code is longer, and there are other lines both before and after the POST command in order to prepare the request and then process the received answer.
I have verified that all lines before the POST command get executed, so the problem seems to be there, in the attempt to do a POST connecting to the outside world from inside a promise.
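For reference, the structure is roughly the following sketch (the button id, the URL, and the downstream steps are placeholders, not the real Comscore call):
library(shiny)
library(httr)
library(future)
library(promises)
plan(multisession)

ui <- fluidPage(actionButton("submit", "Submit request"))

server <- function(input, output, session) {
  observeEvent(input$submit, {
    future({
      POST("https://api.example.com/endpoint")  # placeholder for the real Comscore request
    }) %...>%
      status_code() %...>%  # stand-in for processing the received answer
      print()
  })
}

shinyApp(ui, server)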
I am using RStudio Server 1.1.453 along with R 3.5.0 on a RHEL server.
Package versions are:
shiny: 1.1.0
httr: 1.3.1
future: 1.9.0
promises: 1.0.1
Thanks in advance,
I'm running a really long process, and it would be great if there were a way to get R to call, email, or text me when it's finished. Is there a way to set up an R email script to run when a program terminates, or perhaps something that employs IFTTT to send me a text message or call in case I'm sleeping?
I'm using RStudio as my IDE, so maybe there is such a feature in there.
If there is a way to track progress, that would be nice too, but it's not 100% required.
From this article:
http://alicebrawley.com/getting-r-to-notify-you-when-its-finished/
My general solution is to combine the R package mail, written by Lin Himmelmann, and variations on an IFTTT (If This, Then That) recipe. I use mail to send an email using functions in R, then IFTTT to notify me immediately of that particular email.
Once you've installed mail, use the following functions to send yourself an email when your code is completed.
# Have R email you when it's done running.
### Calculating - your wish is R's command.
library(mail)
# Send yourself an email - specify your preferred email address, subject, and message.
# The password is fixed at "rmail".
sendmail("xxxxx@xxxxx.com", subject = "Notification from R",
         message = "Conditions finished running!", password = "rmail")
You can then use IFTTT, triggered by the email.
If you're sleeping next to your computer, consider also: Is there a way to make R beep/play a sound at the end of a script?
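And if a sound is enough, that is a one-liner at the end of the script, assuming the beepr package is installed:
library(beepr)
# ... long-running code ...
beep()  # plays a short notification sound when this line is reached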
We have an SSIS package that is run via a SQL Agent job. We are initiating the job (via sp_startjob) from within an ASP.NET web page. The user that is logged into the UI needs to be logged together with the SSIS package run that they initiate - hence we require the userId to be passed to the SSIS package. The issue is that we cannot pass parameters to sp_startjob.
Does anyone know how this can be achieved, and/or of an alternative to the above approach?
It cannot be done through sp_startjob. You can't pass a parameter to a job step so that option is out.
If you have no concern about concurrency, and given that you can't have the same job running at the same time, you could probably hack it by changing your job step from type SQL Server Integration Services to something like an OS Command. Have the OS Command call a batch script that the web page creates/modifies, with the net result that you start your package like
dtexec.exe /file MyPackage /Set \Package.Variables[User::DomainUser].Properties[Value];\"Domain\MyUser\"
At that point, the variable DomainUser in your package would have the value Domain\MyUser.
I don't know your requirements, so perhaps you can just call into the .NET framework and start your package from the web page, although you'd probably want to make that call asynchronously. Otherwise, unless your SSIS package is very fast, users might try to navigate away, spam refresh, etc. while waiting for the page to "work".
All of this by the way is simply pushing a value into an SSIS package. In this case, a user name. It doesn't pass along their credentials so calls to things like SYSTEM_USER would report the SQL Agent user account (or the operator of the job step).
I have a few work flows where I would like R to halt the Linux machine it's running on after completion of a script. I can think of two similar ways to do this:
run R as root and then call system("halt")
run R from a root shell script (could run the R script as any user) then have the shell script run halt after the R bit completes.
Are there other easy ways of doing this?
The use case here is for scripts running on AWS, where I would like the instance to stop after script completion so that I don't get charged for machine time after the job runs. The instance I use for data analysis is EBS backed, so I don't want to terminate it, simply suspend it. Issuing a halt command from inside the instance has the same effect as a stop/suspend from the AWS console.
I'm impressed that works. (For anyone else surprised that an instance can stop itself, see notes 1 & 2.)
You can also try "sudo halt", so that you don't need to run R as the root user, as long as the user account running R is capable of running sudo. This is pretty common on a lot of AMIs on EC2.
Be careful about what constitutes an assumption of R quitting - believe it or not, one can crash R. It may be better to have a separate script that watches the R pid and, once that PID is no longer active, terminates the instance. Doing this command inside of R means that if R crashes, it never reaches the call to halt. If you call it from within another script, that can be dangerous, too. If you know Linux well, what you're looking for is the PID from starting R, which you can pass to another script that checks ps, say every 1 second, and then terminates the instance once the PID is no longer running.
I think a better solution is to use the EC2 API tools (see: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ for documentation) to terminate OR stop instances. There's a difference between the two of these, and it matters if your instance is EBS backed or S3 backed. You needn't run as root in order to terminate the instance - the fact that you have the private key and certificate shows Amazon that you're the BOSS, way above the hoi polloi who merely have root access on your instance.
Because these credentials can be used for mischief, be careful about running the API tools from a given server: you'll need your certificate and private key on that server, which is a bad idea in the event that you have a security problem. It would be better to send a message to a master server and have it shut down the instance. If you have messaging set up in any way between instances, this can do all the work for you.
Note 1: Eric Hammond reports that the halt will only suspend an EBS instance, so you still have storage fees. If you happen to start a lot of such instances, this can clutter things up. Your original question seems unclear about whether you mean to terminate or stop an instance. He has other good advice on this page.
Note 2: A short thread on the EC2 developers forum gives advice for Linux & Windows users.
Note 3: EBS instances are billed for partial hours, even when restarted. (See this thread from the developer forum.) Having an auto-suspend close to the hour mark can be useful, assuming the R process isn't working, in case one might re-task that instance (i.e. to save on not restarting). Other useful tools to consider: setTimeLimit and setSessionTimeLimit, and various checkpointing tools (I have a Q that mentions a couple). Using an auto-kill is useful if one has potentially badly behaved code.
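A minimal sketch of the time-limit idea (the 55-minute cutoff is an arbitrary example):
# Abort any single top-level computation that runs longer than 55 minutes
setTimeLimit(elapsed = 55 * 60, transient = FALSE)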
Note 4: I recently learned of the shutdown command in package fun. This is multi-platform. See this blog post for commentary, and code is here. Dangerous stuff, but it could be useful if you want to adapt to Windows. I haven't tried it, though.
Update 1. Three more ideas:
You could use .Last() and runLast = TRUE for q() and quit(), which could shut down the instance (a sketch follows this list).
If using littler or a script that invokes the script via Rscript, the same command line functions could be used.
My favorite package of today, tcltk2, has a neat timer mechanism called tclTaskSchedule() that can be used to schedule the execution of an expression. You could then go crazy with the execution of stuff just before an hourly interval has elapsed.
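A sketch of the .Last() idea (it assumes the R user can run halt via passwordless sudo, as described further down):
# Define .Last so that quitting R triggers the shutdown
.Last <- function() {
  system("sudo halt")
}
# ... run the analysis ...
quit(save = "no", runLast = TRUE)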
system("echo 'rootpassword' | sudo halt")
However, the downside is having your root password in plain text in the script.
AFAIK those ways you mentioned are the only ones. In any case the script will have to run as root to be able to shut down the machine (if you find a way to do it without root that's possibly an exploit). You ask for an easier way but system("halt") is just an additional line at the end of your script.
sudo is an option -- with a NOPASSWD rule it allows you to run certain commands without being prompted for a password. Just put something like this in /etc/sudoers:
<username> ALL=(ALL) PASSWD: ALL, NOPASSWD: /sbin/halt
(of course replacing <username> with the name of the user running R) and system('sudo halt') should just work.