Rscript slave call doesn't exit properly

I have a Desktop Entry that calls a Shiny app.
Here's my desktop entry:
[Desktop Entry]
Type=Application
Comment=App
Name=App
Exec=/usr/bin/Rscript -e "shiny::runApp('~/raspberry_IP/app.R', launch.browser=TRUE)"
Icon=/home/path/to/logo/rpi_logo.png
Terminal=false
Categories=Utility
StartupNotify=true
This runs well and the Shiny application works as expected. However, when I close my browser tab, I see the process is still running.
I am running this application on Ubuntu 20.04 LTS.
If I change Terminal=false to Terminal=true, I see that the output in my terminal freezes (the app no longer updates, hence no diagnostic printing), but the terminal is still active (which makes sense given the process is still running). I can Ctrl+C out of it to kill the process from the terminal, but that's not the desired user behavior.
I see some strategies to stop the app using stopApp(), like here. This strategy uses a separate action button to kill the app, so the user has to "manually" stop it. This might work, but I believe it's natural for the user to simply close the tab or the browser (and there's no way for them to know the process is still running, unless they check).
Using the following on the server side does kill the process, but it's recommended against (see here).
session$onSessionEnded(stopApp)
In this case, because my app is running on a local machine, I think I could potentially get away with this (there is only one user at a time), but I was wondering whether there's a better practice to implement.
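For reference, this is roughly where that call would go (a minimal sketch with a placeholder ui and server, not my actual app):
library(shiny)

ui <- fluidPage(
  titlePanel("App")
)

server <- function(input, output, session) {
  # When the (single, local) browser session ends, stop the app so that
  # runApp() returns and the Rscript process started by the desktop entry exits
  session$onSessionEnded(stopApp)
}

shinyApp(ui, server)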

Related

Ensuring RStudio always restarts in administrator mode

Sometimes I receive notifications that I am trying to update already-loaded packages from RStudio and am prompted to choose whether I want to restart R(Studio) or not. I've noticed that sometimes I will have write-permission issues after this, despite having RStudio set to run as an administrator. My hunch is that this is because I have set the permissions for a specific shortcut (see below) and not for RStudio itself (if this even happens) -- accordingly, when it restarts, it isn't restarting the same way I ran the program in the first place and thus is not in administrator mode.
Am I correct that this can happen and, if so, is there some sort of way to consistently ensure it is run in administrator mode? I'm asking this on SO since I'm guessing there is probably a way within R or on the command line to ensure this, if it can indeed happen. I'm on a single-user Windows 10 machine.

RInno shiny app unable to launch if previous session closed unexpectedly

I have made a desktop Shiny app for Windows using the amazing package RInno, but I have been experiencing some inconsistencies with launching the application. Occasionally the application will not quit its session properly, leaving R running in the background. Their GitHub suggests using this chunk of code in the server function to ensure R is properly terminated when the session ends. I can tell when the application quits properly because I include a custom function that copies the log files to a new indexed name for records and debugging.
if (!interactive()) {
  session$onSessionEnded(function() {
    write_logs_out() # My custom function: renames logs and stores backups of session data in .rds files
    stopApp()
    q("no")
  })
}
In these cases a new log file is not made, so I assume stopApp() and q("no") are never executed either, leaving R running in the background. One major issue is that I do not know how to reproduce this error. My current guess is that it occurs when the local R session becomes unresponsive. While R is still running in the background, the Shiny app never fully opens in a web browser. I first need to open Task Manager and quit any R sessions (usually with the name "R front-end") in order to get the app working again.
Since I don't know when to expect this random error, I was wondering if any Windows experts could help me write a script that checks whether an R process is running and then kills it. This would at least let the app launch again without me having to open Task Manager manually.
I know that
tasklist | findstr Rscript.exe
will list all the currently running Rscript processes (which I believe is the "R front-end" I see in Task Manager), but I don't know if it is bad practice to simply include taskkill /IM "Rscript.exe" /F before the line that calls the .wsf file that starts the app.
The main problem I see with this is that if the session is running fine and the user for some reason clicks the app again, it will force a complete restart.
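Something along these lines is what I have in mind, sketched in R (the tasklist/taskkill flags used below -- /FI, /FO CSV, /NH, /PID, /F -- are standard Windows options, and the current PID is excluded so a script run via Rscript doesn't kill itself):
# Windows-only sketch: find Rscript.exe processes other than the current one
# and force-kill them before launching the app
kill_stale_rscript <- function() {
  out <- suppressWarnings(system2(
    "tasklist",
    c("/FI", "\"IMAGENAME eq Rscript.exe\"", "/FO", "CSV", "/NH"),
    stdout = TRUE
  ))
  # Each matching row looks like: "Rscript.exe","1234","Console",...
  rows <- out[grepl("Rscript.exe", out, fixed = TRUE)]
  if (length(rows) == 0) return(invisible(character(0)))
  pids <- vapply(strsplit(rows, "\",\"", fixed = TRUE),
                 function(x) gsub("\"", "", x[2]), character(1))
  pids <- setdiff(pids, as.character(Sys.getpid()))  # don't kill ourselves
  for (p in pids) {
    system2("taskkill", c("/PID", p, "/F"))
  }
  invisible(pids)
}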
Any suggestions?

RStudio Server on Microsoft Azure instance

I am currently running R on a Microsoft Azure instance (an Ubuntu virtual machine) using RStudio as my IDE, to which I connect simply through my browser. I am trying to run some commands from within RStudio that take quite some time to complete, and figured that I could simply close my tab with RStudio open and the process would keep running. However, when I try to reconnect to see how the process is doing, the page keeps loading and I am unable to see RStudio.
I have a few questions regarding running RStudio on a server:
First, am I correct in thinking that I can close my tab and keep the process running?
Second, is it normal behaviour that I am unable to connect to the server while the process is running?
Third, am I going about this the correct way or are there better ways?
Yes, you can close your tab and keep it running.
RStudio Server waits on updates from the R process to update the UI. This means that if you have a long-running computation, your tab may not fully reload until it's finished. You may also have seen this in the middle of a session: when R is busy, you can have problems saving scripts that are open in the editor pane.
Logging out in the middle of a computation should be safe, but be aware that RStudio will save your workspace and shut R down after a period of inactivity. It then reloads everything when you log back in. But this only extends to objects in memory; if you have any files saved in your temp directory, they'll have disappeared when you come back. They're probably still on the disk, but since your new R session has a new temp directory, you'll have to do a manual search for them.
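As a small illustration (a sketch, not part of the original question or answer; "~/results" is a placeholder path): write the results of a long-running job to a persistent location rather than under tempdir(), so they survive the session being recycled.
# Sketch: persist results of a long job outside the session's temp directory
result <- replicate(1000, mean(rnorm(1e4)))   # stand-in for the slow computation

persistent_dir <- "~/results"                 # survives R session restarts
dir.create(persistent_dir, showWarnings = FALSE, recursive = TRUE)
saveRDS(result, file.path(persistent_dir, "result.rds"))

# By contrast, anything written under tempdir() belongs to the current session only
scratch <- file.path(tempdir(), "scratch.rds")
saveRDS(result, scratch)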

make a jupyter notebook run even if the page is closed

I love notebooks. I love them so much that I have many of them running at the same time, often in different browsers, sometimes on different remote clients. I miss one feature: when I close the tab corresponding to a running notebook, it warns that the corresponding run will be stopped.
My question:
How do I make a Jupyter notebook resume its run even if the page is closed?
such that I can:
re-open the tab in another browser (possibly on a remote computer such as a tablet),
restart a browser when it needs to be restarted,
close notebooks with long running times and inspect them later.
From what I understand, the client-server architecture could make that possible, but that there may be issues with multiple concurrent runs...
PS: I created an issue on GitHub
In fact, this was answered in the GitHub issue:
takluyver commented on 26 Apr 2017: Anything already running in the notebook will keep running, and the kernel it started for that will stay running - so it won't lose your variables. However, any output produced while the notebook isn't open in a browser tab is lost; there isn't an easy way to change this until we have the notebook server able to track the document state, which has been on the plan for ages.
Thanks!
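A workaround in the meantime (a sketch only, assuming an R kernel for illustration; "run.log" is a placeholder path): have the long-running cell also write its progress to a log file, so output produced while no tab is attached isn't lost.
# Sketch: mirror progress to a log file inside the notebook
log_file <- file("run.log", open = "a")
for (i in 1:10) {
  Sys.sleep(1)                                   # stand-in for real work
  msg <- sprintf("step %d finished at %s", i, Sys.time())
  print(msg)                                     # visible only while a tab is attached
  writeLines(msg, log_file)                      # survives the tab being closed
  flush(log_file)
}
close(log_file)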

Running a meteor.js application forever

I have a personal localhost Meteor application running on my laptop which silently stops running every time the computer goes to sleep. The way I run it is simply using the "meteor" command, after which I background and disown the process and close the terminal.
Is there a way to prevent the app from stopping, so that it runs forever on my machine unless I explicitly close it?
You need to create a server daemon for your application, in the same way you would on a production server. There are several ways to do this; probably the easiest is to use demeteorizer to create a plain Node.js program from your app, and then run it with forever.
