run local Rscript on remote R server - r

I am trying to run some R scripts on a remote server which has a fixed IP address.
Background is the following.
I connected a Front-end web app with R and am able to insert different values, which are then passed to R, where different outcomes are evaluated.
Everything works fine when I run it locally.
I want to be able to run the web app from any location, so I set up an R server with a username, password, etc.
What I want to do next is to establish a connection between R and the server.
I wasn't able to find anything on that topic yet, so I hope you can help me with this.
1) Establish a connection to the R server with its IP address.
2) Log in with my username and password.
3) Tell my script where to find the Rscript.exe file.
Doing that locally, I just tell my PHP file:
$command = '"C:\Users\Username\Documents\R\R-3.3.1\bin\Rscript.exe" "'.__DIR__.'\rfile.R" '.$json;
So I tell it where to find my Rscript.exe and to run my rfile.R with it.
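For illustration, here is a minimal sketch of what the R side of this might look like; it is hypothetical (the real rfile.R is not shown in the question) and assumes the jsonlite package is available for parsing the JSON string that the PHP call appends as the last argument:
# rfile.R (hypothetical sketch): read the JSON passed on the command line by the PHP call above
args  <- commandArgs(trailingOnly = TRUE)
input <- jsonlite::fromJSON(args[1])   # assumes jsonlite is installed on the server
str(input)                             # evaluate the different outcomes from 'input' here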
Thanks for your help.

Related

RStudio Server browser freezes upon login

I have been working in my RStudio session hosted on a Linux server and recently ran a piece of code that was taking way too long to execute, so I decided to kill it.
Here is the sequence of steps that I took - none of them helped me restore the health of my session.
1) Hit the stop button in RStudio and be patient.
2) SSH'd into my Linux server and ran the following command to kill all the processes running under my user ID:
killall -u myuserid
3) Removed the .RData, .Renviron, and .Rhistory files from my workspace.
4) Ran the following R command via the Linux server for garbage collection:
gc(reset=TRUE)
5) Restarted the entire Linux server.
I am running out of ideas and would really appreciate any other suggestions before I take more drastic steps like revoking access and granting it again (not sure if that would be the right fix).
Note: The browser window freezes every time I log in, and it happens only for my RStudio session; the rest of the users on the same network have no issues.
I solved this problem (RStudio Server freezing). I think it was a network problem, since I couldn't receive any response from the call to "~~~~~~.cache.js". In this case, you can see that "~~~~~~~~~.cache.js" gives no response by pressing a key before you click the log-in button.
Anyway, here is my way.
Reset your network with the following commands.
You can enter these in a cmd terminal in admin mode:
netsh winsock reset
netsh int ip reset
Reboot
The IP information may be erased, so if you're using a fixed IP address, fill in the blanks with the previous IP settings.
That's all.
You may follow these steps to recover the connection.

R - Connect via ssh and execute a command

I would like to connect via ssh to certain equipment in a network.
The requirements are:
It must run a command and capture the output of the ssh session in R (or in bash, or any other programming language, but I would prefer R).
It must accept a plain-text password (as this equipment hasn't been accessed before and can't be set up with an RSA keypair), so the ssh.utils package doesn't meet this requirement.
sshpass can't be used, as I have noticed that it doesn't work for some devices I tested.
I've read all these posts but I can't find an effective way to do it: link 1, link 2, link 3, link 4
I know the requirements are hard to accomplish, but thank you for your effort!
EDIT:
Sorry if I didn't make myself clear. I mean that I work locally in R and I want to connect to 3000+ devices across my network via ssh. It is Ubiquiti equipment, and the only open ports are 80 and 22.
If ssh doesn't work, I will use the RSelenium package for R and extract the info from port 80. But first I will try with ssh on port 22, as it is a lot more efficient than opening an emulated browser.
The big problem with all this Ubiquiti equipment is that it requires a password to log in. That's why requirement No. 2 is needed. When I must enter a server that I know, I spend time setting up the RSA keypair so that I don't have to enter a password every time I connect to that specific server, but it's impossible (or at least, for me it's impossible) to configure all 3000+ Ubiquiti devices with these keypairs.
That's why I don't use SNMP, for example: this equipment may or may not have it activated, or the SNMP configuration may be wrong. I mean, I have to use something that's activated by default and, in a way, consistent across devices. And only port 80 and port 22 are activated, and I know all the equipment's usernames and passwords.
And sshpass is a UNIX/Linux utility, as this link explains, that works for servers but, as far as I've tested, doesn't work for Ubiquiti equipment. So I can't use it.
The command I need to extract the output from is mca-status. Simply entering that into the console prints some stats I would like to get from the Ubiquiti equipment.
Correct me, please, if I am wrong in something I've posted. Thanks.
I think you have this wrong. I also have no idea what you are trying to say in point 2, and I have no idea what point 3 is supposed to say.
Now: ssh is an authentication mechanism allowing you (trusted) access to another machine and the ability to run a command. This can be as simple as
edd@max:~$ ssh bud Rscript -e '2+2'
[1] 4
edd@max:~$
where I invoke R (or rather, Rscript) on the machine 'bud' (my desktop) from a session on the machine 'max' (my server). That command could be anything, including something which writes to temporary or permanent files. You can then retrieve those files via scp.
Authentication is handled independently -- on Unix we often use ssh-agent, which runs in the background and against which you authenticate on login.
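For illustration, the same ssh-and-scp pattern could be driven from a local R session with system2(). This is only a minimal sketch, assuming key-based authentication (e.g. via ssh-agent) is already set up; 'bud' and the file paths are placeholders:
## run Rscript on the remote machine 'bud' and capture its printed output locally
out <- system2("ssh", c("bud", shQuote("Rscript -e '2+2'")), stdout = TRUE)
print(out)   # "[1] 4"
## copy back a file the remote command wrote, then read it into R
system2("scp", c("bud:/tmp/result.rds", "result.rds"))
result <- readRDS("result.rds")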
Finally, I solved it using the rPython package and Python's paramiko module, as there was no way to do it purely in R.
library(rPython)
python.exec(python.code = c("import paramiko",
                            "ssh = paramiko.SSHClient()",
                            "ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())",
                            sprintf('ssh.connect("%s", username="USER", password="PASSWORD")', IP),
                            'stdin, stdout, stderr = ssh.exec_command("mca-status")',
                            'stats = stdout.readlines()'))
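If needed, the captured lines can then be pulled back into the R session; a small follow-up sketch, assuming the rPython session above is still open:
stats <- python.get("stats")    # character vector, one element per line of mca-status output
python.exec("ssh.close()")      # close the paramiko connection when finished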

Running remote R session from the local instance of vim

I am using vim-r-plugin to send commands from vim to a running R session (and get back info about object list and auto-completions).
My aim is to get communication between a local vim and a remote session of R. I managed to send commands to R with the screen.vim plugin. However, this type of communication was one-way only.
For a while I thought that this was not really possible (or at least not very easy to achieve); however, I discovered one site: http://manuals.bioinformatics.ucr.edu/home/programming-in-r/vim-r
The author there mentions accessing remote R sessions from local vim multiple times:
"Flexible code sending options from local vim instances to R sessions on remote machines or among remote machines."
"The vim session can run on a local computer, while the R session can run on the same or a remote system."
However nowhere on that site is there any description telling how to achieve this exactly.
I also asked the same question directly on the Google group of vim-r-plugin, and the author replied with an option to run everything remotely: https://groups.google.com/forum/#!topic/vim-r-plugin/293VyyQntZ0. I managed to do that, but it's not what I am after and I didn't want to bother him any further.
So my question: is it possible? If not directly, maybe there are workarounds so that I don't have to duplicate my vim configuration on all the remote servers I am using?
Replying after more than half a year!
I tend to agree with others that it is a better idea to run everything remotely. However, this vimdoc may give you what you want to achieve:

Azure Virtual Network Point-to-Site (ex. Azure Connect) autoconnect

While Azure Connect is being retired and Azure Virtual Network provides a similar feature with better speed, I've noticed a few drawbacks.
Azure Connect was capable of maintaining the connection automatically, without the user even having to log in. Azure Virtual Network, however, requires the user to interactively connect/reconnect to the VPN. This makes it quite unusable in a production environment. Are there any ways to overcome this obstacle?
To solve this problem you can use rasdial.
The first time I used rasdial I ran into this problem:
"This function is not supported on this system." Don't get fooled by this message; it just means you didn't give the correct syntax.
rasdial "Your VPN name" /phonebook:"%userprofile%\AppData\Roaming\Microsoft\Network\Connections\Cm\Your-VPN\Your-VPN.pbk"
%userprofile% is the user profile you used to install the Azure VPN with.
Your-VPN is the name of the Azure VPN connection.
A simple method is to make a batch script:
SET VPN_NAME=azureVPN
:loop
rasdial %VPN_NAME% /PHONEBOOK:C:\Users\bas\AppData\Roaming\Microsoft\Network\Connections\Cm\%VPN_NAME%\%VPN_NAME%.pbk
timeout 10
goto loop
The result will be:
Connecting to test...
Verifying username and password...
Registering your computer on the network...
Successfully connected to test.
Command completed successfully.
After 10 seconds:
You are already connected to test.
Command completed successfully.
To have this script start when the computer starts, use the Task Scheduler.
This works; you just need to go to the folder and get the long name of the phonebook file from that folder. Also, AzureVPN (the name) should be replaced with the same thing without .pbk.

Best way to collect logs from a remote server

I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script, residing on the local server, to fire the commands on the target system. I then redirect the output of the expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable as the connection to the server fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I have implemented fedorqui's answer:
1) Created a (shell) script that runs the required commands on the target servers.
2) Deployed this script to all the servers.
3) Executed this script via expect from my local (central) server.
4) Finally, collected the logs individually from each server after successful completion, and processed them.
The solution has been working fine without a glitch till now.
