How to configure FastRWeb to use Rserve's built-in web server

I'm new to Rserve (and FastRWeb). I installed Rserve 1.7.0 because I want to use its built-in web server. As I already have Apache running on this machine, I want to run Rserve/FastRWeb on a custom port.
I ran cd /usr/local/lib/R/site-library/FastRWeb; sudo ./install.sh, which created the /var/FastRWeb/ directory tree.
I'm not seeing any configuration file that mentions port. The default /var/FastRWeb/code/rserve.conf looks like this:
socket /var/FastRWeb/socket
sockmod 0666
source /var/FastRWeb/code/rserve.R
control enable
I'm guessing that means it uses a Unix socket by default? So I think my question is: what exactly do I have to put in (and remove from) that file to, say, have it listen on TCP port 8888? And is there anything else I need to do? (I want to be able to connect from other machines, not just localhost.)
Possibly related: I've looked at /var/FastRWeb/web/index.html and it contains JavaScript that connects to /cgi-bin/R/. Is that path specific to Apache, or will it work as-is when using Rserve?

There is an explanation of setting the port in the Rserve 1.7.0 release announcement. So, at the top of rserve.conf, I added this line:
http.port 8888
Then I used the start script (as root) to start it.
This got me halfway as now http://127.0.0.1:8888/ works, but gives me a page that says:
Error in try(.http.request("/", NULL, NULL, c(48, 6f, 73, 74, 3a, 20, :
could not find function ".http.request"
The second half of the solution is to add this to the top of /var/FastRWeb/code/rserve.R:
library(FastRWeb)
.http.request <- FastRWeb:::.http.request
Then start things going by running /var/FastRWeb/code/start. There is no default handler, so you can test it with http://127.0.0.1:8888/info. More interesting examples are http://127.0.0.1:8888/example1.png (a chart) and http://127.0.0.1:8888/example2 (a mix of HTML and chart).
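If you want a page of your own, a FastRWeb handler is just an R script under /var/FastRWeb/web.R/ that defines a run() function; the file name hello.R below is my own minimal sketch, not part of the default install:
# /var/FastRWeb/web.R/hello.R -- served at http://127.0.0.1:8888/hello
run <- function(...) {
  out("<h1>Hello from FastRWeb</h1>")  # append HTML to the output buffer
  done()                               # finish the request and return the WebResult
}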
Note: I did not delete or edit any other configuration to get this working. That means the Unix socket is also still listening; if that is not needed, remove the socket and sockmod lines from the rserve.conf file.
If you want it listening on all IP addresses, not just localhost, then add remote enable to your rserve.conf file. NOTE: Make sure you understand the security consequences before opening your server to the world.
So, after those two changes, my /var/FastRWeb/code/rserve.conf file looks like:
http.port 8888
remote enable
source /var/FastRWeb/code/rserve.R
control enable
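With remote enable in place, a quick sanity check from another machine should return the info page (your-server is a placeholder for the real hostname or IP):
curl http://your-server:8888/info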

Did you see Jay Emerson's write-up from a while back about how to use Rserve as a backend for web-driven analysis? As I recall, one still uses Apache for the redirection, rather than an explicit port as you surmise here.
Jay's setup was very impressive. He used Rserve to provide mixed table/chart pages written via the grid package, all very slick and very fast, based on an immense data set (from a UN agency, or the World Bank, or something). But I can't find a link to that report right now...


Looking for SFTP-Stresser/Fuzzer

I am working for a company that provides file-share software for all sorts of protocols, such as FTP, SFTP, FTPS and so on. One of our customers is facing an issue with key auth and sporadic login problems.
Going through the code, I am pretty certain that the server collapses under too many requests at the same time. What I need right now is a simple tool to test exactly this situation: a simple SFTP fuzzer or stresser, sending invalid or broken auth attempts to the SFTP server.
I am not a developer but a technician, and instead of writing something myself (which would take forever) I would love to have a simple script or toolset ready to go... if there is one.
Ok, found one faster than I thought.
Steps:
Download Kali Linux (or any distro that contains Metasploit)
Fire up Kali Linux and put it in the same subnet as your SFTP server
Start Metasploit and use the SSH fuzzer auxiliary/fuzzers/ssh/ssh_version_2
Set RHOST and RPORT to the IP and port your server is listening on
Run the module and see what happens (a sample session follows below)
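In msfconsole the session looks roughly like this; the address 192.0.2.10 is a placeholder for your server's IP:
msfconsole
use auxiliary/fuzzers/ssh/ssh_version_2
set RHOST 192.0.2.10
set RPORT 22
run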

Disable internet access when calling java -jar

I'm testing six distinct .jar files that all need to handle the possibility of no online access.
Unfortunately, my files live on a network disk, so disabling the network connection or pulling the Ethernet cable does not work unless I move all the files to /tmp or /scratch and change my $HOME environment variable, all of which I'd rather not do as it ends up being a lot of work.
Is there a way to invoke java -jar and prevent the process from accessing the internet? I have not found any such flag in the man pages. Is there perhaps a UNIX way of doing this, as in:
disallowinternetaccess java -jar Foo.jar
Tell your Java program to access the network through a proxy. For all internet access this would be a SOCKS5 proxy.
java -DsocksProxyHost=socks.example.com MyMain
I believe that if no proxy is running you should get an appropriate exception in your program. If you need full control of what is happening, you can look into - and possibly modify - http://jsocks.sourceforge.net/
See http://docs.oracle.com/javase/7/docs/technotes/guides/net/proxies.html for details.
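A related trick (my own suggestion, not from the docs): point the SOCKS settings at a local port where nothing is listening, so every outbound connection fails immediately with a connect exception:
java -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=1 -jar Foo.jar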
Note: You can do this without any native Unix stuff, so this question fits perfectly fine on SO.
You just need to turn on the SecurityManager: -Djava.security.manager=default
See details: https://stackoverflow.com/a/4645781/814304
With this solution you can even control which resources are accessible and which are hidden.
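For finer control, a sketch of the policy-file variant; the file name no-net.policy and the grant entries are assumptions to adapt:
// no-net.policy: file access allowed, but no java.net.SocketPermission,
// so any network call throws an AccessControlException
grant {
  permission java.io.FilePermission "<<ALL FILES>>", "read,write";
};
Run it with:
java -Djava.security.manager -Djava.security.policy=no-net.policy -jar Foo.jar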

Serving two websites written in Google Go within a single VM

I have a VM from Digital Ocean. It currently has two domains linked to the VM.
I do not use any other web server, just Go's built-in net/http package. Performance-wise I like it, and I feel like I have full control over it.
Currently I am using a single Go program that has multiple websites built in.
http.HandleFunc("test.com/", serveTest)
http.HandleFunc("123.com/", serve123)
http.HandleFunc("/", serve123)
As they are websites, the Go program is using port 80.
The problem is that when I want to update only one website, I have to recompile the whole thing, as they are written in the same code.
1) Is there a way to make it hot-swappable with Go alone (without Nginx or Apache)?
2) What would be a standard best practice?
Thank you so much!
Well, you can do hot-swapping in Go, but I really wouldn't want to do that unless really necessary, as the added complexity isn't negligible (and I'm not talking about code).
You can get something close with a kind of proxy that sits in front of the program and does a graceful swap whenever your binary changes. The principle: the binary listens on one port, the proxy on another. When a new binary is ready, you run it on another port, make the proxy redirect to the new port, then gracefully shut down the old one.
There was a tool for that in Go that I can't remember the name of…
EDIT: not the one I had in mind, but a close call: https://github.com/rcrowley/goagain
Personal advice: use a reverse proxy for that; it's much simpler to do. My personal setup is to use h2o to terminate SSL, HTTP/2, etc., and send the requests to the various websites running in the background. Not only Go ones, though, but also PHP ones, a Gitlab instance, etc. It's much more flexible, and the performance penalty of the proxy is small…
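To make the reverse-proxy idea concrete, a minimal host-based sketch in Go; the backend ports 8081/8082 are assumptions, and each site would run as its own process on its own port:
// proxy.go: route requests by Host header to per-site backends
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backends := map[string]*url.URL{}
	for host, target := range map[string]string{
		"test.com": "http://127.0.0.1:8081",
		"123.com":  "http://127.0.0.1:8082",
	} {
		u, err := url.Parse(target)
		if err != nil {
			log.Fatal(err)
		}
		backends[host] = u
	}
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		target, ok := backends[r.Host]
		if !ok {
			http.NotFound(w, r)
			return
		}
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":80", handler))
}
With this in front, you can restart or replace one backend without touching the other.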

R - Connect via ssh and execute a command

I would like to connect via ssh to certain equipment in a network.
The requisites are:
It must run a command and capture the output of the ssh session in R (or in bash, or any other language, but I would prefer R)
It must be able to send a plain-text password (as this equipment hasn't been accessed before and can't be set up with an RSA keypair), so the ssh.utils package doesn't meet this requirement
sshpass can't be used, as I have noticed that it doesn't work for some devices I tested.
I've read all these posts but I can't find an effective way to do it: link 1, link 2, link 3, link 4
I know the requirements are hard to accomplish, but thank you for your effort!
EDIT:
Sorry if I didn't make myself clear. I work locally in R and I want to connect via ssh to 3000+ devices across my network. It is Ubiquiti equipment, and the only open ports are 80 and 22.
If ssh doesn't work, I will use the RSelenium package for R and extract info from port 80. But first I will try ssh on port 22, as it is a lot more efficient than driving an emulated browser.
The big problem with all this Ubiquiti equipment is that it requires a password to log in. That's why requisite No. 2 is needed. When I must enter a server that I know, I spend time setting up an RSA keypair so that I don't have to enter a password every time I connect, but it's impossible (or at least, for me it's impossible) to configure all 3000+ Ubiquiti devices with these keypairs.
That's also why I don't use SNMP, for example: the equipment may or may not have it activated, or the SNMP configuration may be wrong. I have to use something that's activated by default and consistent, and only port 80 and port 22 are open. I know all the equipment's usernames and passwords.
And sshpass is a UNIX/Linux utility, as this link explains, that works for servers but doesn't work for Ubiquiti equipment, as far as I've tested. So I can't use it.
The command I need the output from is mca-status. Simply entering it into the console prints some stats I would like to collect from the Ubiquiti equipment.
Correct me, please, if I am wrong in something I've posted. Thanks.
I think you have this wrong. I also have no idea what you are trying to say in point 2, and no idea what point 3 is supposed to say.
Now: ssh is an authentication mechanism allowing you (trusted) access to another machine and the ability to run a command. This can be as simple as
edd@max:~$ ssh bud Rscript -e '2+2'
[1] 4
edd@max:~$
where I invoke R (or rather, Rscript) on the machine 'bud' (my desktop) from a session on the machine 'max' (my server). That command could be anything including something which writes to temporary or permanent files. You can then retrieve those files via scp.
Authentication is handled independently -- on Unix we often use ssh-agent, which runs in the background and against which you authenticate on login.
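From R itself this boils down to a system() call; a hedged one-liner, assuming key-based auth is already in place (which, per the question, it isn't here):
# capture remote command output directly into a character vector
stats <- system("ssh user@device mca-status", intern = TRUE)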
Finally, I solved it using the rPython package and Python's paramiko module, as there was no way to do it purely in R.
library(rPython)  # bridge to Python; requires the paramiko module installed
# IP holds the device address, e.g. IP <- "192.168.1.20"
python.exec(python.code = c("import paramiko",
"ssh = paramiko.SSHClient()",
"ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())",  # accept unknown host keys
sprintf('ssh.connect("%s", username="USER", password="PASSWORD")', IP),
'stdin, stdout, stderr = ssh.exec_command("mca-status")',
'stats = stdout.readlines()'))  # capture the command output on the Python side
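To pull the captured output back into the R session, rPython's getter should do it (a small sketch):
stats <- python.get("stats")  # the Python list arrives as an R character vector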

Development Branch on a GoLang Server, multiple listenAndServes

I'm working on a development branch for my server. The idea is to use two clones of the same git branch in two different folders, which are more or less identical. The first folder will be live, the second will be my developer copy.
At the moment, I'm launching the live-branch Go project and then launching the developer-branch Go project, so that the entire site gets mirrored on "www.k.com/" and "www.k.com/developer/".
The problem is, this doesn't work. When I launch my second Go application, everything runs fine, but the last line, http.ListenAndServe, doesn't take effect. There's no error that I can see thrown by ListenAndServe either. This leaves my server functional, but the developer/ pages throw 404s, which means the second script isn't doing anything. This happens to whichever I run second: if I run developer then live, the developer section works and the normal site doesn't.
prefix := "/"
if utilities.Dev() {
	prefix = "/developer"
}
router := mux.NewRouter()
subrouter := router.PathPrefix(prefix).Subrouter()
subrouter.HandleFunc("/", controllers.HomeHandler).Methods("GET")
subrouter.HandleFunc("/", controllers.HomeSessionHandler).Methods("POST")
subrouter.HandleFunc("/team", controllers.TeamHandler).Methods("GET")
subrouter.HandleFunc("/contact", controllers.ContactHandler).Methods("GET")
http.Handle("/", router)
// note: the returned error is discarded here, which hides
// failures such as "bind: address already in use"
http.ListenAndServe(":80", nil)
So how can I use ListenAndServe in two different processes to combine routes from two projects? Surely there must be a way, and if not, how else would I go about creating a development environment like this?
Your problem is that you're trying to listen on port 80 from two different applications (I assume you're running these on the same machine?). To be honest, I'm not sure why you're not getting an error from http.ListenAndServe - you should definitely be getting a "bind: address already in use" error.
In terms of how to get this to work, the short answer is that you can't. At least not if you want to run this from two separate subprocesses.
The longer answer is that you can if you're willing to set up an HTTP proxy to intercept web traffic and then route it to the proper application (that is, you'd have both applications listening on ports other than 80, and your proxy would listen on port 80 and route to the proper application depending on the URL).
Another alternative would be to have your functionality implemented as packages. One would be the production package and one would be the development package, and your main package would ask each of these packages to register handlers. Then the main package could run http.ListenAndServe itself, and you'd still get to develop your production and development branches separately.
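A sketch of that last approach; the import paths and the Register functions are hypothetical names I'm assuming, not an existing API:
// main.go: one binary, each branch's package mounts its own routes
package main

import (
	"log"
	"net/http"

	"example.com/site/dev"  // hypothetical packages, each exposing
	"example.com/site/prod" // func Register(mux *http.ServeMux)
)

func main() {
	mux := http.NewServeMux()
	prod.Register(mux) // handlers under "/"
	dev.Register(mux)  // handlers under "/developer/"
	log.Fatal(http.ListenAndServe(":80", mux)) // log.Fatal surfaces bind errors
}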
