R - Removing all ability to connect to the internet

I have been trying to make the case for installing R at my place of employment. However, the IT department has come back with a risk assessment saying that R carries potential risks. After much debating (and brick-head-interaction), I suggested removing all internet connectivity from R.
I know (hope?) in principle that it can be done, since 'base' R is open source and can be edited. My questions are:
1. How do I disable all internet access in 'base' R by editing the source code?
2. Once 1. is done, will the lack of internet flow on to packages? That is, will all internet access be cut off from every package, no matter what the package is trying to do?
(Sorry, I'm a stats/maths guy, not so much a 'deep' programming dude.)

Wow, what a strict working environment! Yeah, data analysis can sometimes be very dangerous. :-)
Generally speaking, it is possible to use R without an internet connection; you just have to download the packages elsewhere and install them from local files (.zip binaries on Windows or .tar.gz source archives).
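For example, installing from a local file looks like this (a minimal sketch; the paths and package version here are illustrative, assuming the archives were downloaded on a connected machine and copied over):

    # Windows binary package (illustrative path)
    install.packages("C:/offline_pkgs/data.table_1.15.0.zip",
                     repos = NULL, type = "win.binary")

    # Source archive, any platform with build tools (illustrative path)
    install.packages("/tmp/offline_pkgs/data.table_1.15.0.tar.gz",
                     repos = NULL, type = "source")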
But adjusting the R source code would be unnecessary effort. I think your IT department should be able to block internet access for R in the firewall settings for the R applications (R, RGui and/or RStudio), which takes only a few minutes to set up. E.g. on Windows they can use Windows Defender Firewall with Advanced Security to block outbound connections from the R executables.
If they use another firewall or network rules, they should be able to set it up correctly and quickly as well.
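Once such a rule is in place, one quick way to verify it from inside R itself (a sketch; the URL is just an example) is to confirm that any attempt to reach the network fails:

    # Any network access should now error out
    tryCatch(
      download.file("https://cran.r-project.org", tempfile(), quiet = TRUE),
      error = function(e) message("Blocked, as intended: ", conditionMessage(e))
    )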

Related

Correct way to create a software install script which can manage dependencies

I'm currently working on university research software that uses statistical models to run calculations based on Item Response Theory. The entire source code is written in Go, and it communicates with an Rscript server to run scripts written in R and return the generated results. As expected, the software has some dependencies needed to work properly (one of them, as noted above, is having R/Rscript installed along with some of its packages).
Because I'm new to software development, I can't find a proper way to manage all these dependencies on Windows or Linux (though I'm prioritizing Windows right now). What I had in mind is some kind of script which checks whether, for example, R is properly installed and, if so, whether each required package is also installed. If everything checks out, the software could then be installed without further problems. A sketch of such a check appears below.
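For the R side, a minimal version of that check might look like this (the package names are just examples, not the actual dependencies):

    # Fail fast if required R packages are missing (names illustrative)
    required <- c("mirt", "jsonlite")
    missing  <- setdiff(required, rownames(installed.packages()))
    if (length(missing) > 0) {
      stop("Missing R packages: ", paste(missing, collapse = ", "))
    }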
My question is: what's the best way to do something like that, and is it possible to do the same for other dependencies, such as Python, Go and some of their libraries? I'm also open to hearing suggestions if installing programming languages locally on the machine isn't the proper way to manage software dependencies, or if there's a more convenient approach than writing a script.
Sorry if any needed information is missing; let me know and I'll add it.
Thanks in advance

Run R script and hide the actual code from user

I have created an R script that:
1. reads some data from a database,
2. makes some transformations, and
3. exports the modified table to a CSV file.
This code needs to run in a client's machine, but we need to "hide" the actual code from the user.
Are there any useful suggestions on how we can achieve that?
Up front
... it will be nearly impossible to deploy an R <something> to another computer in a way that prevents curious users from accessing the source code.
From a mailing list conversation in 2011, in response to "I would not like anyone to be able to read the code.",
R is an open source project, so providing ways for you to do this is not
one of our goals.
Duncan Murdoch https://stat.ethz.ch/pipermail/r-help/2011-July/282755.html
(Prof Murdoch was on the R Core Team and R Foundation for many years.)
Background
Several (many?) programming languages provide the ability to compile a script or program into an executable, the .exe you reference. For example, Python has tools like py2exe and PyInstaller. The tools range from merely packing the script into a zip-ball, perhaps obfuscating it along the way, to actually creating an exe with the script tightly embedded. (This part could use some more citations/research.)
This is usually good enough for many people: it keeps honest people honest. I say it that way because all you need to do is google phrases like decompile py2exe and you'll find tools, howtos, tutorials, etc., whose intent might honestly be to help somebody recover lost code. Regardless of the intentions, such protections will only slow curious users down.
Unfortunately, there are no tools that do this easily for R.
There are tools intended to make it easy for non-R-users to use R-based tools. For instance, RInno and DesktopDeployR both create Windows installers (no Mac/Linux) that support R or R/Shiny tools. But the intent of tools like these is to facilitate the IT tasks involved in getting a user/client to install and maintain R on their computer, not to protect the code that it runs.
Constrain R.exe?
There have been questions (elsewhere?) asking whether one can modify the R interpreter itself so that it no longer does everything it was designed to do. For instance, one could redefine base::print in such a way that functions' contents cannot be dumped, debug doesn't show the code it's about to execute, and perhaps several other protective steps.
There are a few problems with this approach:
There is always another way to get at a function's contents. Even if you stop print.default and the debugger from doing this, there are other ways to get at the functions (body(.), for one). How many of these rabbit holes do you think you can traverse and plug them all, with no adverse effect on normal R code?
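To see how shallow such protection is, consider this sketch: even with printing disabled, a function's source is trivially recoverable from the function object itself:

    f <- function(x) x^2 + 1   # stand-in for proprietary code
    body(f)                    # returns the expression: x^2 + 1
    deparse(f)                 # reconstructs the full source as text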
Even if you feel you can get to them all, are you encrypting the source .R files that contain your proprietary content? Okay, encrypting is good, except you need to decrypt the contents somehow. Many tools that have encrypted contents do so to thwart reverse-engineering, so they also embed (obfuscatedly, of course) the decryption key in the application itself. Just give it time, somebody will find and extract it.
You might think that you can download the key on start-up (not stored within the app), so that the code is decrypted in real-time. Sorry, network sniffers will get the key. Even if you retrieve it over https://, tools such as https://mitmproxy.org/ will render this step much less effective.
Let's say you have recompiled R to mask print and such, have a way to distribute the source code encrypted, and are able to decrypt it in a way that does not easily reveal the key (for full decryption of the source code files). It takes a dedicated user to wade through all of the above to get at the source code, but none of those steps may even be necessary: because R is GPL (see the next point), someone may legally compel you to release the changes you made to the R interpreter itself (the ones you put in place to prevent printing function contents). This doesn't reveal your source code, but it does reveal many of your methods, which might be sufficient. (Or consider just the risk of legal costs.)
R is GPL, and that means that anything that links to it is also "tainted" with the GPL. This means that anything compiled with Rcpp, for instance, will also be constrained/liberated (your choice) by the GPL. This includes thoughts of using RInside: it is also GPL (>= 2).
To do it without touching the GPL, you'd need to write your own interpreter, likely from scratch, without code from the R project.
Alternatives
Ultimately, if you want to release R-based utilities/apps/functionality to clients, the only sure-fire way to allow them to use your code without seeing it is to ... control the computers on which R will run (and source code will reside). I'll add more links supporting this claim as I find them, but a small start:
https://stat.ethz.ch/pipermail/r-help/2011-July/282717.html
https://www.researchgate.net/post/How_to_make_invisible_the_R_code
Options include anything that keeps the R code and R interpreter completely under your control. Simple examples:
Shiny apps, self-hosted (or on shinyapps.io if you trust their security); servers include Shiny Server (free and commercial versions), RStudio Connect (commercial only), and ShinyProxy. (This list is not exhaustive.)
Rplumber is an API server, not a Shiny server. The intent is single HTTP(S) endpoint calls, possibly authenticated, supporting whatever HTTP supports (POST, GET, etc.). It can be served in various ways; see its hosting page for options. (A minimal sketch follows this list.)
Rserve. I know less about this one; from what I've experienced, I've not had as much luck integrating it with enterprise systems (where, e.g., authentication and fine-grained control over authorization are important). It does allow near-raw access to R, so it might not be what you want (especially when the intent is to give it to clients who may not be strong R users themselves).
OpenCPU should be discussed, but not as a viable candidate for "protect your code". It is very similar to rplumber in that it provides HTTP endpoints, but it supports endpoints for every exported function in every package installed in its R library. This includes the base package, so it is not at all difficult to get the source code of any function that you could get on the R console. I believe this is a design feature, even if it is perfectly at odds with your intent to protect your code.
Anything that can call R or Rscript. This might be PHP or mod_python or similar. Any web-page serving language that can exec("/usr/bin/Rscript",...) can take its output and turn it around to the calling agent. (It might also be possible, for example, for a PHP front-end to call an opencpu endpoint that only permits connections from the PHP-serving host.)
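As a concrete illustration of the plumber option above, a minimal endpoint might look like this (the route name and calculation are, of course, placeholders):

    # plumber.R -- the client only ever sees HTTP responses,
    # never the R source running on the server

    #* Score the posted value (stand-in for the real calculation)
    #* @post /score
    function(x = 0) {
      x <- as.numeric(x)
      list(result = x^2 + 1)
    }

    # Serve it with: plumber::plumb("plumber.R")$run(port = 8000)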

RStudio and git workflow on a closed network

Is anyone aware of an R-based data analysis setup that works well in a research data centre with no internet access? I would like to use good reproducible analysis practices, but I do not have permission to upload files to a repository, for example. Also, RStudio preferences (such as the path to a local package repository) are not saved. So far I know:
The miniCRAN package helps with gathering all R package dependencies (see the sketch after this list).
local git version control does not require internet access. (Plus, the data centre technician may be willing to use it to release results and R scripts if the learning curve is outweighed by time savings in the long run.)
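A typical miniCRAN workflow, sketched with illustrative package names and paths: build a local CRAN-like repository on a connected machine, then carry it into the closed network:

    # On a connected machine: collect dependencies and build a local repo
    # (the target directory must already exist)
    library(miniCRAN)
    pkgs <- pkgDep(c("data.table", "ggplot2"), suggests = FALSE)
    makeRepo(pkgs, path = "C:/local_cran", type = "win.binary")

    # On the offline machine: install from the copied repository
    install.packages("data.table",
                     repos = "file:///C:/local_cran",
                     type = "win.binary")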
I am considering writing up a proposal for a pilot project, but I don't want to re-invent the wheel, especially if a more comprehensive platform already exists. I have looked but haven't found one, unless Microsoft's R Open essentially does this (it's not clear to me).
Thanks for your consideration!

Working on a remote R session

The R session I am working on is on a remote cluster due to memory constraints, and the data is stored remotely. I am therefore using Notepad++ to edit my files and just pasting them into my SSH session as I go along. What is the best way to integrate with the remote session to take advantage of code completion and other features available in editors like RStudio? Any best-practice suggestions for working over remote connections? I imagine this must be the case for most R users who work with large data sets.
From: http://www.sciviews.org/_rgui/projects/Editors.html
The famous Vim editor now also provides syntax highlighting for R. You can get a plugin to integrate R here. For Windows, there is another plugin using R-DCOM here. There is an alternate, multiplatform plugin here. Also look at the Vim web site, because there are other interesting resources there for R users (see here for an overview). There is also an R package to better integrate debugging facilities in Vim: adtdbg

Encrypting R script under MS-Windows

I have a bunch of R scripts which I am running on a Windows machine, and I want to ensure that the code remains unread by those not intended to see it. On a Linux box I could wrap the R code in a #! bash script and make an encrypted (and perhaps even limited-life) executable shell script. What are my options for doing something along similar lines under Windows?
My answer is a bit late, but I believe this is a good question. Unfortunately, I don't believe that there is a solution, or at least an easy one, at the present time.
This difficulty is common because, for most interpreted languages, including R, it is often possible to turn on logging and inspection of all commands being run, which can negate many tricks used to obfuscate the code.
For those who prefer to think of code being open == good, one should know that a common reason to obfuscate the code is if one is consulting with a client that hires multiple vendors. It is not uncommon for a client to take scripts from vendor A and ask vendor B why it doesn't work with their system. (This may be done by a low-level IT flunkie, rather than someone responsible for the NDA contracts.) If A & B are competitors, A's code has just been handed to B. When scripts == serious programs, then serious code has been given away.
The ways I've seen this addressed are:
1. Make a call to a compiled language, and use standard protections available there (a sketch follows this list).
2. Host the executable on a different server, and use calls to the server to execute the calculations. (In R, there are multiple server-side options.)
3. Use compiled (preprocessed / bytecode) code within the language.
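For option 1, a minimal Rcpp sketch (bearing in mind the GPL caveats discussed in the previous answer; the function name and body are placeholders) might look like:

    # Option 1 sketch: move the sensitive calculation into compiled code
    Rcpp::cppFunction('
      double secret_score(double x) {
        return x * x + 1;   // stand-in for the proprietary calculation
      }
    ')
    secret_score(3)   # 10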
Option 2 is actually easier and better when the code may be widely distributed, not just for IP reasons. A major advantage is that it lets you upgrade the code without having to go through the pain of a site-wide release process. If new libraries are needed, no problem - update the server.
Option 3 is done in Matlab with .p files, and can be done with py2exe for Python on Windows. In R, the newer bytecode compiler may be analogous, but I am not familiar enough with it to address any differences between .Rc files in the R context and .p files in the Matlab context. For more info on the compiler, see: http://www.inside-r.org/r-doc/compiler/compile
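For what it's worth, the built-in compiler package looks like this in use; note that byte-compilation is an optimization tool, not real protection, since the original code can still be recovered from the compiled object:

    library(compiler)
    f  <- function(x) x^2 + 1
    cf <- cmpfun(f)    # byte-compiled version
    cf(3)              # 10, identical behavior
    # cmpfile("script.R") writes a .Rc file that loadcmp() can run later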
Hosting computations on the server is great for working with unsophisticated users, because it is easier to iterate quickly in response to bugs or feature requests. The IP protection is simply a benefit.
This is not a specifically R-oriented strategy. (And it's a bit unclear what your constraints or goals really are anyway.) If you want a cross-platform encryption method, you could look into the open-source program TrueCrypt. It supports creating encrypted files that can be mounted as volumes on any machine that supports the volume formatting method. I have tested this across the Mac/PC divide, since the Mac can read FAT filesystems, but have no experience with how it might work across the Linux/PC chasm.
(Their TODO list for Windows includes: "Command line options for volume creation (already implemented in Linux and Mac OS X versions)". So I don't see any clear way to use this from within R without running the program from the OS.)
I don't think this is possible, because the R interpreter has to be able to decrypt and read the code in order to execute it, which means that whoever is using that interpreter will also be able to decrypt and read the code.
I am by no means an expert, so I reserve the right to be 100% wrong about that statement.
I believe the best solution is to ensure the value comes from the expertise and services provided by your company and its employees, not from keeping secrets.
Failing that, you could try separating the code into a client/server model. That way the client just sends data and receives results; they never have access to the code that runs on the server.
However, the scientist in me just said "that solution sucks and I would never trust results provided under such conditions".
