I have built a dashboard that has multiple calculations using R scripts, and now I want to publish it to our internal server. I get the following error message:
"This worksheet contains R scripts, which cannot be viewed on the target platform until the administrator configures an Rserve connection."
From what I understand, the administrator has to configure Rserve, but what about the rest of the packages I use? Should the administrator install those as well, and every time I use a new package, do I have to ask him to install that particular package?
You need to install the packages your scripts use on the server. Then make sure Rserve is started there and connect your Tableau workbook to the server running Rserve (using the Rserve connection in Tableau).
Tableau described the process pretty well:
http://kb.tableau.com/articles/knowledgebase/r-implementation-notes
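For reference, a minimal sketch of what the administrator would run on the server (the package names besides Rserve are placeholders for whatever your calculated fields actually call; Rserve's default port is 6311, which is what Tableau's Rserve connection expects):
# Run once on the server, as the user that will own the Rserve process:
# install Rserve plus every package the workbook's R scripts use.
install.packages(c("Rserve", "forecast", "zoo"))  # "forecast"/"zoo" are examples only

# Start Rserve on the default port (6311) and point Tableau's Rserve connection at this host.
library(Rserve)
Rserve()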
I'm quite new to CI/CD pipelines with Git and hope that my question is related to that functionality. I want to achieve the following:
Whenever a new commit (or tag) of a specific package is created on our company's local Git server, I want to execute a shell script on a remote server. This shell script should download the new version of the package and install it into the global R installation, making it available to the different processes that use that R environment. So far this is a manual step, but I would really like to automate the process. Is this possible?
Thanks!
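(For what it's worth, the install step that such a hook-triggered job would run could be as small as the sketch below; the git URL, the tag argument, and the use of the remotes package are assumptions, not an existing setup.)
# install_latest.R -- intended to be invoked by the hook, e.g.:
#   Rscript install_latest.R v1.2.3
tag <- commandArgs(trailingOnly = TRUE)[1]

# remotes::install_git() clones the repository at the given tag and installs it;
# lib = .Library targets the global (site-wide) R library.
remotes::install_git(
  url = "https://git.example.local/analytics/mypackage.git",  # placeholder URL
  ref = tag,
  lib = .Library
)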
We are evaluating migrating our current client/server application to .NET Core. The 3.0 release added the WinForms support we need for our client, but ClickOnce will not be supported.
Our solution is installed on-premises, and we need to include settings such as the address of the application server (among others). We dynamically create ClickOnce packages that are used to install and update the clients and that include these settings. This works like a charm today: the users install the client using the ClickOnce package, and every time we update the software we regenerate these packages at the customer's site, so they automatically get the new version with the right settings.
We are looking at MSIX as an alternative, but we have a question:
- Is it possible to add some external settings files to the MSIX package that will be used (deployed) when installing?
The package for the software itself could be statically generated, but how could we distribute the settings to the clients on first install / update?
MSIX has support for modification packages. This is close to what you want: the customization is done with a separate package that is installed after the main MSIX package of your app.
It cannot be installed at the same time as your main app: when you try to install the modification package, the OS checks whether the main app is already installed and rejects the installation if the main package is not found on the machine.
The modification package is a standalone package, installed in a separate location. Check the link I included; there is a screenshot of a PowerShell window where you can see that the install paths for the main package and the modification package are different.
At runtime (when the user launches the app) the OS knows these two packages are connected and merges their virtual file and registry systems, so the app "believes" all the resources are in one package.
This means you can update the main app and the modification package separately, and deploy them as you wish.
And if we update the modification package itself (without touching the main package), will it be reinstalled on all the clients that use it?
How do you deploy the updates? Do you want to use an auto-updater tool over the internet? Or are these users managed inside an internal company network, getting all app updates from tools like SCCM?
Modification packages were designed mostly for IT departments to use, and that is what I understood you need as well.
A modification package is deployed via SCCM or other such tools just like the main package; there are no differences.
For ISVs I believe optional packages are a better solution.
I'm developing a web application that should interact with some R scripts, and I would very much like to use OpenCPU. However, I cannot tell whether there is any way to make AJAX requests other than calling the R scripts or fetching their results.
I need to send R script descriptions and other data that can change, so it has to be done at runtime via requests to the server.
If anyone would be kind enough to briefly explain whether this is possible, I would be very grateful.
I'm assuming that by updating descriptions you mean the DESCRIPTION file that acts as the definition of the R package itself. When you change it or the contents of the R scripts, you will need to publish a new version to OpenCPU. A few notes from my experience, which seems similar to yours:
I have had some trouble getting scripts running inside OpenCPU to install packages that are on CRAN but not in the OpenCPU package list. OpenCPU can pull packages in from GitHub using the install_github function from the devtools package. You might have to install manually in your R script using install.packages if your script uses an R function that the public OpenCPU doesn't have. Something like the following might help if calling library or install.packages by itself doesn't work:
# devtools provides install_github for packages that are not on CRAN
library("devtools")
# Install from the OpenCPU CRAN mirror in case the package is missing on the server
install.packages("BIOMASS", repos = "https://cran.opencpu.org", method = "libcurl")
library("BIOMASS")
The list of installed packages on the public OpenCPU is here. If you are using another package that is available on CRAN, you will need to add it as a dependency in the Imports section of your R package's DESCRIPTION file. You can also use namespacing to avoid having to write PACKAGENAME::FUNCTIONNAME in your script.
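For example, a minimal sketch of how that looks (the DESCRIPTION entry reuses the BIOMASS package from above, and the wrapper function here is purely illustrative):
# In DESCRIPTION:
#   Imports: BIOMASS
# In an R source file; roxygen2 turns the tag into an importFrom() directive in NAMESPACE:

#' @importFrom BIOMASS computeAGB
estimate_agb <- function(diameter, wood_density, height) {
  # computeAGB() can now be called without the BIOMASS:: prefix
  computeAGB(D = diameter, WD = wood_density, H = height)
}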
If you publish to the public OpenCPU, you can only update your package once every 24 hours.
The pipeline I've found helpful is to develop my package, write some test code locally that uses it, and, once I'm fairly confident, push it to my GitHub repository. There I have a webhook set up to publish the new package to the public OpenCPU instance. Depending on how your development environment is set up, you might publish it manually instead. For example, if you are hosting your own OpenCPU instance, it would make more sense to publish to your instance instead of the public one.
The relevant section in the OpenCPU API documentation is where it talks about the R Package API. There's also documentation in the server manual about how to install packages if you are hosting your own OpenCPU.
If you happen to be using Meteor, my experience was that it is best to call OpenCPU's RESTful HTTP API directly. The JavaScript client package didn't work for me in Meteor, but the HTTP API works just fine.
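As a rough illustration of what such a direct call looks like (shown here from R with httr against the public OpenCPU server; stats/rnorm are stand-ins for your own package and function):
library(httr)

# The package API exposes functions at /ocpu/library/{package}/R/{function};
# arguments are sent as form fields and parsed as R expressions.
resp <- POST("https://cloud.opencpu.org/ocpu/library/stats/R/rnorm",
             body = list(n = 5, mean = 10),
             encode = "form")
stop_for_status(resp)

# The response body lists the session resources created by the call
# (the returned value, stdout, graphics, etc.), which you fetch with further GETs.
cat(content(resp, "text"))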
I'm trying to install IBM BPM 8.5.6 in a Linux environment with an Oracle database.
The steps I followed to install were:
1) Installed IBM Installation Manager using the BPM PFS.
2) Installed WAS and BPM Process Center using Installation Manager.
3) Created 3 Oracle schemas: for the shared DB, the Process Server, and the Performance server.
4) Configured the installation using the sample single-cluster Process Center file provided by IBM, using the BPMConfig -create option.
The installation was successful and I could see all the tables being created. Then I started it using the BPMConfig -start option. That too completed successfully.
I didn't change any ports, so it should be using all the default ports. Afterwards, when I try to access a console such as http://servername:9080/ProcessAdmin or http://servername:9080/ProcessCenter, I get a 404 error message: Error 404: com.ibm.ws.webcontainer.servlet.exception.NoTargetForURIException: No target servlet configured for uri: /ProcessAdmin
Do I have to do anything else? What is the starting point or default URL to get to Process Portal or the admin console? The WAS admin console is working fine.
Any help is appreciated. Thanks.
Since you probably used a custom installation, you have to properly initialize the data by running the bootstrap command (the .sh variant on Linux, .bat on Windows):
bootstrapProcessServerData.sh -clusterName cluster_name
I use Excel + R on Windows on a rather slow desktop. I have full admin access to a very fast Ubuntu-based server. I am wondering: how can I remotely execute commands on the server?
What I can do is save the needed variables with saveRDS, load them on the server with readRDS, execute the commands on the server, then save the results and load them back on Windows.
But this is all very manual and interactive, and can hardly be done on a regular basis.
Is there any way to do this directly from R, along the lines of:
Connect to the server, e.g. via SSH,
Transfer the needed objects (which can be specified manually),
Execute the given code on the server and wait for it to finish,
Get the result back.
I could run all of R remotely, but that would create network-related problems. Most R commands I run from within Excel are very fast but data-hungry. I just need to remotely execute a few specific commands, not all of them.
Here is my setup.
Copy your code and data over using scp. (I use GitHub, so I clone my code from GitHub. This has the benefit of making sure my work is reproducible.)
(Optional) Use sshfs to mount the remote folder on your local machine. This allows you to edit the remote files using your local text editor instead of the ssh command line.
Put everything you want to run in an R script (on the remote server), then run it over ssh in R batch mode.
There are a few options; the simplest is to exchange SSH keys to avoid entering SSH/SCP passwords manually all the time. After that, you can write a simple R script that will:
Save the necessary variables into a data file,
Use scp to upload the data file to the Ubuntu server,
Use ssh to run a remote script that processes the data you have just uploaded and stores the result in another data file,
Use scp again to transfer the results back to your workstation.
You can use R's system command to run scp and ssh with the necessary options.
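A minimal sketch of that round trip, assuming passwordless SSH keys are in place and that the hostname, paths, and remote script name (run_job.R) are placeholders:
# x and params stand for whatever objects the remote computation needs.
saveRDS(list(x = x, params = params), "job_input.rds")
system2("scp", c("job_input.rds", "user@fast-server:/tmp/job_input.rds"))

# run_job.R on the server is expected to read /tmp/job_input.rds,
# do the heavy work, and write /tmp/job_output.rds.
system2("ssh", c("user@fast-server", "Rscript /tmp/run_job.R"))

system2("scp", c("user@fast-server:/tmp/job_output.rds", "job_output.rds"))
result <- readRDS("job_output.rds")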
Another option is to set up a cluster worker on the remote machine; then you can export the data using clusterExport and evaluate expressions using clusterEvalQ and clusterApply.
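A minimal sketch with the parallel package, again assuming passwordless SSH and using placeholder host, user, and object names:
library(parallel)

# Start one worker on the remote machine over SSH (R must be installed there).
cl <- makePSOCKcluster("fast-server", user = "myuser")

# Ship the needed objects from the local global environment to the worker,
# then evaluate the heavy expression remotely (somepkg / heavy_fun are placeholders).
clusterExport(cl, c("x", "params"))
result <- clusterEvalQ(cl, { library(somepkg); heavy_fun(x, params) })[[1]]

stopCluster(cl)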
There are a few more options:
1) You can do this directly from R by using Rserve. See: https://rforge.net/
Keep in mind that Rserve can accept connections from R clients; see, for example, how to connect to Rserve with an R client.
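For instance, a minimal sketch with the RSclient package, assuming Rserve is already running on the Ubuntu machine with remote connections enabled and listening on its default port (the hostname and objects are placeholders):
library(RSclient)

# Connect to the Rserve instance on the remote machine (default port 6311)
conn <- RS.connect(host = "fast-server")

# Push a local object to the server and evaluate an expression there
RS.assign(conn, "x", x)
result <- RS.eval(conn, sum(x^2))

RS.close(conn)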
2) You can set up a cluster on your Linux machine and then use those cluster facilities from your Windows client. The simplest is to use snow, https://cran.r-project.org/package=snow; also see foreach and the many other cluster libraries.