Connection between R and QlikView with opencpu

I want to create a connection between R and QlikView using the 'opencpu' R package.
I've seen some examples, but I do not understand how to use the opencpu R package to create the connection between R and QlikView.

With its version 3.1 release, the Qlik engine will be able to pass data in and out of both R and Python, including analysis context about a data set.
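In the meantime, a minimal sketch of the R side (the port number and the example function are illustrative assumptions): the opencpu package can start a single-user server, and QlikView can then call it over plain HTTP, e.g. through Qlik's REST connector.

    library(opencpu)
    ocpu_start_server(port = 5656)  # serves the OpenCPU HTTP API on localhost

    # QlikView (or curl, for testing) can now POST to the API, e.g.:
    #   curl http://localhost:5656/ocpu/library/stats/R/rnorm/json -d "n=10"
    # which runs stats::rnorm(n = 10) and returns the result as JSON.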

Related

Keras in R and tensorflow backend

I need to limit the number of threads running my neural network, using the instructions here: https://github.com/keras-team/keras/issues/4740.
However, I am using keras in R, and I am not sure how to access the TensorFlow implementation used by the keras that I load in R with
library("keras")
I can call library(tensorflow); however, isn't that loading a copy of the library unrelated to the one loaded by keras? I cannot find any functionality in R that loads the TensorFlow backend associated with keras in RStudio, nor any links to anyone doing the same.
Can someone suggest a way to do the operations in the link from R, given keras loaded with library("keras")? (In the link, the TensorFlow backend for keras is used to set the number of threads per core.) It would also be good to know how to check which version keras loads into R.
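For reference, a minimal sketch of what this might look like from R, assuming a TF 1.x backend (which is what the linked issue targets). Because both keras and tensorflow go through reticulate, the tf object exported by the tensorflow package binds to the same Python TensorFlow that keras uses.

    library(keras)
    library(tensorflow)  # via reticulate, this is the same TF that keras drives

    # TF 1.x-style session config, as in the linked GitHub issue (assumption: a
    # 1.x backend; TF 2.x uses tf$config$threading$set_*_parallelism_threads())
    config <- tf$ConfigProto(intra_op_parallelism_threads = 1L,
                             inter_op_parallelism_threads = 1L)
    k_set_session(tf$Session(config = config))

    # Check which versions keras/tensorflow actually loaded
    keras::keras_version()
    tensorflow::tf_version()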

How to run a neural network in R and connect to an OPEN instance of Excel?

I have Excel VBA code which runs in an Excel instance. While running the VBA code, I would like to call R, send input to a neural network in R, and read the outcome back into the Excel instance.
For this purpose I am looking for an R package to connect R with an OPEN instance of Excel.
I have investigated readxl, WriteXLS, xlsx and XLConnect, although these packages seem to need a saved Excel file to read from and a closed Excel file to write back to.
Does someone know if there is an R package to connect R with an OPEN instance of Excel?
Thanks a lot!
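One package worth investigating is RDCOMClient, which is Windows-only but can attach to a running COM server such as an open Excel instance. A rough sketch (the cell addresses and the model object fit are illustrative assumptions):

    library(RDCOMClient)

    xl <- getCOMInstance("Excel.Application")          # attach to the open Excel
    sheet <- xl$ActiveWorkbook()$Worksheets(1)

    x <- sheet$Cells(1, 1)$Value()                     # read an input cell
    pred <- predict(fit, newdata = data.frame(x = x))  # 'fit': your neural network
    sheet$Cells(1, 2)[["Value"]] <- pred               # write back; Excel stays open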

R: Is there a way to get the sessionInfo/packages of another R session?

Imagine that I open two R sessions.
In the first (R1) I load the package dplyr.
Now, my question is: is there a way to get the sessionInfo/packages loaded in R1 through R2?
UPDATE:
I am writing an R help system in the Atom editor. Atom currently does not support R's function help, so I am creating one. To find the help for a function you need to search the packages the function lives in, and the best way is to know which packages are loaded in your current R session. That is my difficulty. One way around this is to forget the loaded packages and search all installed packages instead, but that is too slow if you have a lot of packages installed.
So in my R script I have a line with this code:
pkg <- .packages() # all packages loaded in the current session
But when I run this script (R1) from another script (R2), it does not get the packages loaded in the currently running script (R2), only those of the script itself (R1).
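To make the distinction concrete, a small sketch (nothing here is specific to Atom):

    pkg <- .packages()                      # only packages attached in *this* session
    inst <- rownames(installed.packages())  # every installed package; searching all
                                            # of these for a help topic is the slow fallback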
Use the Services API to interact with Hydrogen
The following page details interacting with other packages in Atom: http://flight-manual.atom.io/behind-atom/sections/interacting-with-other-packages-via-services/
Hydrogen is an interface to a Jupyter kernel. It maintains the session with the kernel, and it currently has a plugin API which you could use to get the connection information for the backing kernel: https://nteract.gitbooks.io/hydrogen/docs/PluginAPI.html. Using that, you could send your call to .packages().
There is also r-exec, but I believe that's Mac only. In that case, you could get the

Run an R model using SparkR

Thanks in advance for your input. I am a newbie to ML.
I've developed an R model (using RStudio on my local machine) and want to deploy it on a Hadoop cluster that has RStudio installed. I want to use SparkR to leverage high-performance computing. I just want to understand the role of SparkR here.
Will SparkR enable the R model to run its algorithm within Spark ML on the Hadoop cluster?
OR
Will SparkR enable only the data processing, while the ML algorithm still runs within the context of R on the Hadoop cluster?
Appreciate your input.
These are general questions, but they actually have a very simple and straightforward answer: no (to both); SparkR will do neither.
From the Overview section of the SparkR docs:
SparkR is an R package that provides a light-weight frontend to use Apache Spark from R.
SparkR cannot even read native R models.
The idea behind using SparkR for ML tasks is that you develop your model specifically in SparkR (and if you try, you'll also discover that it is much more limited in comparison to the plethora of models available in R through the various packages).
Even conveniences like, say, confusionMatrix from the caret package are not available, since they operate on R data frames and not on Spark ones (see this question & answer).
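For illustration, a minimal sketch of what "developing the model in SparkR" means, using SparkR's own MLlib wrappers (the iris example is an illustrative assumption; note that SparkR rewrites dots in column names as underscores):

    library(SparkR)
    sparkR.session()

    df <- createDataFrame(iris)     # an R data frame becomes a Spark DataFrame
    model <- spark.glm(df, Sepal_Length ~ Sepal_Width + Species, family = "gaussian")
    summary(model)

    preds <- predict(model, df)
    head(collect(preds))            # collect() brings results back to an R data frame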

Efficient switching between 32bit and 64bit R versions

I am working with large datasets that are available in *.mdb (i.e. Access database) format. I am using the RODBC R package to extract data from the Access database. I figured out that I have 32-bit Office installed on my machine, and since I do, it seems I can only use 32-bit R to connect to the Access database using RODBC. After reading the data with 32-bit R and doing some exploratory analysis (plotting data, summary / regression), I ran into memory issues which I didn't get while using 64-bit R.
Currently, I am using RStudio to run all my code, and I can change the version of R that I use from Options >> Global Options >> R version:
However, I don't want to switch to 32-bit to read the Access database using RODBC and then go back into RStudio to revert to 64-bit for the analysis. Is there an automatic solution which allows me to specify 32-bit or 64-bit? Can we do that using a batch file? If anyone could shed some light, that would be great.
Write the code that extracts the data as one R script. Have that script save the output data that you need for your analysis to an .RData file.
Write the code that runs your analyses to be run in 64-bit R. Using the answer found here, run the extraction script using 32-bit R. Then, the next line can read the data in from the .RData file. If needed, use Sys.sleep to have your program wait a few seconds for the write to complete.
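A minimal sketch of that pattern (the Windows paths and file names are assumptions; adjust them to your installation):

    # extract.R runs under 32-bit R: it reads the .mdb via RODBC and saves mydata.RData
    rscript32 <- "C:/Program Files/R/R-3.5.1/bin/i386/Rscript.exe"  # 32-bit Rscript
    system2(rscript32, args = "extract.R")

    Sys.sleep(2)            # give the write a moment to finish, if needed
    load("mydata.RData")    # continue the analysis in this 64-bit session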
