Unable to use RJDBC at shinyapps.io - r

I have written a Shiny app which runs perfectly on my local machine. I have used RJDBC to connect to the DB2 database in IBM Cloud. The code is as follows.
#Load RJDBC
dyn.load('/Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/Home/lib/server/libjvm.dylib')
# dyn.load('/Users/parthamajumdar/Documents/Solutions/PriceIndex/libjvm.dylib')
library(rJava)
library(RJDBC)
Since the path is hard-coded, I copied the file libjvm.dylib to the project directory and pointed to that. When I do this, R gives a fatal error.
I removed the absolute path, replaced it with "./libjvm.dylib", and deployed the application to the shinyapps.io website. When I run the program, it gives a fatal error.
#Values for your database connection
dsn_driver = "com.ibm.db2.jcc.DB2Driver"
dsn_database = "BLUDB" # e.g. "BLUDB"
dsn_hostname = "dashdb-entry-yp-lon02-01.services.eu-gb.bluemix.net" # replace with your hostname, e.g. "Db2Warehouse01.datascientistworkbench.com"
dsn_port = "50000" # e.g. "50000"
dsn_protocol = "TCPIP" # i.e. "TCPIP"
dsn_uid = "<UID>" # e.g. userid
dsn_pwd = "<PWD>" # e.g. password
#Connect to the Database
#jcc = JDBC("com.ibm.db2.jcc.DB2Driver", "/Users/parthamajumdar/lift-cli/lib/db2jcc4.jar");
jcc = JDBC("com.ibm.db2.jcc.DB2Driver", "db2jcc4.jar");
jdbc_path = paste("jdbc:db2://", dsn_hostname, ":", dsn_port, "/", dsn_database, sep="");
conn = dbConnect(jcc, jdbc_path, user=dsn_uid, password=dsn_pwd)
Similarly, I copied the file "db2jcc4.jar" to my local project directory. If I point to this copy in the local project directory on my machine, the program works. However, when I deploy to shinyapps.io, it gives a fatal error.
Please let me know what I need to do so that the application runs properly on the shinyapps.io website.
The error is as follows when I run the application from Shiny server:
Attaching package: ‘lubridate’
The following object is masked from ‘package:base’:
date
Loading required package: nlme
This is mgcv 1.8-23. For overview type 'help("mgcv-package")'.
Error in value[[3L]](cond) :
unable to load shared object '/srv/connect/apps/ExpenseAnalysis/Drivers/libjvm.dylib':
/srv/connect/apps/ExpenseAnalysis/Drivers/libjvm.dylib: invalid ELF header
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
Execution halted

The "invalid ELF header" points at the root cause: libjvm.dylib is a macOS shared library, while the shinyapps.io servers run Linux and expect an ELF shared object, so that file can never be loaded there. What works for me is the following, and it is independent of the OS.
Create your own R package that contains the file you need somewhere in the extdata folder. As an example, your package could be yourpackage and the file would be something like extdata/drivers/mydriver.lib. In the package source this is typically stored under inst/extdata/drivers. See http://r-pkgs.had.co.nz/inst.html for details.
Store this package on GitHub; if you want privacy, you will need to work out how to grant an access token.
Use the devtools package to install it. The command would be something like this: devtools::install_github("you/yourpackage", auth_token = "youraccesstoken"). Do this once before deploying to shinyapps.io, and ensure that you also do library(yourpackage). The deployment process will work out that it needs to fetch the package from GitHub.
Use the following R code to find the file.
system.file('extdata/drivers/mydriver.lib', package = 'yourpackage'). This will give you the full path to the file, and you can use it.
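Putting it together, a minimal sketch, assuming the DB2 driver jar db2jcc4.jar from the question is the file bundled under inst/extdata/drivers, and reusing jdbc_path, dsn_uid, and dsn_pwd from the code above:
library(yourpackage)   # installed from GitHub in the previous step
driver_path <- system.file("extdata/drivers/db2jcc4.jar", package = "yourpackage")
jcc <- RJDBC::JDBC("com.ibm.db2.jcc.DB2Driver", driver_path)
conn <- DBI::dbConnect(jcc, jdbc_path, user = dsn_uid, password = dsn_pwd)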

Related

R CMD check fails with ubuntu when trying to download file, but function works within R

I am writing an R package and one of its functions downloads and unzips a file from a link (it is not exported to the user, though):
download_f <- function(download_dir) {
  utils::download.file(
    url = "https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php",
    destfile = file.path(download_dir, "fines.rar"),
    mode = 'wb',
    method = 'libcurl'
  )
  utils::unzip(
    zipfile = file.path(download_dir, "fines.rar"),
    exdir = file.path(download_dir)
  )
}
This function works fine for me when I run it within some other function to build an example in a vignette.
However, with R CMD check in a GitHub Action it fails consistently on Ubuntu 16.04, release and devel. It [says][1]:
Error: Error: processing vignette 'IBAMA.Rmd' failed with diagnostics:
cannot open URL 'https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php'
--- failed re-building ‘IBAMA.Rmd’
SUMMARY: processing the following file failed:
‘IBAMA.Rmd’
Error: Error: Vignette re-building failed.
Execution halted
Error: Error in proc$get_built_file() : Build process failed
Calls: <Anonymous> ... build_package -> with_envvar -> force -> <Anonymous>
Execution halted
Error: Process completed with exit code 1.
When I run devtools::check() it never finishes, staying at "creating vignettes" forever. I don't know whether these problems are related, though, because there are other vignettes in the package.
The R CMD checks pass on macOS and Windows. I've tried switching the "mode" and "method" arguments of utils::download.file, but to no avail.
Any suggestions?
[1]: https://github.com/datazoompuc/datazoom.amazonia/pull/16/checks?check_run_id=2026865974
The download fails because libcurl tries to verify the web server's certificate but can't.
I can reproduce this on my system:
trying URL 'https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php'
Error in utils::download.file(url = "https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php", :
cannot open URL 'https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php'
In addition: Warning message:
In utils::download.file(url = "https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php", :
URL 'https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php': status was 'SSL peer certificate or SSH remote key was not OK'
The server does not allow you to download over HTTP but redirects to HTTPS, so the only thing to do now is to tell curl not to check the certificate and to accept what it is getting.
You can do this by passing the -k flag to curl:
download_f <- function(download_dir) {
  utils::download.file(
    url = "https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php",
    destfile = file.path(download_dir, "fines.rar"),
    mode = 'wb',
    method = 'curl',
    extra = '-k'
  )
  utils::unzip(
    zipfile = file.path(download_dir, "fines.rar"),
    exdir = file.path(download_dir)
  )
}
This also produces a download progress bar; you can silence it by setting extra to '-k -s'.
Note that this opens you up to a man-in-the-middle attack. (You may already be under such an attack; there is no way to tell without verifying the current certificate with someone you trust on the other side.)
So you could implement an extra check, e.g. compute the sha256 sum of the downloaded file and see whether it matches what you expect before proceeding.
library(openssl)   # provides sha256()
myfile <- file.path(download_dir, "fines.rar")   # the file downloaded above
hash <- sha256(file(myfile))
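A minimal sketch of that comparison, assuming the digest package; the expected value is a placeholder you would record once from a copy you trust:
# Known-good SHA-256 of the file, recorded from a trusted copy (placeholder)
expected <- "replace-with-the-trusted-hex-digest"
actual <- digest::digest(file.path(download_dir, "fines.rar"), algo = "sha256", file = TRUE)
if (!identical(actual, expected)) {
  stop("SHA-256 mismatch: the downloaded file is not the one you expected")
}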

Uploading R Shiny app that reads external CSV data file

I have created an R Shiny app. It seems to run fine on my computer. I now need to upload it so others can use it. I created an account at https://www.shinyapps.io/ and ran the following two lines within the default R GUI:
library(rsconnect)
rsconnect::deployApp('C:/Users/mark_/Documents/simple_RShiny_files/surplus10')
I get the following warning, where line 4 of app.R reads the external CSV file from the subfolder data, followed by the error below. The app.R file is in the folder surplus10:
The following potential problems were identified in the project files:
-----
app.R
-----
The following lines contain absolute paths:
4: policy.data <- read.csv('C:/Users/mark_/Documents/simple_RShiny_files/surplus10/data/policy.outputs_June6_2020.csv', header = TRUE, stringsAsFactors = FALSE)
Paths should be to files within the project directory.
Do you want to proceed with deployment? [Y/n]: Y
Preparing to deploy application...DONE
Uploading bundle for application: 2430142...--- Please select a CRAN mirror for use in this session ---
DONE
Deploying bundle: 3246501 for application: 2430142 ...
Waiting for task: 744319366
building: Building image: 3633169
building: Fetching packages
building: Installing packages
An error has occurred
The application failed to start (exited with code 1).
Warning in file(file, "rt") :
cannot open file 'C:/Users/mark_/Documents/simple_RShiny_files/surplus10/data/policy.outputs_June6_2020.csv': No such file or directory
Error in value[[3L]](cond) : cannot open the connection
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
Execution halted
I imagine that once the app is uploaded, the path to the data file is no longer valid. If that is the case, which path should I use to read the CSV file? Which path do I use in the deployApp statement? I have never attempted to upload an app before and do not know what a project directory is. Sorry for my beginner's confusion.
When I changed line 4 in the app.R file that reads the external CSV data file to:
policy.data <- read.csv('data/policy.outputs_June6_2020.csv', header = TRUE, stringsAsFactors = FALSE)
the app uploaded without error. deployApp bundles the whole project folder, and on the server the app runs with that folder as its working directory, so relative paths like data/... resolve correctly while absolute local paths do not.
I did not change my original path in the deployApp statement:
library(rsconnect)
rsconnect::deployApp('C:/Users/mark_/Documents/simple_RShiny_files/surplus10')
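For reference, the deployed bundle is then laid out like this (structure inferred from the paths above):
surplus10/
├── app.R
└── data/
    └── policy.outputs_June6_2020.csv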

Requested version of Python cannot be used as another version has been initialized on shinyapps.io

I am getting the following error message when loading my web app on shinyapps.io:
ERROR: The requested version of Python
('~/.virtualenvs/python_environment/bin/python') cannot be used, as
another version of Python ('/usr/bin/python3') has already been
initialized. Please restart the R session if you need to attach
reticulate to a different version of Python.
Error in value[[3L]](cond) :
failed to initialize requested version of Python
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
In addition: Warning message:
In py_initialize(config$python, config$libpython, config$pythonhome, :
'.Random.seed[1]' is not a valid integer, so ignored
Execution halted
The app only loads after I refresh the web page.
Here is part of my code:
library(shinyWidgets)
library(tidyverse)
library(reticulate)
library(DT)
library(data.table)
virtualenv_create(envname = "python_environment",python="python3")
virtualenv_install("python_environment", packages =c('pandas','catboost'))
use_virtualenv("python_environment",required = TRUE)
When you run library(reticulate), the reticulate package will try to initialize a version of Python, which may not be the version you intend to use. To avoid this, run your set-up commands (in a fresh R session) without attaching the full reticulate library, using the :: syntax like this:
reticulate::virtualenv_create(envname = "python_environment", python = "python3")
reticulate::virtualenv_install("python_environment", packages = c("pandas", "catboost"))
reticulate::use_virtualenv("python_environment", required = TRUE)
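To confirm that the virtualenv took effect, reticulate::py_config() (part of reticulate) reports which interpreter was actually bound:
# Should report ~/.virtualenvs/python_environment/bin/python as the active interpreter
reticulate::py_config()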

Knitting: Error: pandoc document conversion failed with error 61

Problem
Our End User fails to produce html files, gets this error:
Error: pandoc document conversion failed with error 61
Execution halted
Troubleshooting performed
We set up the proxy in response to a previous error message.
This previous error was:
pandoc.exe: Could not fetch \\HHBRUNA01.hq.corp.eurocontrol.int\alazarov$\R\win-library\3.5\rmarkdown\rmd\h\jquery\jquery.min.js
ResponseTimeout
Error: pandoc document conversion failed with error 67
Execution halted
For this we added "self_contained: no" to Rprofile.site.
We also tried "Self_Contained: yes" .
Current Error Message
Could not fetch http://?/UNC/server.contoso.int/username$/R/win-library/3.5/rmarkdown/rmd/h/default.html
HttpExceptionRequest Request {
host = ""
port = 80
secure = False
requestHeaders = []
path = "/"
queryString = "?/UNC/server.contoso.int/username$/R/win-library/3.5/rmarkdown/rmd/h/default.html"
method = "GET"
proxy = Just (Proxy {proxyHost = "pac.contoso.int", proxyPort = 9512})
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
(InvalidDestinationHost "")
Error: pandoc document conversion failed with error 61
Execution halted
I had the same issue on Windows 10, with the user library path located on a network drive.
Could not fetch http://?/UNC/...
Error: pandoc document conversion failed with error 61
The solution was to run R as administrator, remove the package 'rmarkdown', and reinstall it.
In addition to the answer by Malte: when you do not have administrator rights, you can simply change the library directory to one where you have full rights, on C: for example. The default option is your network folder ("?/UNC/server.contoso.int/username$/R/win-library/3.5"), where you do not have sufficient rights, so R cannot knit the markdown file.
In RStudio, click on Tools > Install Packages... Under "Install to library" you can see the default option (in your case it should be "?/UNC/server.contoso.int/username$/R/win-library/3.5"). The second option here should be "C:/Program Files/R/R-3.6.2/library".
To change this order, i.e. to make "C:/Program Files/R/R-3.6.2/library" the default folder, use the following code (execute it in a new R file):
bothPaths <- .libPaths()                    # extract both paths
bothPaths <- c(bothPaths[2], bothPaths[1])  # swap the order
.libPaths(bothPaths)                        # apply the new order
After that, you might have to install the rmarkdown package again. This time, it will be installed directly into the "C:/Program Files/R/R-3.6.2/library" folder.
Now knitting should work, because R will load the package straight from a folder where you have full rights.
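After reordering, new packages go to the first entry of .libPaths() by default; if you prefer, you can also name the target library explicitly, as in this sketch:
# Install rmarkdown directly into the first (now local) library path
install.packages("rmarkdown", lib = .libPaths()[1])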
And the issue was resolved: someone had changed a rule on the server hosting the files without documenting or logging it.

spark-warehouse error in R

I have installed Spark (spark-2.0.0-bin-hadoop2.7) on my Windows 10 PC and I want to use the SparkR package in R.
But when I run the following example code:
library(SparkR)
# Initialize SparkSession
sparkR.session(appName = "SparkR-DataFrame-example")
# Create a simple local data.frame
localDF <- data.frame(name=c("John", "Smith", "Sarah"), age=c(19, 23, 18))
# Convert local data frame to a SparkDataFrame
df <- createDataFrame(localDF)
it throws an exception:
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/Users/Louagyd/Desktop/EDU%20%202016-2017/Data%20Analysis/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95) at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer(Session
Any ideas how to fix that?
I was getting the same error too, and there was no help to be found on the net. However, I solved it with the steps below:
Prep Work
Download winutils.exe from here and install it.
Create a folder called "C:\tmp\hive". This folder will be used as a warehouse directory.
In a command prompt (cmd), run winutils.exe chmod 777 \tmp\hive. Ensure that winutils.exe is on your PATH; if not, add it in the environment variables.
Ensure that Spark is installed on your system. In my case, it was installed under the "C:/spark-2.0.0-bin-hadoop2.7" folder.
Main
After opening RStudio, create a new project in any directory (say, "C:/home/Project/SparkR").
In RStudio's script window, run the following commands in this order:
# Set Working Dir - The same folder under which R Project was created
setwd("C:/home/Project/SparkR")
# Load Env variable SPARK_HOME, if not already loaded.
# If this variable is already set in Window's Env variable, this step is not required
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
  Sys.setenv(SPARK_HOME = "C:/spark-2.0.0-bin-hadoop2.7")
}
# Load SparkR library
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
# Create a config list setting the driver memory and the warehouse directory to use at runtime
sparkConf <- list(spark.driver.memory = "2g", spark.sql.warehouse.dir = "C:/tmp")
# Create SparkR Session variable
sparkR.session(master = "local[*]", sparkConfig = sparkConf)
# Load existing data from SparkR library
DF <- as.DataFrame(faithful)
# Inspect loaded data
head(DF)
With the above steps, I could successfully load the data and view them.
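One design note: spark.sql.warehouse.dir takes effect only when the session is created, so if a session already exists, stop it first (sparkR.session.stop() is part of SparkR) and create it again:
# Stop any existing session so the new warehouse config takes effect
sparkR.session.stop()
sparkR.session(master = "local[*]", sparkConfig = sparkConf)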
