Sparklyr connection to YARN cluster fails

I am trying to connect to a Spark cluster using sparklyr in yarn-client mode.
In local mode (master = "local") my Spark setup works, but when I try to connect to the cluster, I get the following error:
Error in force(code) :
Failed during initialize_connection: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
(see full error log below)
The setup is as follows: the Spark cluster (hosted on AWS) was set up with Ambari and runs YARN 3.1.1, Spark 2.3.2, HDFS 3.1.1, and some other services, and it works with other platforms (i.e., non-R/Python applications set up with Ambari). Note that setting up the R machine through Ambari is not possible, as it runs on Ubuntu while the Spark cluster runs on CentOS 7.
On my R machine I use the following code. Note that I have installed OpenJDK 8 and the matching Spark version.
Inside my YARN_CONF_DIR I have placed the yarn-site.xml file exported from Ambari (Services -> Download All Client Configs). I have also tried copying hdfs-site.xml and hive-site.xml there, with the same result.
library(sparklyr)
library(DBI)
# spark_install("2.3.2")
spark_installed_versions()
#> spark hadoop dir
#> 1 2.3.2 2.7 /home/david/spark/spark-2.3.2-bin-hadoop2.7
# use Java 8 instead of Java 11 (Java 11 is only supported by Spark 3.0.0+, not by 2.3.2)
Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64/")
Sys.setenv(SPARK_HOME = "/home/david/spark/spark-2.3.2-bin-hadoop2.7/")
Sys.setenv(YARN_CONF_DIR = "/home/david/Spark-test/yarn-conf")
conf <- spark_config()
conf$spark.executor.memory <- "500M"
conf$spark.executor.cores <- 2
conf$spark.executor.instances <- 1
conf$spark.dynamicAllocation.enabled <- "false"
sc <- spark_connect(master = "yarn-client", config = conf)
#> Error in force(code) :
#> Failed during initialize_connection: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
#> ...
I am not really sure how to debug this, on which machine the error originates, or how to fix it, so any help or hint is greatly appreciated!
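For reference, this is how I surface more of the connection log while experimenting (a sketch; sparklyr.log.console and sparklyr.verbose are documented sparklyr options, the rest is plain R):
library(sparklyr)
# stream the driver/backend log to the R console during spark_connect()
# instead of only writing it to a file under tempdir()
options(sparklyr.log.console = TRUE)
# make sparklyr report the individual connection steps
options(sparklyr.verbose = TRUE)
# the log file named in the error message can also be read afterwards:
# readLines("/tmp/RtmpIKnflg/filee462cec58ee_spark.log")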
Edit / Progress
So far I have found out that the Spark version installed by sparklyr (from here) depends on the Glassfish Jersey implementation, whereas my cluster depends on the older Sun/Oracle one (hence the com/sun/... path).
This applies to the following jar files:
library(tidyverse)
library(glue)
# candidate jars in the local Spark installation
ll <- list.files("~/spark/spark-2.3.2-bin-hadoop2.7/jars/", pattern = "^jersey", full.names = TRUE)
# extract the class listing of each jar (last token of each `jar tvf` line)
df <- map_dfr(ll, function(f) {
  x <- system(glue("jar tvf {f}"), intern = TRUE)
  tibble(file = f, class = str_extract(x, "[^ ]+$"))
})
# count, per jar, the classes that live in the com/sun namespace
df %>%
  filter(str_detect(class, "com/sun")) %>%
  count(file)
#> # A tibble: 4 x 2
#> file n
#> <chr> <int>
#> 1 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//activation-1.1.1.jar 15
#> 2 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//derby.log 1194
#> 3 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//jersey-client-1.19.jar 108
#> 4 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//jersey-server-2.22.2.jar 22
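As a sanity check, the exact class from the stack trace can be looked up in the same df (the jar tvf listing stores entries ending in .class):
df %>%
  filter(class == "com/sun/jersey/api/client/config/ClientConfig.class")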
I have tried loading the latest jar files from Maven (e.g., from this) for jersey-client.jar and jersey-core.jar; now the connection takes ages and never finishes (at least it is not the same error anymore, yay I guess...). Any idea what the cause of this issue is?
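One workaround I still plan to try (untested here, so treat it as an assumption rather than a confirmed fix): the stack trace fails inside TimelineClient.createTimelineClient, so disabling the YARN timeline service on the client side should skip the code path that needs the com.sun.jersey classes. yarn.timeline-service.enabled is a standard YARN property, and the spark.hadoop. prefix forwards it into the Hadoop configuration:
conf <- spark_config()
# Assumption: the failure happens only while creating the YARN timeline
# client, so switching the timeline service off on the client side
# avoids loading com/sun/jersey/... at all.
conf$spark.hadoop.yarn.timeline-service.enabled <- "false"
sc <- spark_connect(master = "yarn-client", config = conf)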
Full Error log
Error in force(code) :
Failed during initialize_connection: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
at org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:55)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createTimelineClient(YarnClientImpl.java:181)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:168)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:151)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:147)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:136)
at sparklyr.StreamHandler.read(stream.scala:61)
at sparklyr.BackendHandler$$anonfun$channelRead0$1.apply$mcV$sp(handler.scala:58)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 49 more
Log: /tmp/RtmpIKnflg/filee462cec58ee_spark.log
---- Output Log ----
20/07/16 10:20:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/07/16 10:20:42 INFO sparklyr: Session (3779) is starting under 127.0.0.1 port 8880
20/07/16 10:20:42 INFO sparklyr: Session (3779) found port 8880 is not available
20/07/16 10:20:42 INFO sparklyr: Backend (3779) found port 8884 is available
20/07/16 10:20:42 INFO sparklyr: Backend (3779) is registering session in gateway
20/07/16 10:20:42 INFO sparklyr: Backend (3779) is waiting for registration in gateway
20/07/16 10:20:42 INFO sparklyr: Backend (3779) finished registration in gateway with status 0
20/07/16 10:20:42 INFO sparklyr: Backend (3779) is waiting for sparklyr client to connect to port 8884
20/07/16 10:20:43 INFO sparklyr: Backend (3779) accepted connection
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is waiting for sparklyr client to connect to port 8884
20/07/16 10:20:43 INFO sparklyr: Backend (3779) received command 0
20/07/16 10:20:43 INFO sparklyr: Backend (3779) found requested session matches current session
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is creating backend and allocating system resources
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is using port 8885 for backend channel
20/07/16 10:20:43 INFO sparklyr: Backend (3779) created the backend
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is waiting for r process to end
20/07/16 10:20:43 INFO SparkContext: Running Spark version 2.3.2
20/07/16 10:20:43 WARN SparkConf: spark.master yarn-client is deprecated in Spark 2.0+, please instead use "yarn" with specified deploy mode.
20/07/16 10:20:43 INFO SparkContext: Submitted application: sparklyr
20/07/16 10:20:43 INFO SecurityManager: Changing view acls to: ubuntu
20/07/16 10:20:43 INFO SecurityManager: Changing modify acls to: ubuntu
20/07/16 10:20:43 INFO SecurityManager: Changing view acls groups to:
20/07/16 10:20:43 INFO SecurityManager: Changing modify acls groups to:
20/07/16 10:20:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); groups with view permissions: Set(); users with modify permissions: Set(ubuntu); groups with modify permissions: Set()
20/07/16 10:20:43 INFO Utils: Successfully started service 'sparkDriver' on port 42419.
20/07/16 10:20:43 INFO SparkEnv: Registering MapOutputTracker
20/07/16 10:20:43 INFO SparkEnv: Registering BlockManagerMaster
20/07/16 10:20:43 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/07/16 10:20:43 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/07/16 10:20:43 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-583db378-821a-4990-bfd2-5fcaf95d071b
20/07/16 10:20:44 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/07/16 10:20:44 INFO SparkEnv: Registering OutputCommitCoordinator
20/07/16 10:20:44 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
20/07/16 10:20:44 INFO Utils: Successfully started service 'SparkUI' on port 4041.
20/07/16 10:20:44 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{SPARK IP}
Then in the /tmp/RtmpIKnflg/filee462cec58ee_spark.log file:
20/07/16 10:09:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/07/16 10:09:07 INFO sparklyr: Session (11296) is starting under 127.0.0.1 port 8880
20/07/16 10:09:07 INFO sparklyr: Session (11296) found port 8880 is not available
20/07/16 10:09:07 INFO sparklyr: Backend (11296) found port 8882 is available
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is registering session in gateway
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for registration in gateway
20/07/16 10:09:07 INFO sparklyr: Backend (11296) finished registration in gateway with status 0
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for sparklyr client to connect to port 8882
20/07/16 10:09:07 INFO sparklyr: Backend (11296) accepted connection
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for sparklyr client to connect to port 8882
20/07/16 10:09:07 INFO sparklyr: Backend (11296) received command 0
20/07/16 10:09:07 INFO sparklyr: Backend (11296) found requested session matches current session
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is creating backend and allocating system resources
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is using port 8883 for backend channel
20/07/16 10:09:07 INFO sparklyr: Backend (11296) created the backend
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for r process to end
20/07/16 10:09:08 INFO SparkContext: Running Spark version 2.3.2
20/07/16 10:09:08 WARN SparkConf: spark.master yarn-client is deprecated in Spark 2.0+, please instead use "yarn" with specified deploy mode.
20/07/16 10:09:08 INFO SparkContext: Submitted application: sparklyr
20/07/16 10:09:08 INFO SecurityManager: Changing view acls to: david
20/07/16 10:09:08 INFO SecurityManager: Changing modify acls to: david
20/07/16 10:09:08 INFO SecurityManager: Changing view acls groups to:
20/07/16 10:09:08 INFO SecurityManager: Changing modify acls groups to:
20/07/16 10:09:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(david); groups with view permissions: Set(); users with modify permissions: Set(david); groups with modify permissions: Set()
20/07/16 10:09:08 INFO Utils: Successfully started service 'sparkDriver' on port 44541.
20/07/16 10:09:08 INFO SparkEnv: Registering MapOutputTracker
20/07/16 10:09:08 INFO SparkEnv: Registering BlockManagerMaster
20/07/16 10:09:08 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/07/16 10:09:08 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/07/16 10:09:08 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d7b67ab2-508c-4488-ac1b-7ee0e787aa79
20/07/16 10:09:08 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/07/16 10:09:08 INFO SparkEnv: Registering OutputCommitCoordinator
20/07/16 10:09:08 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/07/16 10:09:08 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{THE INTERNAL SPARK IP}:4040
20/07/16 10:09:08 INFO SparkContext: Added JAR file:/home/david/R/x86_64-pc-linux-gnu-library/4.0/sparklyr/java/sparklyr-2.3-2.11.jar at spark://{THE INTERNAL SPARK IP}:44541/jars/sparklyr-2.3-2.11.jar with timestamp 1594894148685
20/07/16 10:09:09 ERROR sparklyr: Backend (11296) failed calling getOrCreate on 11: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
at org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:55)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createTimelineClient(YarnClientImpl.java:181)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:168)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:151)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:147)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:136)
at sparklyr.StreamHandler.read(stream.scala:61)
at sparklyr.BackendHandler$$anonfun$channelRead0$1.apply$mcV$sp(handler.scala:58)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 49 more

Related

Failed to dial target host "kong-proxy-service-external-ip:443": context deadline exceeded

I am trying to use the Kong API Gateway following the "Using Ingress with gRPC" guide, but I am getting the error below.
Failed to dial target host "kong-proxy-service-external-ip:443": context deadline exceeded
I am using a minikube cluster for deployment.
I followed all the steps mentioned here - https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/using-ingress-with-grpc/ - but when I try to run grpcurl -v -d '{"greeting": "Kong Hello world!"}' -insecure $PROXY_IP:443 hello.HelloService.SayHello
I get the error Failed to dial target host.
If I use port forwarding on the service with the command kubectl port-forward service/grpcbin 9001:9001 then it works, so the issue must be with the ingress or some other configuration.
Any help with this issue would be appreciated.

ShinyProxy error 500: Failed to start container

I encountered error 500 when I try to run ShinyProxy. These are the errors I got:
Caused by: com.spotify.docker.client.exceptions.DockerException: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Caused by: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
And my application.yml:
proxy:
  title: Open Analytics Shiny Proxy
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 3838
  authentication: simple
  admin-groups: scientists
  hide-navbar: true
  # Example: 'simple' authentication configuration
  users:
    - name: jack
      password: password
      groups: scientists
    - name: jeff
      password: password
      groups: mathematicians
  # Example: 'ldap' authentication configuration
  ldap:
    url: ldap://ldap.forumsys.com:389/dc=example,dc=com
    user-dn-pattern: uid={0}
    group-search-base:
    group-search-filter: (uniqueMember={0})
    manager-dn: cn=read-only-admin,dc=example,dc=com
    manager-password: password
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
  specs:
    - id: Try2
      display-name: Try2
      description: Application which demonstrates the basics of a Shiny app
      port: 3838
      container-cmd: ["R", "-e", "shiny::runApp('/root/euler')"]
      container-image: gokce/euler
      access-groups: [scientists, mathematicians]
logging:
  file:
    shinyproxy.log
I read some comments that the Windows firewall may cause the problem, so I allowed port 3838 in the Windows firewall, but it didn't help.

Unable to access Shiny Server in Docker container

I have a Dockerized R Shiny app on a Google Cloud VM. It was running well, but since yesterday I am not able to access the Shiny server. I have checked the Docker logs and they give me the message below:
[2019-03-13T14:56:11.496] [INFO] shiny-server - Shiny Server v1.5.7.890 (Node.js v8.10.0)
[2019-03-13T14:56:11.498] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2019-03-13T14:56:11.559] [WARN] shiny-server - Running as root unnecessarily is a security risk! You could be running more securely as non-root.
[2019-03-13T14:56:11.562] [INFO] shiny-server - Starting listener on 0.0.0.0:3838
So, does this mean the Shiny server is running? If yes, then why am I not able to access it from a browser (the browser gives me the error ERR_CONNECTION_REFUSED)?
Note: port 3838 (the default Shiny Server port) is whitelisted on the VM.

RServe Connection Issue

I need help setting up a connection from Tableau to a server where R is installed. I installed R and Rserve on Linux and started Rserve from the R console using:
library(Rserve)
Rserve()
In Tableau, when I set up the connection and then test it by providing the server name and port 6311, I get the error below:
"Connected party did not properly responded after a period of time, or
established connection failed because connected host has failed to
respond"
I tried chkconfig iptables off and service iptables stop, but to no avail.
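One thing worth checking (a sketch, assuming a default Rserve install): Rserve only accepts connections from localhost unless remote access is enabled, so a remote Tableau client can time out even with the firewall off:
library(Rserve)
# By default Rserve listens only on the loopback interface; a remote
# client such as Tableau then times out even when iptables is stopped.
# --RS-enable-remote makes Rserve accept non-local connections.
Rserve(args = "--RS-enable-remote")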

Change IP Cloudify Manager

I successfully installed Cloudify Manager on the Amazon cloud (http://getcloudify.org/guide/3.2/getting-started-bootstrapping.html).
However, after turning the machine off and starting it again, the IP changed, and when running:
cfy status
I get:
Getting management services status... [ip=54.83.41.97]
('Connection aborted.', error(110, 'Connection timed out'))
How do I change the IP 54.83.41.97 within Cloudify?
The internal IP is set during bootstrap of the manager.
If your internal IP has changed, you should tear it down and bootstrap again.
If it is only the Elastic IP that changed, you can run:
cfy use -t your_new_ip
Then the CLI will connect to the manager with the new IP.
