Connect sparklyr 0.8.4 to a remote Spark 2.2.1 cluster - r

I'm trying to connect from R to a remote spark cluster.
The Spark cluster is built on Debian Jessie, and the R version I can install on it is at most 3.3, but I need 3.4 to be able to run FactoMineR. So I installed R on another machine and tried to connect to the cluster using sparklyr 0.8.4:
> sc <- spark_connect(master = "spark://spark-cluster-m:7077", spark_home="/usr/lib/spark/", version="2.2.1")
Error in start_shell(master = master, spark_home = spark_home, spark_version = version, :
SPARK_HOME directory '/usr/lib/spark/' not found
Spark isn't installed on the local machine, but it is on spark-cluster-m:
jc#spark-cluster-m:/usr/lib/spark$ ls
bin conf data examples external jars LICENSE licenses NOTICE python R README.md RELEASE sbin work yarn
Have I missed something?
The Spark cluster is on Google Cloud (test account) and so is the VM with R. How do I verify which port Spark can be connected to?
Thanks for your clues.
@user16... You're right, this particular problem seems to be solved, but I'm not done yet.
I installed the same Spark version (2.2.1 with Hadoop > 2.7).
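(For reference, a minimal sketch of doing this from R with sparklyr's built-in installer, assuming the VM has internet access; the master URL and version strings just mirror the ones above:)
# Install a local copy of Spark 2.2.1 built for Hadoop 2.7, then point
# spark_connect() at sparklyr's own install directory for that version.
library(sparklyr)
spark_install(version = "2.2.1", hadoop_version = "2.7")
sc <- spark_connect(master = "spark://spark-cluster-m:7077",
                    spark_home = spark_home_dir(version = "2.2.1"),
                    version = "2.2.1")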
Here is my new error message:
Error in force(code) :
Failed during initialize_connection: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
at org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:137)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
at sparklyr.StreamHandler.read(stream.scala:66)
at sparklyr.BackendHandler.channelRead0(handler.scala:51)
at sparklyr.BackendHandler.channelRead0(handler.scala:4)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Log: /tmp/RtmpTUh0z6/file5d231368db0_spark.log
---- Output Log ----
at io.netty.channel.nio.NioEventLoop.processS
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:24:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-cluster-m:7077...
18/07/21 18:24:59 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:108)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to connect to spark-cluster-m/10.142.0.3:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:232)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
... 4 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: spark-cluster-m/10.142.0.3:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:631)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:25:19 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
18/07/21 18:25:19 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
18/07/21 18:25:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46811.
18/07/21 18:25:19 INFO NettyBlockTransferService: Server created on 10.142.0.5:46811
18/07/21 18:25:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/07/21 18:25:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMasterEndpoint: Registering block manager 10.142.0.5:46811 with 366.3 MB RAM, BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO SparkUI: Stopped Spark web UI at http://10.142.0.5:4040
18/07/21 18:25:19 INFO StandaloneSchedulerBackend: Shutting down all executors
18/07/21 18:25:19 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/07/21 18:25:19 WARN StandaloneAppClient$ClientEndpoint: Drop Unregist
I can see it can resolve the name (=> 10.142.0.3).
Also, it seems to be the right port: if I use port 7000 instead, I get this error:
18/07/21 18:32:54 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from spark-cluster-m/10.142.0.3:7000 is closed
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Could not connect to spark-cluster-m:7000: java.io.IOException: Connection reset by peer
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7000
But I can't figure out what this means.
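(To separate name resolution, routing, and firewall issues, one can probe the master port directly from the R VM; a minimal sketch in base R, with an assumed 5-second timeout:)
# Try a raw TCP connection to the master port. A "connection refused" error
# here reproduces the failure from the Spark log without involving Spark.
probe <- tryCatch(
  socketConnection(host = "spark-cluster-m", port = 7077,
                   blocking = TRUE, open = "r+", timeout = 5),
  error = function(e) e
)
if (inherits(probe, "connection")) {
  close(probe)
  message("port 7077 is reachable")
} else {
  message("cannot connect: ", conditionMessage(probe))
}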
You say my configuration is "particular". If there is a better (and simpler) approach, I would be glad to use it.
Here is how I proceeded in my tests:
I created a Google Dataproc cluster with Spark (2.2.1).
I added Cassandra on each node.
At this stage, everything worked OK.
Then I needed to install FactoMineR, as I'd like to try HMFA. It is said to run with R > 3.0.0, so that seems fine, but it depends on nlme, which can't be installed on R < 3.4.0 (and the one in the Debian Jessie backports is 3.3).
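(The dependency chain can be confirmed from R itself; a quick sketch, assuming a reachable CRAN mirror:)
# List what FactoMineR pulls in; nlme shows up among the dependencies,
# and its Depends field states the R version it requires.
getRversion()  # e.g. 3.3.x from the jessie backports
db <- available.packages()
tools::package_dependencies("FactoMineR", db = db, recursive = TRUE)
db["nlme", "Depends"]  # shows the R (>= ...) requirement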
So, what can I do?
I must admit that I'm not very enthusiastic about redoing a full Spark / Cassandra cluster install from scratch...

Related

Deploy Apache Archiva 2.2.8 WAR to Tomcat 9: does not accept HTTP connections

I deployed the Apache Archiva 2.2.8 WAR to Tomcat 9, but it could not accept HTTP connections.
There are many errors like this on the server side:
WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
It did not accept any HTTP connection from the browser at http://localhost:8080; HTTP status 404 Not Found.

WSO2 API Manager Active-Active deployment

I have deployed API Manager 4.0.0 all-in-one on 2 VMs, fronted by a load balancer.
When one node is shut down with "sh api-manager.sh stop", the other switches over successfully and runs well, but there are some errors in the console like the ones below:
TID: [-1] [] [2022-03-14 10:14:13,270] ERROR {org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker} - Error while trying to connect to the endpoint. Cannot borrow client for ssl://10.32.73.10:9711
org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://10.32.73.10:9711
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:147)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointException: Error while opening socket to 10.32.73.10:9711. Connection refused (Connection refused)
at org.wso2.carbon.databridge.agent.endpoint.binary.BinarySecureClientPoolFactory.createClient(BinarySecureClientPoolFactory.java:75)
at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:137)
... 6 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:287)
at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:146)
at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
at org.wso2.carbon.databridge.agent.endpoint.binary.BinarySecureClientPoolFactory.createClient(BinarySecureClientPoolFactory.java:58)
... 9 more
TID: [-1] [] [2022-03-14 10:14:15,158] WARN {org.wso2.carbon.databridge.agent.endpoint.DataEndpointGroup} - No receiver is reachable at URL Endpoint/Endpoints [tcp://10.32.73.10:9611], will try to reconnect every 30 sec
Is there anything wrong in the deployment.toml?
In an APIM active-active setup, each node publishes throttling data to itself and to the other node. When you stop the other node, this node can no longer publish throttling data to it, hence the connection refused errors; this is expected. There is no harm in having these error logs, and it will recover when the other node is started. If you look at the deployment.toml, you can find the other node's details under the throttling configurations.

Error when making device connect to Kaa middleware

I've recently set up a Kaa Docker environment to use for development/testing.
I can successfully generate an SDK for my endpoint and make it connect the first time.
But when I restart the Kaa Docker container, I get the following error in the endpoint logs when connecting the endpoint:
I/*ultOperationTcpChannel: Can't sync. Channel [default_operation_tcp_channel] is waiting for CONNACK message + KAASYNC message
The Kaa log contains the following error:
2017-02-15 20:52:27,305 [pool-9-thread-2] TRACE o.k.k.s.b.s.t.BootstrapTransportService - Message processing failed
javax.crypto.BadPaddingException: Decryption error
at sun.security.rsa.RSAPadding.unpadV15(RSAPadding.java:380) ~[na:1.8.0_121]
at sun.security.rsa.RSAPadding.unpad(RSAPadding.java:291) ~[na:1.8.0_121]
at com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:363) ~[sunjce_provider.jar:1.8.0_112]
at com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:389) ~[sunjce_provider.jar:1.8.0_112]
at javax.crypto.Cipher.doFinal(Cipher.java:2165) ~[na:1.8.0_121]
at org.kaaproject.kaa.common.endpoint.security.MessageEncoderDecoder.decodeSessionKey(MessageEncoderDecoder.java:189) ~[endpoint-shared-0.10.0.jar:na]
at org.kaaproject.kaa.common.endpoint.security.MessageEncoderDecoder.decodeData(MessageEncoderDecoder.java:218) ~[endpoint-shared-0.10.0.jar:na]
at org.kaaproject.kaa.server.bootstrap.service.transport.BootstrapTransportService$BootstrapMessageHandler$1.decodeEncryptedRequest(BootstrapTransportService.java:269) ~[kaa-node-0.10.0.jar:na]
at org.kaaproject.kaa.server.bootstrap.service.transport.BootstrapTransportService$BootstrapMessageHandler$1.decodeRequest(BootstrapTransportService.java:250) ~[kaa-node-0.10.0.jar:na]
at org.kaaproject.kaa.server.bootstrap.service.transport.BootstrapTransportService$BootstrapMessageHandler$1.run(BootstrapTransportService.java:185) ~[kaa-node-0.10.0.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
I've set the Docker Compose file to use persistent storage for the SQL and NoSQL databases.
Regenerating the SDK solves the problem until the next reboot of Kaa.

Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener

I downloaded SonarQube 5.5 with a fresh database on Oracle 11g and also configured the sonar.properties file. I am unable to start the server and am getting the error shown below:
2016.05.27 09:36:45 INFO web[o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2016.05.27 09:36:46 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 5.5 / 5773a4aab0ef6c0de79d3038e82f8a051049d6d0
2016.05.27 09:36:46 INFO web[o.sonar.db.Database] Create JDBC data source for jdbc:oracle:thin:@10.101.18.252:1522:orcl1
2016.05.27 09:36:48 ERROR web[o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
org.sonar.api.utils.MessageException: Current version is too old. Please upgrade to Long Term Support version firstly.
2016.05.27 09:36:48 ERROR web[o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2016.05.27 09:36:48 ERROR web[o.a.c.c.StandardContext] Context [] startup failed due to previous errors
2016.05.27 09:36:48 WARN web[o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Unknown Source)
java.util.TimerThread.run(Unknown Source)
2016.05.27 09:36:48 WARN web[o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
oracle.jdbc.driver.BlockSource$ThreadedCachingBlockSource$BlockReleaser.run(BlockSource.java:327)
2016.05.27 09:36:48 INFO web[o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.05.27 09:36:48 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
2016.05.27 09:36:48 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.05.27 09:36:48 WARN web[o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-5.5.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:48) ~[sonar-server-5.5.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-5.5.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:69) ~[sonar-server-5.5.jar:na]
2016.05.27 09:36:48 INFO web[o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.05.27 09:36:49 INFO web[o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.05.27 09:36:49 INFO web[o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.05.27 09:36:49 INFO web[o.s.s.a.TomcatAccessLog] Web server is stopped
2016.05.27 15:06:50 INFO app[o.s.p.m.Monitor] Process[es] is stopping
2016.05.27 15:06:50 INFO es[o.s.p.StopWatcher] Stopping process
2016.05.27 15:06:50 INFO es[o.elasticsearch.node] [sonar-1464341797936] stopping ...
2016.05.27 15:06:50 INFO es[o.elasticsearch.node] [sonar-1464341797936] stopped
2016.05.27 15:06:50 INFO es[o.elasticsearch.node] [sonar-1464341797936] closing ...
2016.05.27 15:06:50 INFO es[o.elasticsearch.node] [sonar-1464341797936] closed
2016.05.27 15:06:50 INFO app[o.s.p.m.Monitor] Process[es] is stopped
<-- Wrapper Stopped
I am not able to understand which version needs to be upgraded.
Thanks in advance for your help!
Every time you get this error, look at your database: it happens because the database already contains data from a previous instance.
Try:
delete the data in your sonar database;
drop the sonar database;
redo the setup.
My problem was solved with a fresh, clean database, and by also deleting the logs and data folders before running the first instance.

Running SparkR on YARN outputs an "Rscript execution" error

I have Spark 1.4.1 installed on a Hadoop 2.7 cluster.
I've started the SparkR shell without error:
bin/sparkR --master yarn-client
I've run the following R command without error (the introductory example from spark.apache.org):
df <- createDataFrame(sqlContext, faithful)
When I run the command:
head(select(df, df$eruptions))
I get the following error at 15/09/02 10:08:29 on the executor node:
"Rscript execution error: No such file or directory".
Any hint would be greatly appreciated.
Spark tasks other than SparkR run OK on my YARN cluster.
R 3.2.1 is installed and runs OK on the driver node.
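For what it's worth, the executor starts the R worker by spawning an Rscript process, so this error usually means Rscript is not on the PATH seen by the YARN containers on the worker nodes, even though it runs fine on the driver. A quick check one could run on each node (a sketch; an empty result means Rscript was not found):
# Full path of the Rscript binary as resolved through this machine's PATH,
# or "" if it cannot be found.
Sys.which("Rscript")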
15/09/02 10:04:06 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/09/02 10:04:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/02 10:04:10 INFO spark.SecurityManager: Changing view acls to: yarn,root
15/09/02 10:04:10 INFO spark.SecurityManager: Changing modify acls to: yarn,root
15/09/02 10:04:10 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, root); users with modify permissions: Set(yarn, root)
15/09/02 10:04:11 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/09/02 10:04:12 INFO Remoting: Starting remoting
15/09/02 10:04:12 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@datanode1.hp.com:46167]
15/09/02 10:04:12 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 46167.
15/09/02 10:04:12 INFO spark.SecurityManager: Changing view acls to: yarn,root
15/09/02 10:04:12 INFO spark.SecurityManager: Changing modify acls to: yarn,root
15/09/02 10:04:12 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, root); users with modify permissions: Set(yarn, root)
15/09/02 10:04:12 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/09/02 10:04:12 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/09/02 10:04:12 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/09/02 10:04:12 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/09/02 10:04:12 INFO Remoting: Starting remoting
15/09/02 10:04:13 INFO util.Utils: Successfully started service 'sparkExecutor' on port 47919.
15/09/02 10:04:13 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@datanode1.hp.com:47919]
15/09/02 10:04:13 INFO storage.DiskBlockManager: Created local directory at /data2/hadoop/yarn/local/usercache/root/appcache/application_1441180800595_0001/blockmgr-5e435e40-bd36-4746-9acd-8cf1619033ae
15/09/02 10:04:13 INFO storage.DiskBlockManager: Created local directory at /data3/hadoop/yarn/local/usercache/root/appcache/application_1441180800595_0001/blockmgr-28dfabe6-8e0d-4e49-bc95-27b3428c10a0
15/09/02 10:04:13 INFO storage.MemoryStore: MemoryStore started with capacity 534.5 MB
15/09/02 10:04:13 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@192.1.1.1:45596/user/CoarseGrainedScheduler
15/09/02 10:04:13 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
15/09/02 10:04:13 INFO executor.Executor: Starting executor ID 2 on host datanode1.hp.com
15/09/02 10:04:14 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34166.
15/09/02 10:04:14 INFO netty.NettyBlockTransferService: Server created on 34166
15/09/02 10:04:14 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/09/02 10:04:14 INFO storage.BlockManagerMaster: Registered BlockManager
15/09/02 10:06:35 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 0
15/09/02 10:06:35 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
15/09/02 10:06:35 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0
15/09/02 10:06:35 INFO storage.MemoryStore: ensureFreeSpace(854) called with curMem=0, maxMem=560497950
15/09/02 10:06:35 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 854.0 B, free 534.5 MB)
15/09/02 10:06:35 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 159 ms
15/09/02 10:06:35 INFO storage.MemoryStore: ensureFreeSpace(1280) called with curMem=854, maxMem=560497950
15/09/02 10:06:35 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1280.0 B, free 534.5 MB)
15/09/02 10:06:35 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 11589 bytes result sent to driver
15/09/02 10:08:28 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1
15/09/02 10:08:28 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 1)
15/09/02 10:08:28 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 1
15/09/02 10:08:28 INFO storage.MemoryStore: ensureFreeSpace(4022) called with curMem=0, maxMem=560497950
15/09/02 10:08:28 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.9 KB, free 534.5 MB)
15/09/02 10:08:28 INFO broadcast.TorrentBroadcast: Reading broadcast variable 1 took 13 ms
15/09/02 10:08:28 INFO storage.MemoryStore: ensureFreeSpace(9536) called with curMem=4022, maxMem=560497950
15/09/02 10:08:28 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 9.3 KB, free 534.5 MB)
15/09/02 10:08:29 INFO r.BufferedStreamThread: Rscript execution error: No such file or directory
15/09/02 10:08:39 ERROR executor.Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.net.SocketTimeoutException: Accept timed out
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
at java.net.ServerSocket.implAccept(ServerSocket.java:530)
at java.net.ServerSocket.accept(ServerSocket.java:498)
at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:425)
at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
