Can we install X-Pack only for Kibana, and not for Elasticsearch? - kibana

Unhandled rejection [illegal_argument_exception] No endpoint or operation is available at [_xpack] :: {"path":"/_xpack","statusCode":400,"response":"{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"No endpoint or operation is available at [_xpack]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"No endpoint or operation is available at [_xpack]\"},\"status\":400}"}
at respond (/opt/gfitps/TpsStream/kibana-5.0.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:289:15)
at checkRespForFailure (/opt/gfitps/TpsStream/kibana-5.0.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:248:7)
at HttpConnector. (/opt/gfitps/TpsStream/kibana-5.0.1-linux-x86_64/node_modules/elasticsearch/src/lib/connectors/http.js:164:7)
at IncomingMessage.wrapper (/opt/gfitps/TpsStream/kibana-5.0.1-linux-x86_64/node_modules/elasticsearch/node_modules/lodash/lodash.js:4962:19)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickDomainCallback (internal/process/next_tick.js:122:9)

As mentioned here, you should install X-Pack for both Elasticsearch and Kibana, and use the same X-Pack version everywhere.
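For the 5.x line shown in the trace, that means installing the plugin on every Elasticsearch node as well as on Kibana, keeping the versions in lockstep. Roughly (a sketch; run each command from that product's home directory, then restart the service):
bin/elasticsearch-plugin install x-pack
bin/kibana-plugin install x-pack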

Related

Getting an error when attempting to mock location in Appium on iOS simulator using driver.setLocation

I am trying to set the location on my iOS simulator within my Appium test using the driver.setLocation method.
Here is the response from the Appium logs:
[HTTP] --> POST /wd/hub/session/dfad6b21-23de-4d3c-9157-8832df4815f5/location
[HTTP] {"location":{"altitude":0,"latitude":53.427573,"class":"org.openqa.selenium.html5.Location","longitude":-6.243413}}
[debug] [W3C (dfad6b21)] Calling AppiumDriver.setGeoLocation() with args: [{"altitude":0,"latitude":53.38977,"class":"org.openqa.selenium.html5.Location","longitude":-6.10934},"dfad6b21-23de-4d3c-9157-8832df4815f5"]
[debug] [XCUITest] Executing command 'setGeoLocation'
[iOSSim] 'set-simulator-location' binary has not been found in your PATH. Please install it as "brew install lyft/formulae/set-simulator-location" by brew or read https://github.com/lyft/set-simulator-location to set the binary by manual to be able to set geolocation by the library.
[iOSSim] Failed to set geolocation with idb because it is not installed or the "launchWithIDB" capability was not set
[iOSSim] spawn /usr/bin/python ENOENT
[debug] [W3C (dfad6b21)] Encountered internal error running command: Error: spawn /usr/bin/python ENOENT
[debug] [W3C (dfad6b21)] at Process.ChildProcess._handle.onexit (node:internal/child_process:283:19)
[debug] [W3C (dfad6b21)] at onErrorNT (node:internal/child_process:476:16)
[debug] [W3C (dfad6b21)] at processTicksAndRejections (node:internal/process/task_queues:82:21)
[HTTP] <-- POST /wd/hub/session/dfad6b21-23de-4d3c-9157-8832df4815f5/location 500 66 ms - 635
[HTTP]
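The log itself points at the fix: the driver first looks for the set-simulator-location helper, then for idb, and finally falls back to spawning /usr/bin/python, which no longer ships on recent macOS versions. Installing the helper, exactly as the log suggests, is usually enough:
brew install lyft/formulae/set-simulator-location
which set-simulator-location
Make sure the binary ends up on the PATH of the shell that launches the Appium server.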

Kubernetes: nginx upstream server (a Service) not found

These are my YAML files, based on the examples here (kubernetes.io/docs):
deployment gists
I apply them with kubectl apply -f backend-deployment.yaml -f frontend-configmap.yaml -f frontend-deployment.yaml
The backend launched successfully, but the frontend failed with this error:
[emerg] 1#1: host not found in upstream "backend-service" in /etc/nginx/conf.d/nginx.conf:2
#nginx: [emerg] host not found in upstream "backend-service" in /etc/nginx/conf.d/nginx.conf:2
Even though 'backend-service' is declared, nginx cannot resolve it.
The result of nslookup backend-service is:
Server: 127.0.0.53
Address: 127.0.0.53#53
** server can't find backend-service: SERVFAIL
What am I missing?
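Note that 127.0.0.53 is the systemd-resolved stub listener on the host, so the nslookup above was most likely run on the node rather than inside a pod, which is where cluster DNS actually applies. A quick in-cluster check (the debug image is just one common choice):
kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- nslookup backend-service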
I have gotten closer to the issue/solution.
First of all, if the images in your supplied answer are correct, you are running nginx in your backend deployment and the Node.js server in your frontend deployment. This is a mistake.
After swapping the images, the frontend runs as expected, but the backend pod crashes.
However, the backend pod successfully resolves mysql-service to its internal ClusterIP, and it is the authentication setup that appears to be wrong.
> server#1.0.0 start /usr/src/app
> node backend.js
(node:18) Warning: Accessing non-existent property 'count' of module exports inside circular dependency
(Use `node --trace-warnings ...` to show where the warning was created)
(node:18) Warning: Accessing non-existent property 'findOne' of module exports inside circular dependency
(node:18) Warning: Accessing non-existent property 'remove' of module exports inside circular dependency
(node:18) Warning: Accessing non-existent property 'updateOne' of module exports inside circular dependency
listening on 3000
events.js:292
throw er; // Unhandled 'error' event
^
Error: connect ECONNREFUSED 10.100.77.32:3306
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1142:16)
--------------------
at Protocol._enqueue (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at Protocol.handshake (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:51:23)
at Connection.connect (/usr/src/app/node_modules/mysql/lib/Connection.js:116:18)
at Object.<anonymous> (/usr/src/app/backend.js:58:12)
at Module._compile (internal/modules/cjs/loader.js:1185:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)
at Module.load (internal/modules/cjs/loader.js:1034:32)
at Function.Module._load (internal/modules/cjs/loader.js:923:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47
Emitted 'error' event on Connection instance at:
at Connection._handleProtocolError (/usr/src/app/node_modules/mysql/lib/Connection.js:423:8)
at Protocol.emit (events.js:315:20)
at Protocol._delegateError (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:398:10)
at Handshake.<anonymous> (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:153:12)
at Handshake.emit (events.js:315:20)
at Handshake.Sequence.end (/usr/src/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:78:12)
at Protocol.handleNetworkError (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:369:14)
at Connection._handleNetworkError (/usr/src/app/node_modules/mysql/lib/Connection.js:418:18)
at Socket.emit (events.js:315:20)
at emitErrorNT (internal/streams/destroy.js:96:8)
at emitErrorCloseNT (internal/streams/destroy.js:68:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.100.77.32',
port: 3306,
fatal: true
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! server#1.0.0 start: `node backend.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the server#1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-07-21T08_46_32_833Z-debug.log
If you look at the log, it tries to connect to 10.100.77.32:3306, and the output of kubectl get svc is the following:
mysql-service ClusterIP 10.100.77.32 <none> 3306/TCP 17m
which matches the IP of my service.
kubectl get endpoints shows that the mysql-service has indeed found an endpoint:
mysql-service ClusterIP 10.100.77.32 <none> 3306/TCP 17m
And here is additional log output showing that the authentication mechanism of your Node.js application apparently does not work with the MySQL pod:
Error: ER_NOT_SUPPORTED_AUTH_MODE: Client does not support authentication protocol requested by server; consider upgrading MySQL client
at Handshake.Sequence._packetToError (/usr/src/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
at Handshake.ErrorPacket (/usr/src/app/node_modules/mysql/lib/protocol/sequences/Handshake.js:123:18)
at Protocol._parsePacket (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:291:23)
at Parser._parsePacket (/usr/src/app/node_modules/mysql/lib/protocol/Parser.js:433:10)
at Parser.write (/usr/src/app/node_modules/mysql/lib/protocol/Parser.js:43:10)
at Protocol.write (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:38:16)
at Socket.<anonymous> (/usr/src/app/node_modules/mysql/lib/Connection.js:88:28)
at Socket.<anonymous> (/usr/src/app/node_modules/mysql/lib/Connection.js:526:10)
at Socket.emit (events.js:315:20)
at addChunk (_stream_readable.js:296:12)
at readableAddChunk (_stream_readable.js:272:9)
at Socket.Readable.push (_stream_readable.js:213:10)
at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
--------------------
at Protocol._enqueue (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at Protocol.handshake (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:51:23)
at Connection.connect (/usr/src/app/node_modules/mysql/lib/Connection.js:116:18)
at Object.<anonymous> (/usr/src/app/backend.js:58:12)
at Module._compile (internal/modules/cjs/loader.js:1185:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)
at Module.load (internal/modules/cjs/loader.js:1034:32)
at Function.Module._load (internal/modules/cjs/loader.js:923:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47
Emitted 'error' event on Connection instance at:
at Connection._handleProtocolError (/usr/src/app/node_modules/mysql/lib/Connection.js:423:8)
at Protocol.emit (events.js:315:20)
at Protocol._delegateError (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:398:10)
at Handshake.<anonymous> (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:153:12)
at Handshake.emit (events.js:315:20)
at Handshake.Sequence.end (/usr/src/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:78:12)
at Handshake.ErrorPacket (/usr/src/app/node_modules/mysql/lib/protocol/sequences/Handshake.js:125:8)
at Protocol._parsePacket (/usr/src/app/node_modules/mysql/lib/protocol/Protocol.js:291:23)
[... lines matching original stack trace ...]
at readableAddChunk (_stream_readable.js:272:9) {
code: 'ER_NOT_SUPPORTED_AUTH_MODE',
errno: 1251,
sqlMessage: 'Client does not support authentication protocol requested by server; consider upgrading MySQL client',
sqlState: '08004',
fatal: true
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! server#1.0.0 start: `node backend.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the server#1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-07-21T08_55_22_133Z-debug.log
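ER_NOT_SUPPORTED_AUTH_MODE typically means a MySQL 8 server defaulting to caching_sha2_password while the legacy mysql driver for Node only speaks mysql_native_password. One common fix, sketched here with placeholder names (substitute your own pod, user, and password), is to switch the account back to the older plugin:
kubectl exec -it <mysql-pod> -- mysql -u root -p
-- then, at the mysql prompt:
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY '<password>';
FLUSH PRIVILEGES;
Alternatively, moving the application to the mysql2 package, which supports the newer plugin, avoids the server-side change.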
As for DNS: with the changes suggested at the beginning of this answer, the issues around resolving the service name and communicating inside Kubernetes should be fixed.
backend-deployment image: bateaux/test-nginx
frontend-deployment image: bateaux/test-node-server
When deploying backend-deployment, remove the livenessProbe and readinessProbe sections, because the Node.js Express server uses @cloudnative/health-connect for its readiness probe, which runs a PingCheck against the worker node IP.
Wow... I found the solution for getaddrinfo EAI_AGAIN mysql-service.
I figured that if the network cannot find the service, the error must be in DNS.
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
The result of that command is in this gist.
So... I finally found this link about Kubernetes CoreDNS.
Following belphegor's answer, restarting CoreDNS makes it work.
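For reference, on kubectl 1.15+ the restart is a single command, assuming CoreDNS runs as the usual Deployment in kube-system:
kubectl -n kube-system rollout restart deployment coredns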
There is now a new error, Client does not support authentication protocol requested by server; consider upgrading MySQL client, but I think that will be easier to solve than the DNS error.

Prerender frequently stops. How can I handle this?

Using: https://prerender.io/
I frequently get the error unexpected server response (404), and Prerender stops.
Right now I always restart the server to work around the issue.
I am using pm2 to keep the server running.
https://www.npmjs.com/package/pm2
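For completeness, a minimal pm2 setup along those lines, assuming the Prerender entry point is server.js (a placeholder name):
pm2 start server.js --name prerender
pm2 save
pm2 startup
pm2 save persists the process list, and pm2 startup prints the command that installs a boot-time init script, so at least the restarts happen automatically.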
got 504 in 16ms for https://www.testing.com/tags/testing1234/1
getting https://www.testing.com/tags/testing1234/1
Error: unexpected server response (404)
at ClientRequest._req.on (/opt/apps/apps/prerender/node_modules/ws/lib/WebSocket.js:653:21)
at ClientRequest.emit (events.js:182:13)
at HTTPParser.parserOnIncomingClient (_http_client.js:555:21)
at HTTPParser.parserOnHeadersComplete (_http_common.js:109:17)
at Socket.socketOnData (_http_client.js:441:20)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at TCP.onStreamRead (internal/stream_base_commons.js:94:17)

SSL bad handshake error while calling a Databricks job from an Airflow DAG

I am running an Airflow container, and my Airflow DAGs fail to connect to a Databricks job with the error log below.
failed with reason: HTTPSConnectionPool(host='Mycompany-dev.cloud.databricks.com', port=443): Max retries exceeded with url: /api/2.0/jobs/runs/submit (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
More information:
Initially, installing Docker or Java inside the container gave the same error, which I worked around by rewriting the pip install command as below; however, I am not sure how to apply the equivalent fix when connecting to a server from the Airflow UI.
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org <package_name>
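--trusted-host only applies to pip, though; the Airflow-to-Databricks call goes through Python's requests library, which validates the server certificate against its own CA bundle. A hedged equivalent for the DAG side is to point requests at a bundle that actually contains the missing CA (the path below is the Debian/Ubuntu default; adjust for your image) and then restart the Airflow services:
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt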

Connect sparklyr 0.8.4 to a remote Spark 2.2.1 cluster

I'm trying to connect from R to a remote Spark cluster.
The Spark cluster is built on Debian jessie; the highest R version I can install on it is 3.3, but I need 3.4 to be able to run FactoMineR. So I installed R on another machine and am trying to connect to the cluster using sparklyr 0.8.4:
> sc <- spark_connect(master = "spark://spark-cluster-m:7077", spark_home="/usr/lib/spark/", version="2.2.1")
Error in start_shell(master = master, spark_home = spark_home, spark_version = version, :
SPARK_HOME directory '/usr/lib/spark/' not found
Spark isn't installed on the local machine, but on spark-cluster-m it is:
jc#spark-cluster-m:/usr/lib/spark$ ls
bin conf data examples external jars LICENSE licenses NOTICE python R README.md RELEASE sbin work yarn
Have I missed something?
The Spark cluster is on Google Cloud (a test account) and so is the VM with R. How do I verify which port Spark accepts connections on? (One way is sketched just after this question.)
Thanks for your clues.
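As for verifying the port: a Spark standalone master listens on 7077 by default, and reachability from the R VM can be checked with a plain TCP probe:
nc -vz spark-cluster-m 7077
On Google Cloud, both VMs must also be in the same VPC network or covered by a firewall rule allowing tcp:7077, plus the ports the master and workers use to call back to the driver.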
#user16... You're right, this particular problem seems to be solved, but I'm not done yet.
I installed the same Spark version (2.2.1, with Hadoop > 2.7).
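Concretely, that amounts to something like the following (a sketch, assuming the Apache archive layout for 2.2.1; unpack into the home directory and point SPARK_HOME at it):
wget https://archive.apache.org/dist/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
tar xzf spark-2.2.1-bin-hadoop2.7.tgz -C ~
export SPARK_HOME=~/spark-2.2.1-bin-hadoop2.7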
Here is my new error message:
Error in force(code) :
Failed during initialize_connection: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
at org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:137)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
at sparklyr.StreamHandler.read(stream.scala:66)
at sparklyr.BackendHandler.channelRead0(handler.scala:51)
at sparklyr.BackendHandler.channelRead0(handler.scala:4)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Log: /tmp/RtmpTUh0z6/file5d231368db0_spark.log
---- Output Log ----
at io.netty.channel.nio.NioEventLoop.processS
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:24:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-cluster-m:7077...
18/07/21 18:24:59 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:108)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to connect to spark-cluster-m/10.142.0.3:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:232)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
... 4 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: spark-cluster-m/10.142.0.3:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:631)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:25:19 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
18/07/21 18:25:19 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
18/07/21 18:25:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46811.
18/07/21 18:25:19 INFO NettyBlockTransferService: Server created on 10.142.0.5:46811
18/07/21 18:25:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/07/21 18:25:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMasterEndpoint: Registering block manager 10.142.0.5:46811 with 366.3 MB RAM, BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO SparkUI: Stopped Spark web UI at http://10.142.0.5:4040
18/07/21 18:25:19 INFO StandaloneSchedulerBackend: Shutting down all executors
18/07/21 18:25:19 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/07/21 18:25:19 WARN StandaloneAppClient$ClientEndpoint: Drop Unregist
I can see that the name resolves (=> 10.142.0.3).
Also, it seems to be the right port: if I use port 7000 instead, I get this error:
18/07/21 18:32:54 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from spark-cluster-m/10.142.0.3:7000 is closed
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Could not connect to spark-cluster-m:7000: java.io.IOException: Connection reset by peer
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7000
But I can't figure out what this means.
You say my configuration is "particular". If there is a better (and simpler) approach, I would be glad to use it.
Here is how I proceeded in my tests:
I created a Google Dataproc cluster with Spark (2.2.1).
I added Cassandra on each node.
At this stage, everything works fine.
Then I need to install FactoMineR, as I'd like to try HMFA. It is said to run with R > 3.0.0, which seems fine, but it depends on nlme, which cannot be installed on R < 3.4.0 (and the version in the Debian jessie backports is 3.3).
So, what can I do?
I must admit that I'm not very enthusiastic about redoing a full Spark/Cassandra cluster install from scratch...
