I cannot get a Play app within Boxfuse to connect to a MariaDB instance on the same computer (development PC).
vb-3144982e => Caused by: org.mariadb.jdbc.internal.common.QueryException: Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
vb-3144982e => at org.mariadb.jdbc.internal.mysql.MySQLProtocol.connectWithoutProxy(MySQLProtocol.java:626)
vb-3144982e => at org.mariadb.jdbc.internal.common.Utils.retrieveProxy(Utils.java:541)
vb-3144982e => at org.mariadb.jdbc.Driver.connect(Driver.java:95)
vb-3144982e => ... 12 more
vb-3144982e => Caused by: java.net.ConnectException: Connection refused
What am I missing to get the "contained" app to connect to a "host" port?
To make it easy to access services running on your physical machine (outside of your Boxfuse VirtualBox instance), Boxfuse exposes an environment variable named BOXFUSE_HOST_IP to each of its VirtualBox instances. This environment variable contains the IP address of your physical machine (example: 172.27.3.61), which you can use to construct URLs for reaching your services.
More info: https://cloudcaptain.sh/docs/virtualbox#BOXFUSE_HOST_IP
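For example, a minimal sketch of building the MariaDB JDBC URL from that variable instead of "localhost" (the database name and credentials are placeholders, not from the original setup):

// Read the host IP injected by Boxfuse and use it instead of "localhost".
String hostIp = System.getenv("BOXFUSE_HOST_IP");            // e.g. 172.27.3.61
String jdbcUrl = "jdbc:mariadb://" + hostIp + ":3306/mydb";   // "mydb" is a placeholder
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection(jdbcUrl, "dbuser", "dbpass")) {
    System.out.println("Connected to MariaDB running on the host machine");
} catch (java.sql.SQLException e) {
    e.printStackTrace();
}

Note that MariaDB on the host must also be listening on an interface reachable from the VM (not only 127.0.0.1), and the database user must be allowed to connect from that address, otherwise the connection will still be refused.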
OS: Windows 10 and Ubuntu 18.04
Corda: 4.4
I wanted to learn CordaRPCOps so I started by using the template’s Cordform deployNodes task that provides me with three running nodes.
First I used the following code running locally to connect to PartyA’s Corda node.
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
CordaRPCClient client = new CordaRPCClient(nodeAddress);
CordaRPCConnection connection = client.start("user1", "test");
CordaRPCOps cordaRPCOps = connection.getProxy();
This worked great.
Then I tried connecting from a different PC on the same network with the following change:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
This failed with the following error:
net.corda.client.rpc.RPCException: Cannot connect to server(s). Tried with all available servers.
Assuming this was network-related, I went back to the local PC and ran the same code:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
This also failed. So I decided to try the PC name instead of the IP address. This failed both locally and on the other PC.
If the rpcSettings in the node.conf file use “localhost”:
rpcSettings {
    address="localhost:10006"
    adminAddress="localhost:10046"
}
…you cannot connect to the node using anything but “localhost” or “127.0.0.1”, for example:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("localhost", 10006);
This also means you cannot connect to the node from across the network.
If you replace “localhost” with the IP address or the computer name:
rpcSettings {
    address="192.168.1.149:10006"
    adminAddress="localhost:10046"
}
… then you can reference the node by either IP or computer name locally, or from another PC on the network:
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("192.168.1.149", 10006);
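Equivalently, the computer name can be used in place of the IP, as long as it resolves to that address (the machine name below is a made-up example):

// "my-corda-pc" is a hypothetical computer name that resolves to 192.168.1.149.
NetworkHostAndPort nodeAddress = new NetworkHostAndPort("my-corda-pc", 10006);
CordaRPCClient client = new CordaRPCClient(nodeAddress);
CordaRPCConnection connection = client.start("user1", "test");
CordaRPCOps cordaRPCOps = connection.getProxy();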
You’ll notice the change on the opening screen of the Node Shell when the nodes start and list their respective connection information.
--- Corda Open Source 4.4 (21e8c4f) -------------------------------------------------------------
Logs can be found in : F:\corda\Java\cordapp-template-java\build\nodes\PartyA\logs
! ATTENTION: This node is running in development mode! This is not safe for production deployment.
Jolokia: Agent started with URL http://127.0.0.1:7006/jolokia/
Advertised P2P messaging addresses : localhost:10005
RPC connection address : 192.168.1.149:10006
RPC admin connection address : localhost:10046
Background: I've got two machines with identical hostnames. I need to set up a local Spark cluster for testing; setting up a master and a worker works fine, but trying to run an application with the driver causes problems: Netty doesn't seem to pick the correct host (regardless of what I put in there, it just picks the first host).
Identical hostname:
$ dig +short corehost
192.168.0.100
192.168.0.101
Spark config (used by master and the local worker):
export SPARK_LOCAL_DIRS=/some/dir
export SPARK_LOCAL_IP=corehost    # I tried various values, e.g. 192.168.0.x, for
export SPARK_MASTER_IP=corehost   # local, master and the driver
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_DIR=/some/dir
Spark starts up and I can see the worker in the web UI.
When I run the Spark "job" below:
val conf = new SparkConf().setAppName("AaA")
// tried 192.168.0.x and localhost
.setMaster("spark://corehost:7077")
val sc = new SparkContext(conf)
I get this exception:
15/04/02 12:34:04 INFO SparkContext: Running Spark version 1.3.0
15/04/02 12:34:04 WARN Utils: Your hostname, corehost resolves to a loopback address: 127.0.0.1; using 192.168.0.100 instead (on interface en1)
15/04/02 12:34:04 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/04/02 12:34:05 ERROR NettyTransport: failed to bind to corehost.home/192.168.0.101:0, shutting down Netty transport
...
Exception in thread "main" java.net.BindException: Failed to bind to: corehost.home/192.168.0.101:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
Process finished with exit code 1
Not sure how to proceed... it's a whole jungle of IP addresses.
Not sure if this is a Netty issue either.
My experience with the identical problem is that it revolves around how things are set up locally. Try being more explicit in your Spark driver code and add the local IP (spark.local.ip) and the driver host IP (spark.driver.host) to the config:
val conf = new SparkConf().setAppName("AaA")
.setMaster("spark://localhost:7077")
.set("spark.local.ip","192.168.1.100")
.set("spark.driver.host","192.168.1.100")
This should tell Netty which of the two identical hosts to use.
I am running an ELK setup on a single machine. Everything was working fine until I switched off my internet connection.
My logstash console shows this error when I switch off Internet connection:
log4j, [2014-11-27T10:31:57.480] WARN: org.elasticsearch.transport.netty: [logstash-HP-Pro] exception caught on transport layer [[id: 0x7a124750]], closing connection
java.net.SocketException: Network is unreachable
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
at org.elasticsearch.common.netty.channel.Channels.connect(Channels.java:634)
at org.elasticsearch.common.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:705)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)
at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
EDIT
Logstash output config:
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
So it seems it's unable to connect to the ES server.
Is an internet connection always required? I am new to this setup.
I was trying to set up an ODBC connection for Hive. I followed the steps below, but it didn't work.
User DSN --> Add --> Hortonworks Hive ODBC Driver --> and I entered the details below:
Host: IP of the primary name node of the cluster
Port: 10001
Server Type: Hive Server 2
Authentication Mechanism: User Name --> hadoop
When I test the connection, it throws the following error:
Error:
Driver Version: V1.2.13.1018
Running connectivity tests...
Attempting connection
Failed to establish connection
SQLSTATE: HY000[Hortonworks][Hardy] (34) Error from Hive: connect() failed: errno = 10061.
TESTS COMPLETED WITH ERROR.
Could you please tell me if the port I am using is correct? If not, which port should I try? Port 10000 doesn't work either.
I am using HDP 2.0 on Windows Server 2012 R2 (single-node cluster). I installed the Hive ODBC Driver from the Microsoft site and entered my host name, port 10001, and user "hive" (the Hive user name I chose when I installed HDP 2.0), and I am able to connect successfully.
First of all, check on your virtual machine that port 10000 has been added, because it is not added by default.
If the port is there, check whether the Hive server is running on your virtual machine.
I hope this helps.
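As a quick way to confirm whether anything is listening on that port at all (before digging into the ODBC driver itself), a small reachability check like the sketch below can help; the host value is a placeholder for whatever you entered in the DSN:

// Simple TCP reachability check for the HiveServer2 port configured in the DSN.
// errno 10061 ("connection refused") usually means nothing is listening there.
String host = "192.168.56.101";   // placeholder: the host from your DSN
int port = 10000;                 // or 10001, whichever HiveServer2 is bound to
try (java.net.Socket socket = new java.net.Socket()) {
    socket.connect(new java.net.InetSocketAddress(host, port), 3000);
    System.out.println("Port " + port + " is open on " + host);
} catch (java.io.IOException e) {
    System.out.println("Cannot reach " + host + ":" + port + " -> " + e.getMessage());
}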
Under Authentication Mechanism, change it to "User Name" only.
I have a Vagrant machine based on VirtualBox that has some problems (see Vagrant crashes depending on physical network). Now I tried running it on another piece of hardware (with OS X Mavericks), and got the following error message:
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: VBoxManage: error: Unable to create a host network interface
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component Host,
interface IHost, callee nsISupports
Context: "CreateHostOnlyNetworkInterface (hif.asOutParam(),
progress.asOutParam())" at line 64 of file VBoxManageHostonly.cpp
What does this mean?
To reproduce the error, I run:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
… and then it crashes. Any ideas?
Oh, by the way: It's Vagrant 1.3.5 and VirtualBox 4.1.18.
sudo /Library/StartupItems/VirtualBox/VirtualBox restart
worked for me, see https://coderwall.com/p/ydma0q
The popular answer seems to be modprobe vboxnetadp (for Linux) or /Library/StartupItems/VirtualBox/VirtualBox restart (for Mac).
However, the fix for me was to add myself to the vboxusers group and log in again.