Akka remoting binds to hostname instead of bind-hostname - networking

Context
I am trying to run an Akka application on a node and make it work with other nodes using Akka remoting.
My node has an IP address, 10.254.55.10, and there is an external IP, 10.10.10.44, redirecting to the former. This external IP is the one on which I want other nodes to contact me.
Extract from my akka app config:
akka {
  remote {
    netty.tcp {
      hostname = "10.10.10.44"
      port = 2551
      bind-hostname = "10.254.55.10"
      bind-port = 2551
    }
  }
}
I know everything works fine on the network side, because when I listen on my IP with netcat, I can send messages to myself via telnet using the external IP.
In other words, when running these two commands in separate shells:
$ nc -l 10.254.55.10 2551
$ telnet 10.10.10.44 2551
I'm able to communicate with myself, proving the network redirection works fine between the two IPs.
Problem
When launching the application, it crashes with a bind error:
INFO Remoting - Starting remoting
ERROR a.r.t.n.NettyTransport - failed to bind to /10.10.10.44:2551, shutting down Netty transport
Exception in thread "main" java.lang.ExceptionInInitializerError
[...]
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /10.10.10.44:2551
[...]
Caused by: java.net.BindException: Cannot assign requested address
[...]
INFO a.r.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
I assume it crashes because it tries to bind to an IP that is not present locally (i.e. 10.10.10.44). But what I don't understand in the first place is why Akka is even trying to bind to 10.10.10.44, since it is not my bind-hostname (which is 10.254.55.10). This doc page seemed pretty clear to me on that matter, yet it doesn't work...

The project I was working with was based on Akka 2.3.4, in which the bind-hostname and bind-port configuration keys do not exist; unknown keys are simply ignored, so remoting fell back to binding to hostname. I upgraded to the latest version at the time, Akka 2.4.1, and it solved the problem.
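For reference, a minimal sketch of the corresponding dependency bump, assuming an sbt build (the module list depends on your project):

// build.sbt: move akka-remote to a version that understands
// bind-hostname / bind-port (introduced in Akka 2.4.0)
libraryDependencies += "com.typesafe.akka" %% "akka-remote" % "2.4.1"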

Related

nebulagraph-storaged: Exited - 9780 Address already in use

The nebula-storage service failed to start. The storage logs show that the port is occupied, but I checked and port 9780 is not in use. Also, the configuration files are original and unmodified.
There could be several reasons why the nebula-storage service is failing to start:
- Another process is already using port 9780. You can use the lsof -i tcp:9780 command to check which process is using the port.
- There is a problem with the nebula-storage service itself. You can try restarting the service.
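For example, a quick check along those lines (the service script path assumes a default installation, so adjust it to your deployment):

# See whether anything is actually listening on the storaged port
$ sudo lsof -i tcp:9780

# If the port is free, try restarting just the storage daemon
$ sudo /usr/local/nebula/scripts/nebula.service restart storaged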

Unable to access Kafka Broker from separate LAN machine

EDIT: OBE - figured it out. Answer provided below for anyone else who has this issue.
I am working in an offline environment and am unable to connect to a Kafka broker on machine 1 from a separate machine, machine 2, over a LAN connection through a single switch.
Machine 1 (where Kafka and ZK are running):
server.properties
listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
advertised.listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
zookeeper.connect=localhost:2181
I am starting Kafka/ZK from the config files located in kafka_2.12-2.8.0/config and then running the appropriate .bat from kafka_2.12-2.8.0/bin/windows.
On machine 2 I am able to ping <ethernet_IPv4_m1> and get results; however, I fail to get a TCP connection if I run Test-NetConnection <ethernet_IPv4_m1> -p 9092 while Kafka is running. In Python 3.8.11, using KafkaConsumer from kafka-python, I receive the NoBrokersAvailable error when using <ethernet_IPv4_m1>:9092 as the bootstrap_server. Additionally, if I run a python:3.8.12-buster docker container with a '/bin/bash' entrypoint and follow along with the kafka-listener walkthrough, I am unable to connect to the broker. I'm in the exact situation as Scenario 1 provided in the link, but the walkthrough assumes you can connect to the broker.
I have also tried opening port 9092 in Windows Defender for in/outbound traffic (on both machines) and still have no luck. Neither Kafka nor networking are my strong suits, and every tutorial/answer I find refers to changing the listeners and advertised.listeners in the Kafka server.properties file - I think I did this correctly, but am unsure. This is everything I have tried so far; any recommendations would be greatly appreciated. Thank you.
For M1, the private network was the active network.
Go to Control Panel -> Firewall & network protection -> Advanced settings (must be admin) -> set up inbound/outbound rules for port 9092 for the active network.
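If you prefer the command line, a rough equivalent of those GUI steps, run from an elevated PowerShell prompt (the rule names are arbitrary):

# Allow Kafka traffic on the active (private) network profile
New-NetFirewallRule -DisplayName "Kafka 9092 in" -Direction Inbound -Protocol TCP -LocalPort 9092 -Action Allow -Profile Private
New-NetFirewallRule -DisplayName "Kafka 9092 out" -Direction Outbound -Protocol TCP -LocalPort 9092 -Action Allow -Profile Private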

corda CENM networkmap server starts failing to connect to database after a few weeks of running

We operate CENM (1.2, deployed with the Helm template on a k8s cluster) to run our own private network. After keeping the network map server running for a few weeks, launching new nodes started failing.
With further investigation, it appeared that a request timeout on http://nmap:10000/network-map was causing the problem.
In the nmap server's log, we found the following output when accessing the above URL with curl:
[NMServer] - Error while handling socket client message com.r3.enm.servicesapi.networkmap.handlers.LatestUnsignedNetworkParametersRetrievalMessage#760c53ea: HikariPool-1 - Connection is not available, request timed out after 30000ms.
netstat shows at least 3 established connections to the database from the container in which the network map server runs, and I can also connect to the database directly using the CLI.
So I don't think it is either a saturated database or a network configuration problem.
Does anyone have an idea why this happens? I think a restart would probably solve the problem, but I want to know the root cause...
regards,
Please test the following options.
Since it is the HikariCP (connection pool) component that is throwing the error, it would be worth seeing if increasing the pool size in the network map configuration helps (see below).
Corda uses Hikari Pool for creating the connection pool. To configure the connection pool any custom properties can be set in the dataSourceProperties section.
dataSourceProperties = {
  dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
  ...
  maximumPoolSize = 10
  connectionTimeout = 50000
}
Has a health check been conducted to verify there are sufficient resources on that Postgres database, i.e. basic diagnostic checks?
Another option to get more information logged from the network map service is to run with TRACE logging:
From https://docs.corda.net/docs/cenm/1.2/troubleshooting-common-issues.html
Enabling debug/trace logging
Each service can be configured to run with a deeper log level via command line flags passed at startup:
java -DdefaultLogLevel=TRACE -DconsoleLogLevel=TRACE -jar <enm-service-jar>.jar --config-file <config-file>

How to communicate with Kafka server running inside a docker

I am using the Apache KafkaConsumer in my Scala app to talk to a Kafka server, where the Kafka and Zookeeper services are running in a docker container on my VM (the Scala app is also running on this VM). I have set the KafkaConsumer's "bootstrap.servers" property to 127.0.0.1:9092.
The KafkaConsumer does log "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code sets the coordinator values based on the response it receives, which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}}; that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve the address e7059f0f6580.
The address e7059f0f6580 is the container ID of that docker container.
I have tested using telnet that my VM is not detecting this as a hostname.
What setting do I need to change such that the Kafka on my docker returns localhost/127.0.0.1 as the host in its response ? Or is there something else that I am missing / doing incorrectly ?
Update
advertised.host.name is deprecated, and --override should be avoided.
Add/edit advertised.listeners to be the format of
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that PORT is listed in the listeners property as well.
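For the Docker-on-a-VM case described above, a minimal sketch of server.properties (binding to 0.0.0.0 and advertising 127.0.0.1 are assumptions that only hold while the client and the container share the same host):

# Accept connections on all interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# Tell clients to come back via an address the VM can resolve
advertised.listeners=PLAINTEXT://127.0.0.1:9092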
After investigating this problem for hours on end, I found that there is a way to set the hostname while starting up the Kafka server, as follows:
kafka-server-start.sh --override advertised.host.name=xxx (in my case: localhost)

Can "Monit" monitor the processes running on remote servers?

I want to set up Monit on a server which is going to be a centralized server to monitor processes running on remote servers. I checked many docs related to setting up Monit but could not find how to set it up for remote server processes. For example, a centralized Monit server should monitor nginx running on server A, mongod running on server B, and so on. Any suggestion on how to do this?
According to the documentation, Monit is able to test connections remotely using TCP or UDP. What you can do is provide a small status file that gets refreshed for each technology you intend to monitor, and let Monit hit that status file through HTTP. It can be used as follows:
check host nginxserver with address www.nginxserver.com
  if failed port 80 protocol http
    and request "/some_file"
  then alert
Since you are testing a web server, that can be easily accomplished with the above. As a note, below is the part of the Monit documentation about connection testing:
CONNECTION TESTING

Monit is able to perform connection testing via networked ports or via Unix sockets. A connection test may only be used within a check process or within a check host service entry in the Monit control file.

If a service listens on one or more sockets, Monit can connect to the port (using either tcp or udp) and verify that the service will accept a connection and that it is possible to write and read from the socket. If a connection is not accepted or if there is a problem with socket i/o, Monit will assume that something is wrong and execute a specified action. If Monit is compiled with openssl, then ssl based network services can also be tested.
The full syntax for the statement used for connection testing is as follows (keywords are in capital and optional statements in [brackets]):

IF FAILED [host] port [type] [protocol | {send/expect}+] [timeout] [retry] [[<n>] CYCLES] THEN action [ELSE IF SUCCEEDED [[<n>] CYCLES] THEN action]

or for Unix sockets:

IF FAILED [unixsocket] [type] [protocol | {send/expect}+] [timeout] [retry] [[<n>] CYCLES] THEN action [ELSE IF SUCCEEDED [[<n>] CYCLES] THEN action]
host:HOST hostname. Optionally specify the host to connect to. If the host is not given then localhost is assumed if this test is used inside a process entry. If this test was used inside a remote host entry then the entry's remote host is assumed. Although host is intended for testing name based virtual host in a HTTP server running on local or remote host, it does allow the connection statement to be used to test a server running on another machine. This may be useful; for instance, if you use Apache httpd as a front-end and an application-server as the back-end running on another machine, this statement may be used to test that the back-end server is running and if not raise an alert.

port:PORT number. The port number to connect to.

unixsocket:UNIXSOCKET PATH. Specifies the path to a Unix socket. Servers based on Unix sockets always run on the local machine and do not use a port.
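Applied to the mongod example from the question, a minimal sketch for the central Monit server (the address is made up, and 27017 is simply MongoDB's default port):

check host mongodb-server with address 10.0.0.2
  if failed port 27017 type tcp then alert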
