Kubernetes: unable to connect to a remote master node, "connection refused" - networking

Hello, I am facing a kubeadm join problem on a remote server.
I want to create a multi-server, multi-node Kubernetes cluster.
I created a Vagrantfile to create a master node and N workers.
It works on a single server.
The master VM uses a bridged adapter so that it is accessible to the other VMs on the network.
I chose Calico as the network provider.
For the master node, this is what I've done:
Using Ansible:
Initialize kubeadm.
Install the network provider.
Create the join command.
For the worker node:
I execute the join command to join the running master, as sketched below.
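For reference, a rough shell equivalent of those Ansible steps (a sketch only; the pod CIDR and the Calico manifest URL are assumptions that vary by version):

# On the master: initialize the control plane, advertising the bridged IP
kubeadm init --apiserver-advertise-address=192.168.0.27 --pod-network-cidr=10.244.0.0/16
# Install the Calico network provider (manifest URL differs across Calico versions)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Print a fresh join command for the workers
kubeadm token create --print-join-command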
I successfully created the cluster on a single hardware server.
Now I am trying to create worker nodes on another server on the same LAN; I can ping the master successfully.
To join the master node I use the generated command:
kubeadm join 192.168.0.27:6443 --token ecqb8f.jffj0hzau45b4ro2 \
    --ignore-preflight-errors all \
    --discovery-token-ca-cert-hash sha256:94a0144fe419cfb0cb70b868cd43pbd7a7bf45432b3e586713b995b111bf134b
But it showed this error:
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.0.27:6443: connect: connection refused
Is there any specific network configuration needed to join the remote master node?
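For reference, a minimal set of checks that usually narrows down a "connection refused" on the API server port (a sketch; IPs follow the example above):

# On the master: is the API server actually listening on 6443?
sudo ss -tlnp | grep 6443
# From the worker on the other server: is 6443 reachable at all?
nc -vz 192.168.0.27 6443
# On the master: look for firewall rules affecting 6443
sudo iptables -L -n | grep 6443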
Another issue I am facing: I cannot assign a public IP to the VM using the bridged adapter, so I removed the static IP to let the DHCP server choose one for it.
Thank you.

Related

Not able to reach from one instance to the other

I have 2 AWS EC2 instances and am facing a reachability issue from one instance to the other. I have checked the SG and IGW, and they look fine. I have also added the subnet in /etc/hosts.allow to allow the hosts.
Can someone please suggest how to debug this reachability issue?
I'm trying with
telnet <ip of other ec2 instance> <port>
from one EC2 instance, to check whether one instance can connect to the open port where the service is running on the other instance.
Would capturing a packet trace on the source and destination help? If yes, what would the command for it be?
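A packet capture on both ends can show whether the SYN leaves the source and reaches the destination; a tcpdump sketch (the interface name eth0 and the placeholders mirror the question):

# On the destination instance: watch for the incoming connection attempt
sudo tcpdump -ni eth0 'tcp port <port>'
# On the source instance, in a second terminal: watch the outgoing attempt, then run the telnet test
sudo tcpdump -ni eth0 'host <ip of other ec2 instance> and tcp port <port>'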

Accessing Kafka broker from outside my LAN

I start up a plain ZooKeeper/Kafka broker on one machine (let's call it Machine A), using the default commands below as described in the Kafka documentation:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
These start a broker at localhost:9092 on Machine A.
I then go to another computer (let's call it Machine B), which is on the same network, and start the default console consumer provided by Kafka:
bin/kafka-console-consumer.sh --bootstrap-server IP_ADDRESS_HERE:9092 --topic test --from-beginning
where IP_ADDRESS_HERE is the IP address of Machine A (the machine hosting the broker) on the local network (e.g., 192.168.1.10).
Everything works fine. Then I try to access the broker from a machine outside of my local network (let's call it Machine C). I go to my router configuration and set up port forwarding for the machine that hosts the broker (Machine A). For example, I do the following:
192.168.1.10:9092 --> 9094
Meaning that I forward the internal port 9092 of the 192.168.1.10 device (Machine A) to port 9094 of my router. I then go to a service like YouGetSignal and check whether port 9094 of my public IP address (e.g., 97.190.92.128) is open, and I get a message that the port is indeed open.
When I try to consume from the broker on Machine A from Machine C using the following command:
bin/kafka-console-consumer.sh --bootstrap-server PUBLIC_IP_ADDRESS_HERE:9094 --topic test --from-beginning
where PUBLIC_IP_ADDRESS_HERE is the public IP address of the network that Machine A is on (e.g., 97.190.92.128). However, I get an error that it cannot connect to 192.168.1.10:9092 - broker might not be available (notice that the internal IP:port is returned in the error, which means my router forwards the request correctly).
What am I doing wrong?
You need to update advertised.listeners in your server.properties to give a host/IP and port combination that your client can resolve and connect to.
As you've observed, the broker gives out 192.168.1.10:9092 in response to a client connecting to it, which will work fine for any machine on the same network (including Machine B).
For Machine C, the broker needs to advertise an address that Machine C can actually reach, which, if I have followed correctly, is PUBLIC_IP_ADDRESS_HERE:9094.
If you need to connect from both your LAN and WAN, you'll need two listeners: one for the LAN and one for the WAN. In server.properties put:
listeners=LISTENER_LAN://0.0.0.0:9092,LISTENER_WAN://0.0.0.0:9094
advertised.listeners=LISTENER_LAN://192.168.1.10:9092,LISTENER_WAN://PUBLIC_IP_ADDRESS_HERE:9094
listener.security.protocol.map=LISTENER_LAN:PLAINTEXT,LISTENER_WAN:PLAINTEXT
inter.broker.listener.name=LISTENER_LAN
Note that you need to change your port forwarding so that the endpoint on your broker is the port defined for LISTENER_WAN (9094). If a WAN connection tries to connect to the broker on 9092, it will be given the LISTENER_LAN details (which is what's happening at the moment, and won't work).
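To verify what each listener now advertises, kcat (formerly kafkacat) can dump the cluster metadata; a sketch, assuming kcat is installed:

# Metadata as seen from the LAN; should advertise 192.168.1.10:9092
kcat -b 192.168.1.10:9092 -L
# Metadata as seen from the WAN; should advertise PUBLIC_IP_ADDRESS_HERE:9094
kcat -b PUBLIC_IP_ADDRESS_HERE:9094 -L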
Ref: https://rmoff.net/2018/08/02/kafka-listeners-explained/

What is an overlay network and how does DNS resolution work?

I cannot connect to an external mongodb server from my docker swarm cluster.
As I understand it, this is because the cluster uses the overlay network driver. Am I right?
If not, how does the docker overlay driver work, and how can I connect to an external mongodb server from the cluster?
Q. How does the docker overlay driver work?
I would recommend this good reference for understanding docker swarm network overlays and, more globally, Docker's architecture.
It states that:
Docker uses embedded DNS to provide service discovery for containers running on a single Docker Engine and tasks running in a Docker Swarm. Docker Engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks.
Each Docker container (or task in Swarm mode) has a DNS resolver that forwards DNS queries to Docker Engine, which acts as a DNS server.
So, in multi-host docker swarm mode, consider this example setup:
In this example there is a service with two containers called myservice. A second service (client) exists on the same network. The client executes two curl operations, for docker.com and myservice.
These are the resulting actions:
DNS queries are initiated by client for docker.com and myservice.
The container's built-in resolver intercepts the DNS queries on 127.0.0.11:53 and sends them to Docker Engine's DNS server.
myservice resolves to the Virtual IP (VIP) of that service which is internally load balanced to the individual task IP addresses. Container names resolve as well, albeit directly to their IP addresses.
docker.com does not exist as a service name in the mynet network and so the request is forwarded to the configured default DNS server.
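You can see this embedded resolver from inside any container on a user-defined network; a quick sketch (network and image names are arbitrary):

docker network create mynet
docker run --rm --network mynet alpine cat /etc/resolv.conf
# prints: nameserver 127.0.0.11   (Docker's embedded DNS server)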
Back to your question:
How can I connect to an external mongodb server from the cluster?
For your external mongodb (let's say you have a DNS name for it, mongodb.mydomain.com), you are in the same situation as the client in the above architecture wanting to connect to docker.com, except that you certainly don't want to expose mongodb.mydomain.com to the entire web, so you may have declared it in your internal cluster DNS server.
Then, how to tell docker engine to use this internal DNS server to resolve mongodb.mydomain.com?
You have to indicate, when creating your docker service, that you want to use this internal DNS server, like so:
docker service create \
--name myservice \
--network my-overlay-network \
--dns=10.0.0.2 \
myservice:latest
The important thing here is --dns=10.0.0.2. This tells the Docker engine to use the DNS server at 10.0.0.2:53 as the default when it cannot resolve a DNS name to a service VIP.
Finally, when you say:
I cannot connect to external mongodb server from my docker swarm cluster. As I understand this is because of cluster uses overlay network driver. Am I right?
I would say no, as there is a built-in mechanism in the Docker engine to forward unknown DNS names coming from the overlay network to the DNS server you want.
Hope this helps!

Connect docker container to database in host's local network

I am running a Docker container with a Node server that needs to connect to a PostgreSQL database on the local network, at a private IP address. My current configuration:
The container exposes port 3000 to connect to the Node server.
But when I run it I get a connection refused error:
Unhandled rejection SequelizeBaseError: connect ECONNREFUSED 10.9.0.0:5432
I know that the DB is accepting connections from other IPs on the network, since this is not the only app using this DB server.
What is the docker way to achieve this?
I am running:
Docker version 1.13.1, build 092cba3
OS: Win10, but this also needs to work on macOS
Thank you!
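A quick way to tell whether this is a Docker networking problem or a wrong address is to probe the DB from inside the running container; a sketch (names are placeholders, and nc may need to be installed in the image):

# Open a shell inside the running container, then probe the DB host/port
docker exec -it <container name> sh
nc -vz <db host ip> 5432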

Open Source Glassfish 3.1.1 and clustering

I want to test running a web application on a cluster to get information about performance. I've created a cluster with two nodes. I've also deployed the application on the 2 nodes, and in the administration panel GlassFish displays IPs for the nodes, from which I can access the application.
Cluster info page gives information about:
Multicast Port: 2048
Multicast Address: 228.9.3.1
but if I type the given address and port, the resources aren't found. I changed these values to my host IP and port, but the problem is the same.
Am I doing something wrong? Should I do something more to set up the cluster IP and port?
In order to configure your cluster's ports you should check the instance properties.
Here's the path:
Clusters -> <your cluster name> -> Instances -> <your instance name> -> Properties
Then change the values you want.
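If you prefer the command line, the asadmin tool that ships with GlassFish should expose the same information; a sketch (the instance name is a placeholder):

# List the cluster's instances and their state
asadmin list-instances --long
# List the per-instance system properties, where the port assignments live
asadmin list-system-properties <your instance name>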
