I am currently working with a single RabbitMQ server which supports around 20000 to 25672 TCP connections. Is it possible to expand the TCP connection limit using a RabbitMQ server cluster? If yes, how do I configure it, and what are the benefits?
With a cluster you can handle many more connections, since the connections can be spread across more machines.
To configure the cluster you can follow the official documentation:
https://www.rabbitmq.com/clustering.html
rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl join_cluster rabbit@rabbit1
Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.
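Once the node has joined, you can verify the cluster membership from any node with rabbitmqctl cluster_status (the exact output format depends on the RabbitMQ version):
rabbit2$ rabbitmqctl cluster_status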
I see some strange logs in my kong container, which internally uses nginx:
2019/08/07 15:54:18 [info] 32#0: *96775 client closed connection while SSL handshaking, client: 10.244.0.1, server: 0.0.0.0:8443
This happens every 5 seconds, as if some sort of diagnostic is running.
In my Kubernetes descriptor I set no readiness or liveness probe, so I can't understand why those calls are made and how I can prevent them from appearing, as they only dirty my logs...
Edit:
It seems it's the LoadBalancer service: I tried deleting it and I get no logs anymore... but how do I get rid of those logs while keeping the service?
This has already been discussed on the Kong forum in the Stopping logs generated by the AWS ELB health check thread, which describes the same behaviour: a load balancer health check every few seconds.
Make Kong listen on a plain HTTP port, open that port up only to the subnet in which the ELB is running (public most probably), and then don't open up port 80 on the ELB. So the ELB will be able to talk on port 80 for the health check, but there won't be an HTTP port available to the external world.
Use L4 proxying (stream_listen) in Kong, open up the port and then make the ELB health-check that port.
Both solutions are reasonable.
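For the second option, a minimal sketch of the relevant kong.conf setting (the port 5555 is only an example; choose one that fits your setup and open it only to the load balancer):
# kong.conf (sketch): expose a raw TCP port the load balancer can health-check
stream_listen = 0.0.0.0:5555
# or set it as an environment variable on the container: KONG_STREAM_LISTEN=0.0.0.0:5555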
Simply check what is connecting to your nginx:
kubectl get po,svc --all-namespaces -owide | grep 10.244.0.1
After that you should know what is happening inside your cluster: maybe a misconfigured pod, or some clients.
I also encountered this error while watching the nginx log.
I use the Azure cloud, and I found that the IP in the log belongs to my Azure server.
I resolved it by changing the Protocol option from [TCP] to [HTTPS] in the Health probes menu on the Azure portal.
I'm trying to figure out a proper way to implement active/passive failover between replicas of service with Docker swarm mode.
The service will hold a valuable in-memory state that cannot be lost, that's why I need multiple replicas of it. The replicas will internally implement Raft so that only the replica which is active ("leader") at a given moment will accept requests from clients.
(If you're unfamiliar with Raft: simply put, it is a distributed consensus algorithm which helps implement an active/passive fault-tolerant cluster of replicas. According to Raft, the active replica - the leader - replicates changes in its data to the passive replicas - the followers. Only the leader accepts requests from clients. If the leader fails, a new leader is elected among the followers.)
As far as I understand, Docker will guarantee that a specified number of replicas are up and running, but it will balance incoming requests among all of the replicas, in the active/active manner.
How can I tell Docker to route requests only to the active replica, but still guarantee that all replicas are up?
One option is routing all requests through an additional NGINX container, and updating its rules each time a new leader is elected. But that will be an additional hop, which I'd like to avoid.
I'm also trying to avoid external/overlapping tools such as consul or kubernetes, in order to keep the solution as simple as possible. (HAProxy is not an option because I need a Linux/Windows portable solution). So currently I'm trying to understand if this can be done with Docker swarm mode alone.
Another approach I came across is returning a failing health check from passive replicas. It does the trick with kubernetes according to this answer, but I'm not sure it will work with Docker. How does the swarm manager interpret failing health checks from task containers?
I'd appreciate any thoughts.
An active/passive replica setup can be achieved with the following deployment mode:
mode: global
With this, the port of the corresponding service is open, i.e. the service is accessible via any of the nodes in the swarm, but the container will be running only on a particular node.
Ref: https://docs.docker.com/compose/compose-file/#mode
Example:
VAULT-HA with Consul Backend docker stack file:
https://raw.githubusercontent.com/gtanand1994/VaultHA/master/docker-compose.yml
Here, the Vault and Nginx containers will be seen on only one node in the swarm, but the Consul containers (having mode: replicated) will be present on all the nodes of the swarm.
But as I said before, the VAULT and NGINX services are available via 'any_node_ip:corresponding_port_number'.
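For reference, a minimal docker-compose sketch of the two deploy modes mentioned above (the image names and replica count are placeholders, not taken from the linked stack file):
version: "3.7"
services:
  vault:
    image: vault:latest      # placeholder image
    deploy:
      mode: global           # the scheduler runs one task on every eligible node
  consul:
    image: consul:latest     # placeholder image
    deploy:
      mode: replicated
      replicas: 3            # a fixed number of tasks, placed by the scheduler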
I'm trying to use NGINX as a reverse proxy, and would like to have a constant number of open connections to the backend (upstream) at all times.
Is this possible with nginx (or maybe haproxy)?
I'm running on Ubuntu, if it makes any difference.
The community edition of Nginx does not provide such functionality. The commercial version (NGINX Plus) does: there is a max_conns parameter for the servers in an upstream block:
upstream my_backend {
server 127.0.0.1:11211 max_conns=32;
server 10.0.0.2:11211 max_conns=32;
}
The max_conns parameter is documented under the server directive in the ngx_http_upstream_module reference.
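For completeness, a minimal sketch of how such an upstream would be referenced from a server block (my_backend matches the name in the example above; the listen port is arbitrary):
server {
    listen 80;
    location / {
        # requests are balanced across the upstream servers,
        # each capped at max_conns concurrent connections
        proxy_pass http://my_backend;
    }
}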
Something like that can be done easily with haproxy. The end result is that there are no more than N concurrent connections to each backend server, and open connections are shared between requests coming from different clients.
backend app
http-reuse safe
server server1 127.0.0.1:8080 maxconn 32
server server2 127.0.0.2:8080 maxconn 32
The example shows two servers; haproxy will not open more than 32 connections to each server, and each connection can be shared between several clients whenever that can be done safely.
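As a fuller sketch, here is the backend above dropped into a minimal haproxy.cfg (the frontend port, timeouts and backend addresses are placeholders; tune maxconn to your needs):
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_main
    bind *:80
    default_backend app

backend app
    http-reuse safe
    server server1 127.0.0.1:8080 maxconn 32
    server server2 127.0.0.2:8080 maxconn 32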
I'm trying to set up some new hosts in Munin for monitoring. For some reason it ain't happening!
Here's what I've tried so far.
On the munin server, which is already monitoring several other hosts, I've added the host I want in /etc/munin/munin.conf
[db1]
address 10.10.10.25 # <- obscured the real IP address
use_node_name yes
And on the db1 host I have this set in /etc/munin/munin-node.conf
host_name db1.example.com
allow ^127\.0\.0\.1$
allow ^10\.10\.10\.26$
allow ^::1$
port 4949
And I made sure to restart the services on both machines.
From the monitoring host I can telnet to the new server I want to monitor on the munin port:
[root@monitor3:~] # telnet db1.example.com 4949
Trying 10.10.10.26...
Connected to db1.example.com.
Escape character is '^]'.
# munin node at db1.example.com
Wait a few minutes.. and nothing! The new server won't appear in the munin dashboard on the munin monitoring host.
In /var/log/munin/munin-update.log on the db1 host (the one I'm trying to monitor) I find this:
2015/11/30 03:20:02 [INFO] starting work in 14199 for db1/10.10.10.26:4949.
2015/11/30 03:20:02 [FATAL] Socket read from db1 failed. Terminating process. at /usr/share/perl5/vendor_perl/Munin/Master/UpdateWorker.pm line 254.
2015/11/30 03:20:02 [ERROR] Munin::Master::UpdateWorker<db1;db1> died with '[FATAL] Socket read from db1 failed. Terminating process. at /usr/share/perl5/vendor_perl/Munin/Master/UpdateWorker.pm line 254.
What could be going on here? And how can I solve this ?
Since you have already verified that your network connection is OK, as a first step of investigation I would simplify the munin-node.conf. Currently you have:
host_name db1.example.com
allow ^127\.0\.0\.1$
allow ^10\.10\.10\.26$
allow ^::1$
port 4949
From these I would remove:
host_name (it is probably redundant.)
The IPv6 loopback address. (I don't think you need it, but you can add it back later if you do need it)
The IPv4 loopback address. (same as above)
If it is still not working, you can completely rule out any issue with the allow config by replacing the direct IPs with:
cidr_allow 10.10.10.0/24
This would allow connections from a full range of IPs, in case your db1 host appears to be connecting from a different IP.
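Put together, a simplified munin-node.conf along these lines might look like the sketch below (the subnet is a placeholder; it has to cover the address your Munin master connects from). Remember to restart munin-node afterwards.
# /etc/munin/munin-node.conf (simplified sketch)
cidr_allow 10.10.10.0/24
port 4949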
I want to set up Monit on a server which is going to be a centralized server to monitor processes running on remote servers. I checked many docs related to setting up Monit, but could not find how to set it up for remote server processes. For example, the centralized Monit server should monitor nginx running on server A, mongod running on server B, and so on. Any suggestion how to do this?
According to the documentation, Monit is able to test connections remotely, using TCP or UDP. What you can do is provide a small status file that gets refreshed for each technology you intend to monitor, and let Monit hit that status file through HTTP. It can be used as follows:
check host nginxserver with address www.nginxserver.com
if failed port 80 protocol http
and request "/some_file"
then alert
Since you are testing a web server, that can be easily accomplished with the above. As a note, below is the part of the Monit documentation about connection testing:
CONNECTION TESTING
Monit is able to perform connection testing via networked ports or via Unix sockets. A connection test may only be used within a check process or within a check host service entry in the Monit control file.
If a service listens on one or more sockets, Monit can connect to the port (using either tcp or udp) and verify that the service will accept a connection and that it is possible to write and read from the socket. If a connection is not accepted or if there is a problem with socket i/o, Monit will assume that something is wrong and execute a specified action. If Monit is compiled with openssl, then ssl based network services can also be tested.
The full syntax for the statement used for connection testing is as follows (keywords are in capital and optional statements in [brackets]),
IF FAILED [host] port [type] [protocol|{send/expect}+] [timeout] [retry] [[<X>] CYCLES] THEN action [ELSE IF SUCCEEDED [[<X>] CYCLES] THEN action]
or for Unix sockets,
IF FAILED [unixsocket] [type] [protocol|{send/expect}+] [timeout] [retry] [[<X>] CYCLES] THEN action [ELSE IF SUCCEEDED [[<X>] CYCLES] THEN action]
host:HOST hostname. Optionally specify the host to connect to. If the host is not given then localhost is assumed if this test is used inside a process entry. If this test was used inside a remote host entry then the entry's remote host is assumed. Although host is intended for testing name based virtual host in a HTTP server running on local or remote host, it does allow the connection statement to be used to test a server running on another machine. This may be useful; for instance if you use Apache httpd as a front-end and an application-server as the back-end running on another machine, this statement may be used to test that the back-end server is running and if not raise an alert.
port:PORT number. The port number to connect to.
unixsocket:UNIXSOCKET PATH. Specifies the path to a Unix socket. Servers based on Unix sockets always run on the local machine and do not use a port.
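Applying the same idea to the mongod example from the question, here is a hedged sketch of a plain TCP port check (the hostname and address are placeholders for your server B):
check host mongodb-server with address 10.0.0.12
    if failed port 27017 type tcp then alert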