I want to set up a load balancer for syslog-ng messages: several boxes send syslog over TCP port 514 to the frontend interface of an HAProxy box (192.168.0.20), and there is one Graylog server the messages are passed on to (10.0.0.2).
The simplest possible config, below, doesn't work:
defaults
    mode tcp

frontend main
    bind 192.168.0.20:514
    use_backend graylog

backend graylog
    server graylog1 10.0.0.2:514
tcpdump shows that HAProxy is sending an RST in response to incoming connections on 514. Shouldn't I see HAProxy listening on 514 with netstat?
An RST in response to a SYN means the port is not open for connections. Use the netstat utility to determine whether the port is open. An RST can also be sent when one endpoint wants to close an established connection for good.
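Yes: if haproxy bound the port, you should see it in LISTEN state. A sketch in Python that reads /proc/net/tcp (Linux only; `netstat -ltn` or `ss -ltn` shows the same information):

```python
def listening_ports():
    """Ports with a local IPv4 TCP listener (Linux), the same view `netstat -ltn` gives."""
    ports = set()
    with open("/proc/net/tcp") as f:   # Linux-only: one line per IPv4 TCP socket
        next(f)                        # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == "0A":          # 0A == TCP_LISTEN
                ports.add(int(local_addr.split(":")[1], 16))  # port is hex
    return ports

print(514 in listening_ports())  # False here would mean haproxy never bound the port
```

If 514 is missing, haproxy most likely failed to start (for example, because binding a port below 1024 requires root).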
Here is a config that should work. Note that you have to be root (or use sudo) to bind to port 514.
defaults
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms

listen graylog
    bind *:514
    mode tcp
    balance roundrobin
    server graylog1 10.0.0.1:514
    server graylog2 10.0.0.1:514
    timeout connect 20s
    timeout server 30s
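Once it is running, you can push a test line through the balancer to verify delivery end to end. A minimal sketch in Python (the address is the frontend from the question; from a shell, util-linux `logger -T -n 192.168.0.20 -P 514 test` does roughly the same):

```python
import socket

def send_syslog_tcp(host: str, port: int, message: str) -> None:
    """Send one newline-framed syslog line over plain TCP."""
    line = f"<134>{message}\n"  # <134> = facility local0, severity info
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(line.encode())

# Frontend address from the question; only works if the balancer is reachable:
# send_syslog_tcp("192.168.0.20", 514, "haproxy balancer test")
```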
Related
I know that TCP is connection-oriented. But if I set up a forwarding server (a syslog server, for example) which forwards logs over TCP, is the connection always on, or is it established only when logs are forwarded to the server?
It depends on the server configuration.
If you are working on Linux, you can use the command
cat /proc/sys/net/ipv4/tcp_keepalive_time
to check your current keepalive value in seconds.
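The same knob also exists per socket. A sketch in Python, enabling keepalive on one connection and, on Linux, overriding the idle threshold with TCP_KEEPIDLE:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Enable TCP keepalive on this socket, independent of the system default
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Linux only: start probing after 120 s of idle instead of the system-wide value
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # → 1 (enabled)
s.close()
```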
Say I have a server XYZ that listens on port 50000 for TCP clients and on port 80 for HTTP clients. On the other side, I have a client that uses a WebSocket to establish a socket connection to port 50000 and that uses HTTP port 80 for the handshake (of course).
Now, when the client begins, it will first send a request to server XYZ via HTTP port 80, and the server will receive the request on port 80 for the handshake and send a welcome response. So, in that case, both parties are using port 80 (of course).
Now, when the handshake is done, the standard documentation says that the same TCP connection that is used for HTTP request/response for handshake purposes is then converted to the TCP socket connection. Ok right.
But if this whole handshake process and the TCP connection for the HTTP request/response use port 80 the first time, and that same TCP connection is converted to the TCP socket connection, and this whole process is done via port 80, then how does the same TCP connection get converted to port 50000 for the TCP socket on both sides? Does the client internally initialize another TCP connection to switch to port 50000?
So, can anyone tell me how this port conversion from port 80 to a different port is performed in WebSocket on both sides? How does a complete single socket connection get established on different ports? How does the same TCP connection change its ports?
A TCP socket connection cannot change ports at all. Once a connection has been established, its ports are locked in and cannot be changed. If you have a TCP socket connection on port 80, the only way to have a connection on port 50000 is to make a completely separate TCP socket connection.
A WebSocket cannot connect to port 80 and then switch to port 50000. However, an HTML page that is served to a browser from port 80 can contain client-side scripting that allows the browser to make a WebSocket object and connect it to port 50000. The two TCP connections (HTTP and WebSocket) are completely separate from each other (in fact, the HTTP socket connection does not even need to stay open once the HTML is served, since HTTP is a stateless protocol).
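To make the "two separate connections" point concrete, here is a sketch in Python, with two local listeners standing in for the hypothetical ports 80 and 50000: each connection is its own socket with its own fixed port pair, and closing one does not affect the other.

```python
import socket

def listener():
    """A local TCP listener; stands in for a server port in this sketch."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # any free port
    srv.listen(1)
    return srv

http_srv = listener()  # stands in for port 80 (HTTP)
ws_srv = listener()    # stands in for port 50000 (WebSocket)

http_conn = socket.create_connection(http_srv.getsockname())
ws_conn = socket.create_connection(ws_srv.getsockname())

# Two independent connections, each with its own fixed (local, remote) port pair
print(http_conn.getsockname(), "->", http_conn.getpeername())
print(ws_conn.getsockname(), "->", ws_conn.getpeername())

# Closing the "HTTP" connection leaves the "WebSocket" connection untouched
http_conn.close()
ws_conn.sendall(b"still alive")
```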
I have configured HAProxy (1.5.4, but I also tried 1.5.14) to balance, in TCP mode, two servers exposing the AMQP protocol (WSO2 Message Broker) on port 5672.
The clients create and use permanent connection to the AMQP Servers, via HAProxy.
I've changed the client and server TCP keepalive timeout, setting net.ipv4.tcp_keepalive_time=120 (CentOS 7).
In HAProxy I've set timeout client/server to 200 seconds (longer than the 120-second keepalive interval) and used the option clitcpka.
Then I started Wireshark and sniffed all the TCP traffic: after the last request from the clients, the TCP keepalive packets are sent regularly every 120 seconds, but 200 seconds after the last client request the connections are closed (the keepalive packets are ignored).
Here is the configuration:
haproxy.conf
global
    log 127.0.0.1 local3
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    debug

listen messagebroker_balancer 172.19.19.91:5672
    mode tcp
    log global
    retries 3
    timeout connect 5000ms
    option redispatch
    timeout client 200000ms
    timeout server 200000ms
    option tcplog
    option clitcpka
    balance leastconn
    server s1 172.19.19.79:5672 check inter 5s rise 2 fall 3
    server s2 172.19.19.80:5672 check inter 5s rise 2 fall 3
TCP keepalive operates at the transport layer and is only used to generate some traffic on the connection, so that intermediate systems like packet filters don't lose any state, and so that the end systems can notice if the connection to the other side broke (perhaps because something crashed or a network cable was unplugged).
TCP keepalive has nothing to do with the application-level idle timeout, which you have set explicitly to 200s:
timeout client 200000ms
timeout server 200000ms
These timeouts are triggered when the connection is idle, that is, when no data is transferred. TCP keepalive does not transport any data; the payload of these packets is empty.
The timeout client setting detects a dead client application on a responsive client OS. You can always have an application that holds a connection open but never speaks to you. This is bad because the number of connections isn't infinite (maxconn).
Similarly, set timeout server for the backend.
These options cover haproxy talking to the application. There is also a completely separate check where OS talks to OS (without touching the app or haproxy): with option clitcpka, option srvtcpka, or option tcpka you allow inactive connections to be detected and killed by the OS, even when haproxy doesn't actively check them. This primarily relies on OS settings (Linux).
If no data has been sent for 110 seconds, send the first keep-alive (KA) probe; don't kill the connection yet:
sysctl net.ipv4.tcp_keepalive_time=110
Wait for 30 seconds after each KA, once they're enabled on this connection:
sysctl net.ipv4.tcp_keepalive_intvl=30
Allow 3 KA probes to go unanswered, then kill the TCP connection:
sysctl net.ipv4.tcp_keepalive_probes=3
In this situation the OS kills the connection 200 seconds after packets stop coming.
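Assuming the three values above, the worst-case detection time is the idle threshold plus probes times interval, which is why 200 s lines up with haproxy's 200000 ms timeouts:

```python
tcp_keepalive_time = 110    # seconds idle before the first probe
tcp_keepalive_intvl = 30    # seconds between probes
tcp_keepalive_probes = 3    # unanswered probes before the connection is killed

detection_time = tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl
print(detection_time)  # → 200 seconds
```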
I have a load balancer group with a few target servers, and they are SSL-enabled.
Now I want to do TCP monitoring on the target servers' port (443).
Does TCP monitor work with the backends which are on https ?
As I understand it, a TCP monitor does a socket connect on the given host and port. What this means is that if the port is open on the target server, the server is considered alive and kicking.
Since this is only a socket connect, the protocol (HTTP, HTTPS) does not matter, as long as the port is open and has a listener on it.
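A monitor of that kind can be sketched in Python: it reports a target as alive when the TCP handshake completes and never sends any payload, so it works identically against an HTTPS listener:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Socket-connect health check: True iff the TCP handshake to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True  # port open and accepting; nothing is sent or read
    except OSError:
        return False     # refused, timed out, or unreachable

# e.g. tcp_check("backend.example", 443) against a hypothetical HTTPS target
```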
I always check network connectivity using telnet [IP] [port]. However, sometimes the connection times out even though the tunnel to that port is open, because nothing is listening on the port. How do I tell the difference, i.e. does telnet time out because the tunnel is not open, or because nothing is listening on that port at the other end?
Probably by using tcptraceroute; from the man page:
This program attempts to trace the route an IP packet would follow to
some internet host by launching probe packets with a small ttl (time to
live) then listening for an ICMP "time exceeded" reply from a gateway.
We start our probes with a ttl of one and increase by one until we get
an ICMP "port unreachable" (or TCP reset)
Set the max_ttl value to be appropriate for your firewall.
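Alternatively (a sketch, not a tcptraceroute replacement), a plain TCP connect already separates the two cases: a reachable host with no listener answers immediately with an RST (connection refused), while a filtered port or a dead tunnel stays silent until the timeout:

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a port as open, closed (RST received), or filtered (no answer)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"          # something is listening
    except ConnectionRefusedError:
        return "closed"        # RST came back: host reachable, no listener
    except socket.timeout:
        return "filtered"      # no answer: a firewall or closed tunnel swallowed it
    finally:
        s.close()
```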