maxconn limit per backend in haproxy

Our HAProxy load balancer opens thousands of connections to its backends
even though its settings say to open no more than 10 connections per server instance (see below). When I uncomment "option http-server-close" the number of backend connections drops; however, I would like to keep using keep-alive backend connections.
Why is maxconn not respected with http-keep-alive? I verified with ss that the open backend connections are in the ESTABLISHED state.
defaults
log global
mode http
option http-keep-alive
timeout http-keep-alive 60000
timeout connect 6000
timeout client 60000
timeout server 20000
frontend http_proxy
bind *:80
default_backend backends
backend backends
option prefer-last-server
# option http-server-close
timeout http-keep-alive 1000
server s1 10.0.0.21:8080 maxconn 10
server s2 10.0.0.7:8080 maxconn 10
server s3 10.0.0.22:8080 maxconn 10
server s4 10.0.0.16:8080 maxconn 10
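For reference, the ss check mentioned above can be done along these lines; the backend address is taken from the config above, and one header line is included in the count:

# count established connections from the proxy to one backend server
ss -tn state established dst 10.0.0.21:8080 | wc -l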

In keep-alive mode, idle connections are not counted against maxconn. As explained in this HAProxy mailing list thread:
The thing is, you don't want
to leave requests waiting in a server's queue while the server has a ton
of idle connections.
This makes even more sense when you consider that browsers preconnect to servers to improve page performance. So in keep-alive mode, only outstanding/active connections are taken into account.
You can still enforce maxconn limits regardless of the connection state by using tcp mode, especially since I don't see a particular reason for using mode http in your current configuration (apart from having richer logs).
Or you can keep http mode and use http-reuse to achieve a lower number of concurrent connections.
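If you stay in HTTP mode, a minimal sketch of the http-reuse approach (the directive exists since HAProxy 1.6; the safe policy shown below is only one of several, so check the behaviour of your release):

backend backends
    mode http
    # let requests from different client connections share idle server connections,
    # which keeps the number of concurrent server-side connections low
    http-reuse safe
    option prefer-last-server
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10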

Related

HAProxy backend 503 connection refused over VPN

I have a simple setup for HAProxy to a back-end server available over an IPSec VPN. When I connect directly to the back-end server using curl, the request goes through successfully, but when I use HAProxy to the same back-end over the VPN, the request is dropped with a 503 error. From the logs, it seems the connection is being aborted prematurely, but I cannot decipher why. Also, both requests work when I use remote servers available over the Internet as back-ends, where a VPN is not involved. Am I missing a specific config or something for HAProxy over a VPN?
Note: I have deliberately set no health check for the back-end.
HAProxy config:
defaults
mode http
# option httplog
log global #use log set in the global config
log-format "[Lo:%ci/%cp; Re:%si/%sp] [Proxy - %bi:%bp/%fi:%fp] [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
option dontlognull
option http-keep-alive
option forwardfor except 127.0.0.0/8
option redispatch
retries 2
timeout http-request 10s #maximum allowed time to wait for a complete HTTP request
timeout queue 10s #maximum time to wait in the queue for a connection slot to be free
timeout connect 5s #maximum time to wait for a connection attempt to a server to succeed
timeout client 5s #maximum inactivity time on the client side
timeout server 5s #maximum inactivity time on the server side
timeout http-keep-alive 30s #maximum allowed time to wait for a new HTTP request to appear
timeout check 10s
maxconn 5000
##-----------------------------------------------------
## API Requests
##-----------------------------------------------------
## frontend to proxy HTTP callbacks coming from App servers to VPN Server
frontend api_requests
mode http
bind 10.132.2.2:80
bind 127.0.0.1:80
default_backend testbed
## backend to proxy HTTP requests from App Servers to VPN Server
backend testbed
balance roundrobin
server broker 196.XXX.YYY.136:80
Entry captured in the traffic log for a failed attempt over the VPN:
May 30 09:15:10 localhost haproxy[22844]: [Lo:127.0.0.1/56046; Re:196.XXX.YYY.136/80] [Proxy - :0/127.0.0.1:80] [30/May/2019:09:15:10.285] api_requests testbed/broker 0/0/-1/-1/0 503 212 - - SC-- 1/1/0/0/2 0/0 "POST /request HTTP/1.1"
What could cause a curl request to be accepted but a proxied request from HAProxy to be dropped, specifically over the VPN connection? Has anyone faced a similar issue before?
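For what it's worth, the SC-- termination flags together with the -1 connect timer in that log line indicate that HAProxy's TCP connection attempt to the backend was refused or aborted before it was established, i.e. the failure happens below the HTTP layer. One hedged way to narrow it down is to check whether the source address of the outgoing connection matters to the IPSec policy, since a plain curl and HAProxy may leave the host from different local addresses; 10.132.2.2 below is taken from the bind line and is only an assumption about which address is covered by the tunnel:

# direct test, as described in the question
curl -v http://196.XXX.YYY.136/request

# same request, but forcing a specific local source address
curl -v --interface 10.132.2.2 http://196.XXX.YYY.136/request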

HAProxy health check for the underlying application

I am using an NGINX reverse proxy on 3 servers to password-protect an app. nginx holds port 80 on each server and the app runs on localhost on port x; nginx forwards requests from clients hitting vip:vipport to port x.
I have a VIP on server 1, held by keepalived, and HAProxy does the load balancing and health checks against port 80 on all 3 servers. Looking at the stats interface, taking down the application doesn't turn the server row red because the nginx port is still up. Is there any way for HAProxy to accurately reflect the application port going down?
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local0 notice # only send important events
chroot /var/lib/haproxy.app
pidfile /var/run/haproxy.app.pid
user haproxy
group users
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy.elasticsearch/stats level admin
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
log global
option tcplog
option dontlognull
timeout connect 4s
timeout server 30s
timeout client 30s
timeout http-request 10s
timeout queue 30s
timeout http-keep-alive 5s
timeout check 5s
maxconn 10000
frontend front
mode tcp
bind ipin:vipport
default_backend back
backend back
balance leastconn
source ipin
#balance roundrobin
#option ssl-hello-chk
stick-table type ip size 200k expire 60m
stick on src
server server3 ip:80 check
server server2 ip:80 check
server server1 ip:80 check
listen stats
bind *:99
mode http
stats enable
stats uri /
I suggest you add a separate health-check route to your application server and periodically hit that route instead of just checking whether the port is up. This lets you monitor the actual application instead of NGINX.
In HAProxy, this can be achieved by injecting a simple Lua script that hits the endpoint, checks the status and, based on that, marks the server as up or down.
You will be able to find some sample Lua scripts for HAProxy here.
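A simpler built-in alternative (a sketch, not part of the original answer) is HAProxy's HTTP check, which works even on a tcp-mode backend; /health below is a hypothetical route that nginx would have to proxy through to the application, so that a dead app turns the check into an error response and the server is marked down:

backend back
    mode tcp
    balance leastconn
    # send an HTTP request as the health check instead of a bare TCP connect
    option httpchk GET /health
    http-check expect status 200
    server server3 ip:80 check
    server server2 ip:80 check
    server server1 ip:80 check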

Freezing haproxy traffic with maxconn 0 and keepalive connections

Since HAProxy v1.5.0 it has been possible to temporarily stop reverse-proxying traffic to a frontend using the
set maxconn frontend <frontend_name> 0
command.
I've noticed that if HAProxy is configured to maintain keep-alive connections between HAProxy and a client, those connections will continue to be served, whereas new ones will keep waiting for the frontend to be "un-paused".
The question is: is it possible to terminate the current keep-alive connections gracefully, so that clients are required to establish new connections?
I've only found the shutdown session and shutdown sessions commands, but they are obviously not graceful at all.
The purpose of all of this is to make some changes on the server seamlessly; with the current configuration it would otherwise require a scheduled maintenance window.
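For reference, these runtime commands are issued over the stats socket; a minimal sketch, assuming a stats socket is configured at /var/run/haproxy.sock and that 1000 was the frontend's previous maxconn (both are assumptions):

# pause: new connections queue up, existing keep-alive connections keep serving requests
echo "set maxconn frontend <frontend_name> 0" | socat stdio /var/run/haproxy.sock

# resume with the previous limit
echo "set maxconn frontend <frontend_name> 1000" | socat stdio /var/run/haproxy.sock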

How to make HAProxy rotate faster?

I have set up rotating proxies with HAProxy successfully.
The following is part of haproxy.cfg:
frontend RotatingProxies1000
bind 0.0.0.0:1000
default_backend Proxies1000
option http_proxy
option httpclose
option http-use-proxy-header
backend Proxies1000
server fp0 1.1.1.1:8800
server fp1 2.2.2.2:8800
server fp2 3.3.3.3:8800
server fp3 4.4.4.4:8800
...
balance roundrobin
But I notice the rotation speed is very slow.
I tested in Firefox by looking up the client IP address on http://whatismyipaddress.com/.
First it was 1.1.1.1. I refreshed the page, still 1.1.1.1; refreshed again, still 1.1.1.1.
One minute later I refreshed again and it became 2.2.2.2.
How can I make HAProxy rotate faster?
Following Baptiste's and Willy's suggestions, I tried adding "mode http" and removing "option http_proxy".
Here is the current config, but it is still slow to rotate IPs:
global
log 127.0.0.1 local0 notice
maxconn 4096
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
retries 3
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend RotatingProxies1000
bind 0.0.0.0:1000
default_backend Proxies1000
#option http_proxy
mode http
option httpclose
option http-use-proxy-header
backend Proxies1000
server fp0 1.1.1.1:8800
server fp1 2.2.2.2:8800
server fp2 3.3.3.3:8800
server fp3 4.4.4.4:8800
...
balance roundrobin
Your original configuration was missing timeouts and HTTP mode.
My assumption is that your browser does not close its connection to HAProxy because of your configuration, so HAProxy can't balance you to another server until a new connection is established.
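A hedged sketch of one way to act on that: keep client-side keep-alive but close the server side after every response, so each request goes back through the round-robin algorithm instead of sticking to the server the connection was first sent to (option http-server-close exists since HAProxy 1.4; adjust timeouts and options for your version):

defaults
    mode http
    option http-server-close   # balance every request, then close the server-side connection
    timeout connect 5s
    timeout client 50s
    timeout server 50s

frontend RotatingProxies1000
    bind 0.0.0.0:1000
    option http-use-proxy-header
    default_backend Proxies1000

backend Proxies1000
    balance roundrobin
    server fp0 1.1.1.1:8800
    server fp1 2.2.2.2:8800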

nginx upstream server "out of ports"

I'm using nginx as a reverse proxy, and I find more than 30k ports in the TIME_WAIT state on the upstream server (Windows 2003). I know my servers are "out of ports", as discussed here (http://nginx.org/pipermail/nginx/2009-April/011255.html), and I have set both nginx and the upstream server to reuse TIME_WAIT ports and recycle them more quickly.
[sysctl -p]
……
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
But nginx hangs, and the "connection timed out while connecting to upstream" error can still be found in the nginx error log when the RPS to the upstream stays above 1000 for about a minute. When the upstream is Windows, the server runs "out of ports" within seconds.
Any ideas? A connection pool with a waiting queue? Maxim Dounin wrote a useful module that keeps connections to memcached open, but why can't it support a web server?
I am new to nginx, but from what I know so far, you need to reduce your net.ipv4.tcp_fin_timeout value, which defaults to 60 seconds. Out of the box, nginx doesn't support HTTP connection pooling with the backend. Because of this, every request to the backend creates a new connection. With roughly 64K ports and 60 seconds of wait before a port can be reused, the average RPS cannot be much more than 1K per second (about 64000 / 60 ≈ 1066 connections per second). You can either reduce your net.ipv4.tcp_fin_timeout value on both the nginx server and the backend server, or you can assign multiple IP addresses to the backend box and configure nginx to treat these "same servers" as different servers.
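For what it's worth, later nginx releases (1.1.4 and newer) did add upstream keep-alive, which is essentially the connection pool the question asks about. A minimal sketch, with the upstream name and address as placeholders:

upstream backend_pool {
    server 10.0.0.5:80;
    keepalive 32;                        # idle connections kept open per worker process
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
        proxy_http_version 1.1;          # upstream keep-alive requires HTTP/1.1
        proxy_set_header Connection "";  # drop the default "Connection: close" header
    }
}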
