How can I make HAProxy rotate proxies faster?

I have successfully set up rotating proxies with HAProxy.
The following is part of my haproxy.cfg:
frontend RotatingProxies1000
    bind 0.0.0.0:1000
    default_backend Proxies1000
    option http_proxy
    option httpclose
    option http-use-proxy-header

backend Proxies1000
    server fp0 1.1.1.1:8800
    server fp1 2.2.2.2:8800
    server fp2 3.3.3.3:8800
    server fp3 4.4.4.4:8800
    ...
    balance roundrobin
But I notice the rotation speed is very slow.
I tested in Firefox by looking up my client IP address on http://whatismyipaddress.com/.
At first it was 1.1.1.1. I refreshed the page: still 1.1.1.1. I refreshed again: still 1.1.1.1.
One minute later I refreshed again and it became 2.2.2.2.
How can I make HAProxy rotate faster?
Following Baptiste's and Willy's suggestions, I tried adding "mode http" and removing "option http_proxy".
Here is the current config, but it is still slow to rotate IPs:
global
    log 127.0.0.1 local0 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend RotatingProxies1000
    bind 0.0.0.0:1000
    default_backend Proxies1000
    #option http_proxy
    mode http
    option httpclose
    option http-use-proxy-header

backend Proxies1000
    server fp0 1.1.1.1:8800
    server fp1 2.2.2.2:8800
    server fp2 3.3.3.3:8800
    server fp3 4.4.4.4:8800
    ...
    balance roundrobin

Your configuration is missing timeouts and HTTP mode.
My assumption is that your browser does not close its connection to HAProxy because of your configuration, so HAProxy cannot balance you to another server until a new connection is established.
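A minimal sketch of what that could look like, assuming you want HAProxy (rather than the browser) to force a fresh server connection per request so roundrobin applies each time; the timeout values here are illustrative, not taken from the question:

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend RotatingProxies1000
    bind 0.0.0.0:1000
    # Close the server-side connection after each response so the
    # next request is balanced to the next proxy in the rotation.
    option http-server-close
    default_backend Proxies1000

backend Proxies1000
    balance roundrobin
    server fp0 1.1.1.1:8800
    server fp1 2.2.2.2:8800
```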

Related

Enabling proxy protocol in Nginx for just one vhost without breaking the others?

I just set up HAProxy on a server by itself to act as a reverse proxy. This will send the traffic to a main server that's running Nginx.
I got it almost working, aside from being able to pass the original IP through from the proxy server to the web server.
I'm using mode tcp in HAProxy since a lot of the traffic coming in will already be on port 443 using SSL. I read that I can't use option forwardfor in tcp mode, and that I can't send SSL traffic using mode http. So I added send-proxy to the server lines and then tried to enable the proxy protocol on Nginx.
That actually worked for the domain I'm running, but we have about 10 other virtualhost domains being hosted on that same machine, and as soon as I enabled proxy protocol on one vhost separately, it broke ALL of our other domains pointing to that server, as they all started timing out.
Is there a way around this? Can I enable proxy protocol for just one virtualhost on Nginx without breaking the rest of them? Or is there a way to just use http mode with the forwardfor option, even if it's sending SSL traffic?
Below is my haproxy config, with the IPs redacted:
global
    maxconn 10000
    user haproxy
    group haproxy

defaults
    retries 3
    timeout client 30s
    timeout server 30s
    timeout connect 30s
    mode tcp

frontend incoming
    bind *:80
    bind *:443
    option tcplog
    default_backend client_proxy

backend client_proxy
    use-server ps_proxy_http if { dst_port 80 }
    use-server ps_proxy_https if { dst_port 443 }
    server ps_proxy_http XXX.XXX.XXX.XXX:80 send-proxy
    server ps_proxy_https XXX.XXX.XXX.XXX:443 send-proxy
This is my first time using HAProxy as a reverse proxy, so any insight would be much appreciated.
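The underlying constraint is that proxy_protocol is a property of the listen socket, not of an individual server block: every vhost sharing the same ip:port shares that socket, so enabling it for one vhost on the shared port affects them all. One common workaround (a sketch, untested against this setup; the port 8443 and server_name are hypothetical) is to accept PROXY-protocol traffic on a dedicated port and leave the shared 443 socket untouched:

```nginx
# Only the proxied vhost listens on the dedicated PROXY-protocol port;
# the other vhosts keep their plain listen 443 sockets as before.
server {
    listen 8443 ssl proxy_protocol;
    server_name proxied.example.com;

    # Trust the PROXY header only from the HAProxy box, and use the
    # address it carries as the real client address.
    set_real_ip_from XXX.XXX.XXX.XXX;
    real_ip_header proxy_protocol;

    # ssl_certificate, locations, etc. as in the existing vhost.
}
```

The HAProxy server line would then point at port 8443 with send-proxy, while the other vhosts continue to receive plain traffic on 443.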

Using HAProxy to Front Amazon S3

I have Direct Connect over a fast pipe to an AWS VPC and I'd like to use a cluster of HAProxy instances in my VPC to reverse-proxy one or more S3 buckets. This is so my users on premises can enjoy the increased bandwidth.
I guess the main question is whether this is doable, with the follow-on, "Is there a better solution for this than HAProxy?" I don't want to use an explicit proxy like squid because my only use-case for this is S3.
Assuming HAProxy is fine, I did a quick dummy setup for one bucket as a POC. When I connect directly to the bucket without credentials (simply to test connectivity), I see the "Access Denied" XML response I expect. But when I connect to the reverse-proxy, it seems to redirect me to https://aws.amazon.com/s3/. How am I screwing this up?
Here's my config (replace MY_BUCKET with any bucket name):
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 MY_BUCKET.s3.amazonaws.com:80 maxconn 100
UPDATE:
Per Pedro's request, here is the configuration I found that makes this work. As you can see, it's extremely bare bones. (I'm using an EC2 instance with two CPUs.)
global
    daemon
    nbproc 2
    nbthread 1

defaults
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend S3

backend S3
    server server1 s3.amazonaws.com:80
Proxying from HAProxy to Amazon S3 is definitely possible! Below is a copy of our production config file. We are using HAProxy 2.2 but this should be backward compatible to older versions as well.
resolvers dns
    parse-resolv-conf
    hold valid 10s

frontend wwoof
    mode http
    bind *:80
    default_backend s3

backend s3
    mode http
    http-request set-header Host your-bucket.s3-website.eu-west-3.amazonaws.com
    http-request del-header Authorization
    http-response del-header x-amz-id-2
    http-response del-header x-amz-request-id
    server s3 your-bucket.s3-website.eu-west-3.amazonaws.com:80 resolvers dns check inter 5000

HAProxy - ACL based on Client CN in TCP mode

I am running HAProxy in TCP mode with TLS (client-certificate-based authentication). My configuration is pasted below. My goal is to route an SSH connection to the correct server based on the client certificate being presented. This example talks about SSH, but in the future there are various services that I may have to securely expose in this manner. Any help is appreciated.
Note that in HTTP mode you can extract the client CN with something like the following and match the resulting header against an ACL:
http-request set-header X-SSL-Client-CN %{+Q}[ssl_c_s_dn(cn)]
However, I am not sure how to do something similar when running in TCP mode.
frontend Frontend_server
    mode tcp
    option tcplog
    log global
    bind X.X.X.X:8000 ssl crt /etc/certs/server.pem ca-file /etc/certs/ca.crt verify required
    acl ACL_SRV1 ??????? -m str -f /etc/SRV1/cn.list
    acl ACL_SRV2 ??????? -m str -f /etc/SRV2/cn.list
    acl ACL_SRV3 ??????? -m str -f /etc/SRV3/cn.list
    log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %ST\ %B\ %tsc\ %ac/%fc/%bc/%sc\ %sq/%bq\ {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]}
    use_backend SRV1 if ACL_SRV1
    use_backend SRV2 if ACL_SRV2
    use_backend SRV3 if ACL_SRV3

backend SRV1
    mode tcp
    option tcplog
    option tcp-check
    server MY_SRV1 X.X.X.X:22 check inter 1000 port 22 maxconn 1000

backend SRV2
    mode tcp
    option tcplog
    option tcp-check
    server MY_SRV2 X.X.X.X:22 check inter 1000 port 22 maxconn 1000

backend SRV3
    mode tcp
    option tcplog
    option tcp-check
    server MY_SRV3 X.X.X.X:22 check inter 1000 port 22 maxconn 1000
In plain tcp mode, TLS is not terminated at HAProxy; termination is done on the server behind it. That server of course has to be known before any data can be sent or forwarded to it. This means the decision of which server to choose can only be made on the first data from the client in the TLS handshake (the ClientHello), not on later data that already requires a reply from the server.
But client certificates are only sent by the client if the server explicitly requests them. In other words, to get a client certificate, the server needs to communicate with the client, which means the connection to the server must already be established. The decision of which server to use therefore cannot be based on the client certificate, because the certificate is known too late in the TLS handshake.
The only way to make such a decision based on the client certificate is to terminate TLS at the load balancer itself.
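If TLS is terminated at HAProxy (as the `bind ... ssl ... verify required` line in the question already does), the client CN becomes available as a sample fetch even with a tcp-mode frontend, and the placeholder ACLs could plausibly be filled in like this (a sketch, untested):

```haproxy
frontend Frontend_server
    mode tcp
    bind X.X.X.X:8000 ssl crt /etc/certs/server.pem ca-file /etc/certs/ca.crt verify required
    # ssl_c_s_dn(cn) extracts the CN from the verified client certificate.
    acl ACL_SRV1 ssl_c_s_dn(cn) -m str -f /etc/SRV1/cn.list
    use_backend SRV1 if ACL_SRV1
```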

HAProxy health check underlying application

I am using an NGINX reverse proxy on 3 servers to password-protect an app. NGINX holds port 80 on each server, and the app runs on localhost on port x; NGINX forwards requests hitting vip:vipport on to port x.
I have a VIP on server 1, held by keepalived, and HAProxy does the load balancing and health checks against port 80 on all 3 servers. Looking at the stats interface, taking down the application doesn't turn the server's row red, since the NGINX port is still up. Is there any way for HAProxy to accurately reflect the application port going down?
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log 127.0.0.1 local0 notice # only send important events
    chroot /var/lib/haproxy.app
    pidfile /var/run/haproxy.app.pid
    user haproxy
    group users
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy.elasticsearch/stats level admin
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode tcp
    log global
    option tcplog
    option dontlognull
    timeout connect 4s
    timeout server 30s
    timeout client 30s
    timeout http-request 10s
    timeout queue 30s
    timeout http-keep-alive 5s
    timeout check 5s
    maxconn 10000

frontend front
    mode tcp
    bind ipin:vipport
    default_backend back

backend back
    balance leastconn
    source ipin
    #balance roundrobin
    #option ssl-hello-chk
    stick-table type ip size 200k expire 60m
    stick on src
    server server3 ip:80 check
    server server2 ip:80 check
    server server1 ip:80 check

listen stats
    bind *:99
    mode http
    stats enable
    stats uri /
I suggest you add a separate health check route to your application server and have HAProxy periodically hit that route instead of just checking whether the port is up. This lets you monitor the actual application instead of NGINX.
In HAProxy, this can be achieved by injecting a simple Lua script that hits the endpoint, checks the status, and marks the server up or down accordingly.
You will be able to find some sample Lua scripts for HAProxy here.
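Before reaching for Lua, a plain HTTP check may already be enough: since NGINX forwards to the app, a layer-7 check through NGINX fails as soon as the app stops answering (NGINX returns a 502). This sketch assumes the app exposes a route such as /health that is reachable without the password protection; the route name is hypothetical:

```haproxy
backend back
    balance leastconn
    # Layer-7 check: passes only when NGINX can reach the app behind it,
    # so killing the app marks the server down even though port 80 is open.
    option httpchk GET /health
    http-check expect status 200
    server server3 ip:80 check
    server server2 ip:80 check
    server server1 ip:80 check
```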

maxconn limit per backend in haproxy

Our HAProxy load balancer opens thousands of connections to its backends, even though its settings say to open no more than 10 connections per server instance (see below). When I uncomment "option http-server-close" the number of backend connections drops; however, I would like to have keep-alive backend connections.
Why is maxconn not respected with http-keep-alive? I verified with ss that the open backend connections are in the ESTABLISHED state.
defaults
    log global
    mode http
    option http-keep-alive
    timeout http-keep-alive 60000
    timeout connect 6000
    timeout client 60000
    timeout server 20000

frontend http_proxy
    bind *:80
    default_backend backends

backend backends
    option prefer-last-server
    # option http-server-close
    timeout http-keep-alive 1000
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10
In keep-alive mode, idle connections are not accounted for. As explained in this HAProxy mailing list thread:

    The thing is, you don't want to leave requests waiting in a server's
    queue while the server has a ton of idle connections.

This makes even more sense knowing that browsers preconnect to improve page performance. So in keep-alive mode, only outstanding/active connections are taken into account.
You can still enforce maxconn limits regardless of connection state by using tcp mode, especially as I don't see a particular reason for using mode http in your current configuration (apart from having richer logs).
Or you can use http-reuse with http mode to achieve a lower number of concurrent connections.
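A sketch of that last suggestion against the question's backend (illustrative; "safe" is the conservative reuse mode):

```haproxy
backend backends
    # Let independent client requests share idle server-side connections,
    # shrinking the pool of established backend connections.
    http-reuse safe
    option prefer-last-server
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10
```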
