HAProxy backend 503 connection refused over VPN

I have a simple setup for HAProxy to a back-end server available over an IPSec VPN. When I connect directly to the back-end server using curl, the request goes through successfully, but when I go through HAProxy to the same back-end, over the VPN, the request is dropped with a 503 error. From the logs, it seems the connection is being aborted prematurely, but I cannot decipher why. Also, both requests work when I use remote servers available over the Internet as back-ends, where a VPN is not involved. Am I missing a specific config or something for HAProxy over a VPN?
Note: I have deliberately set no health check on the back-end.
HAProxy config:
defaults
    mode http
    # option httplog
    log global # use the log settings from the global config
    log-format "[Lo:%ci/%cp; Re:%si/%sp] [Proxy - %bi:%bp/%fi:%fp] [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
    option dontlognull
    option http-keep-alive
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 2
    timeout http-request 10s    # maximum allowed time to wait for a complete HTTP request
    timeout queue 10s           # maximum time to wait in the queue for a connection slot to be free
    timeout connect 5s          # maximum time to wait for a connection attempt to a server to succeed
    timeout client 5s           # maximum inactivity time on the client side
    timeout server 5s           # maximum inactivity time on the server side
    timeout http-keep-alive 30s # maximum allowed time to wait for a new HTTP request to appear
    timeout check 10s
    maxconn 5000
##-----------------------------------------------------
## API Requests
##-----------------------------------------------------
## frontend to proxy HTTP callbacks coming from App servers to VPN Server
frontend api_requests
    mode http
    bind 10.132.2.2:80
    bind 127.0.0.1:80
    default_backend testbed

## backend to proxy HTTP requests from App Servers to VPN Server
backend testbed
    balance roundrobin
    server broker 196.XXX.YYY.136:80
Entry captured in the traffic log for the failed attempt over the VPN:
May 30 09:15:10 localhost haproxy[22844]: [Lo:127.0.0.1/56046; Re:196.XXX.YYY.136/80] [Proxy - :0/127.0.0.1:80] [30/May/2019:09:15:10.285] api_requests testbed/broker 0/0/-1/-1/0 503 212 - - SC-- 1/1/0/0/2 0/0 "POST /request HTTP/1.1"
What could cause a curl request to be accepted but a proxied request from HAProxy to be dropped, specifically over a VPN connection? Has anyone faced a similar issue before?
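One thing worth checking, offered only as a hedged sketch: if the IPSec policy only matches traffic sourced from a specific local address, HAProxy's outgoing back-end connections may leave from a different address than curl's and never enter the tunnel. The source keyword forces the back-end connections onto a given address (using 10.132.2.2 here is an assumption based on the bind line above):

backend testbed
    balance roundrobin
    # Force outgoing connections to use the VPN-side address so they match
    # the IPSec policy (10.132.2.2 is an assumption, taken from the frontend bind)
    source 10.132.2.2
    server broker 196.XXX.YYY.136:80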

Related

Using HAProxy to Front Amazon S3

I have Direct Connect over a fast pipe to an AWS VPC and I'd like to use a cluster of HAProxy instances in my VPC to reverse-proxy one or more S3 buckets. This is so my users on premises can enjoy the increased bandwidth.
I guess the main question is whether this is doable, with the follow-on, "Is there a better solution for this than HAProxy?" I don't want to use an explicit proxy like Squid because my only use case for this is S3.
Assuming HAProxy is fine, I did a quick dummy setup for one bucket as a POC. When I connect directly to the bucket without credentials (simply to test connectivity), I see the "Access Denied" XML response I expect. But when I connect to the reverse-proxy, it seems to redirect me to https://aws.amazon.com/s3/. How am I screwing this up?
Here's my config (replace MY_BUCKET with any bucket name):
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 MY_BUCKET.s3.amazonaws.com:80 maxconn 100
UPDATE:
Per Pedro's request, here is the configuration I found that makes this work. As you can see, it's extremely bare bones. (I'm using an EC2 instance with two CPUs.)
global
    daemon
    nbproc 2
    nbthread 1

defaults
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend S3

backend S3
    server server1 s3.amazonaws.com:80
Proxying from HAProxy to Amazon S3 is definitely possible! Below is a copy of our production config file. We are using HAProxy 2.2, but this should be backward compatible with older versions as well.
resolvers dns
    parse-resolv-conf
    hold valid 10s

frontend wwoof
    mode http
    bind *:80
    default_backend s3

backend s3
    mode http
    http-request set-header Host your-bucket.s3-website.eu-west-3.amazonaws.com
    http-request del-header Authorization
    http-response del-header x-amz-id-2
    http-response del-header x-amz-request-id
    server s3 your-bucket.s3-website.eu-west-3.amazonaws.com:80 resolvers dns check inter 5000
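For comparison, the same Host-header idea applied to the original MY_BUCKET proof of concept would look roughly like this sketch (MY_BUCKET stays a placeholder for any bucket name):

backend servers
    mode http
    # S3 selects the bucket from the Host header; without this, the proxied
    # request keeps the client's original Host and S3 answers with a redirect
    http-request set-header Host MY_BUCKET.s3.amazonaws.com
    server server1 MY_BUCKET.s3.amazonaws.com:80 maxconn 100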

HAProxy health check for the underlying application

I am using NGINX as a reverse proxy on 3 servers to password-protect an app. NGINX holds port 80 on each server and the app runs on localhost on port X; NGINX forwards requests from clients hitting vip:vipport on to port X.
I have a VIP on server 1, held by keepalived, and HAProxy does the load balancing and health checks against port 80 on all 3 servers. Looking at the stats interface, taking down the application does not turn the server row red, because the NGINX port is still up. Is there any way for HAProxy to accurately reflect the application port going down?
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log 127.0.0.1 local0 notice # only send important events
    chroot /var/lib/haproxy.app
    pidfile /var/run/haproxy.app.pid
    user haproxy
    group users
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy.elasticsearch/stats level admin

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode tcp
    log global
    option tcplog
    option dontlognull
    timeout connect 4s
    timeout server 30s
    timeout client 30s
    timeout http-request 10s
    timeout queue 30s
    timeout http-keep-alive 5s
    timeout check 5s
    maxconn 10000

frontend front
    mode tcp
    bind ipin:vipport
    default_backend back

backend back
    balance leastconn
    source ipin
    #balance roundrobin
    #option ssl-hello-chk
    stick-table type ip size 200k expire 60m
    stick on src
    server server3 ip:80 check
    server server2 ip:80 check
    server server1 ip:80 check

listen stats
    bind *:99
    mode http
    stats enable
    stats uri /
I suggest adding a separate health-check route in your application server and periodically hitting that route instead of just checking whether the port is up. This lets you monitor the actual application rather than NGINX.
In HAProxy, this can be achieved by injecting a simple Lua script that hits the endpoint, checks the status, and marks the server up or down accordingly.
You will be able to find some sample Lua scripts for HAProxy here.
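A Lua check is one way to do it; the same idea can also be sketched with HAProxy's built-in layer-7 checks, assuming the application exposes a health route that NGINX forwards on port 80 (the /health path below is hypothetical):

backend back
    balance leastconn
    source ipin
    stick-table type ip size 200k expire 60m
    stick on src
    # Layer-7 check: hit the app's health route through NGINX instead of only
    # checking that port 80 accepts a TCP connection
    option httpchk GET /health
    http-check expect status 200
    server server3 ip:80 check
    server server2 ip:80 check
    server server1 ip:80 check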

maxconn limit per backend in HAProxy

Our HAProxy load balancer opens thousands of connections to its backends,
even though its settings say to open no more than 10 connections per server instance (see below). When I uncomment "option http-server-close" the number of backend connections drops; however, I would like to keep keep-alive backend connections.
Why is maxconn not respected with http-keep-alive? I verified with ss that the opened backend connections are in the ESTABLISHED state.
defaults
    log global
    mode http
    option http-keep-alive
    timeout http-keep-alive 60000
    timeout connect 6000
    timeout client 60000
    timeout server 20000

frontend http_proxy
    bind *:80
    default_backend backends

backend backends
    option prefer-last-server
    # option http-server-close
    timeout http-keep-alive 1000
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10
In keep-alive mode, idle connections are not accounted for. As explained in this HAProxy mailing-list thread:
The thing is, you don't want to leave requests waiting in a server's queue while the server has a ton of idle connections.
This makes even more sense knowing that browsers initiate preconnections to improve page performance. So in keep-alive mode, only outstanding/active connections are taken into account.
You can still enforce maxconn limits regardless of the connection state by using tcp mode, especially since I don't see a particular reason to use mode http in your current configuration (apart from richer logs).
Or you can use http-reuse with http mode to achieve a lower number of concurrent connections.
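A minimal sketch of the http-reuse variant, assuming idle server connections may safely be shared between clients:

backend backends
    option prefer-last-server
    # Let requests from different clients reuse idle server connections,
    # which keeps the total number of backend connections lower
    http-reuse safe
    timeout http-keep-alive 1000
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10
    server s3 10.0.0.22:8080 maxconn 10
    server s4 10.0.0.16:8080 maxconn 10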

HAProxy: how to make it switch to a different backend in case of 404

I need your help.
I have implemented an HAProxy configuration which correctly manages both HTTP and WebSocket backends, except in one specific scenario.
Here is a summary of how it works:
When I connect to :2703/webapp, HAProxy correctly routes the request to one of the two servers of the HTTP backend (webapp-lb1 or webapp-lb2).
When I connect to :2703/webapp/events, HAProxy correctly routes the request to one of the two servers of the WebSocket backend (websocket-lb1 or websocket-lb2).
The webapp is a servlet running in Apache Tomcat.
When I stop one of the two backend Tomcats, HAProxy correctly switches to the other one (for both HTTP and WebSocket).
By contrast, when I try to simulate an outage of one of the HTTP backend servers by stopping the webapp via the Tomcat manager, HAProxy reports an HTTP status 404 error but does not switch to the other server.
Since I explicitly configured the http-check expect status 302 directive, I would expect HAProxy to switch to the other server in case of a 404 status.
I had a look at the official HAProxy documentation and also tested the http-check disable-on-404 configuration, but this is not what I need, as the HAProxy behavior remains exactly the same as above.
For info, with http-check disable-on-404 activated, HAProxy detects that the server I stopped is unhealthy but does nothing (which, as far as I understand, is exactly what to expect from http-check disable-on-404 in case of a 404 status); below is the HAProxy log when this option is enabled:
Jul 23 14:19:23 localhost haproxy[4037]: Server webapp-lb/webapp-lb2 is stopping, reason: Layer7 check conditionally passed, code: 404, info: "Not Found", check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Below is an extract of my HAProxy configuration:
frontend haproxy-webapp
    bind *:2703
    monitor-uri /haproxy_check
    stats enable
    stats uri /haproxy_stats
    stats realm Strictly Private
    stats auth admin:zqxwcevr
    acl is_websocket url_beg /webapp/events
    use_backend websocket-lb if is_websocket
    default_backend webapp-lb
    log global

backend webapp-lb
    server webapp-lb1 192.168.136.129:8888 maxconn 400 check cookie webapp-lb1
    server webapp-lb2 192.168.136.130:8888 maxconn 400 check cookie webapp-lb2
    balance roundrobin
    cookie JSESSIONID prefix nocache
    log global
    #http-check disable-on-404
    option httpchk GET /webapp
    http-check expect status 302

backend websocket-lb
    server websocket-lb1 192.168.136.129:8888 maxconn 400 check
    server websocket-lb2 192.168.136.130:8888 maxconn 400 check
    balance roundrobin
    log global
Please give me a hint, as I have spent ages reading documentation and forums with no success.
Thanks!
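For reference, and independently of the health-check behaviour discussed above, HAProxy 2.0 and later can also retry an individual request on another server when a given status such as 404 comes back; a hedged sketch, assuming such a version is available:

backend webapp-lb
    balance roundrobin
    cookie JSESSIONID prefix nocache
    option httpchk GET /webapp
    http-check expect status 302
    # Layer-7 retries (HAProxy 2.0+): if a server answers 404, retry the
    # request and allow it to be redispatched to the other server
    retries 2
    option redispatch
    retry-on 404
    server webapp-lb1 192.168.136.129:8888 maxconn 400 check cookie webapp-lb1
    server webapp-lb2 192.168.136.130:8888 maxconn 400 check cookie webapp-lb2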

Freezing HAProxy traffic with maxconn 0 and keepalive connections

Since HAProxy v1.5.0 it has been possible to temporarily stop reverse-proxying traffic to a frontend using the
set maxconn frontend <frontend_name> 0
runtime command.
I've noticed that if HAProxy is configured to maintain keepalive connections between HAProxy and a client, those connections will continue to be served, whereas new ones will keep waiting for the frontend to be "un-paused".
The question is: is it possible to terminate the current keepalive connections gracefully, so that clients are required to establish new connections?
I've only found the shutdown session and shutdown sessions commands, but they are obviously not graceful at all.
The purpose of all of this is to make some changes on the server seamlessly; otherwise, with the current configuration, it would require a scheduled maintenance window.
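One possible approach, offered only as an illustration (the frontend and backend names below are hypothetical): make the frontend close client connections after each response before freezing it, so keepalive clients have to reconnect and then wait like new ones:

frontend fe_main
    bind *:80
    # Close each client connection after its response; existing keepalive
    # clients are forced to reconnect, and those new connections are held
    # once "set maxconn frontend fe_main 0" has been issued on the admin socket
    option httpclose
    default_backend be_main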
