HAProxy Keep-Alive not working as expected

Is my reasoning about HTTP Keep-Alive correct? Here are my assumptions:
In modern browsers, persistent connections are the default, so HAProxy is unlikely to send a Connection: Keep-Alive response header to those clients. Not seeing that header does not necessarily mean the connection is not persistent.
A keep-alive is meant to last a short time, for example a few hundred milliseconds, during which the client can reuse the same connection to the same server (for purposes such as downloading the images, CSS, and JavaScript referenced by the same page)
Longer persistence can be accomplished with cookies or stick tables
The reason I ask is that when I set a long keep-alive timeout (via timeout http-keep-alive), pressing F5 rapidly still rotates through all three backend servers just as fast, with no visible effect from the keep-alive. I would have thought that if I hit F5 within the five-second keep-alive timeout period, I'd still get the same backend server.
Am I thinking about this wrong?
Here is my HAProxy config, with the keep-alive timeout set to 5 seconds:
defaults
    timeout connect 5s
    timeout client 120s
    timeout server 120s
    timeout http-request 5s
    timeout http-keep-alive 5s
    option http-keep-alive
frontend http
    mode http
    bind *:80
    default_backend servers1
backend servers1
    mode http
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check
    server web3 web3:80 check
UPDATE
Using Wireshark, it looks like the client is reusing the same socket connection to HAProxy's frontend until the timeout expires, so keep-alive appears to be working between client and frontend. However, I added an image to the Web page to see which backend server would serve it, and whether it would be a different server from the one that served the HTML... and it was different. In roundrobin mode, two different backend servers served content to the same client, even though I had keep-alive enabled.
This all seems to me like keep-alive is not working between frontend and backend. This is the behavior I would expect with option http-server-close, but not with option http-keep-alive, which should keep the connection to the server open.

1. You should ensure your server does not close the connection.
2. You should enable "option prefer-last-server" in your HAProxy defaults section.
3. You should re-read the "option http-keep-alive" documentation of HAProxy: http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#option%20http-keep-alive
It clearly states:
If the client request has to go to another backend or another server due to content switching or the load balancing algorithm, the idle connection will immediately be closed and a new one re-opened. Option "prefer-last-server" is available to try to optimize server selection so that if the server currently attached to an idle connection is usable, it will be used.
(hence point 2)
So your observations look normal.
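For example, the defaults section from the question could be extended like this - a sketch only, assuming the backend servers themselves keep idle connections open:
defaults
    timeout connect 5s
    timeout client 120s
    timeout server 120s
    timeout http-request 5s
    timeout http-keep-alive 5s
    option http-keep-alive
    # bias server selection toward the server already attached to the
    # idle connection, whenever that server is usable
    option prefer-last-server
Note that prefer-last-server only biases server selection; for guaranteed stickiness you would still use cookies or stick tables, as the question notes.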

Related

Close HTTP request socket connection

I'm implementing an HTTP-over-TLS proxy server (SNI proxy) that makes two socket connections:
Client to ProxyServer
ProxyServer to TargetServer
and transfers data between the Client and the TargetServer (the TargetServer is detected using the server_name extension in the ClientHello).
The problem is that the client doesn't close the connection after the response has been received, so the proxy server keeps waiting for data to transfer and consumes resources after the request is done.
What is the best practice for implementing this project?
The client behavior is perfectly normal - HTTP keep-alive inside the TLS connection, or maybe even a WebSocket connection. Given that the proxy transparently forwards the encrypted traffic, it cannot inspect the HTTP exchange to determine exactly when the connection can be closed. A good approach is therefore to keep connections open as long as resources allow, and on resource shortage close the connections that have been idle (no traffic) the longest.
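As an illustration, here is a minimal sketch of that idle-reaping policy in Python. The IdleReaper class and the max_connections threshold are assumptions made up for this example; a real proxy would call touch() whenever it relays bytes in either direction and reap_if_needed() when it runs low on resources:
import time

class IdleReaper:
    """Track last activity per connection; close the longest-idle
    connections when a resource limit is exceeded (hypothetical policy)."""

    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.last_active = {}  # connection object -> monotonic timestamp

    def touch(self, conn):
        # Call whenever bytes are relayed in either direction.
        self.last_active[conn] = time.monotonic()

    def forget(self, conn):
        # Call when a connection is closed by either peer.
        self.last_active.pop(conn, None)

    def reap_if_needed(self):
        # On resource shortage, close the connections idle the longest.
        while len(self.last_active) > self.max_connections:
            oldest = min(self.last_active, key=self.last_active.get)
            del self.last_active[oldest]
            oldest.close()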

nginx reload ignores worker_shutdown_timeout

Clients connect to my nginx instance with a keep-alive of 15s. I set worker_shutdown_timeout to 30s, and the server keep-alive to 90s.
When I send a HUP signal or run nginx -s reload, it creates new workers and immediately shuts down the old ones. This causes my clients to get 499 EOFs.
Any idea what I am doing wrong?
Keepalive connections are closed immediately regardless of the worker_shutdown_timeout value, as clients are expected to re-open them as needed. The worker_shutdown_timeout applies to connections with actual requests being processed - these requests will be terminated when the shutdown timeout expires.
If your clients cannot handle keepalive connection being closed by the server, probably there is room for improvement in the client code.
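If the clients are under your control, one hypothetical mitigation is to retry idempotent requests once when a pooled keep-alive connection turns out to have been closed by the server. A sketch using Python's requests with urllib3's Retry (allowed_methods needs urllib3 1.26 or newer; the URL is a placeholder):
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry GET/HEAD once on connect or read failures, which covers a server
# closing an idle keep-alive connection just as we reuse it.
retries = Retry(total=1, connect=1, read=1,
                allowed_methods=frozenset({"GET", "HEAD"}))
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retries))

resp = session.get("http://example.com/")  # placeholder URL
print(resp.status_code)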

Reverse-proxying an NTLM-protected website

How do I proxy requests to NTLM-protected websites, like TeamFoundation and SharePoint? I keep getting 401 authentication errors.
According to this Microsoft TechNet article, you can't.
Microsoft NTLM uses stateful HTTP, which is a violation of the HTTP/1.1 RFC. It relies on the authentication handshake (an affair involving a couple of initial 401 errors) and all subsequent requests going over the exact same connection from client to server. This makes HTTP proxying nearly impossible, since each request would usually go through either a new connection or a random one picked from a pool of open connections. It can be done, though.
NGINX apparently supports this through the "ntlm" option, but it is part of their commercial offering. Apache HTTPD seems to have a couple of experimental patches for this, but they require rebuilding Apache. TinyProxy doesn't support this either. HAProxy to the rescue!
Here is an example of a running configuration which works - it's a fairly simple setup with a single backend server:
backend backend_tfs
    server static teamfoundation.mycompany.com:8080 check maxconn 3
    mode http
    balance roundrobin
    option http-keep-alive
    option prefer-last-server
    timeout server 30s
    timeout connect 4s
frontend frontend_tfs
    # You probably want something other than 127.0.0.1 here:
    bind 127.0.0.1:8080 name frontend_tfs
    mode http
    option http-keep-alive
    timeout client 30s
    default_backend backend_tfs
The important options here are http-keep-alive and prefer-last-server.
One more thing for my scenario:
If you are using SSL on both sides (the IIS servers and HAProxy), the SSL certificate must be the same on the IIS servers and the HAProxy server; otherwise NTLM doesn't work when going from HAProxy to IIS.
Maybe this can help someone who has the same problem.

Server HTTP keep-alive to the client IP or session?

When a server sends a keep-alive header to a client:
Does it mean that every request from this client IP will benefit?
Or does it mean that only requests from this client IP within the same session will benefit?
To put it in a concrete situation:
I browse a website and the server sends keep-alive to me. Then I open another browser and go to the same website. Will my second request connect without a handshake?
I read the documentation but could not find the answer. Please help me.
In HTTP 1.0, if both the client and server support keep-alive, the connection is persisted and multiple requests can use the same connection without handshaking each time, benefitting the session by slightly reducing request/response time.
In HTTP 1.1, connections are keep-alive by default, so this is the expected behaviour.
This happens within the session - another browser window would constitute another session, so there would be no connection sharing and therefore no benefit.
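To make the per-connection scope concrete, here is a minimal sketch using Python's http.client (example.com is a placeholder host). Both requests travel over the single TCP connection held by conn; a second HTTPConnection object - like a second browser - would perform its own handshake:
import http.client

conn = http.client.HTTPConnection("example.com")  # placeholder host
for path in ("/", "/about"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print(path, resp.status)
conn.close()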

Doesn't HTTP Keep-Alive solve the issue that long-polling solves?

What exactly is the difference between long polling and HTTP Keep-Alive?
Doesn't HTTP Keep-Alive solve the issue that long-polling solves?
No. They're almost completely unrelated.
HTTP keepalive allows the client to keep a connection open, but idle, to allow it to make future requests a little bit more efficiently. The server cannot send data to a client over a keepalive connection, as no request is active.
Long polling is a mechanism where the server keeps a request (and thus a connection) active, but not sending data, to allow the server to send data to the client when it becomes available -- for instance, when an event occurs.
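For contrast, a hypothetical long-polling client loop in Python (the /events endpoint and the handler are assumptions for the example). Each GET blocks on the server until an event is available or the request times out, and the client immediately re-issues it:
import requests

def handle(event):
    # Hypothetical application-specific event handler.
    print("event:", event)

while True:
    try:
        # The server holds this request open until it has data to send.
        resp = requests.get("http://example.com/events", timeout=60)
    except requests.exceptions.Timeout:
        continue  # no event arrived in time; poll again
    if resp.status_code == 200:
        handle(resp.json())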
