NGINX not proxying request to upstream if client sends TCP FIN immediately after sending data - nginx

I have a client (10.1.30.29) that sends HTTP requests to a server (port 6500, 10.1.30.11-127.0.0.1) behind an NGINX reverse proxy (port 80, 10.1.30.11-127.0.0.1). Most of the time (~5 out of 6), the server does not receive the request.
Digging into it with Wireshark, I found that the client sends a TCP FIN, ACK packet right after it sends the data and before NGINX has responded with a TCP ACK for that data:
Data transmission from NGINX to the server starts, but ends before any data is transferred.
In the other (correct) case the data is fully transmitted:
The key difference from the first case is that NGINX managed to ACK the data before the client sent its FIN, ACK.
In both cases the NGINX access log contains records of the requests; the error log is empty.
Unfortunately, I can barely influence the client's behavior, but I know that other HTTP server implementations can process the request data even if the client closes the TCP connection incorrectly. The question is: is there any way to force NGINX to ignore this incorrect client behavior and always proxy the request data?
P.S. I have already tried the postpone_output NGINX option - no luck.

I found two solutions that seem to behave similarly and both work (in my case):
1. proxy_ignore_client_abort on;
2. proxy_http_version 1.1;
   proxy_request_buffering off;
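For context, here is a minimal sketch of where these directives would sit in a proxy configuration, assuming the upstream listens on 127.0.0.1:6500 as described in the question (only one of the two options needs to be enabled):

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:6500;

        # Option 1: keep handling the request even if the client closes
        # its side of the connection early.
        proxy_ignore_client_abort on;

        # Option 2: stream the request body to the upstream as it arrives
        # instead of buffering the whole body first.
        #proxy_http_version 1.1;
        #proxy_request_buffering off;
    }
}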

Related

Does Nginx close the client TCP connection before sending back the HTTP response?

I found the following documentation on the Nginx website itself: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
Question:
The above point is not correct, right? Since HTTP is a synchronous protocol, after a client sends a request over an established TCP connection with the server (here the Nginx reverse proxy), the client expects a response on that same TCP connection. If that is the case, the Nginx server cannot close the connection just after receiving the request, correct? Shouldn't the Nginx server keep the connection open until it gets a response from the upstream server connection and relay that data back over the same client connection?
I believe the way that paragraph is phrased is inaccurate.
The NGINX blog post mentioned in the question is referencing the behavior of UDP in the context of Direct Server Return (DSR). It is not part of their official documentation. I suspect that the author didn't do a good job of communicating how a conventional layer 7 reverse proxy connection works because they were focusing on explaining how DSR works.

Does the HTTP CONNECT method make the proxy relay data at the TCP level?

This is a question about the HTTP CONNECT method.
I learned that after a CONNECT request from the client, a TCP connection is established between the proxy and the remote server.
Then, during the SSL handshake step, does the proxy relay the data from the client purely at the TCP level, without evaluating it? That is, is the data never passed up to the application level of the proxy?
I understand that after the SSL session is established, any data from the client is encrypted and the proxy cannot read it. But what about the time before the SSL session is established, that is, during the SSL handshake?
After the proxy has sent a successful response to the client's CONNECT request, a normal proxy will forward all data between client and server without any changes. This includes the TLS handshake for HTTPS connections tunneled using CONNECT.
Note that there are proxies which do SSL interception (typically at firewalls). In this case the data is not blindly forwarded; instead the proxy acts as an active man in the middle, which means that the client does not receive the original certificate from the server and that the proxy will decrypt and maybe even modify the traffic between client and server.
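To illustrate the blind forwarding described above, here is a minimal sketch (in Python, with request parsing and error handling omitted) of what the tunneling part of a CONNECT proxy does once the tunnel is accepted; it is an illustration, not a production proxy:

import socket
import threading

def pipe(src, dst):
    # Copy raw bytes in one direction until the source closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle_connect(client_sock, host, port):
    # After "CONNECT host:port HTTP/1.1" has been parsed from client_sock,
    # open a plain TCP connection to the requested target.
    server_sock = socket.create_connection((host, port))
    # Tell the client the tunnel is ready.
    client_sock.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    # From here on, just shuttle bytes in both directions. The TLS handshake
    # and everything after it pass through unmodified; the proxy never
    # inspects the contents.
    t = threading.Thread(target=pipe, args=(client_sock, server_sock))
    t.start()
    pipe(server_sock, client_sock)
    t.join()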

HTTP REDIRECT (3xx) TO A DIFFERENT HOST

I'm building an HTTP client (for embedded devices) and I was wondering:
If I receive an HTTP 3xx response and the Location header contains a hostname different from the one in my original request, should I disconnect the TCP connection and reconnect to the new host, or can I just send a new request with a new Host header and keep the old TCP connection alive?
Thank you
It doesn't make sense to reuse the original TCP connection if you're being redirected elsewhere. If my webserver only hosts example.com and I redirect you to elsewhere.net, my webserver will probably not respond to a request for elsewhere.net.
Worse, this also potentially sets you up for a great man-in-the-middle attack if my server redirects you to http://bank.com and you reuse the same TCP connection when sending a request to bank.com. My server can maliciously respond to requests with Host: bank.com, which isn't something you want to happen.
You can't assume the original connection can be reused unless the redirect is to the same host with the same protocol.
Persistent HTTP connections are a little tricky given the number of client/server combinations. You can avoid that complexity, at the cost of some latency, by simply closing and re-establishing each connection:
If you're implementing an HTTP/1.0 client, Connection: keep-alive isn't something you have to implement. Compliant servers should just close the connection after every request if you don't negotiate that you support persistent connections.
If you're implementing an HTTP/1.1 client and don't want to persist the connection, just send Connection: close with your request, and an HTTP/1.1 server should close the connection.
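As a rough illustration of the advice above, here is a sketch of the reuse check a client could perform when following a redirect (the function names are made up for this example, and a relative Location header would need extra handling):

from urllib.parse import urlsplit

def _default_port(scheme):
    return 443 if scheme == "https" else 80

def can_reuse_connection(original_url, location):
    # Only reuse the existing connection if the redirect target has the same
    # scheme, host and port as the original request. Assumes the Location
    # header contains an absolute URL.
    a, b = urlsplit(original_url), urlsplit(location)
    return (a.scheme, a.hostname, a.port or _default_port(a.scheme)) == \
           (b.scheme, b.hostname, b.port or _default_port(b.scheme))

# A redirect from example.com to elsewhere.net must not reuse the connection:
print(can_reuse_connection("http://example.com/a", "http://elsewhere.net/b"))  # False
print(can_reuse_connection("http://example.com/a", "http://example.com/b"))    # True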

How to make the nginx upstream module send responses to the client synchronously

I'm setting up a live broadcast website. I use nginx as a reverse proxy and deploy multiple flv-live-stream processes behind nginx (binary programs written in C++). Clients maintain long-lived connections with nginx. In my flv-live-stream program I count the video frames already sent to predict whether the client is playing smoothly.
But I found there is a strange buffer in the upstream module. Even if the client loses 100% of packets, the back-end process can still send data to nginx for 2~3 seconds, almost 2.5~3 MBytes.
Is there a way for the response to be passed to the client synchronously, as soon as it is received from the back-end, so that when nginx is unable to send data to the client (e.g. because the client is losing packets), nginx stops accepting data from the back-end immediately?
I have already set:
listen 80 sndbuf=64k rcvbuf=64k;
proxy_buffering off;
fastcgi_buffering off;
Can anyone help? Thanks!
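For reference, a sketch of how these directives are usually combined for unbuffered proxying (the back-end address is a placeholder). Note that even with proxy_buffering off, nginx still reads up to proxy_buffer_size from the upstream at a time, and the kernel socket buffers on both connections hold additional in-flight data, which likely accounts for much of the 2~3 seconds observed:

server {
    listen 80 sndbuf=64k rcvbuf=64k;

    location /live {
        proxy_pass http://127.0.0.1:8080;   # placeholder back-end address

        # Pass response data to the client as soon as it arrives from the
        # back-end instead of accumulating it in proxy buffers.
        proxy_buffering off;

        # Even with buffering off, this is the size of the single buffer
        # used for each read from the upstream.
        proxy_buffer_size 16k;
    }
}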

A lot of 408s on nginx due to client body timeout

I am running a backend server with gunicorn behind nginx 1.6.2 on Ubuntu 12.04.
Recently I noticed a lot of 408s in the nginx logs for upload (POST) requests, and by changing the various timeouts in the nginx config I found that they were due to client_body_timeout.
Taking a tcpdump on the server side, it looked like the client is not sending anything after the initial SYN and SYN-ACK packets; after the client body timeout the server tries to close the connection by sending FIN, ACK, but the client does not ACK, and the server goes into its retransmission policy.
Is there anything I am missing, any HTTP header that needs to be added, or any TCP parameter that needs to be configured?
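(For reference, the timeout being hit here is set with the client_body_timeout directive; 60 seconds is the nginx default, and nginx answers 408 when it expires:)

http {
    client_body_timeout 60s;
}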
I found the issue.
Took a tcpdump on the client side and found that only small-sized TCP segments were reaching the client.
Reduced the MSS to 1200 and it worked for me :). I don't know if this is the correct approach.
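The answer does not say how the MSS was reduced; one common way to clamp it on a Linux host is an iptables TCPMSS rule applied to outgoing SYN packets, for example:

iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1200

(The advmss option of ip route is another way to achieve a similar effect per route.)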
