A lot of 408s on nginx due to client body timeout - nginx

I am running a backend server with gunicorn behind nginx 1.6.2 on Ubuntu 12.04.
Recently I noticed a lot of 408s in the nginx logs for upload (POST) requests, and by changing the various timeouts in the nginx config I found that they were caused by client_body_timeout.
Taking a tcpdump on the server side, it looks like the client is not sending anything after the initial SYN and SYN-ACK packets; once the client body timeout expires, the server tries to close the connection by sending a FIN-ACK, but the client never ACKs it and the server goes into its retransmission policy.
Is there anything I am missing? Does any HTTP header need to be added, or any TCP parameter need to be configured?
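For reference, the directive in question sits in the nginx configuration roughly like this (a minimal illustrative sketch, not my actual config; 60s is the default):

    server {
        listen 80;

        location /upload {
            # Maximum time nginx will wait between two successive reads of the
            # request body; if the client sends nothing for this long, nginx
            # answers 408 and closes the connection.
            client_body_timeout 120s;
        }
    }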

I found the issue.
Taking a tcpdump on the client side, I found that only small TCP segments were reaching the client.
Reducing the MSS to 1200 worked for me :). I don't know if this is the correct approach.
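For anyone who needs to do the same: on Linux the MSS can be lowered either by clamping it with iptables or by advertising a smaller advmss on the route. Both are illustrative sketches only (the gateway address and interface name are made up, and this is not necessarily how the original fix was applied):

    # Clamp the MSS advertised in SYN packets passing through this box
    iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN \
             -j TCPMSS --set-mss 1200

    # Or advertise a smaller MSS on a specific route
    ip route change default via 192.0.2.1 dev eth0 advmss 1200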

Related

Does Nginx close the client TCP connection before sending back the HTTP response?

I found the following documentation on the NGINX website itself: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
Question:
The above point is not correct, right? Since HTTP is a synchronous protocol, after a client sends a request over an established TCP connection to the server (here, the NGINX reverse proxy), the client expects a response on that TCP connection. If that is the case, the NGINX server cannot close the connection just after receiving the request, correct? Shouldn't NGINX keep the connection open until it gets a response over the upstream server connection and relays that data back over the same client connection?
I believe the way that paragraph is phrased is inaccurate.
The NGINX blog post mentioned in the question is describing the behavior of UDP in the context of Direct Server Return (DSR); it is not part of the official documentation. I suspect the author didn't do a good job of explaining how a conventional layer 7 reverse proxy connection works because they were focused on explaining how DSR works.
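For a conventional layer 7 proxy_pass setup, the flow is exactly what the question describes (a minimal illustrative sketch, not taken from the blog post):

    location / {
        # Illustrative upstream address. NGINX opens (or reuses) a separate
        # connection to the upstream, waits for its response, and only then
        # relays that response back over the still-open client connection.
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
    }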

Cloudflare - normal TCP connection "bad request"

This question relates only to the Cloudflare proxy.
Trials:
If I try to establish a connection to mydomain.com:8443 through the browser, it works.
But if I try the same thing using my own TCP program, it disconnects before hitting the actual server.
Note that the TCP program works if I turn off the orange cloud (proxy).
Errors:
So I used Wireshark to see what happens, and it turns out Cloudflare blocks the connection with error code 400 Bad Request.
Thoughts/Questions:
Are there any settings in Cloudflare I can fiddle with to forward the connection to the server regardless of whether it is a "bad request" (while keeping the orange-cloud proxy on)? Or am I forced to rewrite the program to use a WebSocket instead?
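My understanding (which may be wrong) is that the 400 means the orange-cloud proxy terminated my connection and tried to parse the first bytes it received as HTTP. The browser works because it starts every connection with a well-formed request such as the illustrative one below, while my program sends its own bytes that never get past that parser:

    GET / HTTP/1.1
    Host: mydomain.com
    Connection: close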

HTTP client does not initiate TCP FIN/ACK when server sends PSH,FIN,ACK

I am implementing my own TCP protocol stack and an extremely simple HTTP server on U-Boot, and I have run into a problem: the client does not send FIN/ACK after I send FIN/ACK/PSH. Both the HTTP and TCP content seem to be right with regard to TCP sequence and ack numbers and content length, but the client only responds with a FIN on its first visit to any URL. Any subsequent attempt on an already-visited URL does not get a FIN. Can someone tell me what I am missing in my TCP or HTTP content that causes the client not to close the connection?
I have provided a capture in case anyone is interested in this problem.
Link to packet capture
The expected result is that the client displays the content of the HTTP 404 Not Found response. Instead, the browser keeps loading non-stop until the client sends a TCP RST, and then it displays "Page cannot be found".
In the streams with issues (like tcp.stream eq 1 in the pcap), the 404 from the server does not get acknowledged by the client, which likely means that it is dropped somewhere. In the stream without issues (tcp.stream eq 0) the 404 gets acknowledged. Looking closer at both 404s reveals that the good one has a valid TCP checksum while the dropped one does not. Thus, most likely your TCP checksum calculation is wrong, and the client system is dropping these invalid packets so that they never reach the client application.
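For reference, the TCP checksum has to cover an IPv4 pseudo-header in addition to the TCP header and payload. Below is a generic sketch of the calculation (not taken from the U-Boot code in the question); it assumes the checksum field inside the segment has been zeroed before summing:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* One's-complement 16-bit sum over a byte buffer (big-endian word order). */
    static uint32_t csum_add(uint32_t sum, const uint8_t *data, size_t len)
    {
        while (len > 1) {
            sum += ((uint32_t)data[0] << 8) | data[1];
            data += 2;
            len  -= 2;
        }
        if (len)                    /* a trailing odd byte is padded with zero */
            sum += (uint32_t)data[0] << 8;
        return sum;
    }

    /* TCP checksum for IPv4: one's complement of the sum over the pseudo-header
     * (source IP, destination IP, zero, protocol 6, TCP length) followed by the
     * TCP header and payload. src_ip/dst_ip are in network byte order, tcp_len
     * is header plus payload in bytes, and the checksum field in tcp_segment
     * must already be zero. */
    uint16_t tcp_checksum(uint32_t src_ip, uint32_t dst_ip,
                          const uint8_t *tcp_segment, size_t tcp_len)
    {
        uint8_t pseudo[12];
        uint32_t sum;

        memcpy(pseudo, &src_ip, 4);
        memcpy(pseudo + 4, &dst_ip, 4);
        pseudo[8]  = 0;
        pseudo[9]  = 6;                        /* IPPROTO_TCP */
        pseudo[10] = (uint8_t)(tcp_len >> 8);
        pseudo[11] = (uint8_t)(tcp_len & 0xff);

        sum = csum_add(0, pseudo, sizeof(pseudo));
        sum = csum_add(sum, tcp_segment, tcp_len);

        while (sum >> 16)                      /* fold carries back into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;                 /* store big-endian in the header */
    }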

What effect does HTTP keep-alive have on the TCP connection?

With HTTP persistent connections there is a "keep alive" timer.
When the keep-alive time is over, what happens?
Will the TCP connection close? I don't think so, because there is a keepalive on the TCP connection that already exists.
So what is the effect of the HTTP keep-alive timer?
If I open an HTTP connection to a URL (TCP) on port 80, the server's port will not be free until the TCP connection ends.
So what happens when the HTTP keep-alive ends?
I have tried to understand this.
I would be happy to get an official source for this.
Thanks!
With HTTP persistent connections there is a "keep alive" timer.
Correct. Don't confuse it with TCP keepalive, which is a completely different thing (RFC 1122). I am assuming here that you are talking about HTTP, as per your text.
When the keep-alive time is over, what happens?
The connection will be closed by one peer or the other.
Will the TCP connection close?
Correct.
I don't think so, because there is a keepalive on the TCP connection that already exists.
I don't know what this means.
So what is the effect of the HTTP keep-alive timer?
It closes open HTTP connections when the specified period of inactivity has expired.
If I open an HTTP connection to a URL (TCP) on port 80, the server's port will not be free until the TCP connection ends.
Incorrect. You can open many connections to the same listening port.
So what happens when the HTTP keep-alive ends?
The connection is closed. You've already asked that.
I would be happy to get an official source for this.
The official source for HTTP/1.1 is RFCs 7230 through 7235, the successors of RFC 2616.
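As a concrete illustration (the values are made up, and the Keep-Alive header is a widely used convention rather than a required part of the specification): a server using a five-second keep-alive timer might answer like this and then simply close the TCP connection after five idle seconds:

    HTTP/1.1 200 OK
    Content-Length: 1234
    Connection: keep-alive
    Keep-Alive: timeout=5, max=100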
TCP-level keepalive is done out of band, so there is no stream data associated with it. This means applications using sockets don't see the effect of TCP keepalives, so an idle connection will still be closed by an HTTP server or proxy.
Also, the interval for sending TCP keepalives is typically very long by default (hours). You can find more information on the keepalive socket option here on MSDN.
HTTP doesn't give a server a way to prompt the client to do something, so if the client doesn't use a connection, the only options are to close it or to leave it open. That is typically a configuration option in the server or proxy.
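To make the distinction concrete: TCP keepalive has to be switched on per socket by the application, and even then the probes are empty segments answered automatically by the peer's TCP stack, so the HTTP layer never sees them and its own idle timer still fires. A minimal Linux sketch (the interval values are purely illustrative):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Enable TCP-level keepalive on an existing socket. This is independent of
     * any HTTP keep-alive timer: the probes carry no data, so the application
     * on the other side never sees them. */
    static void enable_tcp_keepalive(int fd)
    {
        int on = 1, idle = 60, intvl = 10, cnt = 5;   /* illustrative values */

        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
        /* Linux-specific tuning; without it the default idle time is ~2 hours. */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
    }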

Simulating HTTP/TCP re-transmission timeout

I am working on Linux.
I have an HTTP client which requests some data from an HTTP server. The HTTP client is written in C and the HTTP server is written in Perl.
I want to simulate TCP re-transmission timeouts at the client end.
I assume that closing the socket gracefully would not cause the client to re-transmit the requests.
So I tried the following scenarios:
Exit the server as soon as it gets the HTTP GET request. However, I noticed that once the application exits, the socket is still closed gracefully: I see that the server initiates FIN/ACK messages towards the client even though the application never called "close" on the socket. I have noticed this behaviour with a simple TCP server and client written in C as well.
Have the server send no response at all to the client's GET request. In this case I notice that a FIN, ACK is still sent by the server.
It seems that in these cases the OS (Linux) takes care of closing the socket with the peer.
Is there any way to suppress this behaviour (using ioctl or setsockopt options), or any other way to simulate TCP re-transmission timeouts?
You could try setting firewall rules that block the packets going from the server to the client, which would cause the client to re-transmit the requests. On Linux this would probably be done using iptables, but different distributions have different methods of controlling it.
This issue was previously discussed here.
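A hedged sketch of such a rule, run on the server (the client address 203.0.113.10 is made up for illustration; delete the rule again once the test is done):

    # Drop everything this host sends from port 80 towards the client, so the
    # client's GET is never ACKed or answered and its retransmission timer fires.
    iptables -A OUTPUT -p tcp --sport 80 -d 203.0.113.10 -j DROP

    # Remove the rule afterwards.
    iptables -D OUTPUT -p tcp --sport 80 -d 203.0.113.10 -j DROP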
