Nginx request not timing out

I am very new to Nginx. I have set up Nginx in my Live environment.
Problem Statement
I have configured 4 servers as upstream servers in my Nginx configuration. I can see a few requests that take more than 180 seconds overall, which makes my system very slow. Those requests go to the first server in the upstream and are then re-sent to the 2nd server in the upstream. So I guess the problem could be that the first server is timing out and only sending back the response after some timeout period. The only timeout set in my configuration is
proxy_read_timeout 180;
Is this the main culprit? Will requests time out on the upstream sooner if I change this value to something lower?
I want to change the value in Live only after some expert advice.
Could someone please shed some light on this?
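For reference, a minimal sketch of the nginx proxy timeout directives involved; the upstream name and values here are illustrative only, not a recommendation for a Live environment:
location / {
    proxy_pass http://app_backend;   # placeholder upstream name

    # Time allowed to establish the TCP connection to an upstream server.
    proxy_connect_timeout 5s;

    # Time allowed between two successive reads from the upstream.
    # This is the 180s in the question; when it expires, nginx reports an
    # upstream timeout and, subject to proxy_next_upstream, may retry the
    # request on the next server in the upstream group.
    proxy_read_timeout 180s;

    # Time allowed between two successive writes to the upstream.
    proxy_send_timeout 60s;

    # Conditions under which nginx may pass the request to the next upstream server.
    proxy_next_upstream error timeout;
}
Lowering proxy_read_timeout makes nginx give up on a slow upstream sooner, but it does not make the upstream itself any faster, and retried requests should be safe to repeat (idempotent) before relying on that behaviour.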

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that, from the client's perspective, when the upstream server app closes the connection, my connection is closed as well and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
    }
    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, then even if that protocol uses a request/reply mechanism with no implied state, nginx does not know whether it has received a complete request, let alone how to replay it elsewhere.
You need to implement a protocol-aware man-in-the-middle (proxy).
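For contrast, a rough sketch of the HTTP case, where nginx does understand request boundaries and can replay a failed request on another server (upstream name and servers are placeholders):
upstream app_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        # Because nginx knows where an HTTP request starts and ends,
        # it can resend the same request to the next server on these conditions.
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 3;
    }
}
With a raw TCP stream there is no equivalent unit of work to resend: the stream module's proxy_next_upstream only helps when the initial connection to a server cannot be established, not when an established connection drops mid-stream.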
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation here. For instance, in your nginx configuration:
...
keepalive 32;
...
This activates upstream keepalive with a cache of up to 32 idle connections kept open per worker process.
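For reference, the keepalive directive lives inside the upstream block; a minimal sketch for the plain-nginx http case (names and values are placeholders):
upstream my_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    # Keep up to 32 idle connections to these servers cached per worker
    # process instead of opening a new connection for every request.
    keepalive 32;
}

server {
    location / {
        proxy_pass http://my_backend;
        # Both are required for keepalive to upstream servers over HTTP.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}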

Nginx Keep Alive: Simultaneous SSL Handshakes Taking 25s

Thanks for reading :)
This is a super tough issue, and would love any ideas to figure this out.
Problem: When a user logs in, the application initiates ~20 API requests in parallel. The first request does the SSL handshake, and then around the 10th to 13th request I see two requests initiate the SSL handshake at the same time, with each handshake getting stuck and taking over 25 seconds before it is repeated. For users the issue manifests as a 30 second login.
Setup: I have a hardware-based load balancer and about 8 nginx nodes, each reverse proxying for a Java application running on the same node. The front end is a SPA, and all traffic flowing through nginx is dynamic content.
Additional Details
Tweaking the keepalive from 65s to 10s reduced the total SSL handshake time from >30s (which is the FE timeout) to 25s, so the issue is related to keepalive in some way.
The issue used to be present only in Firefox, and has now spread to Safari.
Upgraded nginx to latest LTS
Load balancer is distributing requests round robin.
Nginx logs do not include any mention of the issue.
The API requests are ordered, and the issue usually affects 2 of the same 3 requests.
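For anyone debugging something similar, these are the directives usually involved in the keepalive/handshake interaction; this is only a sketch, assuming the keepalive being tweaked above is nginx's keepalive_timeout, and the values are arbitrary examples rather than the poster's configuration:
server {
    listen 443 ssl;

    # How long an idle client keepalive connection is held open
    # before nginx closes it.
    keepalive_timeout 10s;

    # Reusing SSL sessions lets returning clients skip the full handshake.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;
}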

Varnish 3.0 returns 503 intermittently even though backend server responds in under 3 seconds

We are experiencing a weird problem with Varnish 3.0. We are observing a rate of 10-20 failures per node per minute in our Varnish farm. Varnish talks to a backend server which is fronted by a load balancer (an F5 in this case).
We took TCP dumps on the Varnish layer and the load balancer layer. It appears that the backend server responds in around 3 seconds; in the TCP dump we see the 200 OK being received by Varnish after 3 seconds. After this is where we see the strange behaviour: the Varnish server sends the ACK message to the load balancer within milliseconds, but the FIN, ACK message is sent only after a delay of about 10 seconds. This delay matches the 10 second configuration in the Varnish layer, and we see the 503 error being returned from the Varnish layer.
This is the Varnish backend configuration. The backend has been renamed for security reasons.
backend backend1 {
    .host = "<load balancer virtual server name>";
    .port = "<port>";
    .first_byte_timeout = 120s;
    .connect_timeout = 10s;
    .between_bytes_timeout = 10s;
}
Have any of you experienced a similar issue? Any pointers on troubleshooting it would be greatly appreciated.
The problem seems to be in the between_bytes_timeout configuration. You have set it to 10 seconds, and according to you, the load balancer takes 10 seconds to send the FIN, ACK message.
From the varnish docs:
between_bytes_timeout
Units: s
Default: 60
Default timeout between bytes when receiving data from backend. We only wait for this many seconds between bytes before giving up. A value of 0 means it will never time out. VCL can override this default value for each backend and backend request. This parameter does not apply to pipe.
Try to increase this number and see what happens
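A sketch of that change against the backend definition from the question; 60s simply restores the documented default, so pick whatever fits the backend's real behaviour:
backend backend1 {
    .host = "<load balancer virtual server name>";
    .port = "<port>";
    .first_byte_timeout = 120s;
    .connect_timeout = 10s;
    # Raised from 10s: the observed 10 second delay matches the old value,
    # so the timeout was likely firing and producing the 503.
    .between_bytes_timeout = 60s;
}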

How to set nginx upstream module response to client synchronously

I have set up a live broadcast website. I use nginx as a reverse proxy and deploy multiple flv-live-stream processes behind nginx (binary programs written in C++). Clients maintain long connections with nginx. In my flv-live-stream program I count the video frames already sent, to predict whether the client is playing smoothly.
But I found there is a strange buffer in the upstream module. Even if the client loses 100% of packets, the back-end process can still send to nginx for 2~3 seconds, almost 2.5~3 MBytes.
Is there a way to pass the response to the client synchronously, as soon as it is received from the back-end? And when nginx is unable to send data to the client (e.g. the client is losing packets), nginx should stop accepting data from the back-end immediately.
I have already set
listen 80 sndbuf=64k rcvbuf=64k;
proxy_buffering off;
fastcgi_buffering off;
Can anyone help? Thanks!
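Not an answer, but for anyone tuning the same thing, these are the directives that control how much nginx itself buffers between the upstream and a slow client; the location and values below are illustrative assumptions, not a tested configuration for this setup:
location /live {
    proxy_pass http://flv_backend;   # placeholder upstream name

    # Pass the response to the client as it arrives instead of
    # buffering the whole upstream response first.
    proxy_buffering off;

    # Even with buffering off, nginx reads from the upstream through a
    # single buffer of this size, so it bounds the data held in flight.
    proxy_buffer_size 4k;

    # Close the upstream connection promptly if the client goes away (default).
    proxy_ignore_client_abort off;
}
Note that the kernel socket buffers on both sides (tuned with sndbuf/rcvbuf on the listen directive, as in the question) also hold data, so some buffering between back-end and client is unavoidable.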

Nginx: Limit number of simultaneous connections per IP to backend

We use nginx with an application server as a backend.
We need to limit the number of simultaneous connections per IP to the backend. We used the limit_conn nginx directive for this purpose, but it doesn't work well in all cases.
If a user generates a lot of connections from one IP and quickly closes them, nginx still passes these requests to the backend, but because the client connection is already closed, these connections are not counted by limit_conn.
Is it possible to limit the number of simultaneous connections per IP to the backend server with nginx?
You may want to set
proxy_ignore_client_abort off;
Determines whether the connection with a proxied server should be closed when a client closes the connection without waiting for a response.
from the documentation
Another suggestion is to use limit_req to limit the request rate.
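A minimal sketch combining the two suggestions; the zone names, sizes, rates and upstream name are arbitrary examples:
http {
    # Track simultaneous connections and request rate per client IP.
    limit_conn_zone $binary_remote_addr zone=per_ip_conn:10m;
    limit_req_zone  $binary_remote_addr zone=per_ip_req:10m rate=10r/s;

    server {
        location / {
            proxy_pass http://app_backend;   # placeholder upstream name

            # At most 5 simultaneous connections per client IP.
            limit_conn per_ip_conn 5;

            # At most 10 requests per second per IP, with a small burst allowed.
            limit_req zone=per_ip_req burst=20;

            # Keep the upstream request tied to the client connection:
            # if the client aborts, close the backend connection too (default).
            proxy_ignore_client_abort off;
        }
    }
}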
I'm afraid this facility is not yet available for nginx out of the box. According to the Nginx FAQ
Many users have requested that Nginx implement a feature in the load
balancer to limit the number of requests per backend (usually to one).
While support for this is planned, it's worth mentioning that demand
for this feature is rooted in misbehaviour on the part of the
application being proxied.
I've seen a third-party module for this, nginx-limit-upstream, but I've never tried it.
