I'm running CentOS 6.5 and just moved from Apache to nginx. I'm running a WordPress website on the server with a few plugins that use scrapers to pull information from other websites, 2 scrapers to be exact, so I have 2 cron jobs that run them every hour. The thing is, since I moved to nginx I can only run one scraper at a time. When I try to run the second one it just stops, and I get this error in the logs: upstream timed out (110: Connection timed out) while reading response header from upstream
I think it's related to the number of allowed PHP processes, but I can't find the right setting for it. It would be great if you could suggest what I need to add or change to make it work.
fastcgi_connect_timeout won't work for me, because I need to run them at the same time.
Use: fastcgi_read_timeout 300; in the server block and proxy_read_timeout 300; in the http block
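A minimal sketch of where those directives can go, assuming PHP-FPM listens on 127.0.0.1:9000 (that address and the 300-second value are illustrative, not the poster's actual config):

http {
    proxy_read_timeout 300;
    server {
        listen 80;
        fastcgi_read_timeout 300;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;   # illustrative PHP-FPM address
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
}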
I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the Kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: if the connection to one of the upstream servers closes, the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers; I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that, from my client's perspective, when the upstream server app closes the connection, my connection is closed too and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name = "tcp-my-namespace-my-service-7550";
    }
    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if that protocol uses a request/reply mechanism with no implied state, nginx cannot tell whether it has received a complete request, so it cannot safely replay it against another upstream.
You would need to implement a protocol-aware man-in-the-middle proxy.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse proxy that does what I need: if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable, in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time. A rough sketch of the idea is below.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
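A minimal, illustrative sketch of the general idea in Python (not the actual proxy; the port, backend address, and single-client blocking design are all simplifications):

# Relay bytes between one client and a backend; if the backend goes away,
# reconnect to it without ever touching the client socket.
# LISTEN_PORT and BACKEND_ADDR are made-up values.
import select
import socket
import time

LISTEN_PORT = 7550                    # hypothetical front-end port
BACKEND_ADDR = ("10.0.0.5", 9000)     # hypothetical backend address

def connect_backend():
    # Retry until the backend accepts a connection (e.g. while it is redeploying).
    while True:
        try:
            s = socket.create_connection(BACKEND_ADDR, timeout=5)
            s.settimeout(None)        # timeout only applies to the connect itself
            return s
        except OSError:
            time.sleep(1)

def handle_client(client):
    backend = connect_backend()
    while True:
        readable, _, _ = select.select([client, backend], [], [])
        if client in readable:
            data = client.recv(4096)
            if not data:                      # client really closed: we are done
                backend.close()
                client.close()
                return
            try:
                backend.sendall(data)
            except OSError:                   # backend died mid-write: reconnect and resend
                backend.close()
                backend = connect_backend()
                backend.sendall(data)
        if backend in readable:
            try:
                data = backend.recv(4096)
            except OSError:
                data = b""
            if not data:                      # backend closed: reconnect, client stays open
                backend.close()
                backend = connect_backend()
            else:
                client.sendall(data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", LISTEN_PORT))
srv.listen(16)
while True:
    conn, _ = srv.accept()
    try:
        handle_client(conn)                   # real version handles many clients concurrently
    except OSError:
        conn.close()                          # client went away; move on

The real version handles many clients concurrently and deals with partial writes, but the core trick is the same: only the backend socket is ever replaced, and the client socket is left alone.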
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation. For instance, in your nginx configuration:
...
keepalive 32;
...
This activates the keepalive functionality, keeping a cache of up to 32 idle connections to the upstream open at a time.
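In plain nginx terms, the keepalive directive sits inside the upstream block, roughly like this (the upstream name and server addresses are illustrative, not the ingress controller's generated values):

upstream upstream_balancer {
    server 10.0.0.5:7550;    # illustrative backend addresses
    server 10.0.0.6:7550;
    keepalive 32;            # cache up to 32 idle connections per worker process
}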
CKAN is running on version 2.9.1. With CKAN's native tracking enabled, page loads slow down: the first hit loads quickly, but the next hit takes about 90 seconds.
There is an Nginx reverse proxy in front of it, and on it I'm getting a timeout error on the _tracking call.
*99 upstream timed out (110: Connection timed out) while reading response header from upstream,
But at the application level I put in some print statements, and they are all getting printed (the call is reaching the CKAN application).
Nginx was expecting a response from the application, but it wasn't responding. I've set the timeout for this in Nginx manually, and now there is no lag.
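For anyone hitting the same thing, the change is a timeout scoped to the _tracking location, roughly along these lines (assuming CKAN is proxied over HTTP; the backend address and the value are illustrative, not the exact ones used):

location /_tracking {
    proxy_pass http://127.0.0.1:5000;    # illustrative CKAN backend address
    proxy_read_timeout 10s;              # illustrative value, scoped to this call only
}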
When invoking other services over HTTP in my app, DNS resolution happens on every request. At the beginning DNS resolution was normal and there were no timeouts, but after a while more and more DNS resolutions started timing out. If I take the timing-out domain names and resolve them separately with the 'dig' command in the Linux environment, they all resolve normally. In my nginx.conf file I have resolver_timeout 60s (the default is 30s) and resolver 8.8.8.8. My app is deployed with OpenResty. How do I debug this?
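The relevant resolver lines in nginx.conf are:

resolver 8.8.8.8;
resolver_timeout 60s;    # raised from the 30s default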
I am very new to Nginx. I have set up Nginx in my live environment.
Problem Statement
I have set 4 servers as upstream servers in my Nginx configuration. I can see there are a few requests which take more than 180 seconds overall, and that makes my system very slow. I can see a few requests going to the first server in the upstream and then being retried on the 2nd server in the upstream. So my guess is that the first server is timing out and the response only comes back after some timeout period. The only timeout period set in my configuration is
proxy_read_timeout 180;
Is this the main culprit? If I change this value to a lower one, will I get the timeout from the server sooner?
I want to change the value in the live environment only after some expert advice.
Could someone please shed some light on this?
I am using nginx as a load balancer (reverse proxy) and everything has looked fine so far.
The problem I am trying to solve is to somehow make nginx understand that an upstream backend server is down and stop sending requests to it. By down, I really mean that there is no server at that address at all, or it has been shut down.
Case 1: 2 backend servers defined in the upstream, both instances running, and then 1 backend application is stopped. Nginx then understands that it is down and doesn't try to send requests to it again during fail_timeout (10 seconds by default). This is OK and already acceptable.
Case 2: 2 backend servers defined in the upstream, but only 1 instance actually running. Nginx still tries to balance requests as if both of them were up and doesn't mark the stopped (non-existent) backend as unhealthy. In this case I receive a 504 Gateway Timeout.
What I would like to achieve is to make nginx behave as in case 1 and mark the backend as unhealthy, without receiving a 504 Gateway Timeout.
Any ideas? Configuration option?
A little more investigation into the nginx configuration led me to this line, in case anyone needs it:
proxy_next_upstream error timeout http_504;
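For context, a minimal sketch of where that line sits (server names are illustrative; max_fails and fail_timeout are shown at their defaults, including the 10-second fail_timeout mentioned above):

upstream backend {
    server app1.example.com:8080 max_fails=1 fail_timeout=10s;    # illustrative names, default values
    server app2.example.com:8080 max_fails=1 fail_timeout=10s;
}
server {
    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout http_504;    # also try the next server on a 504
    }
}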