I have a JavaScript application that makes AJAX requests to retrieve XML files from an nginx server. Normally everything is fine, but I often see errors in the nginx log (and get errors from my application's error reporting) showing that nginx experiences a timeout during the GET:
2015/11/16 21:15:21 [error] 1208#0: *4894044 upstream timed out (110: Connection timed out) while connecting to upstream, client: 209.95.138.54, server: www.servername.com, request: "GET /Shape%20Textures/Metal/Born%20to%20Shine.jpg?agentView=436314 HTTP/1.1", upstream: "http://127.0.0.1:3000/AQO/Shape%20Textures/Metal/Born%20to%20Shine.jpg?agentView=436314", host: "www.servername.com.com", referrer: "https://www.servername.com.com/?nid=39956&mode=edit"
We also sometimes get this similar error:
2015/11/17 19:03:16 [error] 1002#0: *54042 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 137.164.121.52, server: www.servername.com, request: "POST /projects/?q=node/36375/update_project_time_and_stats HTTP/1.1", upstream: "http://127.0.0.1:3000/projects/?q=node/36375/update_project_time_and_stats", host: "www.servername.com", referrer: "https://www.servername.com/AQO/?nid=36375&mode=edit"
I have seen similar posts about similar timeouts, but all of those seemed reproducible. I can never trigger this issue myself, yet on the live server I see 5-30 of these timeouts a day.
Here is my nginx config:
client_max_body_size 50M;

server {
    server_name servername.com;
    return 301 $scheme://www.servername.com$request_uri;
}

server {
    listen 80;
    listen 443 ssl;
    fastcgi_read_timeout 120;
    ssl_certificate     /path/to/ssl/star_servername_com.pem;
    ssl_certificate_key /path/to/ssl/star_servername_com.key;

    # Redirect all non-SSL traffic to SSL.
    if ($ssl_protocol = "") {
        rewrite ^ https://$host$request_uri? permanent;
    }

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name www.servername.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://127.0.0.1:3000;
        proxy_read_timeout 120s;
    }
}
Since I can't reproduce it, how might I be able to track down this issue?
upstream timed out (110: Connection timed out) while connecting to
upstream
means that your upstream server didn't accept the connection in time.
upstream timed out (110: Connection timed out) while reading response
header from upstream
means that your upstream server didn't respond with an answer in time.
So check your upstream server at 127.0.0.1:3000. It may be configured with a small number of allowed incoming connections, have some sort of DDoS protection, be under heavy load at the moment, or something else.
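As a rough first check of that backend (just a sketch; the use of ss and curl here is my assumption, not from the question), you can hit it directly on the server, bypassing nginx:

# Confirm something is listening on the backend port.
ss -lntp | grep ':3000'

# Time a request straight to the backend; the path below is taken from
# the upstream URL in your error log.
curl -o /dev/null -s -w 'HTTP %{http_code}, total %{time_total}s\n' \
    'http://127.0.0.1:3000/AQO/Shape%20Textures/Metal/Born%20to%20Shine.jpg?agentView=436314'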
I would like to refer to this answer, which shows how to optimize your settings.
Although it's not the best solution, and depends very much on what you'd like to achieve, you could simply increase proxy_read_timeout to, for example, 300 seconds.
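A minimal sketch of what that could look like in your existing location block (note that the first error, "while connecting to upstream", is governed by proxy_connect_timeout rather than proxy_read_timeout; the values are only examples):

location / {
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://127.0.0.1:3000;
    # Covers "timed out ... while connecting to upstream"
    proxy_connect_timeout 10s;
    # Covers "timed out ... while reading response header from upstream"
    proxy_read_timeout 300s;
}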
Related
I use Nginx as a reverse proxy to serve an internal HTTP application over a HTTPS connection.
But I have some problems with redirection.
This is my setup:
location / {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://172.16.88.208;
    proxy_intercept_errors on;
    error_page 302 = @handle_redirects;
}

location @handle_redirects {
    resolver 8.8.8.8;
    set $orig_loc $upstream_http_location;
    proxy_pass $orig_loc;
}
At first, I tried it without setting the resolver to 8.8.8.8 but encountered this error (while a 502 was displayed to the client):
*1 no resolver defined to resolve xxx.xxxx.com while sending to client, client: xxx.xxx.xxx.xxx
After setting the resolver I'm getting closer, but not quite. Still a 502 and this error:
*6 connect() failed (111: Connection refused) while connecting to upstream,
upstream: "https://172.16.88.208:443/login"
Well, that makes perfect sense, because the application is hosted at http://, not https://
I specify this in the code above:
set $orig_loc $upstream_http_location;
Why does $upstream_http_location point to an https page?
Thanks for any help.
I have a really weird issue with NGINX.
I have the following upstream in my upstream.conf file:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
    rewrite ^ $command break;
    proxy_pass https://files_1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it is.
But when I send a request to the NGINX file server, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1, upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When you define an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides whether your upstream is down based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down and Nginx reports no live upstreams. This is better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
I had the same error, no live upstreams while connecting to upstream.
Mine was SSL related: adding proxy_ssl_server_name on solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}
I get a 502 Bad Gateway when connecting to my NodeBB installation using my domain.
NodeBB is running on the default port (4567).
My nginx seems to be configured properly (when connecting using the IP): http://puu.sh/mLI7U/0e03691d4c.png
My nodebb seems to be configured properly (when connecting using the IP):
http://puu.sh/mLI95/5fdafcaed9.png
My A record pointing the domain to my VPS's IP is configured properly.
Here is my /etc/nginx/conf.d/example.com.conf:
server {
    listen 80;
    server_name sporklounge.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://127.0.0.1:4567/;
        proxy_redirect off;

        # Socket.IO Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
My NodeBB config.json
{
    "url": "http://localhost:4567",
    "secret": "25d0d6a2-0444-49dc-af0c-bd693f5829d8",
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": "6379",
        "password": "",
        "database": "0"
    }
}
Here is my /var/log/nginx/error.log:
2016/01/27 12:04:42 [error] 22026#0: *4062 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:80/favicon.ico", host: "sporklounge.com", referrer: "http://sporklounge.com/"
2016/01/27 12:21:06 [crit] 974#0: *1 connect() to 127.0.0.1:4567 failed (13: Permission denied) while connecting to upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4567/", host: "sporklounge.com"
2016/01/27 12:21:07 [crit] 974#0: *1 connect() to 127.0.0.1:4567 failed (13: Permission denied) while connecting to upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:4567/favicon.ico", host: "sporklounge.com", referrer: "http://sporklounge.com/"
All help is greatly appreciated, and I will answer any questions I can to help get to a solution. Thank you!
The one thing I see is that according to the docs, your url config value should be the full web-accessible address that points to your NodeBB. That would be sporklounge.com, not the current value.
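If that's the issue, the change in config.json would be just the url line, roughly like this (a sketch based on the domain in the question; whether it should be http or https depends on your setup):

"url": "http://sporklounge.com",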
It could also be that the backend is just sometimes responding slowly. Try a very high value for this setting in Nginx to see if the backend eventually responds:
# For testing, allow very long response times.
proxy_read_timeout 5m;
Also, use netstat to confirm the backend is running on port 4567:
sudo netstat -nlp | grep ':4567'
Wait, the answer may be right in your logs, which give you the reason for the connection failure:
(13: Permission denied) while connecting to upstream
See the related question:
(13: Permission denied) while connecting to upstream:[nginx]
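In short, (13: Permission denied) on connect() usually means SELinux is blocking nginx from making outbound connections (common on CentOS/RHEL). If that turns out to be the cause here, the usual fix is:

# Allow nginx to make network connections; -P makes the change persistent.
sudo setsebool -P httpd_can_network_connect 1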
I'm not sure if this is an issue with nginx or thumbor. I followed the instructions located here for setting up thumbor with nginx, and everything has been running smoothly for the last month. Then recently we tried to use thumbor after uploading images with larger dimensions (above 2500x2500), but I'm only greeted with a broken image icon.
If I go to my thumbor URL and pass the image location itself into the browser, I get one of two responses:
1) 500: Internal Server Error
or
2) 502: Bad Gateway
For example, if I try to pass this image:
http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg
I get 502: Bad Gateway and checking my nginx error logs results in
2015/05/12 10:59:16 [error] 32020#0: *32089 upstream prematurely closed connection while reading response header from upstream, client: <my-ip>, server: <my-server>, request: "GET /unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg HTTP/1.1" upstream: "http://127.0.0.1:8003/unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg", host: "<my-host>"
If needed, here's my thumbor.conf file for nginx:
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
upstream thumbor {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name <my-server>;
    client_max_body_size 10M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://thumbor;
        proxy_redirect off;
    }
}
For images below this size it works fine, but users will be uploading images from their phones. How can I fix this?
I'm running Nginx as a reverse proxy to serve content from a remote URL. It was working fine for a while, but when I moved it to another host I started getting the following errors.
I tested internet access from the new host and everything is fine; nginx even serves from its root location without issue. But when I request a location that acts as a reverse proxy, I get:
8583#0: *2 upstream timed out (110: Connection timed out) while
connecting to upstream, client: 10.64.159.12, server: xxxxx.com,
request: "GET /web/rest/v1/story/656903.json HTTP/1.1", upstream:
"http://requestedurl.com:80/web/rest/v1/story/656903.json", host: "myurl.com"
Location Config :
location /data {
    sub_filter 'http' 'https';
    sub_filter_once off;
    sub_filter_types application/json;
    proxy_read_timeout 300;
    proxy_pass http://url here ;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
}
Any advice?
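Since the error happens while connecting to the upstream, and it only started after moving to the new host, a reasonable first check (a sketch, reusing the URL from your error log) is whether the new host can reach the upstream at all, independently of nginx:

# From the new host, time a direct request to the upstream.
# A hang here points at firewall, routing or DNS on the new host rather than at nginx.
curl -v --max-time 10 'http://requestedurl.com/web/rest/v1/story/656903.json'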