I had this error today with NGINX on Vultr, running PHP 7:
2017/08/22 07:46:09 [error] 19191#19191: *7060 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 111.11.11.111, server: somedomain.com, request: "GET /com$
I did a hard reset of the server and now everything is working fine. But what was it, why did it happen, and will it happen again?
This is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    # custom added 22.08.2017
    proxy_read_timeout 300;
    # end custom

    client_max_body_size 800m;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
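A side note on the custom proxy_read_timeout above: when nginx talks to PHP-FPM, it usually does so over FastCGI, where the directive that governs this timeout is fastcgi_read_timeout; proxy_read_timeout only applies to proxy_pass upstreams. A minimal sketch of where that would go, with an assumed Ubuntu-style PHP 7.0 socket path that is not from the original post:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;            # assumed Ubuntu packaging include
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;   # assumed PHP-FPM socket path
    fastcgi_read_timeout 300;                     # FastCGI counterpart of proxy_read_timeout
}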
Finally, I found the issue!
I had too many processes running at the same time; the problem was in the PHP-FPM settings.
The corresponding error was in the PHP logs (so if you have the same problem, check the PHP logs).
On the web I found an article describing the fix. I then changed my www.conf file:
pm.max_children = 40
Then I restarted PHP-FPM:
systemctl restart php7.0-fpm.service
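For context, pm.max_children caps how many PHP-FPM worker processes can run at once; when the pool is exhausted, new requests queue until nginx's upstream read timeout fires, which matches the error above. A minimal sketch of the relevant www.conf section, where the companion values are illustrative assumptions rather than settings from the original post:

; /etc/php/7.0/fpm/pool.d/www.conf (typical path on Ubuntu, assumed)
pm = dynamic
pm.max_children = 40        ; hard cap on simultaneous workers (the fix above)
pm.start_servers = 8        ; illustrative value
pm.min_spare_servers = 4    ; illustrative value
pm.max_spare_servers = 12   ; illustrative value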
Please note, I have already referred to THIS QUESTION; however, it did not fix the issue...
As you can see from the nginx error log, I am sending a POST request to /order-history. This runs a SQL query that takes about a minute, but the connection is closing prematurely. The issue does not occur when the application is deployed with the Flask test server, obviously, as the logs point out :)
/var/log/nginx/error.log:
2018/05/30 15:34:04 [error] 12294#12294: *9 upstream prematurely closed connection while reading response header from upstream, client: 192.168.96.116, server: alpha2, request: "POST /order-history HTTP/1.1", upstream: "http://unix:/home/hleggio/myproject/myproject.sock:/order-history", host: "alpha2:5000", referrer: "http://alpha2:5000/query-selection"
/etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    fastcgi_read_timeout 99999;
    proxy_read_timeout 99999;
    # server_tokens off;

    client_max_body_size 20M;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-available/myproject:
server {
    listen 5000;
    server_name alpha2;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/hleggio/myproject/myproject.sock;
        proxy_read_timeout 9999;
        proxy_connect_timeout 9999;
        proxy_request_buffering off;
        proxy_buffering off;
    }
}
I was able to solve this issue. This wasn't related to my NGINX configuration.
The problem resided in my Gunicorn configuration file.
In my Gunicorn config file (the systemd unit at /etc/systemd/system/myproject.service), I added the following to the ExecStart line:
--timeout 600
The file now looks like this:
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=harrison
Group=www-data
WorkingDirectory=/home/harrison/myproject
Environment="PATH=/home/harrison/myproject/myprojectenv/bin"
ExecStart=/home/harrison/myproject/myprojectenv/bin/gunicorn --workers 3 --timeout 600 --bind unix:myproject.sock -m 007 wsgi:application
[Install]
WantedBy=multi-user.target
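Note that editing a systemd unit only takes effect once systemd reloads it and the service restarts; the usual sequence (standard systemd commands, with the service name from the unit above) is:

sudo systemctl daemon-reload
sudo systemctl restart myproject.service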
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    # this section is needed to proxy web-socket connections
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    include /etc/nginx/mime.types;
    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}

include /etc/nginx/sites-enabled/*;
Deploying Meteor with the nginx web server: the above is my nginx.conf file. While executing sudo nginx -t, I get the following error:
"map" directive is not allowed here in /etc/nginx/sites-enabled/app:2
nginx: configuration file /etc/nginx/nginx.conf test failed
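For reference, map is only valid directly inside the http context; nginx rejects it at the main level or inside a server block. A minimal sketch of the correct placement, reusing the names from the config above:

http {
    # map must sit at http level, outside any server block
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        # server-level directives can then reference $connection_upgrade,
        # e.g. proxy_set_header Connection $connection_upgrade;
    }
}

Note also that the nginx.conf above ends with a second include /etc/nginx/sites-enabled/*; outside the http block, which would load sites-enabled/app in the main context and could produce exactly this error.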
I have written and deployed an R API using plumber to a Digital Ocean droplet as in the instructions.
I am posting in .json data and expecting .json data back. To do this I use the curl command from the command line, for example:
curl --data @data/data.json http://[API ADDRESS] > results/output.json
This works fine when I post a small dataset but as the dataset gets bigger I start to get an HTTP error as follows:
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.10.0 (Ubuntu)</center>
</body>
</html>
I tried editing /etc/nginx/nginx.conf to allows for longer timeouts and larger files but still no luck. The nginx.conf file is as follows:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 3000;
    types_hash_max_size 2048;
    # server_tokens off;

    ##
    # Allow for longer jobs
    ##

    client_header_timeout 3000;
    client_body_timeout 3000;
    fastcgi_read_timeout 3000;
    client_max_body_size 100M;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
I then restart the nginx server with sudo service nginx restart but still get the timeout error.
The /var/log/nginx/error.log line reads as follows:
*4 upstream timed out (110: Connection timed out) while reading response header from upstream, client: [MY IP], server: _, request: "POST [API]", upstream: "http://127.0.0.1:8000/[API]", host: "[HOST ADDRESS]"
Any help or tips you can give on how plumber works under the hood would be very useful indeed. Many thanks!
I have now fixed this by adding the following lines to /etc/nginx/sites-available/[my site]/mysite.conf:
location / {
    # timeout settings
    proxy_connect_timeout 3000s;
    proxy_send_timeout 3000s;
    proxy_read_timeout 3000s;
}
I also commented out the keepalive_timeout directive in nginx.conf and specified the HTTP version as in this article, but I am not sure exactly what made the difference. If I find out, I will update the answer.
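Putting those pieces together, the location block presumably ends up looking roughly like this; the upstream address 127.0.0.1:8000 comes from the error log, while the proxy_http_version and Connection lines reflect the usual HTTP/1.1 upstream recipe and are an assumption rather than the author's confirmed config:

location / {
    proxy_pass http://127.0.0.1:8000;   # plumber upstream, per the error log
    proxy_http_version 1.1;             # assumed: the "http version" change mentioned above
    proxy_set_header Connection "";     # assumed: clear the Connection header toward upstream
    proxy_connect_timeout 3000s;
    proxy_send_timeout 3000s;
    proxy_read_timeout 3000s;
}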
I am using nginx as a proxy server to serve two web apps on a single server that are running on separate ports (for local development purposes). Below is the full nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    server {
        server_name news.mysite;

        location / {
            proxy_pass http://localhost:3001;
        }
    }

    server {
        server_name blog.mysite;

        location / {
            proxy_pass http://localhost:3002;
        }
    }

    # include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
}
When accessing the subdomains from the browser, I see the expected content based on the web apps that are running. However, when I access the main domain through the browser (http://mysite), it displays the content from the first proxy_pass (news.mysite at localhost:3001). I would have expected one of the following two scenarios:
Serve content from the default directory at /var/www/html
The typical "This site can’t be reached" error in the browser.
Why does nginx proxy to the first proxy_pass it finds by default, and how can I change it?
The first server block that nginx encounters for a given socket is treated as the default, unless you create another one that you explicitly mark as the default.
So for your case, you would want to add an additional server block as a catch-all:
server {
    listen 80 default_server;
    root /var/www/html;
}
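If you would rather refuse requests for unmatched hosts instead of serving files, a common variant (a sketch, not part of the original answer) is to close the connection with nginx's non-standard 444 code:

server {
    listen 80 default_server;
    server_name _;   # catch-all name
    return 444;      # nginx-specific: close the connection without sending a response
}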
I have Docker, and my nginx version is nginx/1.10.0 (Ubuntu 16.04).
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

include /etc/nginx/tcpconf.d/*;

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
This is the default nginx.conf; I added include /etc/nginx/tcpconf.d/*;.
tcpconf.d contains one file:
stream {
    upstream docker {
        server localhost:8182;
        server localhost:8183;
    }

    server {
        listen 443;
        proxy_pass docker;
    }
}
So basically I have a GlassFish 4 server on Docker. When I start a container on port 8182, I want nginx to balance requests over to port 8183 if 8182 is not responding, and vice versa.
This works perfectly, except for one thing: when I start a container, the GlassFish server starts and the web application on that server starts too. GlassFish comes up after 1-5 seconds, but the web application takes 30 seconds to a minute. So once GlassFish is up (for example on port 8182), nginx sends requests to that port and I get a 404, because GlassFish is up but the web application is not. In that case I want to be redirected to port 8183, because a 404 is not what I want to see.
So my question is: how do I tell nginx not to show me the 404 and to try the other port instead?
Is there any reason you are using the stream module for this? If NGINX is proxying to a regular HTTP server, use a regular http {} configuration and proxy_next_upstream to define the behavior on a 404 error:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
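A minimal sketch of that approach, reusing the upstream name and ports from the question (the listen port is an assumption; proxying port 443 at the http level would additionally require TLS termination with certificates):

http {
    upstream docker {
        server localhost:8182;
        server localhost:8183;
    }

    server {
        listen 80;   # assumed; the original stream config listened on 443

        location / {
            proxy_pass http://docker;
            # try the next upstream on connection errors, timeouts, or a 404 response
            proxy_next_upstream error timeout http_404;
        }
    }
}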