I'm working on a web service where Nginx is used as the proxy and Dropwizard runs in the backend. There is a problem with loading the URL: nothing that takes more than 2 minutes to generate will load. Initially the cut-off was 1 minute, so I changed proxy_read_timeout to 3600s.
But however much I increase it, the request does not stay open for more than 2 minutes. The nginx error log shows the following error:
2016/10/17 09:43:57 [error] 6#6: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.128.10, server: localhost, request: "GET /report-system/templates/Connection/csv/Transaction?params=PageNumber:1,PageSize:2000 HTTP/1.1", upstream: "http://127.0.0.1:8384/report-service/dev/reports/templates/Player_GlobalTransaction/render?connref=UpamMysql&format=csv&params=PageNumber:1,PageSize:2000", host: "api.website.com"
Most probably the problem is not with Dropwizard but only with nginx, because when I tested without Nginx, the web service stayed open until it finished loading the page. With Nginx in front, it closes at exactly 2 minutes.
Here is the full content of nginx.conf.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
proxy_read_timeout 3600s;
}
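For reference, a rough sketch of the kind of server block I would expect to carry the proxy settings (it is not present in the nginx.conf above; port 8384 is taken from the error log, everything else is an assumption):
server {
    listen 80;
    location / {
        # assumed upstream; the real proxy_pass may differ
        proxy_pass http://127.0.0.1:8384;
        # timeouts can also be set per location
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
        proxy_connect_timeout 60s;
    }
}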
And everything is running in separate Docker containers. So, what is the right way to keep the connection open until it finishes loading the content completely? Any help on this would be greatly appreciated.
I am trying to deploy a Flask app on CentOS with NGINX. The Flask app is served on 127.0.0.1:5000 and is also accessible with curl 127.0.0.1:5000.
I tried to keep it simple and removed all NGINX config files in conf.d and just use nginx.conf. This is the whole content of the nginx.conf file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
#include /etc/nginx/conf.d/*.conf;
server {
listen 80;
location / {
proxy_pass http://127.0.0.1:5000/;
proxy_set_header X-Real-IP $remote_addr;
}
}
}
However, NGINX still shows its default page, and there seems to be nothing in the world, or at least in the nginx.conf file, to change this. Is there anything in NGINX that might still require reconfiguration, or anything on CentOS that could lead to this problem?
I solved it. It was indeed just a matter of running the right NGINX commands.
After I ran the following commands it worked:
sudo systemctl stop nginx
sudo systemctl enable nginx
sudo systemctl restart nginx
Maybe it helps someone in the future.
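(As a general note, not part of the original fix: once the config files are in place, a syntax check followed by a reload is usually enough.)
sudo nginx -t
sudo systemctl reload nginx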
We are configuring a customized nginx, and while validating the configuration I'm getting the warnings below:
nginx -t
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: [warn] conflicting server name "gmottqa.test.att.com" on 0.0.0.0:8080, ignored
nginx: [warn] conflicting server name "gmottqa.test.att.com" on 0.0.0.0:8443, ignored
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Adding the nginx config file for troubleshooting the issue; please let me know if anything else is required.
cat nginx.conf
user nginx;
worker_processes auto;
error_log /opt/app/nginx/logs/error.log error;
pid /opt/app/nginx/run/nginx.pid;
events {
worker_connections 4096;
multi_accept on;
use epoll;
}
worker_rlimit_nofile 400000;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /opt/app/nginx/logs/access.log main;
keepalive_timeout 15;
keepalive_requests 1024;
client_header_timeout 15;
client_body_timeout 15;
send_timeout 15;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# enable gzip compression
gzip off;
gzip_http_version 1.1;
gzip_vary on;
gzip_proxied any;
gzip_static on;
# end gzip configuration
include /opt/app/nginx/conf/conf.d/*.conf;
Can you please suggest where I should check for this issue?
I assume you're using Linux.
In /etc/nginx/sites-enabled, depending on your editor, it may have left a temp file, e.g. default~, or the file could be named .save or something like this.
Just run ls -lah to see which files are there that shouldn't be.
Delete that file; it should solve your problem.
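A minimal sketch of that check and cleanup, assuming the leftover file is an editor backup called default~ (the file name is hypothetical):
ls -lah /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default~
sudo nginx -t && sudo systemctl reload nginx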
I have a Spring Boot application running on embedded Tomcat on a Vagrant CentOS box. It is running on port 8080, so I can access the application in a web browser.
I need to set up an Nginx proxy server that listens on port 80 and redirects to my application.
I'm getting this error in the Nginx log:
[crit] 2370#0: *14 connect() to 10.0.15.21:8080 failed (13: Permission
denied) while connecting to upstream, client: 10.0.15.1, server: ,
request: "GET / HTTP/1.1", upstream: "http://10.0.15.21:8080/", host:
"10.0.15.21"
All the setup examples look pretty much similar, and the only answer I could find that might help was this one. However, it doesn't change anything.
Here is my server config located in /etc/nginx/conf.d/reverseproxy.conf
server {
listen 80;
location / {
proxy_pass http://10.0.15.21:8080;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
And here is my nginx.conf file:
user root;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
}
Don't know if this is related, but under journalctl -u nginx I can see this log entry:
systemd1: Failed to read PID from file /run/nginx.pid: Invalid
argument
CentOS has SELinux enabled by default.
You need to allow nginx to make outbound network connections by enabling the httpd_can_network_connect boolean:
setsebool httpd_can_network_connect on
There is more information about this on the internet if you want to learn more. To make it persistent across reboots you can run:
setsebool -P httpd_can_network_connect on
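To verify that the boolean took effect (a quick sanity check, not strictly required):
getsebool httpd_can_network_connect
# expected output: httpd_can_network_connect --> on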
I have kibana listening on localhost:5601 and if I SSH tunnel to this port I can access kibana in my browser just fine.
I have installed nginx to act as a reverse proxy but having completed the setup all I get is 502 Bad Gateway. The more detailed error in the nginx error log is
*1 upstream prematurely closed connection while reading response header from upstream,
client: 1.2.3.4,
server: elk.mydomain.com,
request: "GET /app/kibana HTTP/1.1",
upstream: "http://localhost:5601/app/kibana"
My nginx config is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.fedora.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
}
My kibana.conf file within /etc/nginx/conf.d/ is:
server {
listen 80 default_server;
server_name elk.mydomain.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_cache_bypass \$http_upgrade;
}
}
This is a brand new Amazon Linux EC2 instance with the latest versions of kibana and nginx installed.
Has anyone encountered this problem before? I feel like it's a simple nginx config problem but I just can't see it.
It turns out that the backslashes before the dollar signs, as in proxy_set_header Upgrade \$http_upgrade;, were the result of a copy-paste from another configuration management tool.
I removed the unnecessary backslashes to get proxy_set_header Upgrade $http_upgrade; and reclaimed my sanity.
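For completeness, this is the working location block after removing the escapes (the same directives as in the question, just with plain $ variables):
location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}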
I want to ask about the nginx web server. When the site is accessed heavily, the server goes down and returns error code 502/504. I use Varnish 4 on the web server on port 8000. The physical server has the following specifications:
8 CPU Cores
16GB RAM
The nginx configuration is as follows:
user nginx;
worker_processes 8;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
fastcgi_buffers 256 16k;
fastcgi_buffer_size 256k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_max_temp_file_size 0;
......
}
The php-fpm configuration is as follows:
pm = dynamic
pm.max_children = 246
pm.start_servers = 32
pm.min_spare_servers = 32
pm.max_spare_servers = 64
Please help me, I am confused. I followed some of the recommendations that I found from several sources, but it still fails. Thanks.
Regards,
Janitra Panji
The HTTP status codes you are asking about are all defined on the related Wikipedia page
502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
They all indicate that the service behind your Nginx reverse proxy is down for one reason or another. You should study the tuning of your backend server. The issue is quite possibly there, and not with Nginx.
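As one concrete starting point on the backend side, the PHP-FPM pool can be sized against available memory. A rough sketch, assuming an average PHP worker of about 50 MB and roughly 12 GB of the 16 GB left for PHP after the OS, nginx and Varnish (both numbers are assumptions you should measure on your own box):
; pm.max_children ≈ memory available for PHP / average worker size
; e.g. 12288 MB / 50 MB ≈ 245, so the existing pm.max_children = 246 already sits at the memory ceiling
pm = dynamic
pm.max_children = 240
pm.start_servers = 32
pm.min_spare_servers = 32
pm.max_spare_servers = 64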