502 bad gateway after 30 seconds - nginx

One of the pages on my website requires a long computation on the server (~2 minutes). If I run the website on localhost it works fine, but in production nginx returns a 502 Bad Gateway after ~30 seconds. Here's the http section of my nginx conf:
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 120;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
proxy_read_timeout 300;
proxy_connect_timeout 300;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I tried adding:
fastcgi_read_timeout 300;
proxy_read_timeout 300;
at the end (after the "server" block), but it didn't do anything.

If you get a 502 Bad Gateway error, it means your application server (I guess it's Unicorn, according to your tags) is the one timing out, not Nginx. You should increase the timeout in the unicorn.rb file on your production server.
worker_processes 2
listen "/tmp/xxx.socket"
## equal to your proxy_read_timeout in the Nginx config
timeout 300
pid "/tmp/unicorn.xxx.pid"
In the case of Gunicorn (Python's Green Unicorn), do the following:
NUM_WORKERS=3
TIMEOUT=300
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--timeout $TIMEOUT \
--log-level=debug \
--bind=x.x.x.x \
--pid=$PIDFILE
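On the nginx side, proxy_read_timeout and friends only apply in a location that actually proxies the request; the location / block in the question has no proxy_pass, so the proxying presumably happens in one of the included conf files. Wherever it is, the timeouts need to sit next to it, roughly like this sketch (the upstream name is hypothetical; the socket path is reused from the unicorn.rb example above):
# in the http context:
upstream app_server {
    # hypothetical name; socket path taken from the unicorn.rb example above
    server unix:/tmp/xxx.socket fail_timeout=0;
}
# inside the server block:
location / {
    proxy_pass http://app_server;
    proxy_connect_timeout 300;
    proxy_read_timeout 300;   # keep in line with the 300s app-server timeout
}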

Related

uWSGI + NGINX + web2py application not accessible

I am using CentOS 7 with Python 2.7.15 and uWSGI + nginx to host my app.
Step by step I am getting closer to making it work.
I had to set Python 2.7.15 to run as python instead of 2.7.5,
then I had some uWSGI problems with the emperor service.
But now... the app works when I run uWSGI through
uwsgi --http :8000 --chdir /opt/web2py -w wsgihandler:application
but when I try to put it together with nginx I cannot access the page.
My nginx config at the moment is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
map $request_uri $loggable {
~/engine/getTasks.* 0;
~/static/* 0;
default 1;
}
access_log /var/log/nginx/access.log main if=$loggable;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
client_max_body_size 10M;
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
root /usr/share/nginx/html;
include /etc/nginx/default.d/*.conf;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /opt/web2py_cert/web2py.com;
}
location / {
uwsgi_pass unix:/run/uwsgi/web2py.sock;
include uwsgi_params;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
And my uwsgi.ini file
[uwsgi]
plugin = python2.7
logto = /opt/web2py/uwsgi.log
chdir = /opt/web2py
http = 0.0.0.0:80
module = wsgihandler:application
master = true
processes = 5
uid = woshi
socket = /run/uwsgi/web2py.sock
chown-socket = woshi:nginx
chmod-socket = 660
vacuum = true
Any suggestions?
Thank you.
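One detail worth flagging in the files above: uwsgi.ini sets http = 0.0.0.0:80 while nginx is also listening on port 80 and talks to the app over the Unix socket via uwsgi_pass. A socket-only variant of the ini (just a sketch, keeping the paths and users from the question) would drop that line:
[uwsgi]
plugin = python2.7
logto = /opt/web2py/uwsgi.log
chdir = /opt/web2py
module = wsgihandler:application
master = true
processes = 5
uid = woshi
; no "http = 0.0.0.0:80" here: nginx already owns port 80 and reaches uwsgi through this socket
socket = /run/uwsgi/web2py.sock
chown-socket = woshi:nginx
chmod-socket = 660
vacuum = true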

NGINX always returns {"status":400,"message":"Bad request"} - not able to consume upstream api

Below is my config.
I am always getting a bad request error when trying to consume the API response, but it looks like the upstream API is not getting recognized in the configs. Please help.
api_gateway.conf
log_format api_main '$remote_addr - $remote_user [$time_local] "$request"'
'$status $body_bytes_sent "$http_referer" "$http_user_agent"'
'"$http_x_forwarded_for" "$api_name"';
include api_backends.conf;
#include api_keys.conf;
server {
set $api_name -; # Start with an undefined API name, each API will update this value
access_log /var/log/nginx/api_access.log api_main; # Each API may also log to a separate file
listen 443 ssl;
server_name <my-ip-address>; # I have put the IP address where nginx is installed
# TLS config
ssl_certificate /etc/nginx/certs/test-bundle.crt;
ssl_certificate_key /etc/nginx/certs/test.key;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_protocols TLSv1.1 TLSv1.2;
# API definitions, one per file
include api_conf.d/*.conf;
# Error responses
error_page 404 = @400; # Invalid paths are treated as bad requests
proxy_intercept_errors on; # Do not send backend errors to the client
include api_json_errors.conf; # API client friendly JSON error responses
default_type application/json; # If no content-type then assume JSON
}
api_backends.conf
upstream warehouse_inventory {
zone inventory_service 64k;
server <ip>:443; # ip is where my upstream API is hosted
}
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/api_gateway.conf; # All API gateway configuration
include /etc/nginx/conf.d/*.conf; # Regular web traffic
}
/etc/nginx/conf.d/default.conf
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
/etc/nginx/api_conf.d/warehouse_api_simple.conf
# API definition
#
location /api/warehouse/inventory {
set $upstream warehouse_inventory;
rewrite ^ /_warehouse last;
}
# Policy section
#
location = /_warehouse {
internal;
set $api_name "Warehouse";
# Policy configuration here (authentication, rate limiting, logging, more...)
proxy_pass https://$upstream$request_uri;
}
curl https:///api/warehouse/inventory/try/health -k
{"status":400,"message":"Bad request"}
Log:
access.log
<ip-where-nginx-installed> - - [10/Feb/2020:06:19:29 -0600] "GET /api/warehouse/inventory/try/health HTTP/1.1" 404 153 "-" "curl/7.29.0" "-"
api-access.log
<ip-where-nginx-installed> - - [10/Feb/2020:07:04:20 -0600] "GET /api/warehouse/inventory/try/health HTTP/1.1"400 39 "-" "curl/7.29.0""-" "Warehouse"
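One thing visible in the config above: with proxy_intercept_errors on and error_page 404 = @400, a 404 coming back from the backend would also surface to the client as this JSON 400, so the body alone doesn't say which side failed. A way to narrow it down is to call the upstream directly and compare (the <ip> placeholder is the upstream host from api_backends.conf):
# Call the backend directly over HTTPS, bypassing the gateway; this is roughly what
# proxy_pass https://$upstream$request_uri sends, minus any headers nginx adds.
curl -kv "https://<ip>/api/warehouse/inventory/try/health"
# Then repeat the gateway call verbosely and compare status codes and headers.
curl -kv "https://<my-ip-address>/api/warehouse/inventory/try/health"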

cgit + uwsgi + nginx not generating the pages for repositories

I am trying to configure cgit with nginx through uwsgi. I managed to get the main page working at example.com/ and added my repos, but when I try to access a repo at example.com/somerepo I get a 502 error.
I know cgit is working fine because I can run cgit.cgi with and without the QUERY_STRING="url=somerepo" environment variable, and it generates the correct HTML for the main page and the somerepo page respectively.
I have been trying to debug the issue using the nginx error logs at debug level, strace and gdb on both nginx and cgit.cgi, and the output from uwsgi; this is what I've found so far:
When I click a somerepo link on cgit's main page, uwsgi makes a GET request to /somerepo and nginx tries to open a directory at /htdocs/somerepo, which it can't find because it doesn't exist. (I suppose cgit.cgi should generate this on the fly.) I know this from strace: stat("/usr/share/webapps/cgit/1.2.1/htdocs/olisrepo/", 0x7ffdf4c817c0) = -1 ENOENT (No such file or directory)
When I click on a somerepo link I get read(8, 0x561749c8afa0, 65536) = -1 EAGAIN (Resource temporarily unavailable) from cgit.cgi's strace.
When I try to visit an invalid URL like somerepotypo, it correctly generates a 404 page saying 'no repositories found'.
These are my configuration files:
/etc/nginx/nginx.conf
user nginx nginx;
worker_processes 1;
error_log /var/log/nginx/error_log debug;
events {
worker_connections 1024;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main
'$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$gzip_ratio"';
client_header_timeout 10m;
client_body_timeout 10m;
send_timeout 10m;
connection_pool_size 256;
client_header_buffer_size 1k;
large_client_header_buffers 4 2k;
request_pool_size 4k;
gzip off;
output_buffers 1 32k;
postpone_output 1460;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 75 20;
ignore_invalid_headers on;
# Cgit
server {
listen 80;
server_name example.com;
root /usr/share/webapps/cgit/1.2.1/htdocs;
access_log /var/log/nginx/access_log main;
error_log /var/log/nginx/error_log debug;
location ~* ^.+(cgit.(css|png)|favicon.ico|robots.txt) {
root /usr/share/webapps/cgit/1.2.1/htdocs;
expires 30d;
}
location / {
try_files $uri @cgit;
}
location @cgit {
include uwsgi_params;
uwsgi_modifier1 9;
uwsgi_pass unix:/run/uwsgi/cgit.sock;
}
}
}
cgit.ini (I load this using uwsgi --ini /etc/uwsgi.d/cgit.ini)
[uwsgi]
master = true
plugins = cgi
chmod-socket = 666
socket = /run/uwsgi/%n.sock
uid = nginx
gid = nginx
processes = 1
threads = 1
cgi = /usr/share/webapps/cgit/1.2.1/hostroot/cgi-bin/cgit.cgi
/etc/cgitrc
css=/cgit.css
logo=/cgit.png
mimetype-file=/etc/mime.types
virtual-root=/
remove-suffix=1
enable-git-config=1
scan-path=/usr/local/cgitrepos
Can you help me fix this? Thanks in advance
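For reference, the manual check described above can be reproduced from the shell (this is just the asker's own test spelled out, with paths taken from cgit.ini):
cd /usr/share/webapps/cgit/1.2.1/hostroot/cgi-bin
# main page, no query string
./cgit.cgi | head
# repository page, passing the repo through the url= parameter as described above
QUERY_STRING="url=somerepo" ./cgit.cgi | head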

nginx configuration does not start

I'm trying to set up an NGINX server as a benchmark to test client-server interaction. The server's root contains a few thousand random HTML pages.
This is also my first time working with an application like NGINX. I have been struggling to configure nginx for a while now using this website [1] and the nginx documentation.
To give you some more background, I set up nginx on my local machine, with the installation in a specific directory (called libs; bad naming, I should change that).
After starting nginx with ./sbin/nginx -c conf/nginx.conf I tried to curl the site to check whether it is functional:
curl http://127.0.0.1:6011
And I get this error:
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.10</center>
</body>
</html>
Where am I going wrong in my configuration?
[1] https://www.slashroot.in/nginx-web-server-performance-tuning-how-to-do-it
worker_processes 32;
worker_rlimit_nofile 51200;
error_log /lustre1/nginx-benchmark/libs/logs/error.log;
error_log /lustre1/nginx-benchmark/libs/logs/error.log notice;
error_log /lustre1/nginx-benchmark/libs/logs/error.log info;
pid /lustre1/nginx-benchmark/libs/logs/nginx.pid;
events {
worker_connections 50000;
multi_accept on;
}
http {
include /lustre1/nginx-benchmark/libs/conf/mime.types;
default_type application/octet-stream;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
tcp_nodelay on;
types_hash_max_size 2048;
#gzip on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /lustre1/nginx-benchmark/libs/logs/access.log main;
server {
listen 6011 default_server;
listen [::]:6011 default_server ipv6only=on;
server_name localhost;
#listen 6011;
#server_name localhost;
#charset koi8-r;
access_log /lustre1/nginx-benchmark/libs/logs/host.access.log main;
location / {
root /lustre1/nginx-benchmark/dataset/1024/;
try_files $uri html/index.html;
#index.php;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /lustre1/nginx-benchmark/libs/html/50x.html {
root /lustre1/nginx-benchmark/libs/html;
}
}
}
Can you ls any files in /lustre1/nginx-benchmark/dataset/1024/?
ls -l /lustre1/nginx-benchmark/dataset/1024/
If you can't, then that's why nginx is 404ing your request: it can't see them either. If you can, what are the permissions on that folder and the files? Are they readable by the user nginx runs as? What about the parent folders of that path?
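A quick way to check all of that at once (namei walks every parent directory; "nobody" is only a placeholder here, since the config above has no user directive and the actual worker user depends on how nginx was built):
# owner and permissions of every path component down to the docroot
namei -l /lustre1/nginx-benchmark/dataset/1024/
# list the files as an unprivileged user; swap "nobody" for the user the workers actually run as
sudo -u nobody ls -l /lustre1/nginx-benchmark/dataset/1024/ | head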
Add an error log with debug, to see what nginx thinks the problem is:
access_log /lustre1/nginx-benchmark/libs/logs/host.access.log main;
error_log /lustre1/nginx-benchmark/libs/logs/host.error.log debug;
Change your try_files line to look like this:
try_files $uri /index.html =404;
The =404 should terminate nginx's repeated checking, which is probably being caused by your /lustre1/nginx-benchmark/dataset/1024/ docroot not having an html/index.html in it.
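Putting the suggestions together, the location block would look roughly like this (a sketch; the docroot path is taken from the question, and whether an index.html actually exists there is still the open question):
location / {
    root /lustre1/nginx-benchmark/dataset/1024/;
    index index.html index.htm;
    # try the exact file, then the site-wide index, then return 404
    # instead of falling through to a non-existent html/index.html
    try_files $uri /index.html =404;
}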

map subdomains to different applications running on the same IP and port, distinguished by path

I have 3 applications (one web application, 2 Angular apps) running on the same EC2 instance on the same port (8080).
The paths to the apps are:
http://53.233.23.12:8080/Abc
http://53.233.23.12:8080/Xyz
http://53.233.23.12:8080/Pqr
I am using Nginx for redirection on the server.
My nginx.conf file looks like this:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http{
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 default_server;
listen [::]:80 default_server;
include /etc/nginx/default.d/*.conf;
server_name www.listmydebt.com listmydebt.com;
return 301 http://listmydebt.com:8080/Abc;
# redirect server error pages to the static page /40x.html
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
server {
listen 80;
server_name admin.listmydebt.com;
return 301 http://listmydebt.com:8080/Xyz;
}
server {
listen 80;
server_name partner.listmydebt.com;
return 301 http://listmydebt.com:8080/Pqr;
}
}
All domains and subdomains (listmydebt.com, admin.listmydebt.com, partner.listmydebt.com) point to the same IP address (53.233.23.12).
My Nginx is running on port 80, and the Tomcat server in which my applications are deployed is running on port 8080.
When I enter listmydebt.com it redirects to http://listmydebt.com:8080/Abc and the browser URL changes to http://listmydebt.com:8080/Abc. What I want is for the URL in the browser to remain listmydebt.com while showing the content of the redirected URL. The same is happening for the subdomains as well.
Please help me out. If any additional info is required, please mention it. Thanks in advance.
The best way is to use Docker Swarm, or at least to run the applications as separate Docker containers on separate ports.
Then you can easily map the ports to the subdomains with an Ansible playbook.
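Separately from the Docker suggestion, the usual nginx-only way to keep the browser URL unchanged is to proxy instead of redirect. A sketch for the main domain, assuming Tomcat is reachable at 127.0.0.1:8080 as described (the subdomain servers would do the same with /Xyz/ and /Pqr/):
server {
    listen 80;
    server_name listmydebt.com www.listmydebt.com;
    location / {
        # proxy_pass instead of return 301, so the address bar keeps listmydebt.com
        proxy_pass http://127.0.0.1:8080/Abc/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Note that apps deployed under a context path often emit absolute links back to /Abc/..., so the apps may need to generate relative links, or the proxy may need proxy_redirect/sub_filter rules on top of this.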
