Cannot access GlassFish 4 Admin Console via nginx location and proxy_pass

Folks,
We have a Java application running under GlassFish 4. I wanted to disable direct access to the GlassFish admin server by closing port 4848 at the firewall and instead accessing it via a location directive in nginx (also offloading SSL to nginx).
With asadmin enable-secure-admin turned on, I can get into the admin server via https://foo.domain.com:4848 and administer it normally.
However, when I disable secure admin via asadmin disable-secure-admin and access it with the following location block
# Reverse proxy to access Glassfish Admin server
location /Glassfish {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_max_temp_file_size 0;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffering off;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_pass http://127.0.0.1:4848;
}
at https://foo.domain.com/Glassfish, I get a blank screen, and the only reference I can find in the nginx error log is
2015/10/05 09:13:57 [error] 29429#0: *157 open() "/usr/share/nginx/html/resource/community-theme/images/login-product_name_open.png" failed (2: No such file or directory), client: 104.17.0.4, server: foo.domain.com, request: "GET /resource/community-theme/images/login-product_name_open.png HTTP/1.1", host: "foo.domain.com", referrer: "https://foo.domain.com/Glassfish"
Reading the docs and around the net, I do see that:
"Secure Admin must be enabled to access the DAS remotely."
Is what I'm trying to do simply impossible?
Edit: As requested, below is the full nginx configuration.
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    #sendfile off;
    tcp_nopush on;
    tcp_nodelay off;
    #keepalive_timeout 65;
    types_hash_max_size 2048;

    # Default HTTP server on 80 port
    server {
        listen 192.168.1.10:80 default_server;
        #listen [::]:80 default_server;
        server_name foo-dev.domain.com;
        return 301 https://$host$request_uri;
    }

    # Default HTTPS server on 443 port
    server {
        listen 443;
        server_name foo-dev.domain.com;
        ssl_certificate /etc/ssl/certs/foo-dev.domain.com.crt;
        ssl_certificate_key /etc/ssl/certs/foo-dev.domain.com.key;
        ssl on;
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        access_log /var/log/nginx/foo-dev.domain.com.access.ssl.log;

        # Reverse proxy access to foo hospitality service implementation at BC back-end
        location /AppEndPoint {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_max_temp_file_size 0;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffering off;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_pass http://foo-dev.domain.com:8080;
        }

        # Reverse proxy to access Glassfish Admin server
        location /Glassfish {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_max_temp_file_size 0;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffering off;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_pass http://127.0.0.1:4848;
        }

        # Reverse proxy access to all processed servers by both client and server component
        location /messages {
            alias /integration/archive/app-messages/;
            autoindex on;
            #auth_basic "Integration Team Login";
            #auth_basic_user_file /integration/archive/app-messages/requests/.htpasswd;
        }
    }
}
The /AppEndPoint location block proxies to the GlassFish application listener and works properly; it's only the /Glassfish location block that's giving me trouble.

OK, thanks for your edit.
Try with:
listen 443 ssl;
By the way, good help with the config is offered by Mozilla's SSL Configuration Generator.
And if you forward requests to location /Glassfish, you will have to trim the request URL to remove the /Glassfish prefix (see the sketch below; the nginx rewrite documentation covers the details).
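A minimal sketch of that trimming, assuming the same admin listener as in your config; with a trailing slash on both the location and the proxy_pass URI, nginx replaces the matched prefix:

location /Glassfish/ {
    # /Glassfish/foo reaches the upstream as /foo, because the URI part
    # of proxy_pass ("/") replaces the matched "/Glassfish/" prefix
    proxy_pass http://127.0.0.1:4848/;
}

Note the admin console also emits absolute links such as /resource/... (as the error log in the question shows), so assets requested outside /Glassfish still need their own location.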
By the way, does the rest of your config work over SSL?

Just change http to https in the proxy_pass:
location / {
    proxy_pass https://localhost:4848;
    #proxy_http_version 1.1;
    #proxy_set_header Upgrade $http_upgrade;
    #proxy_set_header Connection 'upgrade';
    #proxy_set_header Host $host;
    #proxy_cache_bypass $http_upgrade;
}
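One note on this (my addition, not part of the original answer): with secure admin enabled, GlassFish presents a self-signed certificate, and this still works because nginx does not verify upstream certificates unless asked to; proxy_ssl_verify defaults to off. A sketch making that assumption explicit:

location / {
    proxy_pass https://localhost:4848;
    # nginx skips upstream certificate verification by default;
    # spelling it out documents the self-signed GlassFish cert assumption
    proxy_ssl_verify off;
}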

As I understand the question, you are having problems accessing the GlassFish Admin Console through nginx, so I'm sharing an example of a complete nginx.conf for a GlassFish server.
Note that the proxy_pass directive for the /admin location must use https, because GlassFish requires https for access to the Admin Console.
One reason you may not see the Admin Console is that its resources aren't loaded properly when the page opens. You can inspect the loaded resources with your browser's developer tools to see the generated URLs, which will show you part of the solution.
With this configuration you should be able to reach both parts of GlassFish: the main pages and the Admin Console.
If you don't have a DNS server, you can access it using the server's IP.
The SSL certificates used here are self-signed, for testing only; consider using a valid certificate such as one from Let's Encrypt or another trusted CA.
Example:
http://192.168.1.15/glassfish
http://192.168.1.15/admin
The https redirect should kick in, and you will end up at:
https://192.168.1.15/glassfish
https://192.168.1.15/admin
glassfish-nginx.conf
upstream glassfish {
    server 127.0.0.1:8080;
}
upstream glassfishadmin {
    server 127.0.0.1:4848;
}

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    set $glassfish_server glassfish;
    set $glassfish_admin glassfishadmin;
    server_name mydomain.com;

    # sample site certificates
    ssl_certificate /etc/nginx/server.crt;
    ssl_certificate_key /etc/nginx/server.key;
    ssl_trusted_certificate /etc/nginx/server.crt;

    location /glassfish {
        charset utf-8;
        # limits
        client_max_body_size 100m;
        proxy_read_timeout 600s;
        # buffers
        proxy_buffers 16 64k;
        proxy_buffer_size 128k;
        # gzip
        gzip on;
        gzip_min_length 1100;
        gzip_buffers 4 32k;
        gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
        gzip_vary on;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://$glassfish_server/;
    }

    location ~* \.(png|ico|gif|jpg|jpeg|css|js)$ {
        proxy_pass https://$glassfish_admin/$request_uri;
    }

    location /admin {
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        proxy_pass_request_headers on;
        proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
        proxy_no_cache $http_pragma $http_authorization;
        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host:$server_port; # NB: adding :$server_port here is very important
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header Access-Control-Allow-Origin *;
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_pass https://$glassfish_admin/;
    }
}
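One detail worth flagging in the static-assets location above (my observation, not from the original answer): $request_uri already begins with a slash, so https://$glassfish_admin/$request_uri sends a double slash upstream. A sketch that forwards the URI untouched instead:

location ~* \.(png|ico|gif|jpg|jpeg|css|js)$ {
    # with no URI part after the host, nginx passes the original request
    # URI as-is, avoiding the "//" produced by /$request_uri
    proxy_pass https://glassfishadmin;
}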

Related

Scaling Flask-SocketIO with Redis: connections going to only one server port

I have a real-time chat application and I'm unable to scale it.
I have a frontend nginx that proxy-passes to a backend nginx; the backend nginx then upstreams to multiple Flask-SocketIO instances.
server {
    listen 443 ssl;
    #server_name xx.xxx.xx.xxx;
    client_max_body_size 300M;
    ssl_certificate /etc/nginx/ssl/newssl/ssl_bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/newssl/example.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'xxx+xxxx+xxxx';
    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Real-IP;
    real_ip_recursive on;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header Access-Control-Allow-Origin *;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        proxy_read_timeout 120s;
        proxy_connect_timeout 120s;
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 65s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 100 128k;
        client_max_body_size 0M;
        proxy_pass http://live_chatnodes;
    }
}
Backend nginx:
server {
    listen 80;
    server_name 10.0.2.9;
    client_max_body_size 300M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        proxy_read_timeout 120s;
        proxy_connect_timeout 120s;
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 65s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 100 128k;
        client_max_body_size 0M;
        proxy_pass http://live_nodes;
    }
}
This now passes requests to the upstream, and the upstream has 15 nodes; however, all the requests are going to a single Flask-SocketIO instance, which is bringing my system down. One last piece of information: the upstream uses ip_hash, as per the Flask-SocketIO documentation (see the sketch below for a likely cause).
All my clients are using a VPN, so they have static IPs as well.
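A likely culprit (my reading, not confirmed in the thread): at the backend nginx, ip_hash keys on $remote_addr, which is always the frontend proxy's address, so every request hashes to the same Flask-SocketIO node. One option is to hash on the client IP forwarded by the frontend; the upstream name and node addresses below are placeholders:

upstream live_nodes {
    # ip_hash would only ever see the front proxy's IP here; hashing the
    # X-Real-IP header set by the frontend spreads clients across nodes
    hash $http_x_real_ip consistent;
    server 10.0.2.11:5000;
    server 10.0.2.12:5000;
    # ... remaining Flask-SocketIO nodes
}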

Odoo 11 Nginx Reverse Proxy Index of /

I have an Odoo 11 installation behind an nginx proxy. It's been working for a while, but now when you access it, it shows "Index of /" instead of the Odoo login page.
Here is my Nginx configuration:
#odoo server
upstream odoo {
    server 127.0.0.1:8069;
}
upstream odoochat {
    server 127.0.0.1:8072;
}

# http -> https
server {
    listen 80;
    server_name businessapps.enone.tech;
    #server_name odoo;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443;
    server_name businessapps.enone.tech;
    #server_name odoo;
    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;

    # Add Headers for odoo proxy mode
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    # SSL parameters
    ssl on;
    ssl_certificate /etc/letsencrypt/live/businessapps.enone.tech/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/businessapps.enone.tech/privkey.pem;
    ssl_session_timeout 30m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers '<replaced cipher>';
    ssl_prefer_server_ciphers on;

    # log
    access_log /var/log/nginx/odoo.access.log;
    error_log /var/log/nginx/odoo.error.log;

    # Redirect longpoll requests to odoo longpolling port
    location /longpolling {
        proxy_pass http://odoochat;
    }

    # Redirect requests to odoo backend server
    location / {
        proxy_redirect off;
        proxy_pass http://odoo;
    }

    # common gzip
    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
Can someone point out what the issue could be? Sometimes I can access the login page; other times I get the "Index of /" page. The installation is on Ubuntu 16.0.
You need to specify the directory in the nginx config. I had this error previously and found that even after changing my config I had weird errors with Odoo, so I recommend you do what I did: completely reinstall nginx, and make sure you reboot your server after doing so!
Add the location blocks inside the server {} block. I have used the nginx configuration below:
upstream odooapp {
    server odoo:8069;
    keepalive 8;
}
upstream longpolling {
    server odoo:8072;
    keepalive 8;
}

server {
    listen 80;
    listen [::]:80;
    server_name businessapps.enone.tech;
    access_log /var/log/nginx/access.log mainlog;
    error_log /var/log/nginx/error.log;
    return 301 https://businessapps.enone.tech:443/web/login;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name businessapps.enone.tech;
    access_log /var/log/nginx/access.log mainlog;
    error_log /var/log/nginx/error.log;
    ssl_ciphers ALL:!ADH:!MD5:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_certificate <PATH_TO_CERT>;
    ssl_certificate_key <PATH_TO_KEY>;
    add_header Strict-Transport-Security "max-age=2592000; preload;" always;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        client_max_body_size 20M;
        proxy_pass http://odooapp/;
        proxy_redirect off;
    }

    location /longpolling {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://longpolling/longpolling;
        proxy_redirect off;
    }
}

Can't access my Vapor app from tutorial on my web server

I want to deploy a Vapor app on my server to use as the backend for my iOS app.
I'm pretty new to this topic. The only thing I've done before was deploying a Django backend on the same server. I rebuilt my server to set up the Vapor backend.
To begin, I wanted to deploy a Vapor app that is as basic as possible.
I followed this (short) tutorial:
https://medium.com/@ankitank/deploy-a-basic-vapor-app-with-nginx-and-supervisor-1ef303320726
I followed the steps and didn't get errors.
The problem is that when I call [IP]/hello like in the tutorial, I get 502 Bad Gateway as the answer.
Nginx gives me this error:
connect() failed (111: Connection refused) while connecting to upstream, client: [IP], server: _, request: "GET /hello HTTP/1.1", upstream: "http://127.0.0.1:8080/hello", host: "[IP]"
I hope you can help me with this. :)
Update 1:
I changed the config to this:
server {
    listen 80;
    listen [::]:80;
    server_name [DOMAIN];
    error_log /var/log/[DOMAIN]_error.log warn;
    access_log /var/log/[DOMAIN]_access.log;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
    large_client_header_buffers 8 32k;

    location / {
        # redirect all traffic to localhost:8080;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect off;
        proxy_read_timeout 86400;

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # prevents 502 bad gateway error
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;

        reset_timedout_connection on;
        tcp_nodelay on;
        client_max_body_size 10m;
    }

    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml|html|mp4)$ {
        access_log off;
        expires 30d;
        root /home/[AppName]/Public;
    }
}
Unfortunately I still get this one:
2019/12/01 14:48:04 [error] 6801#6801: *1 connect() failed (111: Connection refused) while connecting to upstream, client: [IP], server: [DOMAIN], request: "GET /hello HTTP/1.1", upstream: "http://127.0.0.1:8080/hello", host: [DOMAIN]
Update 2:
The error was related to this line:
proxy_pass http://127.0.0.1:8080/;
I had to change it to this:
proxy_pass http://localhost:8080/;
It seems localhost and 127.0.0.1 are not interchangeable here.
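A plausible explanation (my assumption; the thread doesn't confirm it): localhost can resolve to the IPv6 loopback ::1 first, so if the app is bound to only one loopback family, connections to the other are refused. A sketch that covers both families:

upstream vapor_app {
    server 127.0.0.1:8080;     # IPv4 loopback
    server [::1]:8080 backup;  # IPv6 loopback, tried if IPv4 is refused
}

and then proxy_pass http://vapor_app/; in the location block.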
Now I can run the app via "vapor run" and I can access it. :)
Big thanks to @imike for all the help!!!
You could try my production config, which works 100%, with SSL and websocket support:
server {
    listen 443;
    listen [::]:443;
    server_name mydomain.com;
    error_log /var/log/mydomain.com_error.log warn;
    access_log /var/log/mydomain.com_access.log;

    ssl on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    ssl_ciphers 'HIGH:!aNULL:!MD5:!kEDH';
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
    ssl_stapling on;
    ssl_stapling_verify on;

    large_client_header_buffers 8 32k;

    location / {
        # redirect all traffic to localhost:8080;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect off;
        proxy_read_timeout 86400;

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # prevents 502 bad gateway error
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;

        reset_timedout_connection on;
        tcp_nodelay on;
        client_max_body_size 10m;
    }

    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml|html|mp4)$ {
        access_log off;
        expires 30d;
        root /apps/myApp/Public;
    }
}
At the end of the config you can see that nginx returns static files from the Public folder directly, without the Vapor app even running.
In your config.swift file you should use FileMiddleware only on macOS, where you test the app without nginx, because this middleware is really slow; so I suggest wrapping it in a compiler check:
#if os(macOS)
middlewares.use(FileMiddleware.self) // Serves files from `Public/` directory
#endif

Why do I get 404 on nginx reverse proxy?

Below is my config. I'm getting 404s on all the routes defined, apart from the well-known route, and I don't understand why.
If I make a request to http://example.tech/connect I get a 404, and if I make a request to http://api.example.tech I also get a 404.
I can't see where I've gone wrong, as this looks like it should work!
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    # REMOVE REFERENCE TO FILES THAT HAVE "server" and "location" blocks in them so we can do it all in this file
    #include /etc/nginx/conf.d/*.conf;

    # issue with ip and the nginx proxy
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;

    server {
        listen 80;
        listen [::]:80;
        server_name example.tech;

        location /.well-known/openid-configuration {
            proxy_pass https://myapp.net;
            proxy_redirect off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            #proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-Host $host;
            #proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
        }

        location /connect {
            proxy_pass https://myapp.net;
            proxy_redirect off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            #proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-Host $host;
            #proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
        }

        location /auth {
            proxy_pass https://myapp.net;
            proxy_redirect off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            #proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-Host $host;
            #proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
        }
    }

    server {
        listen 80;
        listen [::]:80;
        server_name api.example.tech;

        location /auth/ {
            proxy_pass https://myapp.net;
            proxy_redirect off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            #proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-Host $host;
            #proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
        }
    }
}
It turned out I needed a forward slash on the end of the proxy_pass, for some reason.
You need a specific URI in the proxy_pass directive; the trailing slash is not special by itself, but in your case it acts as that specific URI: nginx replaces /auth (for example) with the / you've added.
In fact, the answer you posted is right: turning proxy_pass https://myapp.net; into proxy_pass https://myapp.net/;.
The reason is that proxy_pass works in two different ways depending on whether a specific URI is given. More details about this directive are on nginx.org; below is the relevant content quoted from that page.
If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive:

location /name/ {
    proxy_pass http://127.0.0.1/remote/;
}

If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI:

location /some/path/ {
    proxy_pass http://127.0.0.1;
}
In your case, there is no URI in the proxy_pass directive, so /auth is passed to the backend server as-is. Unfortunately, your backend server does not have an /auth resource, so a 404 is returned. If your backend server did handle /auth, you would never get a 404 when requesting /auth.
Here are two examples, which hopefully clarify things.

location /some-path {
    proxy_pass http://server:3000;
}

In this case the proxied (target) server must handle the route /some-path. If it only handles something else, like /, it will return an error to nginx.
One solution is to add a trailing slash, e.g.:

location /some-path {
    proxy_pass http://server:3000/;
}

Now requests sent to /some-path can (and must) be handled by the route / on the proxied server's side. However, this may cause issues with some servers. For example, with Express, curl localhost/some-path would be handled fine, whereas curl localhost/some-path/ would cause Express to return Cannot GET //.
This might be different for your target server, but the principle is the same: if you specify the server only, the full path from the location is passed to the server, and it must be handled accordingly. (An alternative using an explicit rewrite is sketched below.)
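For finer control than the trailing-slash behavior, an explicit rewrite can strip the prefix; the paths and backend here are hypothetical:

location /some-path/ {
    # drop the /some-path prefix but keep the rest of the URI intact;
    # "break" stops rewriting and proxies the modified URI
    rewrite ^/some-path(/.*)$ $1 break;
    proxy_pass http://server:3000;
}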
This is how, in my case, I got a 404 instead of a 502:

# let's define some proxy pass
location ~ /.well-known/acme-challenge {
    proxy_pass http://127.0.0.1:5080; # this backend doesn't exist, which leads to a 502
    proxy_set_header Host $host;
}

# these are default directives
error_page 500 502 503 504 /50x.html; # this is the reason the 502 is redirected to /50x.html
location = /50x.html { # but this file doesn't exist under the root, so we get a 404 instead of a 502
    root /usr/share/nginx/html;
}

How to increase nginx timeout for upstream uWSGI server?

Stack used:
Nginx -> uWSGI (proxy passed) -> Django
I have an API call that takes around 80 seconds to execute a query. Nginx closes the connection with the upstream server after 60 seconds. This was found in the nginx error log:
upstream prematurely closed connection while reading response header from upstream
The uWSGI and Django application logs do not show anything weird.
This is my nginx configuration:
server {
    listen 80;
    server_name xxxx;
    client_max_body_size 10M;

    location / {
        include uwsgi_params;
        proxy_pass http://127.0.0.1:8000;
        proxy_connect_timeout 10m;
        proxy_send_timeout 10m;
        proxy_read_timeout 10m;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
How do I increase the timeout? I have tried setting the proxy_* timeout variables, but they do not seem to be working.
Okay, so I managed to solve this issue by replacing proxy_pass with uwsgi_pass.
This is how my nginx conf looks now:
server {
    listen 80;
    server_name xxxxx;
    client_max_body_size 4G;

    location /static/ {
        alias /home/rmn/workspace/mf-analytics/public/;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi_web.sock;
        uwsgi_read_timeout 600;
    }
}
And I had to set the socket parameter in my uWSGI ini file.
For some reason, the proxy_pass timeouts just wouldn't take effect; most likely because proxy_pass speaks plain HTTP while a uWSGI socket speaks the uwsgi binary protocol, so the uwsgi_* directives are the ones that apply (see the sketch below).
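For completeness, a sketch of the proxy_pass variant that should also work, assuming uWSGI is started in HTTP mode (e.g. http = 127.0.0.1:8000 in its ini) rather than socket mode; the proxy_* timeouts only apply when the upstream actually speaks HTTP:

location / {
    # valid only when uWSGI itself serves HTTP on this port;
    # with "socket = ..." use uwsgi_pass and uwsgi_read_timeout instead
    proxy_pass http://127.0.0.1:8000;
    proxy_read_timeout 600s;
}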
