I am trying to deploy a Flask app through nginx + Gunicorn. I currently allow my Flask app to be accessed at http://kramericaindustries.hopto.org:8050/ or http://kramericaindustries.hopto.org/heatmap/. However, the latter URL, with the /heatmap/ URI, presents a screen that just says "Loading..." indefinitely, while the former loads correctly. I believe it has to do with my nginx.conf file, but I am new to nginx and not really sure what I'm doing wrong; I suspect the proxy directives but don't know. Below is my nginx.conf file, and the areas in question are near the bottom. Let me know if you have any questions or need any more information. Thanks!
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name kramericaindustries.hopto.org;
rewrite ^/rstudio$ $scheme://$http_host/rstudio/ permanent;
location /rstudio/ {
rewrite ^/rstudio/(.*)$ /$1 break;
proxy_pass http://localhost:8787;
proxy_redirect http://localhost:8787/ $scheme://$http_host/rstudio/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 20d;
}
location /heatmap/ {
# rewrite ^/heatmap/(.*)$ /$1 break;
proxy_pass http://127.0.0.1:8000;
# proxy_redirect http://127.0.0.1:8000/ $scheme://$http_host/heatmap/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 8050;
server_name kramericaindustries.hopto.org;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
What location is your index page trying to load scripts and other source files from?
For your :8050 listener you're serving directly from the root location, and your index page may be pulling resources expecting that there's no additional /heatmap path.
e.g. the following would fail when served from /heatmap, because the resource URLs are not being prefixed with that path:
<script src="/_dash-component-suites/dash_renderer/polyfill#7.v1_8_3m1605058426.8.7.min.js"></script>
Those are going to 404 as the correct URL for those resources is now /heatmap/_dash-component-suites/…
If you're hardcoding these, you'll have to add the /heatmap prefix in yourself. If you're rendering the index with Flask / Jinja2 templating, you can prefix your URLs with {{ request.path }}, e.g.:
<script src="{{ request.path }}/_dash-component-suites/dash_renderer/polyfill#7.v1_8_3m1605058426.8.7.min.js"></script>
When served from the root location it will return /; when served from the /heatmap path it will return /heatmap.
OK, I finally got this figured out, and it had nothing to do with nginx or Gunicorn. The nginx.conf above is correct. It had to do with the Flask app I am deploying. I am actually using a Dash app (an app built on Flask), and when declaring the Dash instance, the URL base pathname has to be specified, as it is "/" by default. This is the line I needed:
app = dash.Dash(__name__, external_stylesheets=external_stylesheets, url_base_pathname='/heatmap/')
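For context, a minimal sketch of how that fits together, assuming Dash 2.x-style imports and Gunicorn bound to 127.0.0.1:8000 as in the nginx config above (names are placeholders, adjust to your app):
# app.py -- minimal sketch; external_stylesheets is a placeholder list
import dash
from dash import html  # for Dash 1.x: import dash_html_components as html

external_stylesheets = []

# url_base_pathname must match the path nginx proxies (/heatmap/ here);
# otherwise Dash serves its index and _dash-component-suites assets relative to "/".
app = dash.Dash(__name__,
                external_stylesheets=external_stylesheets,
                url_base_pathname='/heatmap/')

app.layout = html.Div("Hello from /heatmap/")

# Gunicorn serves the underlying Flask instance, e.g.:
#   gunicorn -b 127.0.0.1:8000 app:server
server = app.server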
I am running a QuestDB 6.6.1 server. Now I want to increase the protection of this server and put the web GUI behind an NGINX reverse proxy, as described in the QuestDB blog post where setting up basic authentication is shown.
When I try to open the QuestDB web GUI, the login popup is displayed, and I can enter name and password without issues. However, after successfully passing the login popup, I only see the bare text "Not Found" in the browser (note: NOT the NGINX 404 Not Found screen, which I know from other cases). Neither nginx.log nor questdb.log show entries.
It is POSSIBLE to reach the QuestDB web gui via <server.domain>:9000, no issues there.
The "location" settings are defined in a file reverse_proxy.conf:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name server.domain;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
ssl_certificate <path>/nginx.crt;
ssl_certificate_key <path>/nginx.key;
root /var/www/server.domain/html;
index index.html index.htm;
server_name server.domain;
location /location1 {
proxy_pass https://localhost:port1;
proxy_set_header Host $host;
}
location /location2 {
proxy_pass http://localhost:port2/location2;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10M;
}
location = /questdb/ {
auth_basic "Restricted QuestDB";
auth_basic_user_file <path>/.htpasswd;
proxy_pass http://localhost:9000;
proxy_set_header Host $host;
proxy_read_timeout 300;
proxy_connect_timeout 120;
proxy_send_timeout 300;
proxy_set_header Host $host;
}
}
reverse_proxy.conf is included from nginx.conf, which looks like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# For suppression of server version number
server_tokens off;
server_names_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# Virtual Host Configs
##
map_hash_max_size 262144;
map_hash_bucket_size 262144;
include /etc/nginx/conf.d/*.conf;
}
I see you are serving QuestDB under a directory. You are proxying to http://localhost:9000/questdb, and QuestDB is saying "Not Found".
To avoid that, you would need to add a slash at the end of proxy_pass http://localhost:9000;, as in proxy_pass http://localhost:9000/;.
The problem then is that relative URLs (/assets, /exec...) will not work, and you will need to rewrite them in nginx.
It would probably be easier to just use a subdomain rather than a directory.
Update: this is the nginx config I use. As explained, relative links are broken:
location /questdb/ {
proxy_pass http://localhost:9000/;
index index.html index.htm;
auth_basic "Restricted Content";
auth_basic_user_file /opt/homebrew/etc/nginx.htpasswd;
}
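For the subdomain route, a rough sketch of what that server block could look like (questdb.server.domain is an assumed name; certificate and htpasswd paths as in the question):
server {
    listen 443 ssl;
    server_name questdb.server.domain;        # assumed subdomain
    ssl_certificate     <path>/nginx.crt;
    ssl_certificate_key <path>/nginx.key;

    location / {
        auth_basic "Restricted QuestDB";
        auth_basic_user_file <path>/.htpasswd;
        # serving from the root means QuestDB's relative URLs (/assets, /exec, ...) resolve as-is
        proxy_pass http://localhost:9000;
        proxy_set_header Host $host;
    }
}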
I have an upstream block in an Nginx config file. This block lists multiple backend servers across which requests are load balanced.
...
upstream backend {
server backend1.com;
server backend2.com;
server backend3.com;
}
...
Each of the above 3 backend servers is running a Node application.
If I stop the application process on backend1, Nginx recognises this via its passive health check and traffic is only directed to backend2 and backend3, as expected.
However, if I power down the server on which backend1 is hosted, Nginx does not recognise that it is offline and continues to direct traffic/requests to it, resulting in a 504 error.
Can someone shed some light on why this (scenario 2 above) may happen, and whether there is some further configuration that I am missing?
Update:
I'm beginning to wonder if the behaviour I'm seeing is because the above upstream block is located within an http {} Nginx context. If backend1 was indeed powered down, this would be a connection error, and so (maybe off the mark here, but just thinking aloud) should this be a TCP health check?
Update 2:
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
upstream backends {
server xx.xx.xx.37:3000 fail_timeout=2s;
server xx.xx.xx.52:3000 fail_timeout=2s;
server xx.xx.xx.69:3000 fail_timeout=2s;
}
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_certificate …
ssl_certificate_key …
ssl_ciphers …;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
default
server {
listen 80;
listen [::]:80;
return 301 https://$host$request_uri;
#server_name ...;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
# SSL configuration
...
# Add index.php to the list if you are using PHP
index index.html index.htm;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.html;
#try_files $uri $uri/ =404;
}
location /api {
rewrite /api/(.*) /$1 break;
proxy_pass http://backends;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
# Requests for socket.io are passed on to Node on port 3000
location /socket.io/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://backends;
}
}
The reason you get a 504 is that when the whole backend1 host is powered down, nothing answers nginx's connection attempt at all, so nginx waits for the connection to time out, hence the 504 Gateway Timeout.
It's a different case when you only stop the application process: the host is still up but the port is not listening, so nginx gets an immediate connection refused, which is detected pretty quickly and the instance is marked as unavailable.
To overcome this you can set fail_timeout=2s to mark the server as unavailable sooner; the default is 10 seconds.
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#fail_timeout
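As a sketch of that suggestion (values are illustrative; proxy_connect_timeout and proxy_next_upstream are extra knobs I'm adding here, not part of the original config), the relevant pieces could look like:
upstream backends {
    server xx.xx.xx.37:3000 max_fails=1 fail_timeout=2s;
    server xx.xx.xx.52:3000 max_fails=1 fail_timeout=2s;
    server xx.xx.xx.69:3000 max_fails=1 fail_timeout=2s;
}

server {
    location /api {
        proxy_pass http://backends;
        # fail fast when a host is completely unreachable instead of waiting
        # the default 60s, and retry the request on the next peer
        proxy_connect_timeout 2s;
        proxy_next_upstream error timeout;
    }
}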
I have a scenario where nginx sits in front of an Artifactory server.
Recently, while trying to pull a large number of Docker images in a for loop, all at the same time (the first test was with 200 images, the second with 120), access to Artifactory got blocked: nginx was busy processing all the requests and users were not able to reach it.
My nginx server is running with 4 CPU cores and 8192 MB of RAM.
I have tried to improve the handling of files on the server by adding the below:
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
This made it a bit better (but of course, pulls of images of 1 GB+ take much more time, due to the chunk size); still, access to the UI would cause a lot of timeouts.
Is there something else that I can do to improve nginx performance whenever a bigger load is pushed through it?
I think that my last option is to increase the size of the machine (more CPUs) as well as the number of worker processes on nginx (8 to 16).
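If it helps to visualise that last option, the two directives involved are the ones already at the top of the config below; a sketch with the proposed value (illustrative only):
worker_processes 16;            # proposed bump from the current 8
events {
    worker_connections 19000;   # unchanged; the effective limit is also bounded by the open-files ulimit
}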
The full nginx.conf file follows below:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;
events {
worker_connections 19000;
}
http {
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
include /etc/nginx/mime.types;
default_type application/octet-stream;
gzip on;
gzip_disable "msie6";
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
set_real_ip_from 138.190.190.168;
real_ip_header X-Forwarded-For;
log_format custome '$remote_addr - $realip_remote_addr - $remote_user [$time_local] $request_time'
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
server {
listen 80 default;
listen [::]:80 default;
server_name _;
return 301 https://$server_name$request_uri;
}
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################
## add ssl entries when https has been set in config
ssl_certificate /etc/ssl/certs/{{ hostname }}.cer;
ssl_certificate_key /etc/ssl/private/{{ hostname }}.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
## server configuration
server {
listen 443 ssl;
server_name ~(?<repo>.+)\.{{ hostname }} {{ hostname }} _;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
access_log /var/log/nginx/{{ hostname }}-access.log custome;
error_log /var/log/nginx/{{ hostname }}-error.log warn;
rewrite ^/$ /webapp/ redirect;
rewrite ^//?(/webapp)?$ /webapp/ redirect;
rewrite ^/(v1|v2)/(.*) /api/docker/$repo/$1/$2;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 900;
proxy_max_temp_file_size 10240m;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://{{ appserver }}:8081/artifactory/;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
Thanks for the tips.
Cheers,
Ricardo
I am using nginx (in front of Gunicorn) to serve static files for a Flask app.
Static files in the default static folder are working fine:
<link rel="stylesheet" href="{{ url_for('static', filename='css/fa/font-awesome.min.css') }}" />
However, for other static files, which I want to restrict to logged-in users only, I am using a static folder served by Flask:
from flask import Blueprint
application_view = Blueprint('application_view', __name__, static_folder='application_static')
app.register_blueprint(application_view)
In the HTML I'm referencing a static file like this:
<link rel="stylesheet" href="{{ url_for('application_view.static', filename='css/main.css') }}" />
Then in application/application_static I have the restricted static files. This works fine on a local Flask install; however, when I deploy to a production machine with nginx serving files from the /static folder, I get a "NetworkError: 404 Not Found - website.com/application_static/main.css".
Any ideas on how to configure nginx to handle this issue?
conf.d/mysitename.conf file:
upstream app_server_wsgiapp {
server localhost:8000 fail_timeout=0;
}
server {
listen 80;
server_name www.mysitename.com;
rewrite ^(.*) https://$server_name$1 permanent;
}
server {
server_name www.mysitename.com;
listen 443 ssl;
#other ssl config here
access_log /var/log/nginx/www.mysitename.com.access.log;
error_log /var/log/nginx/www.mysitename.com.error.log info;
keepalive_timeout 5;
# nginx serve up static files and never send to the WSGI server
location /static {
autoindex on;
alias /pathtositeonserver/static;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_wsgiapp;
break;
}
}
# this section allows Nginx to reverse proxy for websockets
location /socket.io {
proxy_pass http://app_server_wsgiapp/socket.io;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Gunicorn will still be running the old code unless you reload it.
Either stop and restart Gunicorn, or send a HUP signal to the Gunicorn master process.
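For example (the pid-file path and the systemd unit name are assumptions; adjust to your setup):
# graceful reload of the running master: re-reads the config and restarts the workers
kill -HUP $(cat /run/gunicorn.pid)
# or, if Gunicorn runs under systemd with a unit named gunicorn:
sudo systemctl restart gunicorn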
I'm trying to create a devpi mirror on HP Cloud that will be accessed via nginx, i.e. nginx listens on port 80 and is used as a proxy to devpi, which is using port 4040 on the same machine.
I have configured an HP Cloud security group that is open for all ports (inbound and outbound) (just for the beginning, I'll change it later of course), and started an Ubuntu 14 instance.
I have allocated a public IP to the instance that I have created.
I have installed devpi-server using pip, and nginx using apt-get.
I have followed the instructions on devpi's tutorial page here:
ran devpi-server --port 4040 --gen-config, and copied the contents of the generated nginx-devpi.conf into nginx.conf.
Then, I have started the server using devpi-server --port 4040 --start.
Started nginx using sudo nginx.
My problem is as follows:
When I SSH to the HP instance on which nginx and devpi are running and execute pip install -i http://<public-ip>:80/root/pypi/ simplejson, it succeeds.
But when I run the same command from my laptop I get:
Downloading/unpacking simplejson
Cannot fetch index base URL http://<public-ip>:80/root/pypi/
http://<public-ip>:80/root/pypi/simplejson/ uses an insecure transport scheme (http). Consider using https if <public-ip>:80 has it available
Could not find any downloads that satisfy the requirement simplejson
Cleaning up...
No distributions at all found for simplejson
Storing debug log for failure in /home/hagai/.pip/pip.log
I thought it might be a security/network issue, but I don't think that is the case, because curl http://<public-ip>:80 returns the same thing whether I execute it from my laptop or from the HP instance:
{
"type": "list:userconfig",
"result": {
"root": {
"username": "root",
"indexes": {
"pypi": {
"type": "mirror",
"bases": [],
"volatile": false
}
}
}
}
}
I have also tried to start another instance in HP Cloud and execute pip install -i http://<public-ip>:80/root/pypi/ simplejson, but I got the same error as on my laptop.
I can't understand what the difference between these two cases is, and I'd be happy if someone has a solution, or any idea what the problem might be.
My nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
server {
server_name localhost;
listen 80;
gzip on;
gzip_min_length 2000;
gzip_proxied any;
#gzip_types text/html application/json;
proxy_read_timeout 60s;
client_max_body_size 64M;
# set to where your devpi-server state is on the filesystem
root /home/ubuntu/.devpi/server;
# try serving static files directly
location ~ /\+f/ {
error_page 418 = #proxy_to_app;
if ($request_method != GET) {
return 418;
}
try_files /+files$uri #proxy_to_app;
}
# try serving docs directly
location ~ /\+doc/ {
try_files $uri #proxy_to_app;
}
location / {
error_page 418 = #proxy_to_app;
return 418;
}
location #proxy_to_app {
proxy_pass http://localhost:4040;
#dynamic: proxy_set_header X-outside-url $scheme://$host:$server_port;
proxy_set_header X-outside-url http://localhost:80;
proxy_set_header X-Real-IP $remote_addr;
}
}
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
#include /etc/nginx/sites-enabled/*;
}
Edit:
I have tried to use devpi-client from my laptop, and when I execute devpi use http://<public-ip>:80 I get the following:
using server: http://localhost/ (not logged in)
no current index: type 'devpi use -l' to discover indices
~/.pydistutils.cfg : no config file exists
~/.pip/pip.conf : no config file exists
~/.buildout/default.cfg: no config file exists
always-set-cfg: no
You can try modifying this:
location #proxy_to_app {
proxy_pass http://localhost:4040;
#dynamic: proxy_set_header X-outside-url $scheme://$host:$server_port;
proxy_set_header X-outside-url http://localhost:80;
proxy_set_header X-Real-IP $remote_addr;
}
To this:
location #proxy_to_app {
proxy_pass http://localhost:4040;
proxy_set_header X-outside-url $scheme://$host;
proxy_set_header X-Real-IP $remote_addr;
}
This has worked for me :-). devpi builds the absolute URLs it returns from the X-outside-url header, so hardcoding http://localhost:80 there means clients on other machines get index links that point back to localhost.
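After reloading nginx, the index from the question should then resolve from other machines as well, e.g.:
pip install -i http://<public-ip>:80/root/pypi/ simplejson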