forward websocket to websocket - nginx

I'm using nginx as a reverse proxy for Django and React with the following config:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        include mime.types;
        default_type application/octet-stream;
        keepalive_timeout 240;
        sendfile on;

        listen 8001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://localhost:3000;
        }

        location /backend {
            proxy_pass http://127.0.0.1:8000;
        }
    }
}
It's working fine, but I want to forward the WebSocket used by React hot reloading. After a lot of googling I still have no solution. Currently I get this connection error in the Chrome console:
WebSocket connection to 'ws://127.0.0.1:8001/sockjs-node' failed: Error during WebSocket handshake: Unexpected response code: 404

Because of the http directive, I thought nginx would not support proxying WebSockets, but after spending more time on Google I learned that an HTTP connection is upgraded to a WebSocket after the initial handshake. So finally the solution is here.
It worked for me as a forward proxy with Django as the backend and React as the frontend, so I can get past the CORS problem caused by the servers being on different IPs. Since the setup is insecure, setting headers doesn't help much with regard to cookie sharing.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    client_max_body_size 100M;

    server {
        include mime.types;
        default_type application/octet-stream;
        keepalive_timeout 240;
        sendfile on;

        listen 8001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://localhost:3000;
        }

        location /backend {
            proxy_pass http://127.0.0.1:8000;
        }

        location /sockjs-node {
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
        }
    }
}
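A slightly more robust variant (a sketch, not part of the original answer) uses nginx's map directive so the Connection header is only set to "upgrade" when the client actually asks for a WebSocket; ordinary requests through the same location then keep normal keep-alive behavior. The dev-server port 3000 and the /sockjs-node path are assumed from the config above:

```nginx
http {
    # Map the incoming Upgrade header to the right Connection value
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 8001;

        location /sockjs-node {
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
        }
    }
}
```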


Nginx + Gunicorn: cannot access Flask app without root location

I am trying to deploy a Flask app through nginx + Gunicorn. I am currently allowing my Flask app to be accessed via http://kramericaindustries.hopto.org:8050/ or http://kramericaindustries.hopto.org/heatmap/. However the latter URL, with the URI /heatmap/, presents a screen which just says "Loading..." indefinitely, while the former loads correctly. I believe it has to do with my nginx.conf file, but I am new to nginx and not really sure what I'm doing wrong. I suspect it has something to do with the proxy directives but don't know. Below is my nginx.conf file; the areas in question are near the bottom. Let me know if you have any questions or need any more information. Thanks!
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name kramericaindustries.hopto.org;

        rewrite ^/rstudio$ $scheme://$http_host/rstudio/ permanent;

        location /rstudio/ {
            rewrite ^/rstudio/(.*)$ /$1 break;
            proxy_pass http://localhost:8787;
            proxy_redirect http://localhost:8787/ $scheme://$http_host/rstudio/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 20d;
        }

        location /heatmap/ {
            # rewrite ^/heatmap/(.*)$ /$1 break;
            proxy_pass http://127.0.0.1:8000;
            # proxy_redirect http://127.0.0.1:8000/ $scheme://$http_host/heatmap/;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 8050;
        server_name kramericaindustries.hopto.org;

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
What location is your index page trying to load scripts and other source files from?
For your :8050 listener you're serving directly from the root location, and your index page may be pulling resources expecting that there's no additional /heatmap path.
e.g. the following would fail when served from /heatmap, because the resource URLs are not prefixed with that path:
<script src="/_dash-component-suites/dash_renderer/polyfill#7.v1_8_3m1605058426.8.7.min.js"></script>
Those are going to 404, as the correct URL for those resources is now /heatmap/_dash-component-suites/…
If you're hardcoding these, you'll have to add the heatmap prefix in. If you're rendering the index with Flask / Jinja2 templating, you can prefix your URLs with {{ request.path }}, e.g.:
<script src="{{ request.path }}/_dash-component-suites/dash_renderer/polyfill#7.v1_8_3m1605058426.8.7.min.js"></script>
When served from the root location it will return /; when served from the heatmap path it will return /heatmap.
OK, I finally got this figured out, and it had nothing to do with nginx or Gunicorn. The nginx.conf above is correct. It had to do with the Flask app I am deploying. I am actually using a Dash app (an app built on Flask), and when declaring the Dash instance, the URL base pathname has to be specified, as it is "/" by default. This is the line I needed:
app = dash.Dash(__name__, external_stylesheets=external_stylesheets, url_base_pathname='/heatmap/')

Nginx Error ("http" directive is not allowed here in /etc/nginx/sites-enabled/abc)

I am getting the below error while starting the Nginx service:
"http" directive is not allowed here in /etc/nginx/sites-enabled/abc:1
Here is my abc config:
worker_processes 1;
error_log /usr/local/openresty/nginx/logs/lua.log debug;

events {
    worker_connections 1024;
}

http {
    upstream kibana {
        server server1:30001;
        server server2:30001;
        keepalive 15;
    }

    server {
        listen 8882;

        location / {
            ssl_certificate /etc/pki/tls/certs/ELK-Stack.crt;
            ssl_certificate_key /etc/pki/tls/private/ELK-Stack.key;
            ssl_session_cache shared:SSL:10m;
            auth_basic "Restricted Access";
            auth_basic_user_file /etc/nginx/htpasswd.users;
            proxy_pass http://kibana;
            proxy_redirect off;
            proxy_buffering off;
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
        }
    }
}
--> FYI I am creating this file in /etc/nginx/sites-available and linking it into /etc/nginx/sites-enabled. I am creating the link with the following command:
sudo ln -s /etc/nginx/sites-available/abc /etc/nginx/sites-enabled/abc
After running the above command I can see that a link has been created in the /etc/nginx/sites-enabled directory.
Please suggest what I am doing wrong?
Regards,
The http directive does not belong there.
In nginx.conf you already have an http directive:
http {
    # ... config, logs ...
    include /etc/nginx/sites-enabled/*;  # <--- this line includes your files
    # ... more config ...
    server {
        # (... default server ...)
        location / {
            index ...;
            root ...;
        }
    }
}
The files in sites-enabled must only contain server (and upstream) blocks; the http directive lives in the main configuration.
I would try the following (the events block stays in the main nginx.conf, and the ssl_* directives move up to server level):
upstream kibana {
    server server1:30001;
    server server2:30001;
    keepalive 15;
}

server {
    listen 8882;
    error_log /usr/local/openresty/nginx/logs/lua.log debug;

    ssl_certificate /etc/pki/tls/certs/ELK-Stack.crt;
    ssl_certificate_key /etc/pki/tls/private/ELK-Stack.key;
    ssl_session_cache shared:SSL:10m;

    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;
        proxy_pass http://kibana;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}

SSL Pass-Through in Nginx Reverse proxy?

Is it possible to use an Nginx reverse proxy with SSL pass-through, so that it can pass requests to a server that requires certificate authentication of the client?
That means the server will need the certificate of the client server, and will not need the certificate of the Nginx reverse proxy server.
Not sure how well it works in your situation, but newer (1.9.3+) versions of Nginx can pass (encrypted) TLS packets directly to an upstream server, using the stream block:
stream {
    server {
        listen 443;
        proxy_pass backend.example.com:443;
    }
}
If you want to target multiple upstream servers, distinguished by their hostnames, this is possible using the nginx modules ngx_stream_ssl_preread and ngx_stream_map. The concept behind this is TLS Server Name Indication (SNI).
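As a minimal sketch of that idea (hostnames and backend addresses are placeholders, not from the question), ssl_preread exposes the SNI hostname from the TLS ClientHello before any decryption, and a map turns it into an upstream choice:

```nginx
stream {
    # $ssl_preread_server_name is the SNI hostname read from the ClientHello
    map $ssl_preread_server_name $backend {
        app.example.com  app_servers;
        api.example.com  api_servers;
        default          app_servers;
    }

    upstream app_servers { server 10.0.0.10:443; }
    upstream api_servers { server 10.0.0.20:443; }

    server {
        listen 443;
        ssl_preread on;       # inspect the SNI without terminating TLS
        proxy_pass $backend;  # forward the still-encrypted bytes
    }
}
```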
Dave T. outlines a solution nicely. See his answer on this network.
Since we want SSL pass-through, SSL termination takes place at the backend nginx server. Also, I haven't seen an answer that takes care of plain HTTP connections as well.
The optimal solution is an Nginx that acts as a Layer 7 and a Layer 4 proxy at the same time. Something else that is rarely discussed is IP address redirection: when we use a proxy, this must be configured on the proxy, not on the backend server as usual.
Lastly, the client IP address must be preserved, hence we must use the PROXY protocol to do this correctly.
Sounds confusing? It's not that bad.
I came up with a solution that I currently use in production, and it works flawlessly.
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    variables_hash_bucket_size 1024;
    variables_hash_max_size 1024;
    map_hash_max_size 1024;
    map_hash_bucket_size 512;
    types_hash_bucket_size 512;
    server_names_hash_bucket_size 512;
    sendfile on;
    tcp_nodelay on;
    tcp_nopush on;
    autoindex off;
    server_tokens off;
    keepalive_timeout 15;
    client_max_body_size 100m;

    upstream production_server {
        server backend1:3080;
    }
    upstream staging_server {
        server backend2:3080;
    }
    upstream ip_address {
        server backend1:3080; # or backend2:3080 depending on your preference.
    }

    server {
        server_name server1.tld;
        listen 80;
        listen [::]:80;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header Connection "";
            #add_header X-Upstream $upstream_addr;
            proxy_redirect off;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_buffers 16 16k;
            proxy_buffer_size 64k;
            proxy_cache_background_update on;
            proxy_pass http://production_server$request_uri;
        }
    }

    server {
        server_name server2.tld;
        listen 80;
        listen [::]:80;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header Connection "";
            #add_header X-Upstream $upstream_addr;
            proxy_redirect off;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_buffers 16 16k;
            proxy_buffer_size 16k;
            proxy_cache_background_update on;
            proxy_pass http://staging_server$request_uri;
        }
    }

    server {
        server_name 192.168.1.1; # replace with your own main ip address
        listen 80;
        listen [::]:80;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header Connection "";
            #add_header X-Upstream $upstream_addr;
            proxy_redirect off;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_buffers 16 16k;
            proxy_buffer_size 16k;
            proxy_cache_background_update on;
            proxy_pass http://ip_address$request_uri;
        }
    }
}

stream {
    map $ssl_preread_server_name $domain {
        server1.tld  production_server_https;
        server2.tld  staging_server_https;
        192.168.1.1  ip_address_https;
        default      staging_server_https;
    }

    upstream production_server_https {
        server backend1:3443;
    }
    upstream staging_server_https {
        server backend2:3443;
    }
    upstream ip_address_https {
        server backend1:3443;
    }

    server {
        ssl_preread on;
        proxy_protocol on;
        tcp_nodelay on;
        listen 443;
        listen [::]:443;
        proxy_pass $domain;
    }

    log_format proxy '$protocol $status $bytes_sent $bytes_received $session_time';
    access_log /var/log/nginx/access.log proxy;
    error_log /var/log/nginx/error.log debug;
}
Now the only thing left to do is enable the PROXY protocol on the backend servers. The example below should get you going:
server {
    real_ip_header proxy_protocol;
    set_real_ip_from proxy;

    server_name www.server1.tld;
    listen 3080;
    listen 3443 ssl http2;
    listen [::]:3080;
    listen [::]:3443 ssl http2;
    include ssl_config;

    # Non-www redirect
    return 301 https://server1.tld$request_uri;
}

server {
    real_ip_header proxy_protocol;
    set_real_ip_from 1.2.3.4; # <--- proxy ip address, or proxy container hostname for docker

    server_name server1.tld;
    listen 3443 ssl http2 proxy_protocol;      # <--- proxy protocol on the listen directive
    listen [::]:3443 ssl http2 proxy_protocol; # <--- proxy protocol on the listen directive
    root /var/www/html;
    charset UTF-8;
    include ssl_config;
    #access_log logs/host.access.log main;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
        default_type "text/plain";
    }

    location / {
        index index.php;
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.php;
    # place rest of the location stuff here
}
Now everything should work like a charm.

Flask nginx and static files issue for non default static location

I am using nginx (in front of gunicorn) to serve static files for a Flask app.
Static files in the default static folder are working fine:
<link rel="stylesheet" href="{{ url_for('static', filename='css/fa/font-awesome.min.css') }}" />
However, for other static files, which I want to restrict to logged-in users only, I am using a static folder served by Flask:
application_view = Blueprint('application_view', __name__, static_folder='application_static')
app.register_blueprint(application_view)
In the HTML I'm referencing a static file like this:
<link rel="stylesheet" href="{{ url_for('application_view.static', filename='css/main.css') }}" />
Then in application/application_static I have the restricted static files. This works fine on a local Flask install; however, when I deploy to a production machine with Nginx serving files from the /static folder, I get "NetworkError: 404 Not Found - website.com/application_static/main.css".
Any ideas on how to configure Nginx to handle this issue?
conf.d/mysitename.conf file:
upstream app_server_wsgiapp {
    server localhost:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name www.mysitename.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}

server {
    server_name www.mysitename.com;
    listen 443 ssl;
    # other ssl config here

    access_log /var/log/nginx/www.mysitename.com.access.log;
    error_log /var/log/nginx/www.mysitename.com.error.log info;
    keepalive_timeout 5;

    # nginx serves up static files and never sends them to the WSGI server
    location /static {
        autoindex on;
        alias /pathtositeonserver/static;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://app_server_wsgiapp;
            break;
        }
    }

    # this section allows Nginx to reverse proxy for websockets
    location /socket.io {
        proxy_pass http://app_server_wsgiapp/socket.io;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Gunicorn will still be running the old code unless you reload it.
Either stop and restart gunicorn, or send a HUP signal to the gunicorn master process.

How to run Nginx on multiple ports

I am trying to configure nginx on two ports with the same instance, for example on port 80 and port 81, but no luck so far. Here is an example of what I am trying to do:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name chat.local.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_buffering off;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 81;
        server_name console.local.com;

        location / {
            proxy_pass http://127.0.0.1:8888;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_buffering off;
        }
    }
}
When I try to open console.local.com, it shows the content from chat.local.com. Is there a way to make nginx serve two ports correctly? Thanks in advance!
Your config looks OK.
I think the problem is this (correct me if I'm wrong):
you have console.local.com listening on port 81,
which means you need to access it as http://console.local.com:81/.
When you access it as http://console.local.com/ (no explicit port, so it defaults to port 80), nginx will check, notice that nothing is listening on port 80 for that server_name, and consequently pass the request to the default server block. Since the default server block is the first one (in the absence of configuration to change it), you end up in the chat.local.com handling.
In all likelihood you want to change console.local.com to listen on port 80 as well, since:
the server_name directive in both server blocks is enough to differentiate the requests,
and that saves you from having to add :81 to the domain name in requests all the time.
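Concretely, that suggestion amounts to something like this (a sketch; the hostnames and backend ports are taken from the question):

```nginx
# Both vhosts listen on port 80; server_name alone picks the right one
server {
    listen 80;
    server_name chat.local.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
server {
    listen 80;
    server_name console.local.com;
    location / {
        proxy_pass http://127.0.0.1:8888;
    }
}
```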
Alternatively, you can simply add the listen statement twice in one server block, like below:
listen 80;
listen 81;
This should work with nginx
