Nginx conflicts with an application running on port 443

On macOS, I usually run my project on localhost with sudo PORT=443 HTTPS=true ./node_modules/.bin/react-scripts start. As a result, https://localhost/#/start works in a browser.
Now, to make third-party authentication work on localhost, I need to run nginx. Here is my /usr/local/etc/nginx/nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream funfun {
        server 178.62.87.72:443;
    }

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/ssl/localhost/localhost.crt;
        ssl_certificate_key /etc/ssl/localhost/localhost.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 1d;
        ssl_stapling off;
        ssl_stapling_verify off;

        add_header Strict-Transport-Security max-age=15768000;
        add_header X-Frame-Options "";

        proxy_ssl_name "www.funfun.io";
        proxy_ssl_server_name on;

        location ~ /socialLoginSuccess {
            rewrite ^ '/#/socialLoginSuccess' redirect;
        }

        location ~ /auth/(.*) {
            proxy_pass https://funfun/10studio/auth/$1?$query_string;
            proxy_set_header Host localhost;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Accept-Encoding "";
            proxy_set_header Proxy "";
            proxy_pass https://localhost/;

            # These three lines added as per https://github.com/socketio/socket.io/issues/1942
            # to remove a socket.io error
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    include servers/*;
}
However, launching nginx returns the following errors:
$ sudo nginx
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] still could not bind()
It seems that nginx conflicts with the app already running on port 443. Does anyone know why?
Additionally, could anyone tell me the purpose of the location / { ... } block in my nginx configuration file?

Only one application can bind/listen on a given port at a time.
You started your app running on port 443:
sudo PORT=443 HTTPS=true ./node_modules/.bin/react-scripts start
Then, when you tried to start nginx on port 443 as well, it failed because your app was already using 443.
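You can confirm this before starting nginx by checking which process currently holds port 443 (lsof ships with macOS; this is a quick diagnostic, not part of the fix):

sudo lsof -nP -iTCP:443 -sTCP:LISTEN

It should show the node process started by react-scripts as the listener.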
To fix this:
stop nginx
stop your app and restart it running on a different port (e.g. 3000):
sudo PORT=3000 HTTPS=true ./node_modules/.bin/react-scripts start
edit your nginx config to tell nginx that your app ("upstream") is running on port 3000 now.
proxy_pass https://localhost:3000;
start nginx
Additionally, I would suggest that you do SSL (https) termination on nginx and let nginx connect to your app on localhost insecurely to reduce other problems. Currently it looks like you are doing ssl termination on nginx and then another ssl connection/termination to your app/upstream. This really isn't necessary when connecting on localhost or over a secure/private network (e.g. within AWS VPC).
stop nginx
stop your app and restart it running on a different port (e.g. 3000):
remove HTTPS=true from sudo PORT=3000 HTTPS=true ./node_modules/.bin/react-scripts start
...and any other changes needed in your react app to disable ssl/https.
edit your nginx config to tell nginx that your app ("upstream") now runs on port 3000 over plain http (change https to http; see the sketch after these steps).
proxy_pass http://localhost:3000;
start nginx
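Putting it together, the relevant pieces of the nginx config would end up looking something like this (a minimal sketch reusing the paths from your question; everything else stays as you had it):

server {
    listen 443 ssl;                          # nginx terminates SSL here
    server_name localhost;
    ssl_certificate /etc/ssl/localhost/localhost.crt;
    ssl_certificate_key /etc/ssl/localhost/localhost.key;

    location / {
        proxy_pass http://localhost:3000;    # plain HTTP to the dev server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}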
For production you should really always run nginx in front of your apps. This allows you to easily do ssl termination, load balancing (multiple apps/upstreams) as well as serving static files (jpg, css, etc) without running through nodejs or other application server. It will scale better. Right tool for the right job.
For local development purposes you can just work against the local insecure http://localhost:3000. If you really hate using port 3000 for some reason then you can of course change that using NODE_ENV in tandem with dotenv or similar in order to switch the port your app uses when in development mode vs production. There really isn't any reason you need to use https/443 on localhost during development. You won't be able to get a trusted SSL cert for localhost so there really isn't any point...it just makes your life more difficult.
I have no issues testing oauth login flows against http://localhost:3000 with google for instance.

A port can be bound only once per interface. If you run your React application server and it already binds port 443 on interface 0.0.0.0 (which acts as a wildcard meaning "listen on port 443 on all interfaces on this computer"), then no other application can use this port, because it is already taken. In your nginx configuration you can see the line that says it also wants to use port 443:
server {
    listen 443 ssl;    # <--- this is the port config
    server_name localhost;
You have (at least) two choices to fix that error:
change PORT=443 in your local application, or
change the port number in the nginx configuration to any other unoccupied port.
Next: location / { ... } means that all requests starting with / (which is virtually all requests, except those caught by the two previous location blocks) will be forwarded to another web server located at https://localhost/, with some additional headers. This is called a reverse proxy.
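To illustrate with the blocks from this config (the comments are annotations showing where each request ends up; /auth/google is just an example path):

location ~ /socialLoginSuccess { ... }  # /socialLoginSuccess -> redirect to /#/socialLoginSuccess
location ~ /auth/(.*) { ... }           # e.g. /auth/google -> https://funfun/10studio/auth/google
location / { ... }                      # everything else -> https://localhost/ (the React app)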

Related

Mercure hub behind Nginx reverse proxy

I try to deploy a Mercure hub on a server.
There is already a Symfony app (REST API) served with Apache2 (and Nginx configured as a reverse proxy). My idea is to keep proxying the API to Apache2 and to forward the Mercure subscriptions to the Mercure hub (a Caddy server).
All is OK for the API part, but it has been impossible to configure Nginx and Caddy to work together. Note that I can reach the hub successfully when it's not behind Nginx. I use a custom certificate and, for some reason, each time I try to subscribe to the hub, I get this error:
DEBUG http.stdlib http: TLS handshake error from 127.0.0.1:36250: no
certificate available for '127.0.0.1'
If I modify my Nginx configuration with proxy_pass https://mydomain:3000; instead of proxy_pass https://127.0.0.1:3000;, the error becomes:
DEBUG http.stdlib http: TLS handshake error from PUBLIC-IP:36250: no
certificate available for 'PRIVATE-IP'
There is no further explanation in the Caddy or Nginx logs.
My guess is that Nginx does not pass the requested domain through to Caddy, but I don't know why, as I applied the configuration instructions I found in the specification. Any help would be appreciated, thank you!
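One thing worth checking: nginx does not send SNI to an upstream unless told to, so Caddy may be receiving a TLS handshake without a usable server name. A sketch of the relevant directives (assuming the certificate is issued for mercure-hub-domain.com):

location / {
    proxy_pass https://127.0.0.1:3000;
    proxy_ssl_server_name on;               # send SNI in the upstream TLS handshake
    proxy_ssl_name mercure-hub-domain.com;  # a name Caddy holds a certificate for
    # ...other proxy_set_header lines as in the config below...
}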
Caddyfile.dev config
{
    # Debug mode (disable it in production!)
    {$DEBUG:debug}
    # Port update
    http_port 3001
    https_port 3000
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}

{$SERVER_NAME:localhost}

log
tls /path-to-certificate/fullchain.pem /path-to-certificate/privkey.pem

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip
    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins http://localhost
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    respond /healthz 200
    respond "Not Found" 404
}
Nginx virtual host config
server {
    listen 80 http2;
    server_name mercure-hub-domain.com;
    return 301 https://mercure-hub-domain.com;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mercure-hub-domain.com;

    ssl_certificate /path-to-certificate/fullchain.pem; # managed by Certbot
    ssl_certificate_key /path-to-certificate/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass https://127.0.0.1:3000;
        proxy_read_timeout 24h;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 300s;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Log configuration
    access_log /var/log/nginx/my-project/access.log;
    error_log /var/log/nginx/my-project/error.log;
}
Command to launch the Mercure hub
sudo SERVER_NAME='mercure-hub-domain.com:3000' DEBUG=debug MERCURE_PUBLISHER_JWT_KEY='MY-KEY' MERCURE_SUBSCRIBER_JWT_KEY='MY-KEY' ./mercure run -config Caddyfile.dev
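For reference, a subscription can be exercised from the command line once the hub is reachable (a plain curl sketch; the topic URL is only an example value):

curl -N 'https://mercure-hub-domain.com/.well-known/mercure?topic=https://example.com/books/1'

curl -N disables output buffering, which matters for a long-lived SSE stream.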

NGINX Incorrectly Forwarding Requests to Default Location

I have a React web application that I'm trying to deploy on an AWS EC2 instance and I'm using NGINX. I am trying to set it up so that all http requests get redirected to https. Right now it does appear to be redirecting all http requests to https, but NGINX is forwarding the request to the default path /usr/share/nginx/html/ instead of to the web application that I have running on localhost. I have read dozens of articles and have been trying to figure this out for days. Pointers would be much appreciated. Thanks in advance.
Here is my NGINX server configuration at /etc/nginx/sites-available/default:
server {
    listen 80;
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1h;

    location / {
        proxy_pass http://127.0.0.1:3839;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Also, the application is running and accessible at the port specified in my location block; I can reach it on the machine with curl 127.0.0.1:3839 with no problems. I can see in /var/log/nginx/error.log that nginx is attempting to serve requests out of the /usr/share/nginx/html/ directory, which is how I figured out that this is the issue. I just have no idea why it's sending requests there instead of to the port on localhost that I specified in my location block. If I go to the root URL for my application, I get the "Welcome to NGINX" page. If I go to any subpath under my root URL, like example.com/login, I get a 404, and error.log shows that it couldn't find the resource /usr/share/nginx/html/login, for example. Thanks :)
Update:
Inside of the listen 80 server block I added
location / {
    proxy_pass http://127.0.0.1:3839;
}
and now it seems to be working correctly, but I have no idea why I would need to define a location block in the listen 80 server definition if requests in that block are just being redirected to be caught by the other server definition listening on 443. Any idea why this is working now?
I figured out the reason the location block in the http server definition worked. In AWS I had accidentally set my load balancer to forward all requests to port 80 on the EC2 instance. So even though my http server definition was redirecting to the https version of the site, those https requests were still being handled by that same http server definition, and since it previously had no location block at all, that was causing it to fail. In the end, I removed the location block from the http server definition and correctly updated my load balancer to forward https requests to port 443 on the EC2 instance, and now everything works as expected.
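For reference, the working end state boils down to this (a sketch; certificate lines elided, and it assumes the load balancer now forwards HTTPS traffic to port 443):

server {
    listen 80;
    return 301 https://$host$request_uri;    # plain HTTP only ever gets redirected
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    # ...ssl_certificate / ssl_certificate_key as before...

    location / {
        proxy_pass http://127.0.0.1:3839;    # the app on localhost
    }
}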

NiFi Auth with Nginx reverse proxy

Is it possible to have NiFi with user authentication but with SSL termination on NGINX? I have NGINX running on port 443 and a proxy_pass passing to NiFi at port 8080. I played around with these headers:
X-ProxyScheme - the scheme to use to connect to the proxy
X-ProxyHost - the host of the proxy
X-ProxyPort - the port the proxy is listening on
X-ProxyContextPath - the path configured to map to the NiFi instance
But it seems impossible to get NiFi to recognise that it's behind an HTTPS connection via the proxy. I updated my auth configuration; however, NiFi still throws an error:
IllegalStateException: User authentication/authorization is only supported when running over HTTPS.. Returning Conflict response.
java.lang.IllegalStateException: User authentication/authorization is only supported when running over HTTPS
Basically: HTTPS to nginx, then plain HTTP to NiFi.
I'm not familiar with NiFi, but on RHEL with nginx, the config below gives me a reverse proxy with an HTTPS connection terminated at nginx and an onward HTTP connection to an /abc_end_point. Perhaps you can use this as a template?
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate "/etc/pki/tls/certs/abc.com.crt";
    ssl_certificate_key "/etc/pki/tls/private/abc.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers PROFILE=SYSTEM;
    ssl_prefer_server_ciphers on;

    proxy_connect_timeout 7d;
    proxy_send_timeout 7d;
    proxy_read_timeout 7d;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /abc_end_point {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:9090/abc_end_point;
    }
}
You are trying to set up NiFi with SSL offloading on the reverse proxy (nginx) - this kind of setup is not supported.
See: http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-and-SSL-offloading-td7790.html#a7799
I recommend using TLS (HTTPS) between the reverse proxy and NiFi as well.
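A rough sketch of what that can look like, with nginx proxying over HTTPS and passing the X-Proxy* headers from the question (the NiFi HTTPS port 8443 is an assumption; adjust to your nifi.properties):

location / {
    proxy_pass https://localhost:8443;       # NiFi itself serving HTTPS (port is an assumption)
    proxy_set_header X-ProxyScheme https;
    proxy_set_header X-ProxyHost $host;
    proxy_set_header X-ProxyPort 443;
    proxy_set_header X-ProxyContextPath /;
}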

Docker network issues with nginx proxy container

I am currently trying to set up a Docker-based Jira and Confluence platform proxied by nginx, and I am running into some kind of routing and network problem.
The basic setup consists of three Docker containers - the nginx container handles the https requests for specific domain names (e.g. jira.mydomain.com, confluence.mydomain.com) and forwards (proxy_pass) the requests to the specific containers for Jira and Confluence.
This setup is generally working - I can access the Jira instance by opening https://jira.mydomain.com and the Confluence instance by opening https://confluence.mydomain.com in my browser.
The problem I am running into becomes visible when logging into Jira and following the find-out-more link in the resulting warning (screenshots not reproduced here).
The suggested resolutions from the provided JIRA health check link unfortunately did not help me identify and solve the problem. Instead, some exceptions in the log file led to more hints about the problem:
2017-06-07 15:04:26,980 http-nio-8080-exec-17 ERROR christian.schlaefcke 904x1078x1 eqafq3 84.141.114.234,172.17.0.7 /rest/applinks/3.0/applicationlinkForm/manifest.json [c.a.a.c.rest.ui.CreateApplicationLinkUIResource] ManifestNotFoundException thrown while retrieving manifest
ManifestNotFoundException thrown while retrieving manifest
com.atlassian.applinks.spi.manifest.ManifestNotFoundException: java.net.NoRouteToHostException: No route to host (Host unreachable)
...
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
And when I follow the hint from this Atlassian knowledge base article and run this curl statement from inside the JIRA container:
curl -H "Accept: application/json" https://jira.mydomain.com/rest/applinks/1.0/manifest -v
I finally get this error:
* Trying <PUBLIC_IP>...
* connect to <PUBLIC_IP> port 443 failed: No route to host
* Failed to connect to jira.mydomain.com port 443: No route to host
* Closing connection 0
curl: (7) Failed to connect to jira.mydomain.com port 443: No route to host
EDIT:
The external URL jira.mydomain.com can be pinged from inside of the container:
root@c9233dc17588:~# ping jira.mydomain.com
PING jira.mydomain.com (<PUBLIC_IP>) 56(84) bytes of data.
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=3 ttl=64 time=0.181 ms
From outside the JIRA container (e.g. the Docker host or another machine) the curl statement works fine!
I have quite a lot of experience with Linux in general, but my knowledge of networks, routing, and iptables is rather limited. Docker is running at the current 17.03.1-ce version in combination with Docker Compose on a CentOS 7 system:
~]# uname -a
Linux rs226736 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
At the moment I don't even understand what kind of problem this actually is (iptables? routing? Docker?) or how to debug it :-(
I played around with some iptables- and nginx-related hints found via Google - all without success. Any hint pointing me in the right direction would be very much appreciated.
REQUESTED CONFIGS:
NGINX docker-compose.yml
nginx:
  image: nginx
  container_name: nginx
  ports:
    - 80:80
    - 443:443
  external_links:
    - my_domain-jira
    - my_domain-confluence
  volumes:
    - /opt/docker/logs/nginx:/var/log/nginx
    - ./nginx.conf:/etc/nginx/nginx.conf
    - ./certs/jira.mydomain.com.crt:/etc/ssl/certs/jira.mydomain.com.crt
    - ./certs/jira.mydomain.com.key:/etc/ssl/private/jira.mydomain.com.key
    - ./certs/confluence.mydomain.com.crt:/etc/ssl/certs/confluence.mydomain.com.crt
    - ./certs/confluence.mydomain.com.key:/etc/ssl/private/confluence.mydomain.com.key
JIRA docker-compose.yml (Confluence similar):
jira:
  container_name: my_domain-jira
  build: .
  external_links:
    - postgres
  volumes:
    - ./inst/conf/server.xml:/opt/jira/conf/server.xml
    - ./inst/bin/setenv.sh:/opt/jira/bin/setenv.sh
    - /home/jira:/opt/atlassian-home
    - /opt/docker/logs/jira:/opt/jira/logs
    - /etc/localtime:/etc/localtime:ro
NGINX - nginx.conf
upstream jira {
    server my_domain-jira:8080;
}

# begin jira configuration
server {
    listen 80;
    server_name jira.mydomain.com;
    client_max_body_size 500M;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name jira.mydomain.com;

    ssl on;
    ssl_certificate /etc/ssl/certs/jira.mydomain.com.crt;
    ssl_certificate_key /etc/ssl/private/jira.mydomain.com.key;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

    server_tokens off;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    client_max_body_size 500M;

    location / {
        proxy_pass http://jira/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }
}
Ideas (nginx / proxy_pass / upstream) mostly picked up from:
https://www.digitalocean.com/community/tutorials/docker-explained-how-to-containerize-and-use-nginx-as-a-proxy
http://blog.nbellocam.me/2016/03/01/nginx-serving-multiple-sites-docker/
https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
After some discussion with the provider of the virtual server, it turned out that conflicting firewall rules between the Plesk firewall and iptables caused this problem. After the provider fixed the conflict, the container could be accessed.
This problem is solved now - thanks to everyone who participated!
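For anyone debugging a similar "No route to host" from inside a container, a few standard commands are a reasonable starting point (nothing Jira-specific; the container name is taken from the compose file above):

# firewall rules currently installed on the host
sudo iptables -L -n -v
# how the container's network is wired up
docker network inspect bridge
# reproduce the failing call from inside the container
docker exec -it my_domain-jira curl -v https://jira.mydomain.com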

nginx conditionally reverse proxy or serve directly

The question asks whether it is possible to make nginx conditionally redirect requests to another server (by reverse proxy) or process the request by itself.
Here are the details.
I have a Raspberry Pi (RPi) running nginx + WordPress 24/7 at home. I also have a laptop running Ubuntu for about 5 hours every night.
The WordPress site on the RPi works great, but it's slow (especially when running PHP). So I would like to let the laptop help:
If the laptop is on, the RPi's nginx redirects all requests to Ubuntu by reverse proxy;
If the laptop is off, the RPi's nginx processes the request as usual.
I wonder if it's possible to achieve this? If yes, how should I configure the RPi and Ubuntu?
The basic solution: make nginx a reverse proxy with a fail_timeout. When it receives a request, it dispatches to the upstreams, where Ubuntu has higher priority; if Ubuntu is offline, the RPi handles the request by itself.
This requires:
MySQL to be accessible by two clients with different IPs, which is already supported;
the WordPress files to be identical for the RPi and Ubuntu, which can be done with an NFS share;
nginx to be correctly configured.
Below are the details of the configuration.
Note, in my configuration:
the RPi's IP is 192.168.1.100 and Ubuntu's IP is 192.168.1.101;
WordPress only allows https; all http requests are redirected to https;
the servers listen on ports 80 and 443, and the upstreams listen on port 8000.
Mysql
Set bind-address = 192.168.1.100 in /etc/mysql/my.cnf, and make sure skip-networking is not defined;
Grant permissions to the RPi and Ubuntu in the MySQL console:
grant all on minewpdb.* to 'mineblog'@'192.168.1.100' identified by 'xxx';
grant all on minewpdb.* to 'mineblog'@'192.168.1.101' identified by 'xxx';
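The grants can be verified from each machine before touching nginx (a quick sanity check; run it on both the RPi and Ubuntu):

mysql -h 192.168.1.100 -u mineblog -p minewpdb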
Wordpress
Set DB_HOST correctly:
define('DB_NAME', 'minewpdb');
define('DB_USER', 'mineblog');
define('DB_PASSWORD', 'xxx');
define('DB_HOST', '192.168.1.100');
NFS
On the RPi, install nfs-kernel-server and export the WordPress directory via /etc/exports:
/path/to/wordpress 192.168.1.101(rw,no_root_squash,insecure,sync,no_subtree_check)
To enable nfs server on RPi, rpcbind is also required:
sudo service rpcbind start
sudo update-rc.d rpcbind enable
sudo service nfs-kernel-server start
On Ubuntu, mount the NFS share (it should also be added to /etc/fstab so it mounts automatically):
sudo mount -t nfs 192.168.1.100:/path/to/wordpress /path/to/wordpress
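The matching /etc/fstab entry would look something like this (the options shown are a common default set; adjust as needed):

192.168.1.100:/path/to/wordpress  /path/to/wordpress  nfs  defaults  0  0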
Nginx
On the RPi, create a new config file /etc/nginx/sites-available/wordpress-load-balance with the parameters below:
upstream php {
    server unix:/var/run/php5-fpm.sock;
}

upstream mineservers {
    # upstreams; Ubuntu has much higher priority
    server 192.168.1.101:8000 weight=999 fail_timeout=5s max_fails=1;
    server 192.168.1.100:8000;
}

server {
    listen 80;
    server_name mine260309.me;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name mine260309.me;

    ssl_certificate /path/to/cert/cert_file;
    ssl_certificate_key /path/to/cert/cert_key_file;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    access_log /path/to/wordpress/logs/proxy.log;
    error_log /path/to/wordpress/logs/proxy_error.log;

    location / {
        # reverse-proxy to upstreams
        proxy_pass http://mineservers;

        ### force timeouts if one of the backends dies ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        ### Most PHP, Python, Rails, Java apps can use this header ###
        #proxy_set_header X-Forwarded-Proto https;##
        # This is better ##
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        ### By default we don't want to redirect ####
        proxy_redirect off;
    }
}

server {
    root /path/to/wordpress;
    listen 8000;
    server_name mine260309.me;

    ... # normal wordpress configuration
}
On Ubuntu, it can use the same config file.
Now any request received by the RPi's nginx server on port 443 is dispatched to either Ubuntu's or the RPi's port 8000, where Ubuntu has much higher priority. If Ubuntu is offline, the RPi itself can handle the request as well.
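Failover is easy to check by hand: stop nginx on Ubuntu, and the site should still be served by the RPi (a rough test; -k skips certificate verification in case the cert is self-signed):

# on Ubuntu: take the preferred upstream offline
sudo service nginx stop
# from any client: the response should now come from the RPi upstream
curl -k -I https://mine260309.me/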
Any comments are welcome!
