The question asks whether nginx can conditionally forward requests to another server (via reverse proxy) or process them itself.
Here are the details.
I have a Raspberry Pi (RPi) running nginx + WordPress 24/7 at home. I also have a laptop running Ubuntu for about 5 hours every night.
WordPress on the RPi works great, but it's slow (especially when it's executing PHP). So I would like the laptop to help:
If the laptop is on, the RPi's nginx forwards all requests to Ubuntu via reverse proxy;
If the laptop is off, the RPi's nginx processes the requests as usual.
Is it possible to achieve this? If so, how should the RPi and Ubuntu be configured?
The basic solution is to make nginx a reverse proxy with a fail_timeout: when it receives a request, it dispatches it to an upstream group in which Ubuntu has higher priority, and if Ubuntu is offline, the RPi handles the request itself.
This requires:
MySQL must be accessible to two clients with different IPs, which it already supports;
The WordPress installation must be identical on the RPi and Ubuntu, which can be done with an NFS share;
nginx must be configured correctly.
The configuration details are below.
Note, in my configuration:
RPi's IP is 192.168.1.100, Ubuntu's IP is 192.168.1.101;
The WordPress site only allows https; all http requests are redirected to https;
The front-end server listens on ports 80 and 443; the upstreams listen on port 8000.
MySQL
Set bind-address = 192.168.1.100 in /etc/mysql/my.cnf, and make sure skip-networking is not defined;
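In /etc/mysql/my.cnf that is the following line (shown with its section header):
[mysqld]
bind-address = 192.168.1.100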
Grant permissions to the RPi and Ubuntu in the MySQL console:
grant all on minewpdb.* to 'mineblog'@'192.168.1.100' identified by 'xxx';
grant all on minewpdb.* to 'mineblog'@'192.168.1.101' identified by 'xxx';
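To verify the grants, you can connect from each machine (a quick check, assuming the mysql client is installed on both hosts):
mysql -h 192.168.1.100 -u mineblog -p -e 'SELECT 1;' minewpdb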
WordPress
Set DB_HOST correctly:
define('DB_NAME', 'minewpdb');
define('DB_USER', 'mineblog');
define('DB_PASSWORD', 'xxx');
define('DB_HOST', '192.168.1.100');
NFS
On the RPi, install nfs-kernel-server and export the WordPress directory via /etc/exports:
/path/to/wordpress 192.168.1.101(rw,no_root_squash,insecure,sync,no_subtree_check)
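If the NFS server is already running, re-export after editing /etc/exports:
sudo exportfs -ra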
To enable nfs server on RPi, rpcbind is also required:
sudo service rpcbind start
sudo update-rc.d rpcbind enable
sudo service nfs-kernel-server start
On Ubuntu, mount the NFS share (it should also be added to /etc/fstab to make it mount automatically; a sketch of that entry follows the command):
sudo mount -t nfs 192.168.1.100:/path/to/wordpress /path/to/wordpress
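The matching /etc/fstab entry could look like this (an assumption based on the paths above; _netdev delays the mount until the network is up):
192.168.1.100:/path/to/wordpress /path/to/wordpress nfs defaults,_netdev 0 0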
Nginx
On the RPi, create a new config file /etc/nginx/sites-available/wordpress-load-balance with the following contents:
upstream php {
server unix:/var/run/php5-fpm.sock;
}
upstream mineservers {
# upstreams, Ubuntu has much higher priority
server 192.168.1.101:8000 weight=999 fail_timeout=5s max_fails=1;
server 192.168.1.100:8000;
}
server {
listen 80;
server_name mine260309.me;
rewrite ^ https://$server_name$request_uri? permanent;
}
server {
listen 443 ssl;
server_name mine260309.me;
ssl_certificate /path/to/cert/cert_file;
ssl_certificate_key /path/to/cert/cert_key_file;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv3 is broken (POODLE), so leave it out
ssl_ciphers HIGH:!aNULL:!MD5;
access_log /path/to/wordpress/logs/proxy.log;
error_log /path/to/wordpress/logs/proxy_error.log;
location / {
# reverse-proxy to upstreams
proxy_pass http://mineservers;
### fail over if one of the backends is down ##
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
### Set headers ####
proxy_set_header Accept-Encoding "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
### Most PHP, Python, Rails, Java App can use this header ###
#proxy_set_header X-Forwarded-Proto https;##
#This is better##
proxy_set_header X-Forwarded-Proto $scheme;
add_header Front-End-Https on;
### By default we don't want to redirect it ####
proxy_redirect off;
}
}
server {
root /path/to/wordpress;
listen 8000;
server_name mine260309.me;
... # normal wordpress configurations
}
On Ubuntu, the same config file can be used.
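In either case, remember to enable the site and reload nginx after creating the file (assuming the Debian-style sites-enabled layout implied above):
sudo ln -s /etc/nginx/sites-available/wordpress-load-balance /etc/nginx/sites-enabled/
sudo nginx -t && sudo service nginx reload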
Now any request received by the RPi's nginx on port 443 is dispatched to port 8000 on either Ubuntu or the RPi, with Ubuntu taking much higher priority. If Ubuntu is offline, the RPi handles the request itself.
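A quick way to test the failover (a sketch; run from another machine on the LAN with curl installed): stop nginx on Ubuntu and check that the site still responds from the RPi:
curl -kI -H 'Host: mine260309.me' https://192.168.1.100/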
Any comments are welcome!
First I started with nginx listening on port 80 and a backend server (ports 5000 and 5001) locally; here is my nginx configuration:
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
location /api {
proxy_pass http://localhost:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
root html;
index index.html index.htm;
}
}
}
For the backend server (a .NET Core Web API), I found that with the default settings it is inaccessible through the intranet IP (http://myIP/api).
Server info (started without arguments):
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
So I changed the backend server's settings to listen on all interfaces:
Server info (started with --urls "https://0.0.0.0:5001;http://0.0.0.0:5000"):
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://0.0.0.0:5001
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://0.0.0.0:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
This way it can be accessed through the intranet IP. I then exposed my nginx port 80 to the internet with ngrok:
Web Interface http://127.0.0.1:4040
Forwarding https://***********.jp.ngrok.io -> http://localhost:80
Now I can access the homepage through the public domain name (https://###.jp.ngrok.io), but when I access /api (https://###.jp.ngrok.io/api), it redirects to port 5001 on the public host (https://###.jp.ngrok.io:5001/api). How should I configure things so the request is correctly forwarded to my local port-5001 server? Is it necessary to tunnel port 5001 of the backend as well?
P.S. English is not my native language, so the text above may be difficult to read; sorry.
I am trying to deploy a Mercure hub on a server.
There is already a Symfony app (a REST API) served with Apache2 (and Nginx configured as a reverse proxy). My idea is to keep proxying the API to Apache2 and forward the Mercure subscriptions to the Mercure hub (a Caddy server).
Everything is fine for the API part, but I cannot get Nginx and Caddy to work together. Note that I can reach the hub successfully when it is not behind Nginx. I use a custom certificate and, for some reason, each time I try to subscribe to the hub, I get this error:
DEBUG http.stdlib http: TLS handshake error from 127.0.0.1:36250: no
certificate available for '127.0.0.1'
If I modify my Nginx configuration with proxy_pass https://mydomain:3000; instead of proxy_pass https://127.0.0.1:3000;, the error becomes :
DEBUG http.stdlib http: TLS handshake error from PUBLIC-IP:36250: no
certificate available for 'PRIVATE-IP'
There is no further explanation in the Caddy or Nginx logs.
My guess is that Nginx does not pass the requested domain on to Caddy properly, but I don't know why, as I applied the configuration instructions I found in the specification. Any help would be appreciated, thank you!
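(One detail relevant to that guess: nginx does not send the requested hostname via SNI to an https upstream unless told to. The two directives below are standard ngx_http_proxy_module directives; whether they resolve this particular handshake error is an assumption:)
proxy_ssl_server_name on;
proxy_ssl_name $host;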
Caddy.dev config
{
# Debug mode (disable it in production!)
{$DEBUG:debug}
# Port update
http_port 3001
https_port 3000
# HTTP/3 support
servers {
protocol {
experimental_http3
}
}
}
{$SERVER_NAME:localhost}
log
tls /path-to-certificate/fullchain.pem /path-to-certificate/privkey.pem
route {
redir / /.well-known/mercure/ui/
encode zstd gzip
mercure {
# Transport to use (default to Bolt)
transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
# Publisher JWT key
publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
# Subscriber JWT key
subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
# Permissive configuration for the development environment
cors_origins http://localhost
publish_origins *
demo
anonymous
subscriptions
# Extra directives
{$MERCURE_EXTRA_DIRECTIVES}
}
respond /healthz 200
respond "Not Found" 404
}
Nginx virtual host config
server {
listen 80 http2;
server_name mercure-hub-domain.com;
return 301 https://mercure-hub-domain.com;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mercure-hub-domain.com;
ssl_certificate /path-to-certificate/fullchain.pem; # managed by Certbot
ssl_certificate_key /path-to-certificate/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location / {
proxy_pass https://127.0.0.1:3000;
proxy_read_timeout 24h;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_connect_timeout 300s;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Configuration des logs
access_log /var/log/nginx/my-project/access.log;
error_log /var/log/nginx/my-project/error.log;
}
Command to launch the Mercure hub
sudo SERVER_NAME='mercure-hub-domain.com:3000' DEBUG=debug MERCURE_PUBLISHER_JWT_KEY='MY-KEY' MERCURE_SUBSCRIBER_JWT_KEY='MY-KEY' ./mercure run -config Caddyfile.dev
I have installed rocket.chat version 0.72.3 on CentOS 7.6 as a private local team chat.
Then, to configure a reverse proxy that forces rocket.chat to use SSL, I installed nginx version 1.12.2 and followed this link https://rocket.chat/docs/developer-guides/mobile-apps/supporting-ssl/ to configure nginx as a reverse proxy.
After the configuration succeeded, I had two URLs both pointing to my rocket.chat application (http://localhost:3000 and https://localhost:443). I mean rocket.chat is accessible under both of these links, which makes the http access redundant.
How can I disable access to rocket.chat via http://localhost:3000?
You need to 1) bind the rocketchat service only to the localhost interface and 2) let nginx listen on the public interface and act as the proxy (which you probably already did).
So, first open your rocketchat.service file (possibly in /lib/systemd/system/rocketchat.service, but this depends on how you configured the rocketchat service) and in the [Service] section add this line:
[Service]
Environment=BIND_IP=127.0.0.1
Don't worry if you already have one (or several) Environment entries; they are aggregated (in my case there is a single Environment entry for each variable).
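After restarting the service, you can confirm the binding (an optional check; ss is part of iproute2):
$ sudo ss -tlnp | grep 3000
It should show rocket.chat listening on 127.0.0.1:3000 rather than on all interfaces.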
Then open your nginx config (possibly /etc/nginx/sites-enabled/default, but this may differ) and make sure that the server block listens only on port 443 and does its proxy job. My relevant nginx entries look like this:
# Upstreams
upstream backend {
server 127.0.0.1:3000;
}
server {
listen 443;
server_name mydomain.com;
error_log /var/log/nginx/rocketchat.access.log;
ssl on;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location / {
proxy_pass http://127.0.0.1:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
You probably need to reload the systemd configuration by issuing
$ sudo systemctl daemon-reload
and then restart the rocketchat and nginx services.
For me it works flawlessly.
I resolved this issue by blocking external connections to port 3000 while still allowing local connections, using iptables:
iptables -A INPUT -p tcp --dport 3000 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 3000 -j DROP
But I'm still wondering: isn't there an nginx-side config that would sort this issue out?
I can't get my head wrapped around this... We have a requirement to hide ActiveMQ behind an NGINX proxy, but I have no idea how to set it up.
For ActiveMQ I've set up different ports for all the protocols:
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:62716?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5782?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:62713?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1993?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:62714?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
And the nginx configuration like this:
server {
listen *:61616;
server_name 192.168.210.15;
index index.html index.htm index.php;
access_log /var/log/nginx/k1.access.log combined;
error_log /var/log/nginx/k1.error.log;
location / {
proxy_pass http://localhost:62716;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_redirect off;
proxy_method stream;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
}
(same for all other five redefined ports)
I thought that nginx would expose the default ActiveMQ ports and map them to the new definitions, but this doesn't work.
For communication we're using the NodeJs library amqp10, version 3.1.4.
And all the ports are open on the server... if we use the standard ports without the nginx proxy, it works.
Any idea what I am missing? Thanks for any thoughts.
You can hide ActiveMQ behind an nginx proxy, even if you are trying to proxy OpenWire for an AMQP client.
If you are adding your configuration inside the http block, it is bound to fail.
But note that nginx supports not only an http block, but also a stream block for raw TCP.
If you proxy ActiveMQ over TCP, then what happens at the HTTP level won't matter and you will still be able to proxy.
Of course, you lose the flexibility that comes with HTTP.
Open your nginx.conf (at /etc/nginx/nginx.conf).
This file has an http block, which in turn contains some include statements.
Outside this http block, add another include statement:
$ pwd
/etc/nginx
$ cat nginx.conf | tail -1
include /etc/nginx/tcpconf.d/*;
The include statement directs nginx to look for additional configuration in the directory /etc/nginx/tcpconf.d/.
Add the desired configuration in this directory; let's call the file amq_stream.conf.
$ pwd
/etc/nginx/tcpconf.d
$ cat amq_stream.conf
stream {
upstream amq_server {
# activemq server
server <amq-server-ip>:<amq-port>; # a port like 61616
}
server {
listen 61616;
proxy_pass amq_server;
}
}
Restart your nginx service:
$ sudo service nginx restart
You are done.
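To verify that the stream proxy is listening (an optional check, assuming netcat is installed):
$ nc -vz <nginx-server-ip> 61616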
Nginx is an HTTP server that is capable of proxying WebSocket and HTTP.
But you are trying to proxy OpenWire for an AMQP client, which does not work with Nginx or Node.js.
So, if you really need to use Nginx, you need to change the client protocol to STOMP or MQTT over WebSocket, then set up a WebSocket proxy in Nginx.
Here is an Nginx example with TLS. More details at https://www.nginx.com/blog/websocket-nginx/
upstream websocket {
server amqserver.example.com:62714;
}
server {
listen 8883 ssl;
ssl on;
ssl_certificate /etc/nginx/ssl/certificate.cer;
ssl_certificate_key /etc/nginx/ssl/key.key;
location / {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 120s;
}
}
However, since you would have to rewrite all the client code, I would rethink the Nginx idea. There is other software and hardware that can front TCP-based servers and do TLS termination and whatnot.
I'm trying to build a Kubernetes cluster with following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we own the domain name, which is done by proving that a given file is accessible at the server name (basically this consists of Nginx serving a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started, but the SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only once Let's Encrypt succeeds...
So a simple solution could be to have two Nginx containers: one listening only on port 80 that is started first, then letsencrypt, then a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
server registry:5000;
}
server {
listen 443;
server_name docker.thedivernetwork.net;
# SSL
ssl on;
ssl_certificate /etc/nginx/conf.d/cacert.pem;
ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
# disable any limits to avoid HTTP 413 for large image uploads
client_max_body_size 0;
# required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
chunked_transfer_encoding on;
location /v2/ {
# Do not allow connections from docker 1.5 and earlier
# docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
return 404;
}
# To add basic authentication to v2 use auth_basic setting plus add_header
auth_basic "registry.localhost";
auth_basic_user_file /etc/nginx/conf.d/registry.password;
add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
proxy_pass http://docker-registry;
proxy_set_header Host $http_host; # required for docker client's sake
proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 900;
}
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file, django.conf, in the same folder:
upstream django {
server django:5000;
}
server {
listen 443 ssl;
server_name example.com;
charset utf-8;
ssl on;
ssl_certificate /etc/nginx/conf.d/cacert.pem;
ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv3 is broken (POODLE), so leave it out
ssl_ciphers HIGH:!aNULL:!MD5;
client_max_body_size 20M;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_django;
}
location @proxy_to_django {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
#proxy_pass_header Server;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 65;
proxy_read_timeout 65;
proxy_pass http://django;
}
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring failing configurations, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started, as long as its Service is started, nginx will find it when looking it up, because the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pods you want, in the order you want.
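For example (a sketch; the resource name is hypothetical), you could expose the registry replication controller as a Service before anything else runs:
$ kubectl expose rc registry --name=registry --port=5000
nginx can then resolve the registry upstream through the Service's stable cluster IP (via cluster DNS, assuming kube-dns is running), even while the Pod itself is still starting.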