Mercure hub behind Nginx reverse proxy - Symfony

I am trying to deploy a Mercure hub on a server.
There is already a Symfony app (a REST API) served with Apache2 (and Nginx configured as a reverse proxy). My idea is to keep proxying the API to Apache2 and to forward the Mercure subscriptions to the Mercure hub (a Caddy server).
Everything is OK for the API part, but I cannot get Nginx and Caddy to work together. I should mention that I can reach the hub successfully when it is not behind Nginx. I use a custom certificate and, for some reason, each time I try to subscribe to the hub, I get this error:
DEBUG http.stdlib http: TLS handshake error from 127.0.0.1:36250: no certificate available for '127.0.0.1'
If I modify my Nginx configuration with proxy_pass https://mydomain:3000; instead of proxy_pass https://127.0.0.1:3000;, the error becomes:
DEBUG http.stdlib http: TLS handshake error from PUBLIC-IP:36250: no certificate available for 'PRIVATE-IP'
There is no further explanation in the Caddy or Nginx logs.
My guess is that Nginx does not transfer the requested domain to Caddy, but I don't know why, as I followed the configuration instructions from the specification. Any help would be appreciated, thank you!
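In case it is useful, here is what I would try next (an untested guess on my part, based on the handshake errors above): telling Nginx to send the original host as the TLS server name (SNI) when proxying to Caddy.

location / {
    proxy_pass https://127.0.0.1:3000;
    proxy_ssl_server_name on;  # send a server name during the TLS handshake
    proxy_ssl_name $host;      # use the requested host rather than 127.0.0.1
}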
Caddyfile.dev config
{
    # Debug mode (disable it in production!)
    {$DEBUG:debug}

    # Port update
    http_port 3001
    https_port 3000

    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}

{$SERVER_NAME:localhost}

log

tls /path-to-certificate/fullchain.pem /path-to-certificate/privkey.pem

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins http://localhost
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
Nginx virtual host config
server {
    listen 80;
    server_name mercure-hub-domain.com;
    return 301 https://mercure-hub-domain.com$request_uri;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mercure-hub-domain.com;

    ssl_certificate /path-to-certificate/fullchain.pem; # managed by Certbot
    ssl_certificate_key /path-to-certificate/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass https://127.0.0.1:3000;
        proxy_read_timeout 24h;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 300s;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Log configuration
    access_log /var/log/nginx/my-project/access.log;
    error_log /var/log/nginx/my-project/error.log;
}
Command to launch the Mercure hub
sudo SERVER_NAME='mercure-hub-domain.com:3000' DEBUG=debug MERCURE_PUBLISHER_JWT_KEY='MY-KEY' MERCURE_SUBSCRIBER_JWT_KEY='MY-KEY' ./mercure run -config Caddyfile.dev
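For reference, this is how I test a subscription when the hub is reachable ("my-topic" is just a placeholder topic):

# Subscribe to the hub's SSE endpoint; -N disables output buffering
curl -N 'https://mercure-hub-domain.com/.well-known/mercure?topic=my-topic'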

Related

Block incoming request when SSL verification is disabled

I have my REST APIs configured to work over HTTPS using nginx (Java APIs deployed in Tomcat, with nginx configured for DNS mapping). Our testing team has managed to access the APIs using the Burp tool (I assume it allows them to connect with SSL verification disabled) and they were able to alter the API response before the client receives it. My nginx server is configured to work on SSL with a proxy forward setup for HTTP to HTTPS. How can I block API requests that have SSL verification disabled, so that I can stop them altering the response? Below is my nginx config.
upstream mlljava {
    server 172.31.5.222:8090;
}

server {
    listen 443 ssl;
    server_name mllwebapi.xyz.in www.mllwebapi.xyz.in;
    underscores_in_headers on;
    client_max_body_size 10M;

    ssl_protocols TLSv1.3;
    ssl_certificate /home/ubuntu/175e9.crt;
    ssl_certificate_key /home/ubuntu/key.key;

    location / {
        proxy_pass http://mlljava/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass_request_headers on;
    }
}
Does adding this to the server configuration help?
# force https-redirects
if ($scheme = http) {
    return 401 https://$server_name$request_uri;
}

NiFi Auth with Nginx reverse proxy

Is it possible to have NiFi with user authentication but with SSL termination on nginx? I have nginx running on port 443 and a proxy_pass passing to NiFi on port 8080. I played around with these headers:
X-ProxyScheme - the scheme to use to connect to the proxy
X-ProxyHost - the host of the proxy
X-ProxyPort - the port the proxy is listening on
X-ProxyContextPath - the path configured to map to the NiFi instance
But it seems impossible to get NiFi to recognise it is behind an HTTPS proxy. I updated my auth configuration; however, NiFi still throws an error:
IllegalStateException: User authentication/authorization is only supported when running over HTTPS.. Returning Conflict response.
java.lang.IllegalStateException: User authentication/authorization is only supported when running over HTTPS
Basically, HTTPS to nginx, then HTTP to NiFi; my attempt at setting those headers is sketched below.
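For completeness, this is roughly how I set those headers in nginx (my own untested attempt; the values are what I guessed from the header descriptions above):

# Untested attempt: forward the proxy context to NiFi via the headers above
location / {
    proxy_pass http://localhost:8080;
    proxy_set_header X-ProxyScheme https;
    proxy_set_header X-ProxyHost $host;
    proxy_set_header X-ProxyPort 443;
    proxy_set_header X-ProxyContextPath /;
}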
I am not familiar with NiFi, but on RHEL with nginx the configuration below gives me a reverse proxy with an HTTPS connection terminated in nginx and an onward HTTP connection to an /abc_end_point. Perhaps you can use this as a template?
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate "/etc/pki/tls/certs/abc.com.crt";
    ssl_certificate_key "/etc/pki/tls/private/abc.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers PROFILE=SYSTEM;
    ssl_prefer_server_ciphers on;

    proxy_connect_timeout 7d;
    proxy_send_timeout 7d;
    proxy_read_timeout 7d;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /abc_end_point {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:9090/abc_end_point;
    }
}
You are trying to set up NiFi with SSL offloading on the reverse proxy (nginx); this kind of setup is not supported.
See: http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-and-SSL-offloading-td7790.html#a7799
I recommend using TLS (HTTPS) between the reverse proxy and NiFi as well.
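A minimal sketch of what I mean, assuming NiFi itself serves HTTPS on port 8443 (the port and the CA bundle path are assumptions to adapt):

location / {
    # Keep TLS on the hop between nginx and NiFi instead of offloading it
    proxy_pass https://localhost:8443;
    # Verify NiFi's certificate against a trusted CA bundle (example path)
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/nifi-ca.pem;
    proxy_set_header Host $host;
}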

Nginx Serving Cert For Site Even Though SSL Not On

I have been troubleshooting an obscure nginx problem: we have a site correctly serving a cert and establishing an SSL connection on port 443 even though SSL is not explicitly turned on for that port. Below you can see the configuration for the site, which is listening on port 443 but not using the ssl directive.
server {
    listen 443;
    port_in_redirect off;
    server_name xyz.abcd.com;

    # websockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    client_max_body_size 1m;
    proxy_set_header X-Request-Id $request_id;
    proxy_set_header X-Request-Start $msec;
    proxy_set_header X-Forwarded-Proto "https";
    proxy_set_header Host $host;

    location / {
        proxy_pass http://xyz-svc;
    }
}
Furthermore, our nginx.conf does not explicitly mention port 443 or ssl, but it does include the path to the cert for abcd.com:
http {
    ..
    ssl_certificate /etc/ssl/certs/abcd.pem;
    ssl_certificate_key /etc/ssl/private/abcd.key;
    ..
}
Lastly, if we go to http://abcd.com:443, nginx throws an error saying "The plain HTTP request was sent to HTTPS port." So it is clearly interpreting port 443 for this site as an SSL port even though we never explicitly define that in our configuration. This behavior holds for both nginx version 1.7.5 and nginx version 1.13.8.
What are possible reasons nginx would correctly establish an SSL connection on port 443 for a site with the appropriate cert when it is never configured to do so?
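One mechanism that can produce exactly this (offered as a likely explanation, not verified against the full configuration): the ssl parameter of listen is a property of the listening socket, so if any other server block enables it for the same address:port, TLS is enabled for every server block sharing that socket. A generic illustration:

# Illustration only: "ssl" on one server block affects the shared *:443 socket
server {
    listen 443 ssl;             # enables TLS on *:443
    server_name other.abcd.com;
}

server {
    listen 443;                 # same socket, so TLS is already enabled here
    server_name xyz.abcd.com;
}

With the http-level ssl_certificate shown above, the second block would then serve the abcd.com cert even though it never mentions ssl itself.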

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we own the domain name, and this is done by validating that a file is accessible from the server name (basically, this consists of Nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass directive that forwards traffic to the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file, django.conf, in the same folder:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django container pulls its image from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring failing configuration files, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods: even if a Pod is not started, as long as its Service is started, nginx will find it when looking it up, because the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pods you want, in the order you want.
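For example (a sketch rather than a definitive setup; the Service names and ports are taken from your nginx configs and assume matching Deployments or replication controllers exist):

# Give each backend a stable Service IP/DNS name that nginx can resolve at startup
kubectl expose deployment registry --port=5000 --name=registry
kubectl expose deployment django --port=5000 --name=django

nginx then resolves registry:5000 and django:5000 through the Services, whether or not the Pods behind them are running yet.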

nginx conditionally reverse proxy or serve directly

The question: is it possible to make nginx conditionally forward requests to other servers (by reverse proxy) or process the request by itself?
Here are the details.
I have a Raspberry Pi (RPi) running nginx + WordPress 24/7 at home. I also have a laptop running Ubuntu for about 5 hours every night.
The WordPress on the RPi works great, but it's slow (especially the PHP parts). So I would like to let the laptop help:
If the laptop is on, the RPi's nginx forwards all requests to Ubuntu by reverse proxy;
If the laptop is off, the RPi's nginx processes the request as usual.
I wonder if it's possible to achieve this? If yes, how should I configure the RPi and Ubuntu?
The basic solution is to make nginx a reverse proxy with a fail_timeout: when it receives a request, it dispatches it to the upstreams, where Ubuntu has higher priority; if Ubuntu is offline, the RPi handles the request by itself.
This requires:
MySQL must be accessible by two clients with different IPs, which is already supported;
the WordPress files must be the same on the RPi and Ubuntu, which can be done with an NFS share;
nginx must be correctly configured.
Below are the details of the configuration.
Note, in my configuration:
the RPi's IP is 192.168.1.100, Ubuntu's IP is 192.168.1.101;
WordPress only allows HTTPS, all HTTP requests are redirected to HTTPS;
the server listens on ports 80 and 443, the upstreams listen on port 8000.
MySQL
Set bind-address = 192.168.1.100 in /etc/mysql/my.cnf, and make sure skip-networking is not defined;
Grant permissions to the RPi and Ubuntu in MySQL's console:
grant all on minewpdb.* to 'mineblog'@'192.168.1.100' identified by 'xxx';
grant all on minewpdb.* to 'mineblog'@'192.168.1.101' identified by 'xxx';
WordPress
Set DB_HOST correctly:
define('DB_NAME', 'minewpdb');
define('DB_USER', 'mineblog');
define('DB_PASSWORD', 'xxx');
define('DB_HOST', '192.168.1.100');
NFS
On the RPi, install nfs-kernel-server, and export the directory in /etc/exports:
/path/to/wordpress 192.168.1.101(rw,no_root_squash,insecure,sync,no_subtree_check)
To enable the NFS server on the RPi, rpcbind is also required:
sudo service rpcbind start
sudo update-rc.d rpcbind enable
sudo service nfs-kernel-server start
On Ubuntu, mount the NFS share (it should also be added to /etc/fstab so it mounts automatically; an example entry follows the command):
sudo mount -t nfs 192.168.1.100:/path/to/wordpress /path/to/wordpress
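A minimal /etc/fstab entry for this (standard NFS syntax, using the same paths as above):

# Mount the RPi's WordPress export at boot
192.168.1.100:/path/to/wordpress  /path/to/wordpress  nfs  defaults  0  0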
Nginx
On the RPi, create a new config file /etc/nginx/sites-available/wordpress-load-balance with the parameters below:
upstream php {
    server unix:/var/run/php5-fpm.sock;
}

upstream mineservers {
    # upstreams; Ubuntu has much higher priority
    server 192.168.1.101:8000 weight=999 fail_timeout=5s max_fails=1;
    server 192.168.1.100:8000;
}

server {
    listen 80;
    server_name mine260309.me;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name mine260309.me;

    ssl_certificate /path/to/cert/cert_file;
    ssl_certificate_key /path/to/cert/cert_key_file;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    access_log /path/to/wordpress/logs/proxy.log;
    error_log /path/to/wordpress/logs/proxy_error.log;

    location / {
        # reverse-proxy to upstreams
        proxy_pass http://mineservers;

        ### force timeouts if one of the backends dies ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        ### Most PHP, Python, Rails, Java apps can use this header ###
        #proxy_set_header X-Forwarded-Proto https;##
        #This is better##
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        ### By default we don't want to redirect it ####
        proxy_redirect off;
    }
}

server {
    root /path/to/wordpress;
    listen 8000;
    server_name mine260309.me;

    ... # normal wordpress configurations
}
On Ubuntu, it can use the same config file.
Now any request received by the RPi's nginx server on port 443 is dispatched to either Ubuntu's or the RPi's port 8000, where Ubuntu has much higher priority. If Ubuntu is offline, the RPi itself handles the request.
Any comments are welcome!
