I have my REST APIs configured to work over HTTPS using nginx (the Java APIs are deployed in Tomcat, and nginx handles the DNS mapping). Our testing team has managed to access the APIs using the Burp tool (I assume it lets them connect with SSL verification disabled) and they were able to alter the API response before the client receives it. My nginx server is configured for SSL, with a proxy forwarding setup from HTTP to HTTPS. How can I block API requests that have SSL verification disabled, so that I can stop them from altering the response? Below is my nginx config.
upstream mlljava {
    server 172.31.5.222:8090;
}

server {
    listen 443 ssl;
    server_name mllwebapi.xyz.in www.mllwebapi.xyz.in;
    underscores_in_headers on;
    client_max_body_size 10M;

    ssl_protocols TLSv1.3;
    ssl_certificate /home/ubuntu/175e9.crt;
    ssl_certificate_key /home/ubuntu/key.key;

    location / {
        proxy_pass http://mlljava/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        proxy_pass_request_headers on;
    }
}
Does adding this to the server configuration help?
# force https-redirects
if ($scheme = http) {
    return 401 https://$server_name$request_uri;
}
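For comparison, the conventional way to force the redirect is a separate port-80 server block, sketched below with the server_name values from the config above. Note that a redirect only covers plain-HTTP requests; it cannot detect a client that connects over HTTPS with certificate verification disabled.

server {
    listen 80;
    server_name mllwebapi.xyz.in www.mllwebapi.xyz.in;
    # Permanently redirect all plain-HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}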
I am a newbie and I installed JupyterHub with an nginx reverse proxy on my Ubuntu 18.04 server. I built my own root CA and a self-signed certificate with OpenSSL. HTTPS connections work very well when my root CA is installed on my other computers. I want to block access for the computers that don't have my root CA.
The file /etc/nginx/nginx.conf is untouched, and my config file /etc/nginx/sites-available/jupyter.conf is:
# Top-level http config for websocket headers.
# If Upgrade is defined, Connection = upgrade; if Upgrade is empty, Connection = close.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
# HTTP server to redirect all port 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name 192.168.4.70 mlserver.net localhost;
    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}
# HTTPS server to handle JupyterHub
server {
    listen 443;
    ssl on;
    server_name 192.168.4.70 mlserver.net localhost;

    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    #ssl_stapling on;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }
}
How can I edit this file to block access for computers that don't have the certificate?
Which nginx directive should I add?
Thanks.
I want to block access for the computers that don't have my root CA.
This is not possible. The server has no way to tell whether a client successfully validated the server certificate (i.e. a client that has the root CA) or simply skipped certificate validation (a client that doesn't have it).
One could try to add an HSTS header so that browsers will not simply allow certificate problems to be ignored. But this can also be bypassed on the client side without the server noticing; it just makes things a bit harder.
If you want to control who can access the notebook, you need proper authentication of the clients instead. Knowledge of the root CA is not client authentication.
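If mutual TLS would count as proper client authentication here, a minimal sketch of the relevant nginx directives follows, assuming you issue client certificates from your own CA (the CA file path below is a placeholder):

server {
    listen 443 ssl;
    server_name mlserver.net;

    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;

    # Require clients to present a certificate signed by this CA;
    # connections without a valid client certificate are rejected.
    ssl_client_certificate /etc/ssl/certs/client-ca.crt;   # placeholder path
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}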
I am trying to deploy a Mercure hub on a server.
There is already a Symfony app (REST API) served with Apache2 (and Nginx configured as a reverse proxy). My idea is to keep proxying the API to Apache2 and forward the Mercure subscriptions to the Mercure hub (a Caddy server).
Everything is OK for the API part, but I cannot get Nginx and Caddy to work together. Note that I can reach the hub successfully when it is not behind Nginx. I use a custom certificate and, for some reason, each time I try to subscribe to the hub, I get this error:
DEBUG http.stdlib http: TLS handshake error from 127.0.0.1:36250: no certificate available for '127.0.0.1'
If I modify my Nginx configuration with proxy_pass https://mydomain:3000; instead of proxy_pass https://127.0.0.1:3000;, the error becomes:
DEBUG http.stdlib http: TLS handshake error from PUBLIC-IP:36250: no certificate available for 'PRIVATE-IP'
There is no further explanation in the Caddy or Nginx logs.
My guess is that Nginx does not pass the requested domain on to Caddy, but I don't know why, as I correctly applied the configuration instructions I found in the specification. Any help would be appreciated, thank you!
Caddy.dev config
{
    # Debug mode (disable it in production!)
    {$DEBUG:debug}

    # Port update
    http_port 3001
    https_port 3000

    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}

{$SERVER_NAME:localhost}

log

tls /path-to-certificate/fullchain.pem /path-to-certificate/privkey.pem

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}

        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}

        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}

        # Permissive configuration for the development environment
        cors_origins http://localhost
        publish_origins *
        demo
        anonymous
        subscriptions

        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
Nginx virtual host config
server {
    listen 80 http2;
    server_name mercure-hub-domain.com;
    return 301 https://mercure-hub-domain.com;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mercure-hub-domain.com;

    ssl_certificate /path-to-certificate/fullchain.pem; # managed by Certbot
    ssl_certificate_key /path-to-certificate/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass https://127.0.0.1:3000;
        proxy_read_timeout 24h;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 300s;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Log configuration
    access_log /var/log/nginx/my-project/access.log;
    error_log /var/log/nginx/my-project/error.log;
}
Command to launch the Mercure hub
sudo SERVER_NAME='mercure-hub-domain.com:3000' DEBUG=debug MERCURE_PUBLISHER_JWT_KEY='MY-KEY' MERCURE_SUBSCRIBER_JWT_KEY='MY-KEY' ./mercure run -config Caddyfile.dev
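If that guess is right, one thing worth checking is the SNI that nginx sends when it proxies over TLS; by default it does not send the upstream hostname at all. A sketch of the relevant directives (an assumption, not a confirmed fix), applied to the existing location block:

location / {
    proxy_pass https://127.0.0.1:3000;

    # Send the hub's hostname as SNI instead of the upstream IP,
    # so Caddy can select the certificate for mercure-hub-domain.com.
    proxy_ssl_server_name on;
    proxy_ssl_name mercure-hub-domain.com;

    proxy_set_header Host $host;
}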
Is it possible to have NiFi with user authentication but with SSL termination on NGINX? I have NGINX running on port 443 and a proxy_pass to NiFi on port 8080. I played around with these headers:
X-ProxyScheme - the scheme to use to connect to the proxy
X-ProxyHost - the host of the proxy
X-ProxyPort - the port the proxy is listening on
X-ProxyContextPath - the path configured to map to the NiFi instance
But it seems impossible to get NiFi to recognise that it is behind an HTTPS proxy. I updated my auth configuration, but NiFi still throws an error:
IllegalStateException: User authentication/authorization is only supported when running over HTTPS.. Returning Conflict response.
java.lang.IllegalStateException: User authentication/authorization is only supported when running over HTTPS
Basically HTTPS to nginx, then to the HTTP port for NiFi.
I am not familiar with NiFi, but on RHEL with nginx, the config below gives me a reverse proxy with the HTTPS connection terminated in nginx and an onward HTTP connection to an /abc_end_point. Perhaps you can use this as a template?
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate "/etc/pki/tls/certs/abc.com.crt";
    ssl_certificate_key "/etc/pki/tls/private/abc.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers PROFILE=SYSTEM;
    ssl_prefer_server_ciphers on;

    proxy_connect_timeout 7d;
    proxy_send_timeout 7d;
    proxy_read_timeout 7d;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /abc_end_point {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:9090/abc_end_point;
    }
}
You are trying to set up NiFi with SSL offloading on the reverse proxy (nginx) - this kind of setup is not supported.
See: http://apache-nifi-users-list.2361937.n4.nabble.com/Nifi-and-SSL-offloading-td7790.html#a7799
I recommend using TLS (HTTPS) between the reverse proxy and NiFi as well.
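A sketch of what that could look like on the nginx side, assuming NiFi is switched back to HTTPS (the port 8443 below is a placeholder) and using the proxy headers listed in the question:

location / {
    # Onward connection to NiFi over HTTPS instead of plain HTTP
    proxy_pass https://localhost:8443;

    # Tell NiFi how the original client reached the proxy
    proxy_set_header X-ProxyScheme https;
    proxy_set_header X-ProxyHost $host;
    proxy_set_header X-ProxyPort 443;
    proxy_set_header X-ProxyContextPath /;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}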
I am trying to set up nginx + Keycloak to protect my Spring Boot microservices.
I hope to configure the following structure:
internet ---https---> nginx --http--> keycloak and other protected microservices
keycloak <----http----> zuul and other microservices
I could not figure out how to set up nginx to correctly pass the original HTTPS request to the HTTP backend servers.
Here are my configurations:
NGINX
server {
    listen 443 ssl default_server;
    server_name localhost;

    ssl_certificate /etc/nginx/localhost.crt;
    ssl_certificate_key /etc/nginx/localhost.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/ledgerrun.access.log;

    proxy_set_header Host $host:443;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # keycloak redirect
    location /auth {
        proxy_pass https://keycloak:8443;
    }

    location / {
        proxy_pass http://192.168.1.15:15700;
    }

    # all /lr-servies redirect to 15700
    #location ~* ^/lr-(.*) {
    #    proxy_pass https://192.168.1.15:15700;
    #}
}
The Keycloak instance listens on 8443 for HTTPS and 8080 for HTTP.
The Spring Cloud Zuul config is as follows:
server:
  port: "15700"

keycloak:
  auth-server-url: "https://my.external.ip:8443/auth"
  realm: "LedgerRunCTP"
  public-client: true
  resource: gateway-service
  security-constraints[0]:
    authRoles[0]: "user"
    securityCollections[0]:
      name: "internal services"
      patterns[0]: "/swagger-ui.html"
I have to configure the auth-server-url with HTTPS, since the client needs to log in with Keycloak first and that requires HTTPS for non-local connections. Is there a way to just use HTTP for Keycloak?
With the above configuration, once Keycloak has authenticated the user, I get the following error:
The plain HTTP request was sent to HTTPS port
What is the correct configuration to achieve:
only HTTPS to nginx
all HTTP behind nginx
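A sketch of one way to keep everything behind nginx on plain HTTP, proxying /auth to Keycloak's HTTP port instead of 8443. This assumes Keycloak is configured to trust the proxy's forwarded headers (proxy address forwarding enabled on the Keycloak side); otherwise Keycloak will build its redirect URLs with the wrong scheme.

location /auth {
    # Keycloak's plain-HTTP listener
    proxy_pass http://keycloak:8080;

    # Let Keycloak reconstruct the original external HTTPS URL
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Port 443;
}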
I have set up an nginx reverse proxy on my web server. It receives SSL traffic and proxies it to port 8080 on the same server, an exposed port running the Nextcloud Docker image. I am able to log in from a desktop web browser, but not from my iPhone. When I log in from the app, I receive the error message "Access Forbidden, Invalid Request." This GitHub issue identifies the cause as auth headers being removed from the request, though the solution it gives is for Apache, not nginx. I'm really not familiar with authorization headers. How would I modify my nginx server block to take care of the issue?
Current setup
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name cloud.foo.com;

    ssl_certificate /etc/letsencrypt/live/cloud.foo.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.foo.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
You may need to add a setting to explicitly pass the Authorization header in the response from the proxied server.
For example:
location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass_header Authorization;
}
Based on the reverse proxy settings I've seen for another authenticated service, it appears that by default nginx does not pass the Authorization header from the proxied server's response back to the client. Although this is not listed in the documentation, it is probably done to avoid interference with the authentication modules.
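If the header is instead being dropped from the request on its way to Nextcloud, which is what the GitHub issue mentioned in the question describes, a complementary sketch (an assumption, not a confirmed fix) is to forward it explicitly:

location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;

    # Explicitly forward the client's Authorization request header upstream
    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;
}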