Nginx reverse proxy config whitelist IP

I am using an Nginx reverse proxy with Kubernetes services. The config is the following:
events {
}
http {
    upstream my-service-3000 {
        server my-service:3000;
    }
    server {
        listen 443 ssl;
        server_name myserver.net;
        ssl_certificate /key.pem;
        ssl_certificate_key /key.pem;
        location / {
            allow myIP;
            deny all;
            proxy_pass http://my-service-3000;
        }
    }
    server {
        ...
    }
}
It works fine (doing the reverse proxying, terminating SSL, changing the port, finding the Kubernetes service) until the moment I try to whitelist only my IP. When I try to access the service via https, I get a 403 from Nginx. I've tried moving the allow/deny commands around, but it does not help. Any suggestions where the problem could be?
Also, I am behind a proxy myself, so I am using my organisation's external IP.

The whitelisting should be under the http directive, not under the location directive.
http {
    allow MyIp;
    deny all;
    upstream my-service-3000 {
        server my-service:3000;
    }
    server {
        listen 443 ssl;
        server_name myserver.net;
        ssl_certificate /key.pem;
        ssl_certificate_key /key.pem;
        location / {
            proxy_pass http://my-service-3000;
        }
    }
    server {
        ...
    }
}
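If the 403 persists even with the rules under http, it can help to confirm which client address nginx actually sees, since traffic that arrives through another proxy or a Kubernetes load balancer may not carry the organisation's external IP as $remote_addr. A minimal logging sketch (the log path and format name are placeholders, not part of the original config):
http {
    # Log only the address nginx sees for each request, to compare against the whitelisted IP.
    log_format client_ip '$remote_addr - $request';
    access_log /var/log/nginx/client_ip.log client_ip;
    ...
}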

Related

Can NGINX choose proxy_pass backend based on IP?

I have a situation where we have multiple test environments. Each environment needs to access a different version of a service, and we have an NGINX proxy that sits in front of these different services. Currently we're using multiple servers to do the proxying. Is there a way I can use NGINX allow or deny to filter which backend the environments connect to, based on the remote IP?
The v1 environment has IP addresses in the 10.0.1.0/24 range, and v2 is only connected to by IPs in 10.0.2.0/24.
Current Config
Simplified for brevity.
server {
    listen 80;
    server_name service.v1.net;
    proxy_pass http://10.0.10.56:8081;
}
server {
    listen 80;
    server_name service.v2.net;
    proxy_pass http://10.0.10.56:8082;
}
What I've tried
Clearly this doesn't work.
server {
    listen 80;
    server_name service.net;
    location / {
        # v1 proxy
        allow 10.0.1.0/24;
        deny all;
        proxy_pass http://10.0.10.56:8081;
    }
    location / {
        # v2 proxy
        allow 10.0.2.0/24;
        deny all;
        proxy_pass http://10.0.10.56:8082;
    }
}
Also Note...
I know this can be done by serving the proxy on different ports and using iptables rules - I'm trying to figure out if NGINX can do this by itself.
You can use the ngx_http_geo_module for that (it should just work out of the box). It sets a variable depending on the client IP address, which can then be used in an if.
geo $environment {
    10.0.1.0/24 v1;
    10.0.2.0/24 v2;
}
server {
    listen 80;
    server_name service.net;
    location / {
        if ($environment = v1) {
            proxy_pass http://10.0.10.56:8081;
        }
        if ($environment = v2) {
            proxy_pass http://10.0.10.56:8082;
        }
    }
}
All other IPs will see a 404 in this case.
Although this works, be advised that using if within a location block can be very tricky: http://wiki.nginx.org/IfIsEvil
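A variation that keeps if down to a plain return (one of the patterns the page above considers safe) is to map the geo variable to a backend address and hand that to proxy_pass as a variable. A sketch, assuming the same addresses as above:
geo $environment {
    default     none;
    10.0.1.0/24 v1;
    10.0.2.0/24 v2;
}
map $environment $backend {
    v1      10.0.10.56:8081;
    v2      10.0.10.56:8082;
    default "";
}
server {
    listen 80;
    server_name service.net;
    location / {
        # Clients outside both ranges get an explicit 403 instead of a 404.
        if ($backend = "") {
            return 403;
        }
        proxy_pass http://$backend;
    }
}
Because the backends are plain IP addresses, no resolver is needed even though proxy_pass uses a variable.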

How to combine nginx "stream" and "http" for the same servername?

I would like to handle two server names, say "web1.example.com" and "web2.example.com", on the same port (443) in the same nginx config, where the first should be a local http server and the second needs to be forwarded to an external upstream without terminating the SSL connection.
How do I configure this?
Details:
I can use nginx to look at the first SSL message (ClientHello) and use it to proxy/forward the entire connection without terminating SSL. This can even look at the SNI and choose a different upstream based on the server name in it. This uses the ngx_stream_ssl_preread_module with proxy_pass and ssl_preread on. The config is something like this:
stream {
    upstream web1 {
        server 10.0.0.1:443;
    }
    upstream web2 {
        server 10.0.0.2:443;
    }
    map $ssl_preread_server_name $upstream {
        web1.example.com web1;
        web1-alias.example.com web1;
        web2.example.com web2;
    }
    server {
        listen 443;
        resolver 1.1.1.1;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
This is configured in the stream config section of nginx.
But I can also configure a local http server in the http config section of nginx.
So what if I want web1 ("web1.example.com" in the example) to use such a "local nginx http server", and not an external "upstream server"? ("web2" should still be forwarded as before.) So I want to configure "web1.example.com" in the http config section of nginx, and "forward" to it in the stream config section of nginx.
To be clear, I want "web1.example.com" to be configured like this:
http {
    server {
        listen 443 ssl;
        server_name web1.example.com web1-alias.example.com;
        ssl_certificate ...
        location ...
        ...
    }
}
This all works fine if I do either stream or http listening on the same port. But how do I do both on the same port?
How can I "call" the http config section from the streams config section? Can proxy_pass refer to a local nginx http server somehow?
I don't think you can use both on the same port, but maybe something like this would work?
stream {
    upstream web1 {
        server 127.0.0.1:8443;
    }
    upstream web2 {
        server 10.0.0.2:443;
    }
    map $ssl_preread_server_name $upstream {
        web1.example.com web1;
        web1-alias.example.com web1;
        web2.example.com web2;
    }
    server {
        listen 443;
        resolver 1.1.1.1;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
http {
    server {
        listen 8443 ssl;
        server_name web1.example.com web1-alias.example.com;
        ssl_certificate ...
        location ...
        ...
    }
}
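One caveat with this layout: the local http server will see 127.0.0.1 as the client address for every request. If the real client IP matters, the PROXY protocol can carry it across the internal hop - a sketch, noting that proxy_protocol on applies to everything this stream server forwards, so web2 would have to accept the PROXY header as well:
stream {
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
        # Prepend the PROXY protocol header on every proxied connection.
        proxy_protocol on;
    }
}
http {
    server {
        listen 8443 ssl proxy_protocol;
        server_name web1.example.com web1-alias.example.com;
        # Recover the original client address from the PROXY protocol header.
        set_real_ip_from 127.0.0.1;
        real_ip_header proxy_protocol;
        ...
    }
}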

NGINX location directive in stream

I've installed Nginx on one of my servers in order to use it as a load balancer for my Rancher application.
I based my configuration on the one found here: https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/
And so my config is:
load_module /usr/lib/nginx/modules/ngx_stream_module.so;
worker_processes 4;
worker_rlimit_nofile 40000;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server <ipnode1>:443 max_fails=3 fail_timeout=5s;
        server <ipnode2>:443 max_fails=3 fail_timeout=5s;
        server <ipnode3>:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
My configuration is working as expected, but I've recently installed Nextcloud on my cluster, which is giving me the following errors:
Your web server is not properly set up to resolve “/.well-known/caldav”. Further information can be found in the documentation.
Your web server is not properly set up to resolve “/.well-known/carddav”. Further information can be found in the documentation.
So I would like to add a "location" directive but I'm not able to do it.
I tried to update my config as follows:
...
stream {
    upstream rancher_servers_http {
        ...
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
    upstream rancher_servers_https {
        ...
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But it's telling me
"location" directive is not allowed here in /etc/nginx/nginx.conf:21
Assuming the location directive is not allowed in a stream configuration, I tried to add an http block like this:
...
stream {
    ...
}
http {
    server {
        listen 443;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
    server {
        listen 80;
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But then I got this message:
bind() to 0.0.0.0:443 failed (98: Address already in use)
(same for port 80).
Can someone help me with this? How can I add the location directive without affecting my current configuration?
Thank you for reading.
Edit
Well, it seems that the stream directive prevents me from adding other standard directives. I tried to add client_max_body_size inside server, but I'm having the same issue:
directive is not allowed here
Right now your setup uses nginx as a TCP proxy. Such a configuration passes traffic through without analysis - it can be ssh, rdp, whatever traffic, and it will work regardless of the protocol because nginx does not try to inspect the stream content.
That is the reason why the location directive does not work in the context of streams - it is an HTTP-specific feature.
To take advantage of high-level protocol analysis, nginx needs to be aware of the protocol going through it, i.e. be configured as an HTTP reverse proxy.
For that to work, the server directives should be placed in the http scope instead of the stream scope.
http {
    server {
        listen 0.0.0.0:443 ssl;
        include /etc/nginx/snippets/letsencrypt.conf;
        root /var/www/html;
        server_name XXXX;
        location / {
            proxy_pass http://rancher_servers_http;
        }
        location /.well-known/carddav {
            proxy_pass http://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            proxy_pass http://$host:$server_port/remote.php/dav;
        }
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        location ^~ /.well-known/acme-challenge/ {
            default_type "text/plain";
            root /var/www/letsencrypt;
        }
        root /var/www/html;
        server_name xxxx;
        location / {
            proxy_pass http://rancher_servers_http;
        }
    }
}
A drawback of this approach for you would be the need to reconfigure certificate management.
But you would offload SSL encryption to nginx and gain intelligent balancing based on HTTP requests.
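Note that the rancher_servers_http upstream referenced by proxy_pass above is defined in the stream block in the original configuration; for the http servers to use it, an equivalent upstream block has to exist in the http context as well, for example (mirroring the original definition):
http {
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }
    # ... the server blocks shown above ...
}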

nginx reverse proxy between 2 https servers

I'm a bit new to using nginx, so I'm likely missing something obvious. I'm trying to create an nginx server that will reverse proxy to a set of web servers that use https.
I've been able to get it to work with one server like this:
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;
    location / {
        proxy_pass https://<server1>.herokuapp.com;
    }
}
However, as soon as I try to add in the 'upstream' configuration element, it no longer works.
upstream backend {
    server <server1>.herokuapp.com;
}
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;
    location / {
        proxy_pass https://backend;
    }
}
I've tried adding in 443, but that also fails.
upstream backend {
    server <server1>.herokuapp.com:443;
}
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;
    location / {
        proxy_pass https://backend;
    }
}
Any ideas what I'm doing wrong here?
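One thing that may be going wrong (a sketch, not verified against Heroku): with a named upstream, nginx sends the upstream's name ("backend") as the Host header by default and does not send SNI at all, so Heroku's routers cannot match the request to an app. Pinning both to the real backend hostname might help:
upstream backend {
    server <server1>.herokuapp.com:443;
}
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;
    location / {
        proxy_pass https://backend;
        # Heroku routes on the Host header, so send the app's hostname instead of "backend".
        proxy_set_header Host <server1>.herokuapp.com;
        # Send SNI so the backend's TLS endpoint can present the right certificate.
        proxy_ssl_server_name on;
        proxy_ssl_name <server1>.herokuapp.com;
    }
}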

how can I have nginx require HTTPS for one location, while having it be optional for all others?

Currently I have nginx configured with a single site that serves both HTTP and HTTPS, using two listen directives:
listen 80 default_server;
listen 443 ssl;
I'd like to use this configuration for all locations within the site; however, there is only one location where I would like to require HTTPS.
location / {
    # Both HTTP and HTTPS
}
location /admin {
    # Require HTTPS
}
How would I go about doing this? Are separate HTTP and HTTPS server configs required?
I'd split those into two servers:
server {
    listen 80;
    server_name example.com www.example.com;
    location / {
        # normal config
    }
    location /admin {
        return 301 https://$http_host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    location / {
        # normal config for frontend
    }
    location /admin {
        # admin settings
    }
}
You can split the common config into a separate file and include it in both servers, instead of rewriting both every time you change anything.
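For example, with a shared snippet (the file path is just an assumption):
# /etc/nginx/snippets/common-frontend.conf - shared location config
location / {
    # normal config for frontend
}
Then each server block only needs the include plus its own /admin handling:
server {
    listen 80;
    server_name example.com www.example.com;
    include /etc/nginx/snippets/common-frontend.conf;
    location /admin {
        return 301 https://$http_host$request_uri;
    }
}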
