Any possible way to make secure links for multiple IPs? - nginx

I'm running a fairly well-used CDN system using Nginx and I need to secure my links so that they aren't shared between users.
The current config works perfectly:
# Setup Secure Links
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr secret";
if ($secure_link = "") { return 403; }
if ($secure_link = "0") { return 410; }
However, with the internet going ever more mobile and with many users now coming from university campuses etc., I'm seeing tons of failed requests and annoyed end users, because the requester's IP has changed between requests.
The requesting IP is almost always in the same range, so for example:
Original Request: 192.168.0.25
File Request: 192.168.0.67
I'd be happy to lock these secure links down to a range, such as
192.168.0.0 - 192.168.0.255
or go even further and make it even bigger
192.168.0.0 - 192.168.255.255
but I can't figure out a way to do this in nginx, or if the secure_link feature even supports this.
If this isn't possible - does anyone have any other ideas on how to secure links in a way that would be less restrictive, but still reasonably safe? I had a look at using the browser string instead, but many of our users have download managers or use 3rd-party desktop clients - so this isn't viable.
I'm very much trying to do this without any dynamic code that checks a remote database, as this is very high volume and I'd rather not have that dependency.

You can use more than one auth directive within Nginx, so you could drop the IP from the secure link and enforce the IP range as a separate directive.
Nginx uses CIDR notation, so your larger example (192.168.0.0 - 192.168.255.255) would simply be a case of
allow 192.168.0.0/16;
deny all;
(use allow 192.168.0.0/24; for the narrower 192.168.0.0 - 192.168.0.255 range).
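Putting the two together, a minimal sketch (the location path and "secret" are placeholders): the client IP is dropped from the hash, and the range check is done with allow/deny instead:
location /downloads/ {
    # gate by network range instead of baking the IP into the hash
    allow 192.168.0.0/16;
    deny all;

    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri secret";
    if ($secure_link = "") { return 403; }
    if ($secure_link = "0") { return 410; }
}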

You can use the map approach:
map $remote_addr $auth_addr {
    default          $remote_addr;
    ~*^192\.168\.100 192.168.100;
    ~*^192\.169      192.169;
}
And then later use something like:
secure_link_md5 "$secure_link_expires$uri$auth_addr secret";
I have not used this approach myself, but I am assuming it should work. If it doesn't, please let me know.

I managed to get this working - thanks to @Tarun Lalwani for pointing out the maps idea.
# This map breaks down $remote_addr into octets
map $remote_addr $ipv4_first_two_octets {
    "~(?<octet1>\d+)\.(?<octet2>\d+)\.(?<octet3>\d+)\.(?<octet4>\d+)" "${octet1}.${octet2}";
    default "0.0";
}

location / {
    # Setup Secure Links
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$ipv4_first_two_octets secret";
}
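For completeness, links then have to be generated with the same truncated-IP string. A minimal sketch of the signing side in Python (the URI, client IP and "secret" are placeholders; nginx's secure_link module expects the MD5 as unpadded base64url):
import base64
import hashlib
import time

def make_secure_link(uri: str, client_ip: str, secret: str = "secret", ttl: int = 3600) -> str:
    """Build a link matching: secure_link_md5 "$secure_link_expires$uri$ipv4_first_two_octets secret"."""
    expires = int(time.time()) + ttl
    first_two_octets = ".".join(client_ip.split(".")[:2])  # e.g. "192.168"
    digest = hashlib.md5(f"{expires}{uri}{first_two_octets} {secret}".encode()).digest()
    # nginx compares against base64url without '=' padding
    md5 = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return f"{uri}?md5={md5}&expires={expires}"

print(make_secure_link("/files/video.mp4", "192.168.0.25"))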

Related

Caddy reverse proxy to /dir/ to localnet:port

I got Caddy from the official repo on Docker Hub, all up and running with automatic HTTPS on several subdomains. So far so good.
sub1.domain.com {
    respond "Test"
}
https://sub1.domain.com:3333 {
    reverse_proxy 192.168.7.6:3000
}
https://sub1.domain.com:4444 {
    reverse_proxy 192.168.7.6:4000
}
sub2.domain.com {
    respond "Test"
}
There are two things I do not understand.
1) I would rather have the proxy working on subdirs forwarding to ports, but this fails, as the dir seems to be preserved while proxying. Example:
https://sub1.domain.com:4444 {
    reverse_proxy /dir/ 192.168.7.6:4000
}
So eventually I end up at 192.168.7.6:4000/dir/ instead of just 192.168.7.6:4000.
2) When I call sub2.domain.com combined with a port from sub1, e.g. sub2.domain.com:4444, it shows a blank page (source empty as well). I would rather expect a timeout or an error page?
Many thanks for hints and suggestions in advance!
Matching requests does not rewrite them. So, matching on /dir/ does not change the URI of the request. It's simply a filter.
To strip a path prefix, you can do:
uri strip_prefix /dir
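In context, a sketch of the second site block (assuming Caddy v2 and the upstream from the question):
https://sub1.domain.com:4444 {
    route /dir/* {
        # drop the /dir prefix before proxying
        uri strip_prefix /dir
        reverse_proxy 192.168.7.6:4000
    }
}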
Since this is pretty common, there's some work to make this even easier in the future: https://github.com/caddyserver/caddy/pull/3281
For more help, feel free to ask on our forums, the audience there is much better targeted for Caddy users: https://caddy.community

Certbot /.well-known/acme-challenge

Should I leave the /.well-known/acme-challenge always exposed on the server?
Here is my config for the HTTP:
server {
    listen 80;

    location '/.well-known/acme-challenge' {
        root /var/www/demo;
    }

    location / {
        if ($scheme = http) {
            return 301 https://$server_name$request_uri;
        }
    }
}
This basically redirects all requests to https, except for the acme-challenge (for auto renewal). My question: is it alright to keep location '/.well-known/acme-challenge' always exposed on port 80? Or is it better to comment/uncomment it manually when I need to reissue the certificate? Are there any security issues with that?
Any advice or links to read about this location are appreciated. Thanks!
The acme-challenge location is only needed for verifying that the domain points to this IP address.
You do not need to keep the token available once your certificate has been signed. However, there is not much harm in leaving it available either, as explained by a Certbot engineer:
The token is part of a particular challenge which is no longer active, from the ACME server's point of view, after the server has tried to validate it. It would reveal a little bit of information about how you get certificates, but should not allow someone else to issue certificates for your site or impersonate you.
In case someone finds this helpful, I just asked my hosting customer support and they explained it as follows:
Yes, the ".well-known" folder is automatically created by cPanel in order to validate your domain for AutoSSL purposes. AutoSSL is an added feature of cPanel/WHM which offers you a free SSL certificate for your domains; it's also known as a self-signed SSL certificate. The folder .well-known is created at the time of the domain validation process as part of the AutoSSL installation. And it is not a file that needs to be removed; it does not cause any issue.
The period before the name (.well-known) means it is a hidden directory. If your server gets hacked, the information is available to the hacker.

Allow cross origin application side (no control of API)

I have an application which should be able to read in data from any data source, meaning any API from any domain.
How to get around the Cross-Origin problem when you don't have any control over the API or even the domain it is coming from?
I know that you could simulate the same domain by adding a
location /data/ {
    proxy_pass http://exampleAPIdomain.com/data/;
}
block to allow for a specific API domain (here: exampleAPIdomain.com), but in my case I want to be open to any domain.
Is that even possible?
Yes, that is possible by using a variable in the proxy_pass directive:
proxy_pass $somevariable$request_uri;
You can set the actual host via a header, for example, in which case the directive would be:
proxy_pass $http_someheader$request_uri;
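A minimal sketch, assuming the client names the target in a hypothetical X-Upstream header (note that nginx needs a resolver when proxy_pass contains variables):
location / {
    # required so nginx can resolve upstream hostnames at request time
    resolver 1.1.1.1;

    # hypothetical client header, e.g. X-Upstream: http://exampleAPIdomain.com
    proxy_pass $http_x_upstream$request_uri;
}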
Security note: If you expose this to the internet without some form of authorization, then everybody can use your proxy to proxy anything.

How do I reverse proxy the homepage/root using nginx?

I'm interested in sending folks who go to the root/homepage of my site to another server.
If they go anywhere else (/news or /contact or /hi.html or any of the dozens of other pages) they get proxied to a different server.
Since "/" is the nginx catchall to send anything that's not defined to a particular server, and since "/" also represents the homepage, you can see my predicament.
Essentially the root/homepage is its own server. Everything else is on a different server.
Thoughts?
location = / {
    # only "/" requests
}

location / {
    # everything else
}
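Filled in with proxy_pass, a minimal sketch (the upstream addresses are placeholders):
location = / {
    # exact match: the homepage only
    proxy_pass http://127.0.0.1:8080;
}

location / {
    # prefix match: everything else
    proxy_pass http://127.0.0.1:9090;
}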
More information: http://nginx.org/en/docs/http/ngx_http_core_module.html#location

Nginx: How to test if a cookie is set or not without using 'if'?

I am using the following configuration for NGinx currently to test my app :
location / {
    # see if the 'id' cookie is set, if yes, pass to that server.
    if ($cookie_id) {
        proxy_pass http://${cookie_id}/$request_uri;
        break;
    }

    # if the cookie isn't set, then send him to somewhere else
    proxy_pass http://localhost:99/index.php/setUserCookie;
}
But they say "if is evil". Can anyone show me a way to do the same job without using "if"?
And also, is my usage of "if" buggy?
There are two reasons why "if is evil" as far as nginx is concerned. One is that many howtos found on the internet directly translate htaccess rewrite rules into a series of ifs, when separate servers or locations would be a better choice. The other is that nginx's if statement doesn't behave the way most people expect it to: it acts more like a nested location, and some settings don't inherit as you would expect. Its behavior is explained here.
That said, checking things like cookies must be done with ifs. Just be sure you read and understand how ifs work (especially regarding directive inheritance) and you should be ok.
You may want to rethink blindly proxying to whatever host is set in the cookie. Perhaps combine the cookie with a map to limit the backends.
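For example, a sketch of that map idea (the cookie values and backend addresses are hypothetical):
map $cookie_id $backend {
    default  "";                 # unknown cookie value: no backend
    node-a   192.168.1.10:8080;
    node-b   192.168.1.11:8080;
}
Then proxy_pass http://$backend$request_uri; only reaches hosts you have whitelisted, and an empty $backend fails the request instead of proxying blindly.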
EDIT: If you use names instead of IP addresses in the id cookie, you'll also need a resolver defined so nginx can look up the address of the backend. Also, your default proxy_pass will append the request onto the end of the setUserCookie URL. If you want to proxy to exactly that URL, replace that default proxy_pass with:
rewrite ^ /index.php/setUserCookie break;
proxy_pass http://localhost:99;
