I got Caddy from the official repo on Docker Hub up and running, with automatic HTTPS on several subdomains. So far so good.
sub1.domain.com {
    respond "Test"
}

https://sub1.domain.com:3333 {
    reverse_proxy 192.168.7.6:3000
}

https://sub1.domain.com:4444 {
    reverse_proxy 192.168.7.6:4000
}

sub2.domain.com {
    respond "Test"
}
There are two things I do not understand.
1) I would rather have the proxy work on subdirectories that forward to ports, but this fails, as the directory seems to be kept while proxying. Example:
https://sub1.domain.com:4444 {
    reverse_proxy /dir/ 192.168.7.6:4000
}
So I eventually end up at 192.168.7.6:4000/dir/ instead of just 192.168.7.6:4000.
2) When I call sub2.domain.com combined with a port from sub1, for example sub2.domain.com:4444, it shows a blank page (the source is empty as well). I would rather expect a timeout or an error page?
Many thanks for hints and suggestions in advance!
Matching requests does not rewrite them. So, matching on /dir/ does not change the URI of the request. It's simply a filter.
To strip a path prefix, you can do:
uri strip_prefix /dir
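Putting the matcher and the strip together, a minimal sketch using the host and port from the question (the route block here is my illustration, not quoted from the question) might look like:

```caddyfile
https://sub1.domain.com:4444 {
    # Only handle requests under /dir/, strip the prefix, then proxy
    route /dir/* {
        uri strip_prefix /dir
        reverse_proxy 192.168.7.6:4000
    }
}
```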
Since this is pretty common, there's some work to make this even easier in the future: https://github.com/caddyserver/caddy/pull/3281
For more help, feel free to ask on our forums, the audience there is much better targeted for Caddy users: https://caddy.community
I ran into a situation today. Please share your expertise 🙏
I have a project (my-app.com) and one of the features is to generate a status page consisting of different endpoints.
Current Workflow
User logs into the system.
User creates a status page for one of his sites (e.g. Google) and adds the endpoints and components to be included on that page.
The system generates a link for the given status page.
For example: my-app.com/status-page/google
But the user may want to see this page on his own custom domain.
For example: status.google.com
Since this is a custom domain, we need on-demand TLS. For this feature I used Caddy, and it is working fine. Caddy runs on our subdomain status.myserver.com, and the user's custom domain status.google.com has a CNAME to status.myserver.com.
Besides on-demand TLS, I also need to reverse proxy as shown below.
For example: status.google.com ->(CNAME)-> status.myserver.com ->(REVERSE_PROXY)-> my-app.com/status-page/google
But Caddy supports only a protocol, host, and port format for reverse proxy upstreams (like my-app.com), whereas my requirement is to proxy to a path, my-app.com/status-page/google. How can I achieve this? Is there a better alternative to Caddy, or a workaround with Caddy?
You're right: since you can't use a path in a reverse_proxy upstream URL, you'd have to rewrite the request to include the path first, before initiating the reverse proxy.
Additionally, upstream addresses cannot contain paths or query strings, as that would imply simultaneously rewriting the request while proxying, which is not defined or supported behavior. You may use the rewrite directive should you need this.
So you should be able to use an internal Caddy rewrite to add the /status-page/google path to every request, and then simply use my-app.com as your reverse_proxy upstream. That could look like this:
https:// {
    rewrite * /status-page/google{path}?{query}
    reverse_proxy http://my-app.com
}
You can find out more about all the possible reverse_proxy upstream addresses here: https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#upstream-addresses
However, since you probably can't hard-code the name of the status page (/status-page/google) in your Caddyfile, you could set up a script (e.g. at /status-page) that looks at the requested URL, looks up the domain (e.g. status.google.com) in your database, and outputs the correct status page.
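The domain-to-page lookup that script performs can be sketched like this. Everything here is hypothetical: the table would really live in your database, and the domain names and the not-found sentinel are invented for illustration.

```python
# Hypothetical sketch: resolve the incoming Host header to the
# internal status-page path. In production this dict would be a
# database lookup keyed on the customer's custom domain.
CUSTOM_DOMAINS = {
    "status.google.com": "/status-page/google",
    "status.example.org": "/status-page/example",
}

def resolve_status_path(host: str) -> str:
    # Normalize: lowercase and drop any :port suffix from the Host header
    domain = host.lower().split(":")[0]
    # Fall back to a not-found page for unknown domains
    return CUSTOM_DOMAINS.get(domain, "/status-page/not-found")

print(resolve_status_path("status.google.com"))  # /status-page/google
```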
I am struggling to find a solution with reverse proxy.
The goal is to dynamically reroute incoming requests based on the URI path, e.g.:
https://a.b.c/23432/IP.IP.IP.IP.IP/Path should be proxied to https://IP.IP.IP.IP:23432/Path
While it is working at first sight with
location ~ ^/(?<targetport>([0-9]+)?)/(?<targethost>[^/]+) {
    proxy_pass http://$targethost:$targetport;
    [...]
in the end, only the first element (index.html) is served correctly. The requests made by that page (say, js/my.js) obviously lose the path prefix and are issued against https://a.b.c/js/my.js, so they fail to be served.
I tried using http_referer (even reverse-proxying the request based on it), but it doesn't help, as I am unable to reparse it correctly.
What am I missing here?
Thanks for your help
Problem solved: the proxied site was prefixing all resources with /, which discarded the initial path.
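If changing the proxied site isn't an option, one workaround sketch (assuming nginx was built with ngx_http_sub_module; the directives below are my illustration, not from the thread) is to rewrite the root-relative links in the response body so follow-up requests keep the port/host prefix:

```nginx
location ~ ^/(?<targetport>([0-9]+)?)/(?<targethost>[^/]+) {
    proxy_pass http://$targethost:$targetport;

    # Rewrite root-relative links so the browser keeps the
    # /<port>/<host>/ prefix on subsequent requests
    sub_filter 'href="/' 'href="/$targetport/$targethost/';
    sub_filter 'src="/'  'src="/$targetport/$targethost/';
    sub_filter_once off;

    # sub_filter only operates on uncompressed responses
    proxy_set_header Accept-Encoding "";
}
```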
I'm running a fairly well-used CDN system on Nginx, and I need to secure my links so that they aren't shared between users.
The current config works perfectly:
# Setup secure links
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr secret";

if ($secure_link = "")  { return 403; }
if ($secure_link = "0") { return 410; }
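For context, the matching client-side token for this scheme is the base64url-encoded MD5 of the same string the config hashes. A sketch in Python (the secret, URI, and IP here are placeholders; the secret must match the one in secure_link_md5):

```python
import base64
import hashlib

SECRET = "secret"  # placeholder: must match the string in secure_link_md5

def make_token(expires: int, uri: str, remote_addr: str) -> str:
    """Build the token nginx's secure_link module verifies:
    base64url(md5("<expires><uri><addr> <secret>")), '=' padding stripped."""
    raw = f"{expires}{uri}{remote_addr} {SECRET}".encode()
    digest = hashlib.md5(raw).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

token = make_token(2147483647, "/file.zip", "192.168.0.25")
print(f"/file.zip?md5={token}&expires=2147483647")
```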
However, with the internet going ever more mobile, and with many users now coming from university campuses etc., I'm seeing tons of failed requests and annoyed end users, because the requester's IP has changed between requests.
The requesting IP is almost always in the same range, so for example:
Original Request: 192.168.0.25
File Request: 192.168.0.67
I'd be happy to lock these secure links down to a range, such as
192.168.0.0 - 192.168.0.255
or go even further and widen it to
192.168.0.0 - 192.168.255.255
but I can't figure out how to do this in nginx, or whether the secure_link feature even supports it.
If this isn't possible, does anyone have other ideas for securing links that would be less restrictive but still reasonably safe? I looked at using the browser string instead, but many of our users have download managers or use third-party desktop clients, so this isn't viable.
I'm very much trying to do this without any dynamic code that checks a remote database, as this is very high volume and I'd rather not have that dependency.
You can use more than one auth directive within Nginx, so you could drop the IP from the secure link and specify that as a separate directive.
Nginx uses CIDR notation, so your first range (192.168.0.0 - 192.168.0.255) would simply be
allow 192.168.0.0/24;
deny all;
and the wider range (192.168.0.0 - 192.168.255.255) would be allow 192.168.0.0/16;
You can use the map approach
map $remote_addr $auth_addr {
    default          $remote_addr;
    ~*^192\.168\.100 192.168.100;
    ~*^192\.169      192.169;
}
And then later use something like
secure_link_md5 "$secure_link_expires$uri$auth_addr secret";
I have not used this approach myself, but I assume it should work. If it doesn't, please let me know.
I managed to get this working; thanks to @Tarun Lalwani for pointing out the maps idea.
# This map breaks down $remote_addr into octets
map $remote_addr $ipv4_first_two_octets {
    "~(?<octet1>\d+)\.(?<octet2>\d+)\.(?<octet3>\d+)\.(?<octet4>\d+)" "${octet1}.${octet2}";
    default "0.0";
}

location / {
    # Setup secure links
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$ipv4_first_two_octets secret";
}
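The signing side then has to hash only the first two octets, so any two clients in the same /16 get the same token. A sketch mirroring the map above (secret and sample values are placeholders):

```python
import base64
import hashlib
import re

SECRET = "secret"  # placeholder: must match the string in secure_link_md5

def first_two_octets(addr: str) -> str:
    # Mirrors the nginx map: keep only the first two octets of an IPv4
    # address, falling back to "0.0" like the map's default
    m = re.match(r"(\d+)\.(\d+)\.(\d+)\.(\d+)$", addr)
    return f"{m.group(1)}.{m.group(2)}" if m else "0.0"

def make_token(expires: int, uri: str, remote_addr: str) -> str:
    raw = f"{expires}{uri}{first_two_octets(remote_addr)} {SECRET}".encode()
    return base64.urlsafe_b64encode(hashlib.md5(raw).digest()).rstrip(b"=").decode()

# Two clients in the same /16 now receive the same signature
a = make_token(2147483647, "/file.zip", "192.168.0.25")
b = make_token(2147483647, "/file.zip", "192.168.0.67")
print(a == b)
```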
I was looking at how to redirect from HTTP to HTTPS on the Lighttpd website, and it looked really easy (https://redmine.lighttpd.net/projects/1/wiki/HowToRedirectHttpToHttps):
$HTTP["scheme"] == "http" {
    # capture vhost name with regex conditional -> %0 in redirect pattern
    # must be the innermost block relative to the redirect rule
    $HTTP["host"] =~ ".*" {
        url.redirect = (".*" => "https://%0$0")
    }
}
but it doesn't redirect at all.
I have been trying to access the website via www.test.com, http://www.test.com, and http://test.com, but it doesn't seem to work.
It just says ERR_CONNECTION_REFUSED. I have confirmed that the website works over both HTTP and HTTPS without this code, but with it, it doesn't work.
I would like to understand it more since I will have a bunch of other domains routing through here eventually.
I have also tried a more specific variant, which didn't work either:
$HTTP["scheme"] == "http" {
    # capture vhost name with regex conditional -> %0 in redirect pattern
    # must be the innermost block relative to the redirect rule
    $HTTP["host"] =~ "www.test.com" {
        url.redirect = (".*" => "https://%0$0")
    }
}
The code in the question is actually valid. The issue, as pointed out by @gstrauss, is that in order to have redirect capabilities, you need to make sure that module is actually enabled. I looked into the modules.conf file and noticed it was not.
Upon enabling mod_redirect and restarting the server, whether I went to the HTTP or HTTPS version of my site, it forwarded me to the HTTPS version.
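For reference, enabling the module looks like this (a sketch; the file name and path vary by distribution, and on many systems you instead uncomment an existing line in modules.conf):

```
# /etc/lighttpd/modules.conf (or lighttpd.conf)
server.modules += ( "mod_redirect" )
```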
I'm interested in sending folks who go to the root/homepage of my site to another server.
If they go anywhere else (/news or /contact or /hi.html or any of the dozens of other pages) they get proxied to a different server.
Since "/" is the nginx catch-all that sends anything not otherwise defined to a particular server, and since "/" also represents the homepage, you can see my predicament.
Essentially the root/homepage is its own server. Everything else is on a different server.
Thoughts?
location = / {
    # only "/" requests
}

location / {
    # everything else
}
More information: http://nginx.org/en/docs/http/ngx_http_core_module.html#location
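Filled in with hypothetical upstream addresses (the ports here are invented for illustration), the two blocks could look like:

```nginx
# Exact match: only the homepage goes to the homepage server
location = / {
    proxy_pass http://127.0.0.1:8080;
}

# Prefix match: everything else (/news, /contact, /hi.html, ...)
location / {
    proxy_pass http://127.0.0.1:9090;
}
```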