NGINX Reverse Proxy - Completely "shadowing" a URL subtree - nginx

I have two servers: an application server (backend) listening at example.com:8000 and an NGINX reverse proxy (frontend) listening at example.com:443.
The easy part is to configure NGINX so that all requests get proxied through to the backend system:
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8000;
}
This works like a charm; however, there is an additional requirement that I could not figure out how to realize yet. There is a subtree of URLs at the backend system considered "ugly", say example.com:8000/my/ugly/path/, that should never be seen by the end user but instead be "replaced" by example.com:443/pretty-path/. The problem is that the backend system does not know about the "shadowing" of this subtree and generates URLs containing /my/ugly/path/ (both in HTTP headers and HTML content). So what is the best way to make /my/ugly/path/ transparently disappear?

You can use the sub_filter directive. Note that nginx must be built to support this feature: run nginx -V to display the configured options and look for --with-http_sub_module, or test your config file with nginx -t. Untested sample config below!
location / {
sub_filter 'www.example.com/my/ugly/path'
'www.example.com/pretty/path';
sub_filter_once off;
}
Ref. nginx sub_filter documentation here.
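To also cover the path mapping itself (not just the links in the response body), something along these lines could work. This is an untested sketch: the /pretty-path/ location name and the upstream address are taken from the question, proxy_redirect handles the Location headers, and sub_filter handles the HTML.
location /pretty-path/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# sub_filter only works on uncompressed responses
proxy_set_header Accept-Encoding "";
# map /pretty-path/... onto the backend's /my/ugly/path/...
proxy_pass http://127.0.0.1:8000/my/ugly/path/;
# rewrite redirects (Location headers) coming back from the backend
proxy_redirect http://127.0.0.1:8000/my/ugly/path/ /pretty-path/;
proxy_redirect /my/ugly/path/ /pretty-path/;
# rewrite links inside the HTML
sub_filter '/my/ugly/path/' '/pretty-path/';
sub_filter_once off;
}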

Related

How to rewrite locations in nginx reverse proxy to load owncloud page perfectly?

I'm pretty inexperienced with reverse proxies, so my question may be really lame; sorry for that.
I'm trying to reach my ownCloud server through an nginx reverse proxy, but the page doesn't load properly.
I have an NGINX reverse-proxy server using multiple locations. I would like to set up new public access to my ownCloud server, which is located on another machine running Apache.
I would like to use https://my_public_url/owncloud to reach my ownCloud server, so I made the location block like this:
When I'm using
location / {
proxy_pass http://my_owncloudserver_url/;
}
everything is fine.
But in this case:
location /owncloud/ {
proxy_pass http://my_owncloudserver_url/;
}
I get the index.php/login page without any formatting, because the /apps, /core, etc. requests are still made to "https://my_public_url/apps/...", "https://my_public_url/core/...", and so on, instead of "https://my_public_url/owncloud/core/..." where the files are located; these requests don't match the /owncloud/ location and aren't proxied.
I guess I should use rewrite to change the urls of these requests, putting the "/owncloud/" part into the url.
If I'm using a separate location to match with "/core/..." requests, like:
location /core/ {
rewrite ^/core/(.*)$ /owncloud/core/$1 permanent;
}
then it seems to be OK, but I don't want to create a lot of different locations to match all the various requests.
How could I fix this?
I'm running out of ideas, although it must be pretty easy.
Thanks,
sanglee
I'm not sure about ownCloud, but in Nextcloud you have to configure some proxy parameters in config.php: https://docs.nextcloud.com/server/16/admin_manual/configuration_server/config_sample_php_parameters.html#proxy-configurations
Please consider using Nextcloud: it is faster than ownCloud, is fully open source, has more features, and is actively maintained by the community.
Update OWNCLOUD_SUB_URL to "/owncloud" when running the container, or find the substitute config option if you are not running it in a container.
And on nginx config
location /owncloud {
proxy_pass http://my_owncloudserver_url/owncloud;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_buffering off;
}
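If the backend cannot be told about the /owncloud prefix, another option (untested, in the spirit of the sub_filter answers elsewhere in this thread) is to let nginx rewrite the absolute links in the returned pages. The MIME types and link patterns below are assumptions and may need adjusting for ownCloud's markup:
location /owncloud/ {
proxy_pass http://my_owncloudserver_url/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# sub_filter only works on uncompressed responses
proxy_set_header Accept-Encoding "";
sub_filter_types text/html text/css application/javascript;
sub_filter 'href="/' 'href="/owncloud/';
sub_filter 'src="/' 'src="/owncloud/';
sub_filter_once off;
}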

nginx reverse proxy for Mikrotik.com

Hi, we need to cache some sites like mikrotik.com in our system, but this site uses the HTTPS protocol and our caching system can only cache HTTP sites. So we thought that if we used nginx as a reverse proxy, then when a client visits https://www.mikrotik.com our DNS system would redirect them to the nginx local IP and nginx would fetch the data from Mikrotik and serve it to the client over HTTP. Everything works, but if the client visits anything inside the Mikrotik web page they get redirected to the original Mikrotik site. This is a sample of the nginx config:
server
{
listen 80;
server_name mikrotik.com;
location / {
proxy_pass https://mikrotik.com/;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
I guess this is because all the links on the site will still point to https://mikrotik.com/..., which you haven't proxied.
You could try using the Nginx sub_filter module to rewrite all the links within Nginx before you serve them on your network. Adding some directives like this to your location block will get Nginx to rewrite the links from https to http on the fly:
location / {
....
....
sub_filter_types text/html;
sub_filter 'https://mikrotik.com/' 'http://mikrotik.com/';
sub_filter_once off;
sub_filter_last_modified on;
proxy_set_header Accept-Encoding "";
}
I use this config for a similar purpose but the opposite way around, going from http to https. I'm not sure if it will work for you in reverse, since the whole point of SSL is to stop you from doing this kind of thing; it is basically a man-in-the-middle attack.
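Part of the redirect problem may also come from proxy_redirect off; in the question's config: sub_filter only touches the response body, while redirects travel in the Location header. A hedged, untested sketch (host names taken from the question) that rewrites those headers as well:
location / {
proxy_pass https://mikrotik.com/;
# rewrite Location/Refresh headers so redirects stay on the HTTP proxy
proxy_redirect https://mikrotik.com/ http://mikrotik.com/;
proxy_redirect https://www.mikrotik.com/ http://www.mikrotik.com/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}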

Icecast2 running under nginx not able to connect

I want to start by saying that I've looked all over the place to find an answer to this problem, and it just seems like either nobody else ran into it or nobody is doing this. I recently installed icecast2 on my Debian server. I'm completely able to broadcast to my server from my local network by connecting to its local IP on port 8000, and to hear the stream over the internet at radio.example.com since I proxy it with nginx; so far no problems at all. The problem lies in broadcasting to the domain I gave it with nginx, stream.example.com.
I have two theories: one is that the proxy is not giving the source IP to icecast, so it thinks the broadcast comes from 127.0.0.1, and the other is that nginx is doing something strange with the data stream and thus not delivering the correct format to icecast.
Any thoughts? Thanks in advance!
Here is the nginx config
server {
listen 80;
listen [::]:80;
server_name radio.example.com;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_pass http://127.0.0.1:8000/radio;
subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
subs_filter ':80/' '/' gi;
subs_filter '#localhost' '#stream.example.com' gi;
subs_filter 'localhost' $host gi;
subs_filter 'Mount Point ' $host gi;
}
}
server {
listen 80;
listen [::]:80;
server_name stream.example.com;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_pass http://localhost:8000/;
subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
subs_filter ':8000/' ':80/' gi;
subs_filter '#localhost' '#stream.example.com' gi;
subs_filter 'localhost' $host gi;
subs_filter 'Mount Point ' $host gi;
}
}
And this is what I get on icecast error.log
[2018-08-10 14:15:45] INFO source/get_next_buffer End of Stream /radio
[2018-08-10 14:15:45] INFO source/source_shutdown Source from 127.0.0.1 at "/radioitavya" exiting
Not sure how much of this is directly relevant to the OP's question, but here are a few snippets from my config.
These are the basics of my block to serve streams to clients over SSL on port 443.
In the first location block, any requests with a URI other than /ogg, /128, /192 or /320 are rewritten to prevent clients from accessing any output of the Icecast server other than the streams themselves.
server {
listen 443 ssl http2;
server_name stream.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
rewrite ~*(ogg) https://stream.example.com/ogg last;
rewrite ~*([0-1][0-5]\d) https://stream.example.com/128 last;
rewrite ~*(?|([1][6-9]\d)|([2]\d\d)) https://stream.example.com/192 last;
rewrite ~*([3-9]\d\d) https://stream.example.com/320 break;
return https://stream.example.com/320;
}
location ~ ^/(ogg|128|192|320)$ {
proxy_bind $remote_addr transparent;
set $stream_url http://192.168.100.100:8900/$1;
types { }
default_type audio/mpeg;
proxy_pass_request_headers on;
proxy_set_header Access-Control-Allow-Origin *;
proxy_set_header Host $host;
proxy_set_header Range bytes=0-;
proxy_set_header X-Real-IP $remote_addr;
proxy_buffering off;
tcp_nodelay on;
proxy_pass $stream_url;
}
}
Setting proxy_bind with the transparent flag:
allows outgoing connections to a proxied server originate from a
non-local IP address, for example, from a real IP address of a client
This addresses the issue of local IP addresses showing up in your logs/stats instead of client IPs. For this to work you also need to reconfigure your kernel routing tables to capture the responses sent from the upstream server and route them back to Nginx.
This requires root access and a reasonable understanding of Linux networking configuration, which I appreciate not everyone has. I also appreciate not everyone who uses Icecast and might want to reverse proxy will read this. A much better solution would be making Icecast more Nginx friendly, so I had a go.
I cloned Icecast from github and had a look over the code. I may have missed some, but these lines looked relevant to me:
./src/logging.c:159: client->con->ip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(client->con->ip));
For servers which do not support the PROXY protocol, the Nginx default method of passing the client IP upstream is the X-Real-IP header. Icecast seems to use the value of client->con->ip for logging listener IPs. Let's change things up a bit. I added this:
const char *realip;
realip = httpp_getvar (client->parser, "x-real-ip");
if (realip == NULL)
realip = client->con->ip;
And changed the previous lines to this:
./src/logging.c:163: realip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(realip));
Then I built Icecast from source as per the docs. The proxy_set_header X-Real-IP $remote_addr; directive in my Nginx conf passes the client IP. If you have additional upstream servers also handling the request, you will need to add some set_real_ip_from directives specifying each IP, set real_ip_recursive on;, and use $proxy_add_x_forwarded_for, which captures the IP address of each server that handles the request.
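For reference, a minimal sketch of that multi-proxy case. The front proxy address 10.0.0.1 is a made-up example, and this assumes your nginx was built with the ngx_http_realip_module:
# trust X-Forwarded-For from the front proxy and recurse past it
set_real_ip_from 10.0.0.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://ice.cast.ip:port;
}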
Fired up my new Icecast build and this seems to work perfectly. If the X-Real-IP header is set, Icecast logs it as the listener IP; if not, it logs the client request IP, so it should work for both reverse proxy and normal setups. Seems too simple, maybe I missed something #TBR?
OK, so you should now have working listener streams served over SSL with correct stats/logs. You have done the hard bit. Now let's stream something to them!
Since the addition of the stream module to Nginx, handling incoming connections is simple regardless of whether or not they use PUT/SOURCE.
If you specify a server within a stream directive, Nginx will simply tunnel the incoming stream to the upstream server without inspecting or modifying the packets. Nginx streams config lesson 101 is all you need:
stream {
server {
listen pub.lic.ip:port;
proxy_pass ice.cast.ip:port;
}
}
I guess one problem unsuspecting people may encounter with SOURCE connections in Nginx is specifying the wrong port in their Nginx config. Don't feel bad, Shoutcast v1 is just weird. Point to remember is:
Instead of the port you specify in the
client encoder it will actually attempt to connect to port+1
So if you were using port 8000 for incoming connections, either set the port to 7999 in client encoders using the Shoutcast v1 protocol, or set up your Nginx stream directives with 2 blocks, one for port 8000 and one for port 8001.
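A hedged sketch of the two-block variant, using the same placeholder addresses as above and assuming the upstream accepts source connections on both 8000 and 8001:
stream {
server {
listen pub.lic.ip:8000;
proxy_pass ice.cast.ip:8000;
}
# Shoutcast v1 encoders actually connect to port+1
server {
listen pub.lic.ip:8001;
proxy_pass ice.cast.ip:8001;
}
}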
Your Nginx install must be built with the stream module, it's not part of the standard build. Unsure? Run:
nginx -V 2>&1 | grep -qF -- --with-stream && echo ":)" || echo ":("
If you see a smiley face you are good to go. If not you'll need to build Nginx and include it. Many repositories have an nginx-extras package which includes the stream module.
Almost finished; all we need now is access to the admin pages. I serve these from https://example.com/icecast/ but Icecast generates all the URIs in the admin page links from the root path, not including icecast/, so they won't work. Let's fix that using the Nginx sub_filter module to add icecast/ to the links in the returned pages:
location /icecast/ {
sub_filter_types text/xhtml text/xml text/css;
sub_filter 'href="/' 'href="/icecast/';
sub_filter 'url(/' 'url(/icecast/';
sub_filter_once off;
sub_filter_last_modified on;
proxy_set_header Accept-Encoding "";
proxy_pass http://ice.cast.ip:port/;
}
The trailing slash at the end of proxy_pass http://ice.cast.ip:port/; is vitally important for this to work.
If a proxy_pass directive is specified as just server:port, then the full original client request URI is appended and passed to the upstream server. If the proxy_pass has any URI appended (even just /), then Nginx will replace the part of the client request URI that matches the location block (in this case /icecast/) with the URI appended to the proxy_pass. So, by appending a slash, a request to https://example.com/icecast/admin/ will be proxied to http://ice.cast.ip:port/admin/.
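To make the difference concrete, a minimal illustration (the /other/ location is hypothetical, the upstream address is the same placeholder as above):
# with a trailing slash on proxy_pass, /icecast/admin/ reaches the upstream as /admin/
location /icecast/ {
proxy_pass http://ice.cast.ip:port/;
}
# with no URI part, /other/admin/ reaches the upstream as /other/admin/ unchanged
location /other/ {
proxy_pass http://ice.cast.ip:port;
}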
Finally I don't want my admin pages accessible to the world, just my IP and the LAN, so I also include these in the location above:
allow 127.0.0.1;
allow 192.168.1.0/24;
allow my.ip.add.ress;
deny all;
That's it.
sudo nginx -s reload
Have fun.
tl;dr - Don't reverse proxy Icecast.
Icecast, for various reasons, is better not reverse proxied. It is a purpose-built HTTP server, and generic HTTP servers tend to have significant issues with the intricacies of continuous HTTP streaming.
This has been answered repeatedly. People like to try anyway and invariably fail in various ways.
If you need it on port 80/443, then run it on those ports directly
If you already have something running on port 80/443, then use another of the remaining 2^64 IPv6 addresses in your /64, and if you are still using legacy IP, get another address, e.g. by spinning up a virtual server in the cloud.
Need HTTPS? Icecast supports TLS (on Debian and Ubuntu make sure to install the official Xiph.org packages, as the distro packages come without OpenSSL support).
Make sure to put both private and public key into one file.
This line....
subs_filter '#localhost' '#stream.example.com' gi;
Should probably be....
subs_filter '#localhost' '#example.com' gi;
I am not familiar with nginx, so my best guess is that this line links radio.example.com to the main site at example.com. By adding stream.example.com you are confusing it by directing it to a site that doesn't exist.
I got this from a config file posted here:
Anyway, it wouldn't hurt to try it.

Nginx reverse proxy fails to tunnel file requests

I'm running an application inside Tomcat on port 2211, and all is well. However, I would like to serve this application whenever anyone browses to site.com/service, and for that I came up with this Nginx proxy pass setup.
location /service {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_pass https://127.0.0.1:2211;
}
But when I browse to site.com/service I can only see my jetty application in plain HTML. Even though all the files exist on Tomcat, the browser receives a 404 reply for each of them.
I've looked into how the browser is requesting the file, for example:
<img src="/themes/logo.png">
This image, instead of being requested from site.com/service/themes/logo.png, is being requested from site.com/themes/logo.png, which obviously doesn't work, hence the 404. The same happens to all other files: they should be looked for under site.com/service, not in the root folder site.com.
Surely Nginx is missing some configuration parameter; could you point me towards it?
Both nginx and the image are behaving correctly: the proxy is returning the HTML just as it is. The problem is that the Tomcat server thinks it is at the root, so it returns all its assets relative to the root. You can fix this in a couple of ways.
Either change the subfolder /service to a subdomain, service.domain.com; this way the assets will truly be at the root.
Or somehow configure the Tomcat server to return all its links relative to the /service folder; an easy way would be adding a base element inside the head tag:
<base href='http://domain.com/service'>
This way the urls will all be absolute, but that will only make them functional under that proxy.
Instead of modifying your Tomcat application, you can tell nginx to add that element itself by doing some replacement in the returned HTML using sub_filter; it would insert it just before the closing head tag:
sub_filter '</head>' '<base href="$scheme://$server_name/service"></head>';
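Put together as a full location block, it might look something like this. An untested sketch based on the question's config; sub_filter needs an uncompressed upstream response, hence the Accept-Encoding reset:
location /service {
proxy_pass https://127.0.0.1:2211;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Accept-Encoding "";
sub_filter '</head>' '<base href="$scheme://$server_name/service"></head>';
sub_filter_once on;
}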

How should you forward the value of HTTP port when proxying?

So, I have nginx doing reverse proxying to a Rails server. The Rails server has an OAuth login, and the library that handles it builds the callback URL using 'X-Forwarded-Host'. The issue is that when nginx is listening on a port other than 80, the callback URL is not formatted properly. Looking at the configuration, I realized this is because the URL is built from 'X-Forwarded-Host', and the config I used did not include the port in it. I have modified my configuration to the following to make this work:
server {
listen 8081;
server_name app;
location / {
proxy_pass http://app;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host:8081;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
}
}
My question is: what is 'X-Forwarded-Host' actually defined as? Nginx treats 'Http-Host' as the host + port, but I've found around the net that sometimes X-Forwarded-Host is treated as the host only, and there seems to be a header called 'X-Forwarded-Port' that is sometimes used. I couldn't find anything about it in the nginx docs, except that there is a variable available to print in the logs called 'proxy_port', but this is the port being forwarded to, rather than the port the connection was accepted on (which for me is nothing, because I'm using a unix socket). What's the proper solution? Nginx does not let me set an X-Forwarded-Port header manually, and I'm not even sure that I should. Looking around the net, it appears that other HTTP servers treat this header differently, for example:
https://github.com/nodejitsu/node-http-proxy/issues/341
http://mattrobenolt.com/handle-x-forwarded-port-header-in-django/
Some related links:
Someone asserts the definition of Http-Host:
http://ask.wireshark.org/questions/22988/http-host-header-with-and-without-port-number
Someone saying there's no standards for these headers:
What is a full specification of X-Forwarded-Proto HTTP header?
An unanswered, related stack overflow:
https://serverfault.com/questions/536576/nginx-how-do-i-forward-a-http-request-to-another-port
My question is, what is 'X-Forwarded-Host' actually defined as?
It's not defined, which is why things are so inconsistent. I've seen the port specified separately and as part of the host. For what it's worth, specifying it with the host seems common.
I just had a similar question. I think a common way is to specify the port in X-Forwarded-Host header as Daniel Fowler suggested in his answer.
I also saw that sometimes an "unofficial" X-Forwarded-Port header is used.
I created a similar question where I summarize how I think the servers should behave - all possible combinations (also with X-Forwarded-Proto which I think can also have impact).
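For what it's worth, nginx can set that unofficial header itself using the built-in $server_port variable (the port the connection was accepted on). A hedged sketch, not a statement of any standard; the upstream name app is taken from the question:
location / {
proxy_pass http://app;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
}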
The server "app" has nginx on port 8081 and rails on 80.
This maps all requests to http://app:80, and changes the response-header "Location" from http://app:80 to http://app:8081.
server {
listen 8081;
server_name app;
location / {
proxy_pass http://app:80;
proxy_redirect http://app/ http://app:8081/;
proxy_redirect http://app:80/ http://app:8081/;
}
}
