Restrict access to a static resource on nginx behind Alteon

I have nginx with two resources, /public_url and /private_url, behind an Alteon load balancer. I can restrict access to /private_url using nginx.conf:
location /private_url {
    include whitelist.conf;
    deny all;
}
Note that /public_url remains available from any IP address.
This works when I access nginx directly. However, when I access nginx through the Alteon, all I can see is the Alteon's IP address, so I cannot distinguish whitelisted from blacklisted clients.
What is the right way to set up IP filtering for /private_url, while still serving /public_url to everyone, when nginx is behind an Alteon?

By default, a load balancer replaces the client's connection information when it passes the request to an upstream: the upstream sees the load balancer's IP address instead of the client's. With nginx as the load balancer you would add proxy_set_header X-Real-IP $remote_addr;. I'm not sure about Alteon, but I found this link explaining how to insert an X-Forwarded-For header for load-balanced traffic: https://support.radware.com/app/answers/answer_view/a_id/15085/~/how-to-insert-x-forwarded-header-for-piped-server-load-balanced-traffic-to-real
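Once the Alteon inserts that header, nginx can be told to treat the forwarded address as the real client address, so the existing allow/deny rules work unchanged. A minimal sketch, assuming nginx was built with the realip module and using 10.0.0.5 as a stand-in for the Alteon's address:

# Trust only the load balancer, and take the client IP from its header.
set_real_ip_from 10.0.0.5;
real_ip_header   X-Forwarded-For;

location /private_url {
    include whitelist.conf;  # allow ...; entries for trusted clients
    deny all;                # everyone else gets 403
}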

Related

Enable reverse proxy and block access to the original port

I am hosting an app (Kibana) on port 5601. I want to restrict access to it by whitelisting IPs, so I am trying to host it behind Nginx. Below is my Nginx conf.
server {
    listen *:5700;
    server_name _;
    allow 10.20.30.40; # My IP
    deny all;

    location / {
        proxy_pass http://localhost:5601;
    }
}
It works in that only I can access the app on port 5700 and everyone else gets a 403. However, others can go directly to localhost:5601 and bypass the whole security. How do I stop direct access to port 5601?
localhost:5601 is a connection only accessible to users/processes running on the same host that is running Nginx & Kibana. It needs to be there so that Nginx can proxy_pass traffic to Kibana.
However, I think you are talking about external users also connecting to port 5601 from remote systems.
Kibana does not need to listen for traffic from external systems on port 5601. Note that at least some Kibana installs do not listen to external systems by default, so you may not need to make any changes.
However, to be sure:
Edit your kibana.yml file (possibly /etc/kibana/kibana.yml)
Ensure that server.host: "localhost" is the only server.host line and is not commented out
Restart Kibana
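For reference, the relevant kibana.yml line is the one below; with it in place, Kibana binds only to the loopback interface:

server.host: "localhost"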
To further manage your system using best practices, I would strongly recommend operating some form of firewall and only opening access to the ports and protocols which you expect external users to need.
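As a sketch of that approach, assuming iptables is the firewall in use (adjust for firewalld, ufw, etc.):

# Drop TCP traffic to Kibana's port unless it arrives on the loopback interface.
iptables -A INPUT -p tcp --dport 5601 ! -i lo -j DROP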

nginx redirect subdomain to separate server ip

I have a dynamic IP which I manage using ddclient. I use no-ip to maintain the hostnames to point to my IP.
I have www.somename.com, sub.somename.com and app.somename.com. Obviously, these all point to my IP. The first two are a couple of wordpress pages on a server (server1) running NGINX, with separate configs in sites-available for each site. The latter is a separate application server (server2) running GitLab.
My router does not allow me to route based on subdomain, so all port 80 traffic is routed to server1. I'm hoping there is a config I can apply in nginx that will allow me to send all traffic for app.somename.com to a local IP address on my network (192.168.0.nnn), but keep the address shown in the browser as app.somename.com.
Right now, I have:
/etc/nginx/sites-available$ ls
somename.com  domain  sub.somename.com  app.somename.com
The relevant ones are linked in sites-enabled. For the app server, I have:
server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
    }
}
The problem is that in the browser address bar, this results in:
http://192.168.0.16/some/pages
Where I want:
http://app.somename.com/some/pages
How do I resolve this?
You could try it like this:
server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
        # Preserve the original hostname so the upstream generates
        # redirects and absolute links against it.
        proxy_set_header Host app.somename.com;
    }
}
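Since server_name already matches, an equivalent and slightly more general form is to forward whatever hostname the client sent, using nginx's built-in $host variable; the same block then keeps working if more names are added later:

proxy_set_header Host $host;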

Real life usage of the X-Forwarded-Host header?

I've found some interesting reading on the X-Forwarded-* headers, including the Reverse Proxy Request Headers section in the Apache documentation, as well as the Wikipedia article on X-Forwarded-For.
I understand that:
X-Forwarded-For gives the address of the client which connected to the proxy
X-Forwarded-Port gives the port the client connected to on the proxy (e.g. 80 or 443)
X-Forwarded-Proto gives the protocol the client used to connect to the proxy (http or https)
X-Forwarded-Host gives the content of the Host header the client sent to the proxy.
These all make sense.
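For example, for a client at 203.0.113.9 that requested https://example.com/ through a proxy, the forwarded request might carry headers like these (illustrative values):

X-Forwarded-For: 203.0.113.9
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Host: example.com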
However, I still can't figure out a real life use case of X-Forwarded-Host. I understand the need to repeat the connection on a different port or using a different scheme, but why would a proxy server ever change the Host header when repeating the request to the target server?
If you use a front-end service like Apigee in front of your APIs, you will need something like X-Forwarded-Host to understand what hostname was used to connect to the API. Apigee gets configured with whatever your backend DNS is, so nginx and your app stack only see the Host header as your backend DNS name, not the hostname that was called in the first place.
This is the scenario I worked on today:
Users access an application server using the URL "https://neaturl.company.com", which points to a reverse proxy. The proxy terminates SSL and forwards users' requests to the actual application server, which has the URL "http://192.168.1.1:5555". The problem: when the application server needed to redirect the user to another page on the same server using an absolute path, it built the URL from the latter address, which users cannot reach. Using X-Forwarded-Host (plus X-Forwarded-Proto and X-Forwarded-Port) allowed our proxy to tell the application server which URL the user originally used, and the server started generating correct absolute paths in its responses.
In this case there was no option to stop the application server from generating absolute URLs, nor to configure a "public URL" for it manually.
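The proxy in that scenario was not necessarily nginx, but in nginx syntax the idea would look roughly like this, reusing the names and addresses from the description above (certificate directives omitted for brevity):

server {
    listen 443 ssl;                          # the proxy terminates SSL here
    server_name neaturl.company.com;

    location / {
        proxy_pass http://192.168.1.1:5555;  # the internal application server
        # Tell the app what the user originally requested.
        proxy_set_header X-Forwarded-Host  $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port  $server_port;
    }
}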
I can tell you a real life issue, I had an issue using an IBM portal.
In my case the problem was that the IBM portal has a REST service which retrieves a URL for a resource, something like:
{"url":"http://internal.host.name/path"}
What happened?
Simple: when you come in from the intranet, everything works fine because internal.host.name resolves, but when the user comes in from the internet, the host name cannot be resolved and the portal breaks.
The fix for the IBM portal was to read the X-FORWARDED-HOST header and then change the response to something like:
{"url":"http://internet.host.name/path"}
Note that the second response says internet, not internal.
As for the need for X-Forwarded-Host, I can think of a virtual hosting scenario where there are several internal hosts (on an internal network) and a reverse proxy sitting between those hosts and the internet. The requested host name resolves to the reverse proxy's IP, so the web browser sends the request to the reverse proxy. The reverse proxy finds the appropriate internal host and forwards the client's request to it. In doing so, the reverse proxy changes the Host field to match the internal host and sets X-Forwarded-Host to the host the client actually requested. More details on reverse proxies can be found on the Wikipedia page http://en.wikipedia.org/wiki/Reverse_proxy.
Check this post for details on the X-Forwarded-For header, along with a simple demo Python script that shows how a web server can detect the use of a proxy server: x-forwarded-for explained
One example could be a proxy that blocks certain hosts and redirects them to an external block page. In fact, I’m almost certain my school filter does this…
(And the reason they might not just pass on the original Host as Host is that some servers [Nginx?] reject traffic addressed to the wrong Host.)
X-Forwarded-Host just saved my life. CDNs (or reverse proxies, if you'd like to get down from the forest to the trees) determine which origin to use by the Host header the user arrives with. A CDN therefore can't use that same Host header to contact the origin; otherwise, the CDN would loop back to itself rather than going to the origin. So the CDN uses either an IP address or some dummy FQDN as the Host header when fetching content from the origin. Now, the origin may wish to know which Host header (i.e. which website name) the content is being asked for. In my case, one origin served two websites.
Another scenario: you license your app to a host URL and then want to load-balance it across n > 1 servers.

Alias for IP address?

I'm deploying a Ruby project over a network with nginx. The way you access the project's web interface is by going to the server's IP address plus the port (192.168.1.113:3000). This is rather cumbersome. How could I use a name such as http://clock.local instead?
Usually operating systems have a "hosts" file where you can set a name that points to an IP. That's where "localhost" is specified (at least for me).
Anyway, I think you can set an alias for the IP there, but the port won't carry over; you'll still need to specify it manually, so it'll be http://alias:3000/.
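For the setup in the question, such a hosts entry might look like this (clock.local being whatever alias you choose):

192.168.1.113   clock.local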
Not familiar with nginx, but why can't you just add an entry into /etc/hosts (or %WINDIR%\system32\drivers\etc\hosts on Windows) to resolve the IP address to a user-defined alias?
If you only need to resolve it from one or two machines, just put the alias in /etc/hosts. Otherwise, if you've got a local private DNS server, you can add your desired name there so that it's available to everybody on the LAN. I'd also build a proxy on port 80 so that you don't need to specify the port. (Assuming port 80 on that machine isn't already being used.)
Edit: I take that back, it doesn't matter if 80 is already being used; you can proxy by vhost:
server {
    listen 80;                   # default HTTP port, so no port in the URL
    server_name whatever.whatever;
    root /path/to/doc_root;

    location / {
        proxy_pass http://localhost:3000;              # the Ruby app
        proxy_set_header X-Forwarded-For $remote_addr; # pass the client IP along
    }
}

Reliably getting a web client IP

What is the most reliable way of obtaining the IP address of a remote client connecting to your website? Some options I've looked into are:
Server variables (such as REMOTE_ADDR in Apache), though this is usually the proxy address.
A Java applet, but IE (at least the one I'm using) seems to deny it.
The only other thing I'm thinking about is having the client connect over HTTPS, in which case the proxy should be bypassed (generally speaking), and so REMOTE_ADDR would be accurate.
Any ideas?
Anything client-side (JavaScript, Java) will give you the PC's IP address, which could be an internal address like 10.0.0.1.
Re: SSL + REMOTE_ADDR: most workplace proxies send all SSL through an application-level proxy; some just allow 443 outbound. Anything coming through a proxy will still give you the proxy address, as the proxy is still the computer making the connection to your web server.
HTTPS through a proxy is still a possibility if the proxy is non-transparent (say, with a client on a corporate network). With HTTPS through a proxy, REMOTE_ADDR will still be the proxy address: the proxy is still in the path, it just only gets to see the encrypted traffic.
If the client is going through a proxy, you'll have to rely on the proxy telling you their IP. The X-Forwarded-For header will contain this, but you can only rely on it if you trust the proxy. If this is for logging purposes, log both REMOTE_ADDR and X-Forwarded-For. If it's for something else, you'll need to maintain a whitelist of proxies (identified by REMOTE_ADDR) from which you'll accept X-Forwarded-For.
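A minimal nginx sketch of the logging approach (the format name with_proxy is illustrative; log_format belongs in the http block):

# Record both the connecting address and whatever X-Forwarded-For claims.
log_format with_proxy '$remote_addr fwd="$http_x_forwarded_for" "$request"';
access_log /var/log/nginx/access.log with_proxy;

For the whitelist approach, the set_real_ip_from / real_ip_header pair shown in the first answer above is one way to implement it in nginx.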
