Just a note, before doing this, I created a DNS record with:
*.dev.x.mydomain.com A 118.123.123.123
Then I added a config to nginx.conf. It actually worked well, except for one problem, so the following is a modified, simplified version.
Basically the problem is that the deny/allow doesn't seem to work.
The config part in nginx.conf:
server {
    listen 80;
    server_name snippets--v2.dev.x.mydomain.com;

    allow 220.123.123.123;
    deny all;

    location /ip {
        return 200 '{"code":"0", "type": "success", "ip": "${remote_addr}"}';
        allow 220.123.123.123;
        deny all;
    }
}
With this setup it should, without a doubt, block access from all IPs except 220.123.123.123.
In practice it works on /, but not on /ip.
When I access /ip, I see my own IP address, e.g. 37.123.123.123, which is not the allowed 220.123.123.123. But why can I see this response in the first place? Where did the deny statement go?
So this is the weird problem I have. Almost identical setups in other server blocks work fine, so I really have no idea what's missing here. Thanks.
This answer explains why allow/deny does not work with return: return is handled by the rewrite module during the rewrite phase, which runs before the access phase where allow and deny are evaluated, so the access rules never get a chance to act.
You could either use the Nginx Echo Module or use a geo filter to determine whether the IP should be allowed or denied. Example
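Here is a minimal sketch of the geo approach, keeping the JSON response from the question; the variable name $ip_denied and the 403 status are my own choices, so adjust as needed:

geo $ip_denied {
    default         1;
    220.123.123.123 0;
}

server {
    listen 80;
    server_name snippets--v2.dev.x.mydomain.com;

    location /ip {
        # Unlike allow/deny, this check is handled by the rewrite module,
        # i.e. in the same phase as return, so it actually takes effect.
        if ($ip_denied) {
            return 403;
        }
        return 200 '{"code":"0", "type": "success", "ip": "${remote_addr}"}';
    }
}

Note that the geo block has to sit at the http level, outside the server block.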
I need to access a webserver in a private network that has no direct access from outside. Opening router ports etc. is not an option.
I'm trying to solve this with a Raspberry Pi in that network, which I can manage via upswift.io.
Among other things, upswift allows temporary remote access to a given port over URLs like
http://d-4307-5481-nc7nflrh26s.forwarding.upswift.io:56947/
This maps to a port that I can define.
With this, I can access a VNC server on the Pi, start a browser there and access the webserver I need.
But I hope to find a more elegant way, where I can access the site from my local browser and the Pi does not need to run a desktop.
As far as I can tell, this can be done with a reverse proxy like nginx.
I found a lot of tutorials on it, but I'm stuck at one point:
after installing nginx and reaching its default index page from my local browser through the temporary upswift.io URL, I can't get it to work as a reverse proxy.
I think my conf needs to look like this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://192.x.x.2;
    }
}
Where example.com would be the name or IP under which the device is accessed.
Now, this would not work for me, as that name is dynamic.
So I wonder if there's a way to configure nginx so that it does not need that name. I would expect that to be possible, since the default webserver config works without it too. Are reverse proxies different in that regard?
Or is there a better way than a reverse proxy to do what I want?
You could try defining it as the default server block:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.x.x.2;
    }
}
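If the backend on 192.x.x.2 cares about the Host header or the client address, the same block can be extended; this is only a sketch, using the standard forwarding headers and the upstream address taken from the question:

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.x.x.2;
        # Hand the original Host and client address to the backend; many
        # web apps expect these when they run behind a reverse proxy.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}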
I am trying to configure Digest Auth in nginx using the unofficial NGINX Digest module, and for the most part I can get it to work just fine: I am able to lock down an endpoint unless it's a GET. Here is my location config.
location /config {
    proxy_pass http://internal_config_service/config;

    limit_except GET {
        auth_digest "peek a boo";
    }
}
However, I have a scenario where I need to let localhost through unchallenged, and I'm not really finding a great way to do that.
Things I've explored: I've tried allow 127.0.0.1; and I've even looked into using if to check whether $host is local and skipping the digest directives in that case, but I don't think that's even possible, because my understanding is that the config is pretty static.
The one solution I can think of that might work, but requires a fair amount of work and adds extra confusion for someone new, is to create two servers: one that is accessible from localhost only, lets localhost through unchallenged, and cannot be accessed externally; and a second one that is publicly accessible and locked down with digest.
I'm hoping for a better solution, but I am still learning the intricacies of NGINX as I go, so I'm not optimistic.
You can use the satisfy directive:
http://nginx.org/en/docs/http/ngx_http_core_module.html#satisfy
The catch: I don't know whether auth_digest (an unofficial module) takes part in the access phase of NGINX request processing. If it does not, you can make use of auth_request in addition. But give this a try:
...
location /authreq {
    satisfy any;

    allow 127.0.0.1;
    deny all;

    auth_digest "something";
    # If auth_digest is not working here, try auth_request instead:
    auth_request /_authdigest;
}

location = /_authdigest {
    internal;
    auth_digest "something";
}
Update to your question regarding allow 127.0.0.1; deny all:
This will NOT block all other clients or traffic. In combination with satisfy any it tells NGINX that if the client IP is not 127.0.0.1, some other auth function (auth_basic, auth_jwt, auth_request) has to succeed for the request to pass. In my demo: if I am not sending the request from localhost, I have to go through the auth_request location. If that subrequest returns something like 200, it satisfies the configuration and I am allowed through to the proxied upstream.
I have built a little njs script that disables auth_digest for the user and authenticates the proxied request against a digest-auth-protected backend, but that's not what you need, is it?
If you want to split the configuration into one part for localhost and one for the public IP, your server configuration could look like this:
server {
    listen 127.0.0.1:80;
    ## do localhost configuration here
}

server {
    listen 80;
    ## apply configuration for the IP of nic eth0 (for example) here.
}
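Filled in with the names from the question (the internal_config_service upstream and the "peek a boo" realm), that split could look roughly like this; treat it as an untested sketch:

# Requests arriving on the loopback interface: no challenge at all.
server {
    listen 127.0.0.1:80;

    location /config {
        proxy_pass http://internal_config_service/config;
    }
}

# Requests arriving on any other interface: everything except GET
# is protected by digest auth, as in the original location block.
server {
    listen 80;

    location /config {
        proxy_pass http://internal_config_service/config;

        limit_except GET {
            auth_digest "peek a boo";
        }
    }
}

NGINX matches the listen directive with the more specific address first, so connections to 127.0.0.1 land in the first server block and everything else in the second.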
In Nginx, can one somehow block or allow access based on the port a request arrived on, inside a location? Looking at the allow & deny docs, it seems to me that they cannot be used for this purpose. Right? But is there no other way to do this?
Background:
In an Nginx virtual host, I'm allowing only a certain IP to publish websocket events:
server {
    listen 80;

    location /websocket/publish {
        allow 192.168.0.123;
        deny all;
    }
}
However, soon the IP address of the appserver will be unknown, because everything will run inside Docker and I'll have no idea which IP a given container will get.
So I'm thinking I could do this instead:
server {
    listen 80;
    listen 81;

    location /websocket/publish {
        # Let the appserver publish via port 81.
        allow :81; # <–– "invalid parameter" error
        # Block everything else, so browsers cannot publish via port 80.
        deny all;
    }

    # ... other locations, accessible via port 80
}
And then have the firewall block traffic to port 81 from the outside world. But allow :81 doesn't work. Is there no other way? Or am I on the wrong track; are there better ways to do all this?
(As far as I've understood from the docs of the websocket Nginx plugin I use (namely Nchan), I cannot add the /websocket/publish endpoint in another server { } block that listens on port 81 only. Edit: it turns out I can just use different server blocks, because Nchan apparently ignores which server block I place the config in, see: https://github.com/slact/nchan/issues/157. So I did that, and it works fine for me now. However, it would still be interesting to know whether Nginx supports blocking a port in a location { ... }.)
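On that open question: allow and deny only understand client addresses, but the port a request arrived on is available as $server_port, so one possible way to block a port inside a location is a rewrite-phase check. This is only a sketch and I have not tested it together with Nchan, whose directives run later, in the content phase:

server {
    listen 80;
    listen 81;

    location /websocket/publish {
        # Reject anything that did not arrive on port 81; if/return are
        # evaluated in the rewrite phase, before the content handler runs.
        if ($server_port != 81) {
            return 403;
        }
        # Nchan publisher directives would go here.
    }

    # ... other locations, accessible via port 80
}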
I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based web sites, so the best I could figure was to run a LAMP container to replicate the current situation and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good, I can privately access those services from my docker host. The trouble comes when I want to restrict public access to service-a to service-a.bar.com:80 only, and to service-b to www.foo.com:5000 only. A lot of reading later, it would seem that I have to create a dreadful artefact called a proxy, or reverse proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx (which I had never used before) because someone told me it's better than Apache at dealing with lots of small tasks and requests; not that I would know how to turn Apache into a proxy, mind you. Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}

server {
    server_name service-a.bar.com;
    listen 80;

    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}

upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}

server {
    server_name www.foo.com;
    listen 5000;

    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
Which somewhat works, until I access http://blog.bar.com:5000 which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given IP+port combination. You only have one server listening on port 5000, so it is your de facto default server for services on port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block from being the default server for port 5000, set up another server block on the same port and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
See this document for more.
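One such technique (a common one; the linked document describes others) is to have the catch-all block close the connection outright, using nginx's non-standard 444 code:

server {
    listen 5000 default_server;
    # 444 is nginx-specific: drop the connection without sending a response.
    return 444;
}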
I'd like to stop NGINX from logging my own IP address in my access.log. Is this possible? I can easily do it in Apache, but I haven't been able to find anything like this for NGINX.
This should really be on serverfault so I'll vote for a move.
But I can help a little here.
Short version: no, you can't.
Long version: you can hack around it by using different backends, where one logs and the other doesn't, or by creating an extra server on a different port. But there isn't really a clean way of filtering an IP address out of the logs.
You can, however, filter by URL; perhaps that is an option for you?
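For the filter-by-URL route, the pattern is simply a per-location access_log off; the /some/path below is just a placeholder:

location /some/path {
    # Requests for this path are never written to the access log.
    access_log off;
}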
You could create a virtual host that will log only your accesses, while the main log will log the rest. In this case you would access the new virtual host from your machine.
server {
    listen 80;
    server_name domain.com www.domain.com;
    access_log logs/domain.access.log;
    ...
}
Then you create a second one
server {
    listen 80;
    server_name me.domain.com;
    access_log logs/me.domain.access.log;
    ...
}
Or replace that access_log line with access_log off; if you don't want those requests logged anywhere at all (simply removing the line would make the server inherit the http-level log).
This way your accesses won't be mixed in with the external accesses.
You have to add me.domain.com to DNS or to your /etc/hosts, with the same IP as the main domain.
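For the /etc/hosts variant, that is a single line; the address below is a placeholder for whatever domain.com resolves to:

203.0.113.10   me.domain.com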