I am trying to configure digest authentication in nginx using the unofficial NGINX Digest module, and for the most part it works fine: I can lock down an endpoint for everything except GET requests. Here is my location config:
location /config {
    proxy_pass http://internal_config_service/config;

    limit_except GET {
        auth_digest "peek a boo";
    }
}
However, I have a scenario where I need to allow localhost through unchallenged, and I'm not finding a great way to do that.
Things I've explored: I've tried allow 127.0.0.1;, and I've even looked into using if to check whether $host is local and skipping the digest directives in that case, but I don't think that's even possible, because my understanding is that the config is pretty static.
The one solution I can think of that might work, but requires a fair amount of work and adds extra confusion for someone new, is to create two servers: one accessible only from localhost, which lets localhost through unchallenged and cannot be reached externally, and a second, publicly accessible server that is locked down with digest auth.
I'm hoping for a better solution, but I'm still learning the intricacies of NGINX as I go, so I'm not optimistic that one exists.
You can use the satisfy directive:
http://nginx.org/en/docs/http/ngx_http_core_module.html#satisfy
The problem: I don't know whether auth_digest (the unofficial module) takes part in the access phase of NGINX request processing, which is what satisfy hooks into. If it doesn't, you can make use of auth_request in addition. But give this a try:
...
location /authreq {
    satisfy any;

    allow 127.0.0.1;
    deny all;

    auth_digest "something";
    # If auth_digest is not working here, try auth_request instead:
    auth_request /_authdigest;
}

location = /_authdigest {
    internal;
    auth_digest "something";
}
Update regarding your question about allow 127.0.0.1; deny all;
This will NOT block all other clients/traffic. In combination with satisfy any, it tells NGINX that if the client IP is not 127.0.0.1, any other auth function (auth_basic, auth_jwt, auth_request) has to succeed to let the request pass. In my demo: if I am not sending the request from localhost, I have to go through the auth_request location. If the auth_request subrequest returns something like 200, it satisfies the configuration and I am allowed to connect to the proxied upstream.
I have built a little njs script that disables auth_digest for the user and authenticates the proxied request against a digest-auth-protected backend. But that's not what you need, is it?
If you want to split the configuration, one server for localhost and another for the public IP, your server configuration could look like this:
server {
    listen 127.0.0.1:80;
    ## do localhost configuration here
}

server {
    listen 80;
    ## apply configuration for the IP of nic eth0 (for example) here
}
Just a note, before doing this, I created a DNS record with:
*.dev.x.mydomain.com A 118.123.123.123
Then I added a config to nginx.conf. It actually worked well except for one problem, so the following is a modified, simplified version.
Basically the problem is that the deny/allow doesn't seem to work.
The config part in nginx.conf:
server {
    listen 80;
    server_name snippets--v2.dev.x.mydomain.com;

    allow 220.123.123.123;
    deny all;

    location /ip {
        return 200 '{"code":"0", "type": "success", "ip": "${remote_addr}"}';
        allow 220.123.123.123;
        deny all;
    }
}
With this setup it should undoubtedly work; specifically, it should block access from all IPs except 220.123.123.123.
But actually, it does work on /, but doesn't on /ip.
When I access /ip, I see my IP address; it shows e.g. 37.123.123.123, not the allowed IP 220.123.123.123. But wait, why can I see this response in the first place? Where did the deny statement go...?
So this is a weird problem. On other server blocks almost the same setup works fine, so I really have no idea what's missing here. Thanks.
This answer explains why allow/deny does not work with return: return is processed during the rewrite phase, which runs before the access phase in which allow and deny are evaluated, so the request is answered before the access rules ever apply.
You could either use the Nginx Echo Module or use a geo filter to determine whether the IP should be allowed or denied.
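A minimal sketch of the geo approach, assuming the allowed address from the question (the geo block lives at the http level; if runs in the rewrite phase, so it is evaluated before the return):

geo $blocked {
    default          1;
    220.123.123.123  0;
}

server {
    listen 80;
    server_name snippets--v2.dev.x.mydomain.com;

    location /ip {
        # rewrite-phase check, replacing the access-phase allow/deny
        if ($blocked) {
            return 403;
        }
        return 200 '{"code":"0", "type": "success", "ip": "${remote_addr}"}';
    }
}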
I have a super basic question. I have a GoDaddy account set up with subdomain xxx.mydomain.com. I also have some services running on an AWS instance at xxx.xxx.xxx.xxx:7000. My question is: what do I do to configure things so that when people visit xxx.mydomain.com, it goes to xxx.xxx.xxx.xxx:7000?
I am not talking about domain forwarding. In fact, I also hope to do the same for yyy.mydomain.com, linking it to xxx.xxx.xxx.xxx:5000. I am running Nginx on xxx.xxx.xxx.xxx. Maybe I need to configure something there?
You want a reverse proxy.
Add two A-records to your DNS configuration to map the subdomains to the IP address of the AWS instance. With GoDaddy, put xxx / yyy in the "Host" field and the IP address in the "Points to" field. (more info)
Since you already have Nginx running, you can use it as a reverse proxy for the two subdomains. To do so, add two more server blocks to Nginx's configuration file. A very simple setup could look like this:
http {
    # ...

    server {
        server_name xxx.mydomain.com;
        location / {
            proxy_pass http://localhost:7000;
        }
    }

    server {
        server_name yyy.mydomain.com;
        location / {
            proxy_pass http://localhost:5000;
        }
    }
}
You might want to rewrite some headers depending on your services/applications (more info). Also, consider using Nginx for SSL termination (more info).
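For example, a common starting point for the header rewriting looks like this; which headers your backends actually need is an assumption on my part:

server {
    server_name xxx.mydomain.com;
    location / {
        proxy_pass http://localhost:7000;

        # forward the original host and client address to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}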
I'm reverse proxying an AWS API Gateway stage using nginx. This is pretty straightforward:
location /api {
    proxy_pass https://xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com:443/production;
    proxy_ssl_server_name on;
}
However, this approach makes nginx serve a stale upstream when the DNS entry for xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com changes, because it resolves the entry only once, at startup.
Following this article: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/ I am now trying to define my proxy target in a variable like this:
location /api {
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com/production;
    proxy_pass https://$apigateway:443/;
    proxy_ssl_server_name on;
}
This makes API Gateway respond with ForbiddenException: Forbidden to requests that would pass with the previous setup (without the variable). Reading this document: https://aws.amazon.com/de/premiumsupport/knowledge-center/api-gateway-troubleshoot-403-forbidden/ tells me this could be either WAF filtering my request (but WAF is not enabled for that API) or a missing Host header for a private API (but the API is public).
I think I might be doing one of these things wrong:
The syntax used for setting the variable is wrong
Using the variable makes nginx send different headers to API Gateway, and I need to intervene manually. (I did already try setting a Host header, but it made no difference.)
The nginx version in use is 1.17.3
You have the URI /production embedded in the variable, so the :443 is tacked onto the end of the URI rather than the host name. I'm also not convinced you need the :443 at all, as it is the default port for https.
Also, when variables are used in proxy_pass and a URI is specified in the directive, it is passed to the server as is, replacing the original request URI. See this document for details.
You should use rewrite...break to change the URI and remove any optional URI from the proxy_pass statement.
For example:
location /api {
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com;
    rewrite ^/api(.*)$ /production$1 break;
    proxy_pass https://$apigateway;
    proxy_ssl_server_name on;
}
Also, you will need a resolver statement somewhere in your configuration.
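Putting it together, the location could look like this; the resolver address is an assumption, so use whatever DNS server your host can reach (on AWS, typically the VPC resolver):

location /api {
    # re-resolve the API Gateway hostname instead of caching it forever
    resolver 8.8.8.8 valid=30s;

    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com;
    rewrite ^/api(.*)$ /production$1 break;
    proxy_pass https://$apigateway;
    proxy_ssl_server_name on;
}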
It seems like a false positive at WAF. Did you try disabling AWS WAF? https://docs.aws.amazon.com/waf/latest/developerguide/remove-protection.html
I'm using the below config in nginx to proxy RDP connections:
server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://192.168.0.100:3389;
    }
}
but the connection doesn't go through. My guess is that the problem is http in proxy_pass. Googling "Nginx RDP" didn't yield much.
Does anyone know if it's possible, and if so, how?
Well, actually you are right: the http is the problem, but not exactly the one in your code block. Let's explain it a bit.
In your nginx.conf file you have something similar to this:
http {
    ...
    ...
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
So everything you write in your conf files lives inside this http block/scope. But RDP is not HTTP; it is a different protocol.
The only workaround I know of for nginx to handle this is to work at the TCP level.
So inside your nginx.conf, and outside the http block, declare a stream block like this:
stream {
    # ...
    server {
        listen 80;
        proxy_pass 192.168.0.100:3389;
    }
}
The above configuration just proxies to your backend at the TCP layer, with a cost of course: as you may notice, the server_name directive is missing, since you can't use it in the stream scope, and you also lose all the logging functionality that comes with the http level.
For more info on this topic, check the docs.
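As a variant, you would typically listen on the standard RDP port rather than 80 so clients can connect without overriding the port; the proxy_timeout value here is an assumption to keep long sessions from being cut off:

stream {
    server {
        listen 3389;                    # standard RDP port
        proxy_pass 192.168.0.100:3389;  # the Windows host from the question
        proxy_timeout 3h;               # assumption: allow long-lived sessions
    }
}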
For anyone who is looking to load balance RDP connections using Nginx, here is what I did:
Configure nginx as you normally would, to reroute HTTP(S) traffic to your desired server.
On that server, install Myrtille (it needs IIS and .NET 4.5) and you'll be able to RDP into your server from a browser!
Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with other 3rd-party services by accepting POST requests, and then posting event callbacks to that 3rd-party service by POSTing to a URL. Trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta - falls over with a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for something as hands-off as possible. Feels like nginx ought to be able to do this...
I came across the following snippet, which someone else classified as a bug, but it feels like the start of what I want:
upstream apache {
    server 1.2.3.4;
    server 5.6.7.8;
}
...
location / {
    proxy_pass http://apache;
}
Rather than round-robining requests to apache, that will apparently send the same request to both apache servers, which sounds promising. Trouble is, it sends the request to the same path on both servers. In my case, the two services have different paths (/b and /c), and neither is the same as the inbound request path (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create local servers, and each local server's proxy_pass can add the different path (/b, /c).
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

# in your main (public) server block
location / {
    proxy_pass http://local;
}

server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}