how to "allow from hostname" in nginx config - nginx

I'm currently doing this in my nginx.conf:
allow 1.2.3.4;
deny all;
What I'd really like to do is this:
allow my.domain.name;
deny all;
I.e., I want nginx to do an A record lookup on my.domain.name at the time of the request, and if it matches the IP that the request is coming from, then allow it. I don't see any built-in mechanism to do this however. Anybody have a native way to do this before I start coding something custom?

ngx_http_rdns_module does what you need: https://www.nginx.com/resources/wiki/modules/rdns/ (https://github.com/flant/nginx-http-rdns)
Summary
This module makes a reverse DNS (rDNS) lookup for the incoming connection and provides simple access control based on the resulting hostname via allow/deny rules (similar to the HttpAccessModule allow/deny directives; regular expressions are supported). The module works with the DNS server defined by the standard resolver directive.
Example
location / {
    resolver 127.0.0.1;
    rdns_deny badone\.example\.com;

    if ($http_user_agent ~* FooAgent) {
        rdns on;
    }

    if ($rdns_hostname ~* (foo\.example\.com)) {
        set $myvar foo;
    }

    #...
}
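Applied to the original question, a minimal sketch could look like the block below. It assumes the rdns_allow / rdns_deny regex directives and the forward-confirmed ("double") lookup mode described in the module's README, so treat it as a starting point rather than a verified config:
location / {
    resolver 127.0.0.1;
    rdns double;                    # reverse lookup, then forward-confirm the result
    rdns_allow ^my\.domain\.name$;  # hostname you want to let in
    rdns_deny .*;                   # everyone else
}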

This answer is an alternative that keeps the domain resolution outside of nginx but targets the exact same goal: having the resolved IPs included in the nginx configuration.
1) Create a file allowed-domain.list containing the domains you want to grant access to:
jean-paul.mydomain.com
rufus.mydomain.com
robert.mydomain.com
2) Create a bash script domain-resolver.sh which does the lookups for you:
#!/usr/bin/env bash
filename="$1"
while read -r line
do
    ddns_record="$line"
    if [[ ! -z $ddns_record ]]; then
        resolved_ip=`getent ahosts $line | awk '{ print $1 ; exit }'`
        if [[ ! -z $resolved_ip ]]; then
            echo "allow $resolved_ip; # from $ddns_record"
        fi
    fi
done < "$filename"
3) Make the script executable: chmod +x domain-resolver.sh
4) Add a cron job which produces a valid nginx configuration and reloads nginx:
#!/usr/bin/env bash
/pathtoscript/domain-resolver.sh /pathtodomainlist/allowed-domain.list > /pathtooutputdir/allowed-ips-from-domains.conf
service nginx reload > /dev/null 2>&1
This can be a daily job, or you can have it run every hour, minute, second...
5) Update your nginx configuration to take this output into account:
include /pathtooutputdir/allowed-ips-from-domains.conf;
deny all;
You can improve this by adding an IP format check, filtering out IPv6 if you don't want it, grouping everything into a single file, and so on.
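For illustration, a generated allowed-ips-from-domains.conf might look like this (the addresses are made-up documentation IPs); it is exactly what the include directive above pulls in before the final deny all:
allow 203.0.113.10; # from jean-paul.mydomain.com
allow 203.0.113.11; # from rufus.mydomain.com
allow 203.0.113.12; # from robert.mydomain.com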

There is no such feature in the official distribution of nginx, because it could heavily reduce performance.
The third-party modules list (http://wiki.nginx.org/3rdPartyModules) doesn't contain this feature either.

You can use a Lua script.
You need to install the nginx-mod-http-lua and lua-nginx-dns modules.
An example nginx Lua script would be:
location /test/ {
    access_by_lua_block {
        local resolver = require "nginx.dns.resolver";
        local r, err = resolver:new {
            nameservers = { "8.8.8.8", "1.1.1.1" },
            retrans = 5,      -- timeout retransmits
            timeout = 500,    -- 500 msec
            no_random = true, -- always start from the first name server
        };
        if not r then
            ngx.log(ngx.ERR, "failed to instantiate the DNS resolver: " .. err)
            return
        end
        local answers, err, tries = r:query("my.domain.name", nil, {});
        if not answers then
            ngx.log(ngx.ERR, "failed to query the DNS server: " .. err)
            return
        end
        if answers.errcode then
            ngx.log(ngx.ERR, "server returned error code: " .. answers.errcode .. ": " .. answers.errstr)
            return;
        end
        for i, ans in ipairs(answers) do
            if ans.address == ngx.var.remote_addr then
                return
            end
        end
        ngx.log(ngx.ERR, "IP not allowed: " .. ngx.var.remote_addr);
        ngx.exit(ngx.HTTP_FORBIDDEN);
    }
}
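If your distribution ships the stock lua-resty-dns library instead, the same idea can be written against resty.dns.resolver. This is a hedged sketch based on that library's documented API (nameservers, retrans, timeout, TYPE_A); the domain is the placeholder from the question:
location /test/ {
    access_by_lua_block {
        local resolver = require "resty.dns.resolver"
        local r, err = resolver:new{
            nameservers = { "8.8.8.8", "1.1.1.1" },
            retrans = 5,     -- retransmissions on receive timeout
            timeout = 2000,  -- 2 sec
        }
        if not r then
            ngx.log(ngx.ERR, "failed to instantiate resolver: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        local answers, err = r:query("my.domain.name", { qtype = r.TYPE_A })
        if not answers or answers.errcode then
            ngx.log(ngx.ERR, "DNS query failed: ", err or answers.errstr)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        for _, ans in ipairs(answers) do
            if ans.address == ngx.var.remote_addr then
                return  -- an A record matches the client, allow the request
            end
        end
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    }
}
Note that this sketch fails closed (denies on resolver errors), which is the opposite of the answer above that lets the request through when DNS is unavailable; pick whichever behaviour suits you.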

You can add this to return 404 for any Host header that does not match yours:
if ($host !~* (yourhostname.com)) {
    return 404;
}

Related

Which directive I can run before ssl_certificate_by_lua_block to get user-agent information in openresty

I am using OpenResty to generate SSL certificates dynamically.
I am trying to find out the user-agent of the request before running ssl_certificate_by_lua_block and decide if I want to continue with the request or not.
I found out that the ssl_client_hello_by_lua_block directive runs before ssl_certificate_by_lua_block, but if I try to execute ngx.req.get_headers()["user-agent"] inside ssl_client_hello_by_lua_block I get the following error:
2022/06/13 09:20:58 [error] 31918#31918: *18 lua entry thread aborted: runtime error: ssl_client_hello_by_lua:6: API disabled in the current context
stack traceback:
coroutine 0:
[C]: in function 'error'
/usr/local/openresty/lualib/resty/core/request.lua:140: in function 'get_headers'
ssl_client_hello_by_lua:6: in main chunk, context: ssl_client_hello_by_lua*, client: 1.2.3.4, server: 0.0.0.0:443
I tried rewrite_by_lua_block, but it runs after ssl_certificate_by_lua_block.
Is there any directive that lets me access ngx.req.get_headers()["user-agent"] and also runs before ssl_certificate_by_lua_block?
My Nginx conf for reference.
nginx.conf
# HTTPS server
server {
    listen 443 ssl;

    rewrite_by_lua_block {
        local user_agent = ngx.req.get_headers()["user-agent"]
        ngx.log(ngx.ERR, "rewrite_by_lua_block user_agent -- > ", user_agent)
    }

    ssl_client_hello_by_lua_block {
        ngx.log(ngx.ERR, "I am from ssl_client_hello_by_lua_block")
        local ssl_clt = require "ngx.ssl.clienthello"
        local host, err = ssl_clt.get_client_hello_server_name()
        ngx.log(ngx.ERR, "hosts -- > ", host)
        -- local user_agent = ngx.req.get_headers()["user-agent"]
        -- ngx.log(ngx.ERR, "user_agent -- > ", user_agent)
    }

    ssl_certificate_by_lua_block {
        auto_ssl:ssl_certificate()
    }

    ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
    ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;

    location / {
        proxy_pass http://backend_proxy$request_uri;
    }
}
In case someone is facing the same issue: the OpenResty mailing list helped me.
I was not thinking correctly. The certificate negotiation happens before the client sends any user-agent data (the request headers only arrive after the TLS handshake is done), so you can't avoid issuing the certificate in that process. Hard luck.
Once the handshake (Client/Server Hello) is complete and the request comes in, the server has the user-agent, and you can do the blocking under access_by_lua_block.
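For reference, the user-agent blocking itself can then be a small access_by_lua_block; this is just a sketch, and the FooBot substring is an arbitrary example:
location / {
    access_by_lua_block {
        local ua = ngx.req.get_headers()["user-agent"] or ""
        if ua:lower():find("foobot", 1, true) then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    proxy_pass http://backend_proxy$request_uri;
}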

Dynamic routing with nginx, lua and redis

I am trying to make nginx perform proxying based on the URI with the help of lua and redis.
So far, I am able to successfully proxy a simple URI like '/hello' to the desired target. I was able to achieve this by saving the mappings in a Redis hash, something like:
HGETALL "127.0.0.1:8080"
1) "/demo1/test/hello4"
2) "example.com/demo1/test/hello4"
3) "/hello"
4) "example.com/hello"
nginx.conf
worker_processes 2;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            resolver 8.8.4.4; # use Google's open DNS server

            set $target '';
            access_by_lua '
                local http_host = ngx.var.http_host
                if not http_host then
                    ngx.log(ngx.ERR, "no http-host found")
                    return ngx.exit(400)
                end

                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(1000) -- 1 second

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
                    return ngx.exit(500)
                end

                local fPath, err = red:hget(http_host, ngx.var.uri)
                if not fPath then
                    ngx.log(ngx.ERR, "No fPath: ", err)
                    return ngx.exit(500)
                end

                ngx.var.target = fPath
            ';
            proxy_pass $target;
        }
    }
}
However, I also want to handle dynamic URIs, for example:
user/id/1 -> "example.com/user/id/1",
user/id/2 -> "example.com/user/id/2",
user/id/3 -> "example.com/user/id/3",
and so on....
I am not sure how to create the key-value pairs in Redis, and the corresponding Lua logic, to handle the dynamic IDs.
I have looked around but haven't been able to find the right direction or a resource to help me figure this out.
Any help would be really great!
If you want to achieve this in production, I would recommend using a mature API gateway like Apache APISIX or Kong. To implement it yourself, you could store paths with wildcards or Lua patterns in Redis and later match them against the original URI. Applying some simple heuristics would help reduce the range of checking.
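As a rough sketch of the Lua-patterns-in-Redis idea, the access_by_lua logic from the question could fall back to pattern entries when the exact lookup misses. The ":patterns" hash name, the pattern syntax, and the way the target is rebuilt are all assumptions for illustration, not something from the question:
-- after `red` is connected, inside the same access_by_lua code
local fPath, err = red:hget(http_host, ngx.var.uri)      -- exact match first
if not fPath or fPath == ngx.null then
    -- fall back to pattern entries, stored e.g. as:
    --   HSET "127.0.0.1:8080:patterns" "^/user/id/%d+$" "example.com"
    local entries = red:array_to_hash(red:hgetall(http_host .. ":patterns"))
    for pattern, upstream_host in pairs(entries) do
        if string.match(ngx.var.uri, pattern) then
            fPath = upstream_host .. ngx.var.uri          -- e.g. example.com/user/id/1
            break
        end
    end
end
if not fPath or fPath == ngx.null then
    ngx.log(ngx.ERR, "no mapping for ", ngx.var.uri)
    return ngx.exit(404)
end
ngx.var.target = fPath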

NGINX read body from proxy_pass response

I have two servers:
NGINX (it exchanges a file id for a file path)
Golang (it accepts a file id and returns its path)
Example: when a browser client makes a request to https://example.com/file?id=123, NGINX should proxy this request to the Golang server at https://go.example.com/getpath?file_id=123, which will return this response to NGINX:
{
    data: {
        filePath: "/static/..."
    },
    status: "ok"
}
Then NGINX should take the value of filePath and return the file from that location.
So the question is: how do I read the response (get filePath) in NGINX?
I assume you are a software developer and have full control over your application, so there is no need to force a square peg into a round hole here.
Various kinds of reverse proxies support ESI (Edge Side Includes), a technology which allows the developer to replace different parts of the response body with the content of static files or with response bodies from upstream servers.
Nginx has such a technology as well. It is called SSI (Server Side Includes).
location /file {
    ssi on;
    proxy_pass http://go.example.com;
}
Your upstream server can produce a body containing <!--# include file="/path-to-static-files/some-static-file.ext" --> and nginx will replace this in-body directive with the content of the file.
But you mentioned streaming...
That means the files can be of arbitrary size, and building the response with SSI would certainly eat precious RAM, so we need a plan B.
There is a "good enough" method to feed big files to clients without exposing the static location of the file.
You can use nginx's error handler to serve static files based on information supplied by the upstream server.
The upstream server, for example, can send back a 302 redirect with a Location header field containing the real path to the file.
This response does not reach the client and is fed into the error handler.
Here is an example config:
location /file {
    error_page 302 = @service_static_file;
    proxy_intercept_errors on;
    proxy_set_header Host $host;
    proxy_pass http://go.example.com;
}
location @service_static_file {
    root /hidden-files;
    try_files $upstream_http_location /404.html;
}
With this method you will be able to serve files without overloading your system while keeping control over whom you give the file to.
For this to work, your upstream server should respond with status 302 and a typical "Location:" header field; nginx will use the Location value to find the file under the "new" root for static files.
The reason this method is only "good enough" (instead of perfect) is that it does not support partial requests (i.e. Range: bytes ...).
Looks like you want to make an API call for data to run decisions and logic against. That's not quite what proxying is about.
The core proxying ability of nginx is not designed for what you are looking to do.
Possible workaround: extending nginx...
Nginx + PHP
Your PHP code would do the legwork.
It acts as a client to connect to the Golang server and applies additional logic to the response.
<?php
$response = file_get_contents('https://go.example.com/getpath?file_id='.$_GET["id"]);
preg_match_all("/filePath: \"(.*?)\"/", $response, $filePath);
readfile($filePath[1][0]);
?>
location /getpath {
    try_files $uri /getpath.php;
}
This is just the pseudo-code example to get it rolling.
Some miscellaneous observations / comments:
The Golang response doesn't look like valid JSON; if it actually is valid JSON, replace the preg_match_all call with json_decode.
readfile is not super efficient. Consider being creative with a 302 response.
Nginx + Lua
sites-enabled:
lua_package_path "/etc/nginx/conf.d/lib/?.lua;;";

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /getfile {
        root /var/www/html;
        resolver 8.8.8.8;
        set $filepath "/index.html";
        access_by_lua_file /etc/nginx/conf.d/getfile.lua;
        try_files $filepath =404;
    }
}
Test if lua is behaving as expected:
getfile.lua (v1)
ngx.var.filepath = "/static/...";
Simplify the Golang response body to just return a plain path, then use it to set filepath:
getfile.lua (v2)
local http = require "resty.http"
local httpc = http.new()
local query_string = ngx.req.get_uri_args()

local res, err = httpc:request_uri('https://go.example.com/getpath?file_id=' .. query_string["id"], {
    method = "GET",
    keepalive_timeout = 60,
    keepalive_pool = 10
})

if res and res.status == ngx.HTTP_OK then
    local body = string.gsub(res.body, '[\r\n%z]', '')
    ngx.var.filepath = body;
    ngx.log(ngx.ERR, "[" .. body .. "]");
else
    ngx.log(ngx.ERR, "missing response");
    ngx.exit(504);
end
resty.http
mkdir -p /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http_headers.lua" -P /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http.lua" -P /etc/nginx/conf.d/lib/resty

Nginx - how to access Client Certificate's Subject Alternative Name (SAN) field

I have an Nginx server which clients make requests to with a Client certificate containing a specific CN and SAN. I want to be able to extract the CN (Common Name) and SAN (Subject Alternative Names) fields of that client cert.
rough example config:
server {
    listen 443 ssl;

    ssl_client_certificate /etc/nginx/certs/client.crt;
    ssl_verify_client on; # 400 if request without valid cert

    location / {
        root /usr/share/nginx/html;
    }

    location /auth_test {
        # do something with the CN and SAN.
        # tried these embedded vars so far, to no avail
        return 200 "
            $ssl_client_s_dn
            $ssl_server_name
            $ssl_client_escaped_cert
            $ssl_client_cert
            $ssl_client_raw_cert";
    }
}
Using the embedded variables exposed as part of the ngx_http_ssl_module module, I can access the DN (Distinguished Name) and therefore the CN etc., but I don't seem to be able to get access to the SAN.
Is there some embedded var / other module / general Nginx foo I'm missing? I can access the raw cert, so is it possible to decode that manually and extract it?
I'd really rather do this at the Nginx layer as opposed to passing the cert down to the application layer and doing it there.
Any help much appreciated.
You can extract them with the Nginx-builtin map, e.g. for CN:
map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "";
    ~,CN=(?<CN>[^,]+) $CN;
}
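The mapped variable can then be used like any other nginx variable, for instance to echo it back for testing or to pass it upstream; a minimal usage sketch reusing the /auth_test location from the question:
location /auth_test {
    # $ssl_client_s_dn_cn is produced by the map block above
    return 200 "CN: $ssl_client_s_dn_cn";
}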
I'm not a lua expert, but here's what I got working:
local openssl = require('openssl')
local dnsNames = {}
for k, v in pairs(openssl.x509.read(ngx.var.ssl_client_raw_cert):extensions()) do
    for k1, v1 in pairs(v:info()) do
        if (type(v1) == 'table') then
            for k2, v2 in pairs(v1) do
                if (type(v2) == 'table') then
                    for k3, v3 in pairs(v2) do
                        if (k3 == 'dNSName') then
                            table.insert(dnsNames, v3:toprint())
                        end
                    end
                end
            end
        end
    end
end
ngx.say(table.concat(dnsNames, ':'))
You can do it through OpenResty + lua-openssl and parse the raw certificate to get it.
Refer to this: https://github.com/Seb35/nginx-ssl-variables/blob/master/COMPATIBILITY.md#ssl_client_s_dn_x509
Just like this:
local variableName = string.match(require("openssl").x509.read(ngx.var.ssl_client_raw_cert):issuer():oneline(),"/C=([^/]+)")
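Presumably the subject CN can be pulled out the same way; a hedged sketch, assuming lua-openssl exposes subject() analogously to issuer(), as the compatibility table linked above suggests:
local client_cn = string.match(
    require("openssl").x509.read(ngx.var.ssl_client_raw_cert):subject():oneline(),
    "/CN=([^/]+)")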
I had the same problem when I tried to pass the "subject DN" to an upstream server.
Someone might find the following advice useful. There is access to such fields as the "subject DN" and so on (see link1). Besides that, I had to put this data into a request header, which I did via 'proxy_set_header' (link2). It was possible without any extra Nginx extension (there is no need to rebuild with extra modules, just the default ones).
This is an example of how a URI value can be extracted from the client certificate extensions and then forwarded to the upstream server as a header. This is useful when implementing WebID over TLS authentication, for example.
location / {
    proxy_pass http://upstream;

    set_by_lua_block $webid_uri {
        local openssl = require('openssl')
        local webIDs = {}
        for k, v in pairs(openssl.x509.read(ngx.var.ssl_client_raw_cert):extensions()) do
            for k1, v1 in pairs(v:info()) do
                if (type(v1) == 'table') then
                    for k2, v2 in pairs(v1) do
                        if (type(v2) == 'table') then
                            for k3, v3 in pairs(v2) do
                                if (k3 == 'uniformResourceIdentifier') then
                                    table.insert(webIDs, v3:data())
                                end
                            end
                        end
                    end
                end
            end
        end
        return webIDs[1]
    }

    proxy_set_header X-WebID-URI $webid_uri;
}
Let me know if it can be improved.

Serve static files based on dynamic URLs with Flask+Nginx?

In Flask, if you place a file in a directory called static/, then any URL of the form http://localhost/static/foo.jpg will serve that file from static/foo.jpg.
This can also be accomplished via an nginx config:
location /static {
alias /var/www/mywebsite/static;
}
However, I want to do dynamic URL rewriting.
If someone requests the URL http://localhost/username/foo.jpg, I want to tell nginx to fetch the static file from an arbitrary path, say, /var/www/assets/11235/1bcd5.jpg. I want the user to see a pretty URL, and I want the location to be transparent to the user.
Is there an easy way to do this? Ideally, I would be able to do something so that nginx serves the file. However, if Flask needs to serve it, then that is fine too (it isn't like my project has any users yet!)
What am I missing here?
If the files can be stored with the names that are directly referenced in the "pretty" URL, then you can do a simple rewrite in nginx.
However, it appears that you want to map URL path info to other representations on the disk, as in username -> 11235 and foo.jpg -> 1bcd5.jpg. If the content being served should be protected by authentication or sessions, then you should probably keep the mapping and rewriting inside your Flask app, since Flask provides the means to do that.
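One common way to keep the mapping in Flask while still letting nginx send the bytes is nginx's X-Accel-Redirect mechanism; this is a sketch, not part of the original answer, and the /protected prefix is an arbitrary choice. The Flask view resolves username/foo.jpg to the real file and returns an empty response with a header such as X-Accel-Redirect: /protected/11235/1bcd5.jpg, and nginx then serves the file from an internal location:
location /protected/ {
    internal;               # only reachable via X-Accel-Redirect, never directly
    alias /var/www/assets/; # real files live here
}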
If the content can be treated as public and only needs the mapping done, then nginx can be configured to grab query string parameters, look them up in a datastore, and rewrite the URL.
Here's an example from agentzh that was originally posted on the nginx mailing list:
Suppose your SEO URI is /baz and the true URI is /foo/bar. I have the following table in my local MySQL "test" database:
create table my_url_map(id serial, url text, seo_url text);
insert into my_url_map(url, seo_url) values ('/foo/bar', '/baz');
And I build my nginx 0.8.41 this way:
./configure \
--add-module=/path/to/ngx_devel_kit \
--add-module=/path/to/set-misc-nginx-module \
--add-module=/path/to/ngx_http_auth_request_module-0.2 \
--add-module=/path/to/echo-nginx-module \
--add-module=/path/to/lua-nginx-module \
--add-module=/path/to/drizzle-nginx-module \
--add-module=/path/to/rds-json-nginx-module
Also, I have Lua 5.1.4 and the lua-yajl library installed on my system.
And here's the central part in my nginx.conf:
upstream backend {
    drizzle_server 127.0.0.1:3306 dbname=test
        password=some_pass user=monty protocol=mysql;
    drizzle_keepalive max=300 mode=single overflow=ignore;
}

lua_package_cpath '/path/to/your/lua/yajl/library/?.so';

server {
    ...

    location /conv-mysql {
        internal;
        set_quote_sql_str $seo_uri $query_string; # to prevent sql injection
        drizzle_query "select url from my_url_map where seo_url=$seo_uri";
        drizzle_pass backend;
        rds_json on;
    }

    location /conv-uid {
        internal;
        content_by_lua_file 'html/foo.lua';
    }

    location /jump {
        internal;
        rewrite ^ $query_string? redirect;
    }

    # your SEO uri
    location /baz {
        set $my_uri $uri;
        auth_request /conv-uid;
        echo_exec /jump $my_uri;
    }
}
Contents of foo.lua, the essential glue:
local yajl = require('yajl')
local seo_uri = ngx.var.my_uri
local res = ngx.location.capture('/conv-mysql?' .. seo_uri)
if (res.status ~= ngx.HTTP_OK) then
    ngx.throw_error(res.status)
end
res = yajl.to_value(res.body)
if (not res or not res[1] or not res[1].url) then
    ngx.throw_error(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
ngx.var.my_uri = res[1].url;
Then let's access /baz from the client side:
$ curl -i localhost:1984/baz
HTTP/1.1 302 Moved Temporarily
Server: nginx/0.8.41 (without pool)
Date: Tue, 24 Aug 2010 03:28:42 GMT
Content-Type: text/html
Content-Length: 176
Location: http://localhost:1984/foo/bar
Connection: keep-alive
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx/0.8.41 (without pool)</center>
</body>
</html>

Resources