Nginx auth_request handler accessing POST request body? - nginx

I'm using Nginx (version 1.9.9) as a reverse proxy to my backend server. It needs to perform authentication/authorization based on the contents of POST requests, and I'm having trouble reading the POST request body in my auth_request handler. Here's what I've got.
Nginx configuration (relevant part):
server {
    location / {
        auth_request /auth-proxy;
        proxy_pass http://backend/;
    }

    location = /auth-proxy {
        internal;
        proxy_pass http://auth-server/;
        proxy_pass_request_body on;
        proxy_no_cache "1";
    }
}
And in my auth-server code (Python 2.7), I try to read the request body like this:
class AuthHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def get_request_body(self):
        content_len = int(self.headers.getheader('content-length', 0))
        content = self.rfile.read(content_len)
        return content
I printed out content_len and it had the correct value. However, self.rfile.read() simply hangs; eventually it times out and returns "[Errno 32] Broken pipe".
This is how I posted test data to the server:
$ curl --data '12345678' localhost:1234
The above command hangs as well and eventually times out and prints "Closing connection 0".
Any obvious mistakes in what I'm doing?
Thanks much!

The code of the nginx-auth-request-module is annotated at nginx.com. The module always replaces the POST body with an empty buffer.
In one of the tutorials, they explain the reason, stating:
As the request body is discarded for authentication subrequests, you will
need to set the proxy_pass_request_body directive to off and also set the
Content-Length header to a null string
The reason for this is that auth subrequests are sent as HTTP GET requests, not POST. Since a GET has no body, the body is discarded. The only workaround with the existing module is to pull the needed information out of the request body and put it into an HTTP header that is passed to the auth service.
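Following the tutorial's advice, the auth location could be adjusted like this (a sketch; X-Original-URI and X-Original-Method are just conventional header names, and which data you forward depends on what the auth service actually needs):

```nginx
location = /auth-proxy {
    internal;
    proxy_pass http://auth-server/;

    # The auth subrequest is a bodyless GET, so don't try to forward a body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";

    # Pass whatever the auth service needs as request headers instead
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Original-Method $request_method;
}
```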

Related

Nginx as reverse proxy: How to display a custom error page for upstream errors, UNLESS the upstream says not to?

I have an Nginx instance running as a reverse proxy. When the upstream server does not respond, I send a custom error page for the 502 response code. When the upstream server sends an error page, that gets forwarded to the client, and I'd like to show a custom error page in that case as well.
If I wanted to replace all of the error pages from the upstream server, I would set proxy_intercept_errors on to show a custom page on each of them. However, there are cases where I'd like to return the actual response that the upstream server sent: for example, for API endpoints, or if the error page has specific user-readable text relating to the issue.
In the config, a single server is proxying multiple applications that are behind their own proxy setups and their own rules for forwarding requests around, so I can't just specify this per each location, and it has to work for any URL that matches a server.
Because of this, I would like to send the custom error page, unless the upstream application says not to. The easiest way to do this would be with a custom HTTP header. There is a similar question about doing this depending on the request headers. Is there a way to do this depending on the response headers?
(It appears that somebody else already had this question and their conclusion was that it was impossible with plain Nginx. If that's true, I would be interested in some other ideas on how to solve this, possibly using OpenResty like that person did.)
So far I have tried using OpenResty to do this, but it doesn't seem compatible with proxy_pass: the response that the Lua code generates seems to overwrite the response from the upstream server.
Here's the location block I tried to use:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:65000;

    content_by_lua_block {
        ngx.say("This seems to overwrite the content from the proxy?!")
    }

    body_filter_by_lua_block {
        ngx.arg[1] = "Truncated by code!"
        ngx.arg[2] = false
        if ngx.status >= 400 then
            if not ngx.resp.get_headers()["X-Verbatim"] then
                local file = io.open('/usr/share/nginx/error.html', 'r')
                local html_text = file:read("*a")
                ngx.arg[1] = html_text
                ngx.arg[2] = true
                return
            end
        end
    }
}
I don't think you can send custom error pages based on the response headers: as far as I know, the only ways to achieve that would be the map or if directives, and since neither of them is evaluated after the request has been sent to the upstream, they can't possibly read the response headers.
However, you could do this using OpenResty by writing your own Lua script. The Lua script to do such a thing would look something like this:
location / {
    body_filter_by_lua '
        if ngx.resp.get_headers()["Cust-Resp-Header"] then
            local file = io.open("/path/to/file.html", "r")
            local html_text = file:read("*a")
            ngx.arg[1] = html_text
            ngx.arg[2] = true
            return
        end
    ';
    # ...
}
You could also use body_filter_by_lua_block (enclosing the Lua code in curly braces instead of writing it as an nginx string) or body_filter_by_lua_file (putting the Lua code in a separate file and providing its path).
You can find out how to get started with OpenResty here.
P.S.: You can read the response status code from the upstream using ngx.status. As for reading the body, ngx.arg[1] contains the chunk of the upstream response body that we're modifying here. You can save ngx.arg[1] in a local variable, extract the error message from it with some regexp, and append it to the html_text variable later. Hope that helps.
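One caveat worth adding to the P.S.: body_filter_by_lua_block runs once per response-body chunk, not once per response, so ngx.arg[1] may only contain part of the body. A rough sketch of buffering the chunks first (using ngx.ctx as per-request storage; the field name buffered is made up here):

```nginx
body_filter_by_lua_block {
    local chunk, eof = ngx.arg[1], ngx.arg[2]

    -- accumulate chunks until the end-of-stream flag arrives
    ngx.ctx.buffered = (ngx.ctx.buffered or "") .. (chunk or "")
    if not eof then
        ngx.arg[1] = nil  -- emit nothing until the whole body is collected
        return
    end

    -- the full body is now in ngx.ctx.buffered; inspect or rewrite it here
    ngx.arg[1] = ngx.ctx.buffered
}
```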
Edit 1: Here's a sample working Lua block inside a location block with proxy_pass:
location /hello {
    proxy_pass http://localhost:3102/;

    body_filter_by_lua_block {
        if ngx.resp.get_headers()["erratic"] == "true" then
            ngx.arg[1] = "<html><body>Hi</body></html>"
        end
    }
}
Edit 2: You can't use content_by_lua_block together with proxy_pass, or your proxy won't work. Your location block should look like this (assuming the X-Verbatim header is set to the string "false" when you have to override the error response body from the upstream):
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:65000;

    body_filter_by_lua_block {
        if ngx.status >= 400 then
            if ngx.resp.get_headers()["X-Verbatim"] == "false" then
                local file = io.open('/usr/share/nginx/error.html', 'r')
                local html_text = file:read("*a")
                ngx.arg[1] = html_text
                ngx.arg[2] = true
            end
        end
    }
}
This is somewhat the opposite of what was requested, but I think it can fit anyway: it shows the original response unless the upstream says what to show instead.
There is a set of X-Accel custom headers that are evaluated from upstream responses. X-Accel-Redirect lets the upstream tell NGINX to process another location instead. Below is an example of how it can be used.
This is a Flask application that gives 50/50 normal responses and errors. The error responses come with an X-Accel-Redirect header, instructing NGINX to reply with the contents of the @error_page location.
import flask
import random

application = flask.Flask(__name__)

@application.route("/")
def main():
    if random.randint(0, 1):
        resp = flask.Response("Random error")  # upstream body contents
        resp.headers['X-Accel-Redirect'] = '@error_page'  # the header
        return resp
    else:
        return "Normal response"

if __name__ == '__main__':
    application.run("0.0.0.0", port=4000)
And here is a NGINX config for that:
server {
    listen 80;

    location / {
        proxy_pass http://localhost:4000/;
    }

    location @error_page {
        return 200 "That was an error";
    }
}
Putting these together, you will see either "Normal response" from the app or "That was an error" from the @error_page location ("Random error" will be suppressed). With this setup you can create a number of locations (@error_502, @foo, @etc) for various errors and make your application use them.

Nginx Openresty - Change http status after reading the response body

I have an OpenResty nginx instance proxying Elasticsearch, so the Grafana client contacts nginx and nginx in turn fetches the response from Elasticsearch. The goal is to change the HTTP status to 504 if the response body from Elasticsearch contains the key "timedout": true.
The response body is read using body_filter_by_lua_block, but this directive doesn't support changing the HTTP status.
http {
    lua_need_request_body on;

    server {
        listen 8000;

        location / {
            proxy_pass "http://localhost:9200";

            header_filter_by_lua_block {
                ngx.header.content_length = nil
            }

            body_filter_by_lua_block {
                if string.find(ngx.arg[1], "\"timedout\":true") then
                    ngx.arg[1] = nil
                end
            }
        }
    }
}
The above code just makes the response body nil. But is there a way to change the HTTP status? Or, if it's not supported in nginx, is there any other proxy server that can do this job?
Any help would be appreciated.
You cannot change the status within body_filter_by_lua_block, because by that moment all response headers have already been sent downstream.
If you definitely need it - don't use proxy_pass.
Instead, use content_by_lua_block and, within it, use lua-resty-http to issue the request, read the full body, analyze it, and respond with whatever status code you want.
This approach is fully buffered and may have significant performance implications for big responses.
You should also keep in mind that the body may be compressed.
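Putting those pieces together, a rough sketch (using the lua-resty-http library, which has to be installed separately; the Elasticsearch address is taken from the question) might look like:

```nginx
location / {
    content_by_lua_block {
        local http = require "resty.http"  -- from lua-resty-http
        local httpc = http.new()

        -- forward the client request to Elasticsearch ourselves
        ngx.req.read_body()
        local res, err = httpc:request_uri(
            "http://localhost:9200" .. ngx.var.request_uri, {
                method  = ngx.req.get_method(),
                body    = ngx.req.get_body_data(),
                headers = { ["Accept-Encoding"] = "identity" },  -- avoid compressed bodies
            })
        if not res then
            ngx.log(ngx.ERR, err)
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        -- the full body is available before any headers are sent,
        -- so the status can still be changed
        if string.find(res.body, '"timedout":true', 1, true) then
            ngx.status = ngx.HTTP_GATEWAY_TIMEOUT  -- 504
        else
            ngx.status = res.status
        end
        ngx.say(res.body)
    }
}
```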

Nginx: Change status code of error response

I have nginx reverse proxying a backend. I need to change the status code of all of the backend's 403 responses to 401, preserving everything else (this is needed to make a 3rd-party client app work with the backend).
My current configuration is
server {
    ...
    error_page 403 = @unauthorized;

    location / {
        ...
        proxy_intercept_errors on;
    }

    location @unauthorized {
        return 401;
    }
}
The problem with this configuration is that all of the original headers/content are lost. Is there a way to modify only the status code, instead of responding with a whole new response? Or maybe get access to the original response and copy everything from it in @unauthorized?
Thanks.

Proxy a request - get a parameter from URL, add a header and update request URL using Nginx

I am looking for a way to do the following using Nginx:
1. Intercept a request
2. Read the URL, parse it, and read a value from it
3. Add that value as a new request header
4. Update the URL (remove a particular value)
5. Forward the request to another server
e.g.
Request URL - http://<<nginx>>/test/001.xml/25
Final URL - http://<<server>>/test/001.xml with header (x-replica: 25)
I have an nginx server set up with an upstream for the actual server. I was wondering how to set up Nginx to achieve this.
Since the data exists within the request URI itself (available by the $uri variable in nginx), you can parse that using the nginx lua module. nginx will need to be compiled with lua for this to work, see: openresty's nginx lua module.
From there you can use the set_by_lua_block or set_by_lua_file directive given $uri as a parameter.
In configuration this would look something like:
location / {
    ...
    set_by_lua_file $var_to_set /path/to/script.lua $uri;
    # $var_to_set would contain the result of the script from this point
    proxy_set_header X-Replica $var_to_set;
    ...
}
In script.lua we can access the $uri variable via the ngx.arg list (see these docs):
function parse_uri( uri )
    local parsed_uri = uri
    -- Parse logic here
    return parsed_uri
end

return parse_uri( ngx.arg[1] )
Similarly, you can modify this function or create another to make a variable with the updated $uri.
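For the example URL in the question, a minimal script.lua might look like this (a sketch that assumes the replica number is always the final, purely numeric path segment):

```lua
-- script.lua: extract the trailing number from e.g. /test/001.xml/25
function parse_uri( uri )
    local replica = string.match(uri, "/(%d+)$")
    return replica or ""
end

return parse_uri( ngx.arg[1] )
```

The stripped URL for step 4 still has to be produced separately, e.g. with something like rewrite ^(.*)/\d+$ $1 break; in the location before proxy_pass (untested, and only safe if all proxied URLs carry the trailing number).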

Using Lua in nginx to pass a request to FastCGI

Using nginx compiled with Lua support, how can we make a sort of sub-request to a FastCGI handler, much like nginx's fastcgi_pass directive?
What I'd like to do is something like this:
location = / {
    access_by_lua '
        res = ngx_fastcgi.pass("backend")
    ';
}
(Obviously, this doesn't work.)
I'm poring over HttpLuaModule, where I see mention of ngx_fastcgi and ngx.location.capture, which, evidently, makes
non-blocking internal requests to other locations configured with
disk file directory or any other nginx C modules like ... ngx_fastcgi,
...
But following the ngx_fastcgi link takes me to HttpFastcgiModule, which explains only nginx directives, not Lua-scriptable commands. Is ngx.location.capture the right function to use? (These requests, by the way, will be to localhost, just on a different port, like 9000 or 9001.)
How can I use Lua in nginx to forward a request, or make a sub-request, to a FastCGI endpoint?
Use the ngx.location.capture() method to perform a subrequest to a predefined location block, and from within that location block perform the external FastCGI request. Because the subrequest itself isn't actually a network operation but is performed purely within nginx's C-based environment, there's very little overhead. Further, because FastCGI requests and other "proxy_pass"-type requests are event-based, nginx can operate as an efficient intermediary.
As an example, you could have the following:
location / {
    access_by_lua '
        response = ngx.location.capture("/my-subrequest-handler")
        if response.status == 404 then
            return ngx.exit(401) -- cannot find/authenticate user, refuse request
        end
        ngx.say(response.status)
    ';
    # other nginx config stuff here as necessary--perhaps another fastcgi_pass
    # depending upon the status code of the response above...
}

location = /my-subrequest-handler {
    internal;                          # this location block can only be seen by nginx subrequests
    fastcgi_pass localhost:9000;       # or some named "upstream"
    fastcgi_pass_request_body off;     # send client request body upstream?
    fastcgi_pass_request_headers off;  # send client request headers upstream?
    fastcgi_connect_timeout 100ms;     # optional; control backend timeouts
    fastcgi_send_timeout 100ms;        # same
    fastcgi_read_timeout 100ms;        # same
    fastcgi_keep_conn on;              # keep request alive
    include fastcgi_params;
}
In the above example, even though I'm performing a subrequest to "/my-subrequest-handler", the actual URL passed to the FastCGI process is the one requested by the HTTP client calling into nginx in the first place.
Note that ngx.location.capture is a synchronous but non-blocking operation, which means that your code execution stops until a response is received, but the nginx worker is free to perform other operations in the meantime.
There are some really cool things that you can do with Lua to modify the request and response at any point in the nginx pipeline. For example, you could change the original request by adding headers, removing headers, even transforming the body. Perhaps the caller wants to work with XML, but the upstream application only understands JSON, we can convert to/from JSON when calling the upstream application.
Lua is not built into nginx by default; it's a 3rd-party module that must be compiled in. There's a flavor of nginx called OpenResty that bundles Lua+LuaJIT along with a few other modules that you may or may not need.
