Nginx Openresty - Change http status after reading the response body - nginx

I have an OpenResty nginx instance proxying Elasticsearch. The Grafana client contacts nginx, and nginx in turn fetches the response from Elasticsearch. The goal is to change the HTTP status to 504 if the response body from Elasticsearch contains the key "timedout": true.
The response body is read using body_filter_by_lua_block, but this directive doesn't support changing the HTTP status.
http {
    lua_need_request_body on;
    server {
        listen 8000;
        location / {
            proxy_pass "http://localhost:9200";
            header_filter_by_lua_block {
                ngx.header.content_length = nil
            }
            body_filter_by_lua_block {
                if string.find(ngx.arg[1], "\"timedout\":true") then
                    ngx.arg[1] = nil
                end
            }
        }
    }
}
The above code just makes the response body nil. But is there a way to change the HTTP status? Or, if it's not supported in nginx, is there any other proxy server that can do this job?
Any help would be appreciated.

You cannot change the status within body_filter_by_lua_block, because at that point all response headers have already been sent downstream.
If you definitely need this, don't use proxy_pass.
Instead, use content_by_lua_block and, within it, use lua-resty-http to issue the request, read the full body, analyze it, and respond with any status code you want.
This approach is fully buffered and may have significant performance implications for big responses.
You should also keep in mind that the body may be compressed.
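A minimal sketch of that approach, assuming lua-resty-http is installed, the Elasticsearch address from the question, and an uncompressed upstream response (the plain-text search would miss a gzipped body):

```nginx
location / {
    content_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()

        -- forward the original request to Elasticsearch
        ngx.req.read_body()
        local res, err = httpc:request_uri(
            "http://localhost:9200" .. ngx.var.request_uri, {
                method  = ngx.req.get_method(),
                body    = ngx.req.get_body_data(),
                headers = ngx.req.get_headers(),
            })
        if not res then
            ngx.status = 502
            ngx.say("upstream error: ", err)
            return
        end

        -- the full body is buffered here, so the status can still be changed
        if string.find(res.body, '"timedout":true', 1, true) then
            ngx.status = 504
        else
            ngx.status = res.status
        end
        ngx.print(res.body)
    }
}
```

Because request_uri buffers the whole response, this is exactly the "full buffered" trade-off mentioned above.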

Related

Nginx as reverse proxy: How to display a custom error page for upstream errors, UNLESS the upstream says not to?

I have an Nginx instance running as a reverse proxy. When the upstream server does not respond, I send a custom error page for the 502 response code. When the upstream server sends an error page, that gets forwarded to the client, and I'd like to show a custom error page in that case as well.
If I wanted to replace all of the error pages from the upstream server, I would set proxy_intercept_errors on to show a custom page on each of them. However, there are cases where I'd like to return the actual response that the upstream server sent: for example, for API endpoints, or if the error page has specific user-readable text relating to the issue.
In the config, a single server is proxying multiple applications that are behind their own proxy setups and their own rules for forwarding requests around, so I can't just specify this per each location, and it has to work for any URL that matches a server.
Because of this, I would like to send the custom error page, unless the upstream application says not to. The easiest way to do this would be with a custom HTTP header. There is a similar question about doing this depending on the request headers. Is there a way to do this depending on the response headers?
(It appears that somebody else already had this question and their conclusion was that it was impossible with plain Nginx. If that's true, I would be interested in some other ideas on how to solve this, possibly using OpenResty like that person did.)
So far I have tried using OpenResty to do this, but it doesn't seem compatible with proxy_pass: the response that the Lua code generates seems to overwrite the response from the upstream server.
Here's the location block I tried to use:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:65000;
    content_by_lua_block {
        ngx.say("This seems to overwrite the content from the proxy?!")
    }
    body_filter_by_lua_block {
        ngx.arg[1] = "Truncated by code!"
        ngx.arg[2] = false
        if ngx.status >= 400 then
            if not ngx.resp.get_headers()["X-Verbatim"] then
                local file = io.open('/usr/share/nginx/error.html', 'r')
                local html_text = file:read("*a")
                ngx.arg[1] = html_text
                ngx.arg[2] = true
                return
            end
        end
    }
}
I don't think you can send custom error pages based on a response header, since the only way, to my knowledge, to achieve that would be either the map or the if directive. Neither of these directives is in scope after the request is sent to the upstream, so they can't possibly read the response headers.
However, you can do this with OpenResty by writing your own Lua script. The Lua script to do such a thing would look something like:
location / {
    body_filter_by_lua '
        if ngx.resp.get_headers()["Cust-Resp-Header"] then
            local file = io.open("/path/to/file.html", "r")
            local html_text = file:read("*a")
            ngx.arg[1] = html_text
            ngx.arg[2] = true
            return
        end
    ';
    # ...
}
You could also use body_filter_by_lua_block (enclosing your Lua code inside curly braces instead of writing it as an nginx string) or body_filter_by_lua_file (writing your Lua code in a separate file and providing the file path).
You can find how to get started with openresty here.
P.S.: You can read the upstream response status code using ngx.status. As far as reading the body is concerned, ngx.arg[1] contains the response body chunk from the upstream, which we're modifying here. You can save ngx.arg[1] in a local variable, extract the error message from it with a pattern match, and append it to the html_text variable later. Hope that helps.
Edit 1: Pasting here a sample working lua block inside a location block with proxy_pass:
location /hello {
    proxy_pass http://localhost:3102/;
    body_filter_by_lua_block {
        if ngx.resp.get_headers()["erratic"] == "true" then
            ngx.arg[1] = "<html><body>Hi</body></html>"
        end
    }
}
Edit 2: You can't use content_by_lua_block together with proxy_pass, or else your proxy won't work. Your location block should look like this (assuming the X-Verbatim header is set to "false" (a string) when you have to override the error response body from the upstream):
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:65000;
    body_filter_by_lua_block {
        if ngx.status >= 400 then
            if ngx.resp.get_headers()["X-Verbatim"] == "false" then
                local file = io.open('/usr/share/nginx/error.html', 'r')
                local html_text = file:read("*a")
                ngx.arg[1] = html_text
                ngx.arg[2] = true
            end
        end
    }
}
This is somewhat the opposite of what was requested, but I think it can fit anyway. It shows the original response unless the upstream says what to show instead.
There is a set of X-Accel custom headers that are evaluated from upstream responses. X-Accel-Redirect lets you tell NGINX to process another location instead. Below is an example of how it can be used.
This is a Flask application that gives 50/50 normal responses and errors. The error responses come with an X-Accel-Redirect header, instructing NGINX to reply with the contents of the @error_page location.
import flask
import random

application = flask.Flask(__name__)

@application.route("/")
def main():
    if random.randint(0, 1):
        resp = flask.Response("Random error")             # upstream body contents
        resp.headers['X-Accel-Redirect'] = '@error_page'  # the header
        return resp
    else:
        return "Normal response"

if __name__ == '__main__':
    application.run("0.0.0.0", port=4000)
And here is a NGINX config for that:
server {
    listen 80;
    location / {
        proxy_pass http://localhost:4000/;
    }
    location @error_page {
        return 200 "That was an error";
    }
}
Putting these together, you will see either "Normal response" from the app or "That was an error" from the @error_page location ("Random error" will be suppressed). With this setup you can create a number of locations (@error_502, @foo, @etc) for various errors and make your application use them.

websockets in openresty proxy

I created a proxy with MFA using OpenResty, and it mostly works.
But I have a problem with websockets: Firefox says that it "cannot connect with server wss://...". Looking in the browser's network panel I can see the switching-protocols request, which seems to be ok. My nginx.conf looks as below:
worker_processes auto;

env TARGET_APPLICATION_HOST;
env TARGET_APPLICATION_PORT;
env TARGET_USE_SSL;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;
        location / {
            resolver local=on ipv6=off valid=100s;
            content_by_lua_block {
                local http = require "resty.http"
                local httpc = http.new()
                httpc:set_timeout(500)
                local ok, err = httpc:connect(
                    os.getenv("TARGET_APPLICATION_HOST"),
                    os.getenv("TARGET_APPLICATION_PORT"))
                if not ok then
                    ngx.log(ngx.ERR, err)
                    return
                end
                if os.getenv("TARGET_USE_SSL") == "TRUE" then
                    -- Trigger the SSL handshake
                    session, err = httpc:ssl_handshake(False, server, False)
                end
                httpc:set_timeout(2000)
                httpc:proxy_response(httpc:proxy_request())
                httpc:set_keepalive()
            }
        }
    }
}
It is a simpler version of the production proxy, but it returns the same error with websockets. I tried a proxy with pure nginx and it works fine with websockets, but I need the capabilities of OpenResty (proxying different hosts based on a cookie value).
Is there some simple mistake in the above file, or does OpenResty not have websocket abilities?
lua-resty-http is an HTTP(S) client library; it does not (and probably will not) support the WebSocket protocol.
There is another library for the WebSocket protocol: lua-resty-websocket. It implements both client and server, so it should be possible to write the proxy using this library.
I need the capabilities of OpenResty (proxying different hosts based on a cookie value)
ngx.balancer does exactly what you need, check the example and this answer.
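With ngx.balancer you keep nginx's native WebSocket proxying and only choose the upstream peer in Lua. A sketch along these lines, assuming OpenResty with lua-resty-core; the cookie name and backend addresses are illustrative:

```nginx
upstream dynamic_backend {
    server 0.0.0.1;  # placeholder address, replaced at runtime
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- use the peer chosen in the access phase, with a fallback
        local host = ngx.ctx.backend_host or "127.0.0.1"
        local port = ngx.ctx.backend_port or 4000
        local ok, err = balancer.set_current_peer(host, port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
        end
    }
}

server {
    listen 80;
    location / {
        access_by_lua_block {
            -- pick the backend from a cookie (name is illustrative)
            if ngx.var.cookie_backend == "beta" then
                ngx.ctx.backend_host = "10.0.0.2"
                ngx.ctx.backend_port = 4001
            end
        }
        proxy_pass http://dynamic_backend;
        # the usual WebSocket upgrade directives still apply
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Since proxy_pass handles the actual traffic, the Upgrade handshake and the long-lived connection are managed by nginx itself, not by a Lua HTTP client.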

Nginx: Change status code of error response

I have nginx reverse-proxying a backend. I need to change the status code of all of the backend's 403 responses to 401, preserving everything else (this is needed to make a 3rd-party client app work with the backend).
My current configuration is
server {
    ...
    error_page 403 = @unauthorized;
    location / {
        ...
        proxy_intercept_errors on;
    }
    location @unauthorized {
        return 401;
    }
}
The problem with this configuration is that all of the original headers/content are lost. Is there a way to modify the status code only, instead of responding with a whole new response? Or maybe get access to the original response and copy everything from it in @unauthorized?
Thanks.
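If OpenResty is an option, one untested sketch for this: change only ngx.status in a header filter, which runs before the response headers are sent downstream, so the original headers and body pass through unchanged:

```nginx
location / {
    proxy_pass http://backend;
    header_filter_by_lua_block {
        -- runs after the upstream response headers arrive but before
        -- they are sent to the client, so the status can still change
        if ngx.status == 403 then
            ngx.status = 401
        end
    }
}
```

Unlike the error_page approach, nothing is re-generated here; only the status line differs from the upstream response.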

How to change Content-length in body_filter_by_lua* in openresty

I am using OpenResty as a proxy server, which may change the response from the upstream. The directive header_filter_by_lua* is executed before body_filter_by_lua*. But I change Content-Length in body_filter_by_lua*, and the headers have already been sent by that time.
So how do I set the correct Content-Length when the response from the upstream is changed in body_filter_by_lua*?
Thank you!
From https://github.com/openresty/lua-nginx-module#body_filter_by_lua:
When the Lua code may change the length of the response body, then it is required to always clear out the Content-Length response header (if any) in a header filter to enforce streaming output, as in
location /foo {
    # fastcgi_pass/proxy_pass/...
    header_filter_by_lua_block { ngx.header.content_length = nil }
    body_filter_by_lua 'ngx.arg[1] = string.len(ngx.arg[1]) .. "\\n"';
}
I expect that nginx would use chunked transfer encoding (http://greenbytes.de/tech/webdav/rfc2616.html#chunked.transfer.encoding) in this case (didn't test).

Nginx auth_request handler accessing POST request body?

I'm using Nginx (version 1.9.9) as a reverse proxy to my backend server. It needs to perform authentication/authorization based on the contents of POST requests, and I'm having trouble reading the POST request body in my auth_request handler. Here's what I've got.
Nginx configuration (relevant part):
server {
    location / {
        auth_request /auth-proxy;
        proxy_pass http://backend/;
    }
    location = /auth-proxy {
        internal;
        proxy_pass http://auth-server/;
        proxy_pass_request_body on;
        proxy_no_cache "1";
    }
}
And in my auth-server code (Python 2.7), I try to read the request body like this:
class AuthHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def get_request_body(self):
        content_len = int(self.headers.getheader('content-length', 0))
        content = self.rfile.read(content_len)
        return content
I printed out content_len and it had the correct value. However, the self.rfile.read() call simply hangs. Eventually it times out and returns "[Errno 32] Broken pipe".
This is how I posted test data to the server:
$ curl --data '12345678' localhost:1234
The above command hangs as well; eventually it times out and prints "Closing connection 0".
Any obvious mistakes in what I'm doing?
Thanks much!
The code of the nginx-auth-request-module is annotated at nginx.com. The module always replaces the POST body with an empty buffer.
In one of the tutorials, they explain the reason, stating:
As the request body is discarded for authentication subrequests, you will
need to set the proxy_pass_request_body directive to off and also set the
Content-Length header to a null string
The reason for this is that auth subrequests are sent as HTTP GET requests, not POST. Since a GET has no body, the body is discarded. The only workaround with the existing module is to pull the needed information from the request body and put it into an HTTP header that is passed to the auth service.
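Applying the quoted advice to the configuration from the question would look roughly like this (the X-Original-URI header is just an illustration of passing data to the auth service as a header):

```nginx
location = /auth-proxy {
    internal;
    proxy_pass http://auth-server/;
    # auth subrequests are GETs with no body, so drop the body explicitly
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # pass whatever the auth service needs as request headers instead
    proxy_set_header X-Original-URI $request_uri;
}
```

Anything the auth service must see from the original POST body would have to be extracted earlier (e.g. by the application or a Lua access phase) and forwarded the same way, as a header.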
