In my nginx config I have the following lines set up to serve a fallback error page from lua:
error_page 502 @fallback;

location @fallback {
    content_by_lua_file 'fallback.lua';
}

location / {
    return 502;
}
Then in my lua file, I have the following at the top of the file:
ngx.log(ngx.ERR, "reported status is: " .. ngx.status)
I expect it to be 502 but this reports that ngx.status is 0.
I've tried to fix this by writing
set $status 502
but nginx complains that $status is a duplicate of an existing variable and won't load the config.
How can I get lua to know about the nginx status from a return directive?
This was a bug in ngx_lua. The response status code for the "return" directive is stored differently than a normal response status code, that is, in r->err_status rather than r->headers_out.status. The ngx.status API only read the latter, not the former.
This issue was already fixed in ngx_lua's master branch as commit 82ba941d:
https://github.com/chaoslawful/lua-nginx-module/commit/82ba941d
This fix will be included in the next release of ngx_lua (0.9.1) and ngx_openresty (1.4.3.1).
Thank you for the report!
Looks like a bug in the lua module was preventing this from being set properly.
https://github.com/chaoslawful/lua-nginx-module/commit/82ba941d
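Until you can move to a release containing the fix (0.9.1 or later), one possible workaround is to set the status explicitly at the top of the Lua file. This is just a sketch; it relies on the fact that only 502s ever reach this location:

```lua
-- fallback.lua: workaround sketch for ngx_lua releases before 0.9.1.
-- This file is only reached via `error_page 502 @fallback;`, so the
-- status code is known statically and can be assigned before use.
ngx.status = 502
ngx.log(ngx.ERR, "reported status is: " .. ngx.status)
```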
Related
I've got a default nginx config (I don't explicitly return 400 for any scenario):
http {
...
include /etc/nginx/mime.types;
default_type application/octet-stream;
...
include /etc/nginx/conf.d/*.conf;
}
and it returns
<html>
<head><title>400 Request Header Or Cookie Too Large</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>Request Header Or Cookie Too Large</center>
<hr><center>nginx</center>
</body>
</html>
* Closing connection 0
which is OK.
However I'd like to alter nginx config to return 431 instead of 400 for that kind of error, what's the easiest way to do it?
I tried looking at the following questions:
nginx 431 Request Header Fields Too Large
400 Bad Request Request Header Or Cookie Too Large nginx
nginx 431 Request Header Fields Too Large
but they're more about fixing the error instead of returning a different status code.
An alternative solution could be to bump the limit:
server {
    # ...
    large_client_header_buffers 4 32k;
    # ...
}
by following https://stackoverflow.com/a/19285146/17109505.
I also found a config that does map $status $status_text { ... } under the http block:
map $status $status_text {
    ...
    431 'Request Header Fields Too Large';
    ...
}
Will that be able to fix it?
You don't show anything from your real server config except the include /etc/nginx/conf.d/*.conf; line, so I can only guess what it really contains. You can try to use the error_page directive the following way:
error_page 400 =431 /431.html;
Create the following HTML file and put it somewhere (for example, in the /usr/share/nginx/html folder); again, you can put anything inside that file, this is just an example:
<html>
<head><title>431 Request Header Or Cookie Too Large</title></head>
<body>
<center><h1>431 Request Header Fields Too Large</h1></center>
<center>Request Header Or Cookie Too Large</center>
<hr><center>nginx</center>
</body>
</html>
This custom error page shouldn't reference any external assets (i.e. scripts, styles, images, etc.); however, you are OK to use any inline styles, scripts, or even BASE64-encoded images.
Now use the following location at your server block:
location = /431.html {
    internal;
    root /usr/share/nginx/html;
}
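Putting these pieces together, a minimal server block could look like the following sketch (the listen port and paths are just placeholders):

```nginx
server {
    listen 80;

    # remap nginx's built-in 400 for this class of errors to 431
    error_page 400 =431 /431.html;

    location = /431.html {
        internal;
        root /usr/share/nginx/html;
    }
}
```

One caveat: error_page 400 remaps every 400 response, not only the "Request Header Or Cookie Too Large" one; nginx does not distinguish between them at this level.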
I also found a config that does map $status $status_text { ... } under the http block:
map $status $status_text {
    ...
    431 'Request Header Fields Too Large';
    ...
}
Will that be able to fix it?
What an interesting solution! It uses the internal nginx variable $status to get the error message from the response status code, and then uses those $status and $status_text variables via Server Side Includes in a universal error.html error page. This is the first time I've seen such a technique; a very nice approach. However, I'm not sure it will be useful in your particular case. If you wanted to use a similar universal error page, you would need at least two error_page directives:
error_page 400 =431 /error.html; # change response status code from 400 to 431
error_page 401 402 403 ... /error.html; # leave other response status codes unchanged
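For completeness, a sketch of how such a universal error page could be wired up (the ssi directive and file path here are assumptions based on the technique described above):

```nginx
# in the http block: map status codes to reason phrases
map $status $status_text {
    431     'Request Header Fields Too Large';
    default 'Something went wrong';
}

server {
    error_page 400 =431 /error.html;
    error_page 401 402 403 404 /error.html;

    location = /error.html {
        internal;
        ssi on;   # enable Server Side Includes for the error page
        root /usr/share/nginx/html;
    }
}
```

error.html would then contain SSI directives such as <!--# echo var="status" --> and <!--# echo var="status_text" --> to render the code and message.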
I have an Nginx instance running as a reverse proxy. When the upstream server does not respond, I send a custom error page for the 502 response code. When the upstream server sends an error page, that gets forwarded to the client, and I'd like to show a custom error page in that case as well.
If I wanted to replace all of the error pages from the upstream server, I would set proxy_intercept_errors on to show a custom page on each of them. However, there are cases where I'd like to return the actual response that the upstream server sent: for example, for API endpoints, or if the error page has specific user-readable text relating to the issue.
In the config, a single server is proxying multiple applications that are behind their own proxy setups and their own rules for forwarding requests around, so I can't just specify this per each location, and it has to work for any URL that matches a server.
Because of this, I would like to send the custom error page, unless the upstream application says not to. The easiest way to do this would be with a custom HTTP header. There is a similar question about doing this depending on the request headers. Is there a way to do this depending on the response headers?
(It appears that somebody else already had this question and their conclusion was that it was impossible with plain Nginx. If that's true, I would be interested in some other ideas on how to solve this, possibly using OpenResty like that person did.)
So far I have tried using OpenResty to do this, but it doesn't seem compatible with proxy_pass: the response that the Lua code generates seems to overwrite the response from the upstream server.
Here's the location block I tried to use:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:65000;

    content_by_lua_block {
        ngx.say("This seems to overwrite the content from the proxy?!")
    }

    body_filter_by_lua_block {
        ngx.arg[1] = "Truncated by code!"
        ngx.arg[2] = false
        if ngx.status >= 400 then
            if not ngx.resp.get_headers()["X-Verbatim"] then
                local file = io.open('/usr/share/nginx/error.html', 'w')
                local html_text = file:read("*a")
                ngx.arg[1] = html_text
                ngx.arg[2] = true
                return
            end
        end
    }
}
I don't think you can send custom error pages based on a response header, since the only ways I know of to achieve that are the map and if directives. Both of these are evaluated before the request is sent to the upstream, so they can't possibly read the response headers.
However, you could do this using openresty and writing your own lua script. The lua script to do such a thing would look something like:
location / {
    body_filter_by_lua '
        if ngx.resp.get_headers()["Cust-Resp-Header"] then
            local file = io.open("/path/to/file.html", "r")
            local html_text = file:read("*a")
            file:close()
            ngx.arg[1] = html_text
            ngx.arg[2] = true
            return
        end
    ';
    # ...
}
You could also use body_filter_by_lua_block (enclosing your Lua code inside curly braces instead of writing it as an nginx string) or body_filter_by_lua_file (writing your Lua code in a separate file and providing the file path).
You can find how to get started with openresty here.
P.S.: You can read the response status code from the upstream using ngx.status. As far as reading the body is concerned, ngx.arg[1] contains the chunk of the upstream response body that we're modifying here. You can save ngx.arg[1] into a local variable, try to extract the error message from it with a regexp, and append it to the html_text variable later. Hope that helps.
Edit 1: Pasting here a sample working lua block inside a location block with proxy_pass:
location /hello {
    proxy_pass http://localhost:3102/;
    body_filter_by_lua_block {
        if ngx.resp.get_headers()["erratic"] == "true" then
            ngx.arg[1] = "<html><body>Hi</body></html>"
        end
    }
}
Edit 2: You can't use content_by_lua_block with proxy_pass, or else your proxy won't work. Your location block should look like this (assuming the X-Verbatim header is set to "false" (a string) when you have to override the error response body from the upstream):
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:65000;

    body_filter_by_lua_block {
        if ngx.status >= 400 then
            if ngx.resp.get_headers()["X-Verbatim"] == "false" then
                -- open the file for reading ("r"), not writing
                local file = io.open('/usr/share/nginx/error.html', 'r')
                local html_text = file:read("*a")
                file:close()
                ngx.arg[1] = html_text
                ngx.arg[2] = true
            end
        end
    }
}
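One caveat with this approach: body_filter_by_lua_block runs once per response body chunk, and replacing the body changes its length, so the upstream Content-Length header no longer matches. A possible refinement (same assumed X-Verbatim convention) clears the length header and emits the page only on the final chunk:

```nginx
location / {
    proxy_pass http://localhost:65000;

    header_filter_by_lua_block {
        if ngx.status >= 400 and ngx.resp.get_headers()["X-Verbatim"] == "false" then
            -- the replacement body has a different length
            ngx.header.content_length = nil
        end
    }

    body_filter_by_lua_block {
        if ngx.status >= 400 and ngx.resp.get_headers()["X-Verbatim"] == "false" then
            if ngx.arg[2] then  -- eof: last chunk of the upstream body
                local file = io.open("/usr/share/nginx/error.html", "r")
                ngx.arg[1] = file:read("*a")
                file:close()
            else
                ngx.arg[1] = nil  -- discard intermediate chunks
            end
        end
    }
}
```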
This is somewhat the opposite of what was requested, but I think it can fit anyway: it shows the original response unless the upstream says what to show instead.
There is a set of custom X-Accel headers that NGINX evaluates from upstream responses. X-Accel-Redirect lets you tell NGINX to process another location instead. Below is an example of how it can be used.
This is a Flask application that gives 50/50 normal responses and errors. The error responses come with an X-Accel-Redirect header, instructing NGINX to reply with the contents of the @error_page location.
import flask
import random

application = flask.Flask(__name__)

@application.route("/")
def main():
    if random.randint(0, 1):
        resp = flask.Response("Random error")  # upstream body contents
        resp.headers['X-Accel-Redirect'] = '@error_page'  # the header
        return resp
    else:
        return "Normal response"

if __name__ == '__main__':
    application.run("0.0.0.0", port=4000)
And here is a NGINX config for that:
server {
    listen 80;

    location / {
        proxy_pass http://localhost:4000/;
    }

    location @error_page {
        return 200 "That was an error";
    }
}
Putting these together, you will see either "Normal response" from the app, or "That was an error" from the @error_page location ("Random error" will be suppressed). With this setup you can create a number of various locations (@error_502, @foo, etc.) for various errors and make your application use them.
I am trying to add rate limiting using Nginx. I need to apply it to my domain.com/api/index.php file.
When I add the following location block, Nginx returns a 405 (Method Not Allowed) error on the URL when it passes the user request, and the rest of the requests return a 503 error.
I am trying to fix this 405 error. I believe I am doing something wrong, but I am not able to find it. I am new to Nginx and this is the first time I am attempting this. The following are my settings.
location = /domain.com/api/index.php {
    root /var/www/;
    limit_req zone=mylimit burst=2 nodelay;
}
Upon testing, this URL returns a 405 error, and rate-limited requests return a 503 after every 2 connections. Can anyone please tell me what's wrong here and how to fix it?
I am still not able to figure out the issue and its fix. Can anyone please help?
I have nginx reverse proxying a backend. I need to change the status code of all of the backend's 403 responses to 401, preserving everything else (this is needed to make a 3rd-party client app work with the backend).
My current configuration is
server {
    ...
    error_page 403 = @unauthorized;

    location / {
        ...
        proxy_intercept_errors on;
    }

    location @unauthorized {
        return 401;
    }
}
The problem with this configuration is that all of the original headers/content are lost. Is there a way to modify only the status code, instead of responding with a whole new response? Or maybe get access to the original response and copy everything from it in @unauthorized?
Thanks.
I'm using Nginx (version 1.9.9) as a reverse proxy to my backend server. It needs to perform authentication/authorization based on the contents of POST requests, and I'm having trouble reading the POST request body in my auth_request handler. Here's what I've got.
Nginx configuration (relevant part):
server {
    location / {
        auth_request /auth-proxy;
        proxy_pass http://backend/;
    }

    location = /auth-proxy {
        internal;
        proxy_pass http://auth-server/;
        proxy_pass_request_body on;
        proxy_no_cache "1";
    }
}
And in my auth-server code (Python 2.7), I try to read the request body like this:
import BaseHTTPServer

class AuthHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def get_request_body(self):
        content_len = int(self.headers.getheader('content-length', 0))
        content = self.rfile.read(content_len)
        return content
I printed out content_len and it had the correct value. However, self.rfile.read() simply hangs, and eventually it times out and returns "[Errno 32] Broken pipe".
This is how I posted test data to the server:
$ curl --data '12345678' localhost:1234
The above command hangs as well and eventually times out and prints "Closing connection 0".
Any obvious mistakes in what I'm doing?
Thanks much!
The code of the nginx-auth-request-module is annotated at nginx.com. The module always replaces the POST body with an empty buffer.
In one of the tutorials, they explain the reason, stating:
As the request body is discarded for authentication subrequests, you will need to set the proxy_pass_request_body directive to off and also set the Content-Length header to a null string
The reason for this is that auth subrequests are sent as HTTP GET requests, not POST. Since a GET has no body, the body is discarded. The only workaround with the existing module is to pull the needed information out of the request body and put it into an HTTP header that is passed to the auth service.
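Following that advice, the auth-proxy location from the question could be adjusted like this (the X-Original-URI header name is just a common convention, not something the module requires):

```nginx
location = /auth-proxy {
    internal;
    proxy_pass http://auth-server/;
    proxy_pass_request_body off;          # the subrequest body is discarded anyway
    proxy_set_header Content-Length "";   # don't advertise a body that isn't sent
    # pass whatever the auth service needs as headers instead
    proxy_set_header X-Original-URI $request_uri;
}
```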