Is there any way I can get nginx to not forward a specific request header to uwsgi?
I want to enable nginx basic auth, but if the Authorization header gets forwarded to my app it breaks things (for reasons I won't go into). If it were just a simple proxy_pass I could do proxy_set_header Authorization "";, but I don't think that works with uwsgi_pass, and there's no equivalent uwsgi_set_header as far as I can see.
Thanks.
Try the uwsgi_hide_header and uwsgi_ignore_headers directives:
uwsgi_hide_header
Syntax: uwsgi_hide_header field;
Default: —
Context: http, server, location
By default, nginx does not pass the header fields “Status” and “X-Accel-...” from the response of a uwsgi server to a client. The uwsgi_hide_header directive sets additional fields that will not be passed. If, on the contrary, the passing of fields needs to be permitted, the uwsgi_pass_header directive can be used.
uwsgi_ignore_headers
Syntax: uwsgi_ignore_headers field ...;
Default: —
Context: http, server, location
Disables processing of certain response header fields from the uwsgi server. The following fields can be ignored: “X-Accel-Redirect”, “X-Accel-Expires”, “X-Accel-Limit-Rate” (1.1.6), “X-Accel-Buffering” (1.1.6), “X-Accel-Charset” (1.1.6), “Expires”, “Cache-Control”, “Set-Cookie” (0.8.44), and “Vary” (1.7.7).
If not disabled, processing of these header fields has the following effect: “X-Accel-Expires”, “Expires”, “Cache-Control”, “Set-Cookie”, and “Vary” set the parameters of response caching; “X-Accel-Redirect” performs an internal redirect to the specified URI; “X-Accel-Limit-Rate” sets the rate limit for transmission of a response to a client; “X-Accel-Buffering” enables or disables buffering of a response; “X-Accel-Charset” sets the desired charset of a response.
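For orientation, here is a minimal sketch of how these directives would sit in a config (the location, socket, and header names are placeholders); note that both act on response headers coming back from the uwsgi server, not on request headers:
location /app/ {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/app.sock;           # assumed socket
    # hide a custom response header from clients
    uwsgi_hide_header X-Internal-Debug;      # placeholder header name
    # don't let caching-related response headers influence nginx
    uwsgi_ignore_headers Cache-Control Expires;
}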
It's probably too late for you, but for anyone who runs into the same problem, this answer provides a valid solution.
In this case the Authorization header can be cleared, so it is never passed on to the uwsgi backend, by using the following directive:
uwsgi_param HTTP_Authorization "";
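For completeness, a minimal sketch of a location that enables basic auth and blanks the header before handing the request to uwsgi; the auth file path and socket are assumptions:
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;   # assumed path
    include uwsgi_params;
    # overwrite the forwarded Authorization header with an empty value
    uwsgi_param HTTP_Authorization "";
    uwsgi_pass unix:/tmp/app.sock;               # assumed socket
}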
I have an app-facing Nginx Plus (R22) gateway which validates the JWT in the Authorization header. Lately I found that one of our legacy mobile apps had a bug in which the Authorization header has a typo: it is missing the space between the bearer keyword and the token (example: bearereyJ...).
I used a simple map to make sure I add the space, and stored the result in the $authorization variable, which works fine:
map "$http_authorization" $authorization {
~*^bearer(?<token>(.*))$ "bearer $token";
default $http_authorization;
}
I also set the Authorization header in my location, but my request is still getting rejected and I keep getting 401, even though, on inspection, the token is valid.
location ~ ...{
proxy_set_header Authorization $authorization;
proxy_pass ...;
}
How can I make sure I rewrite the header before the JWT validation happens?
Having asked that, my current workaround would be to set up two locations: the first rewrites the header and does not validate the token, then proxies to a second location which checks the modified header and proxies to its destination. Is that a good approach?
Thanks in advance!
Here's the workaround I had to do in order to make this work:
Set up the header rewrite inside the http block, to make sure there's a space between the bearer keyword and the token:
map "$http_authorization" $authorization {
~*^bearer(\s*)(?<token>(.*))$ "bearer $token";
default $http_authorization;
}
Proxy from one location to another: the first one is unauthenticated and rewrites the header, then proxies to a second location that actually authenticates:
location ~ ... {
    # no JWT validation here; just fix the header and hand off
    auth_jwt off;
    proxy_set_header Authorization $authorization;
    proxy_pass http://$upstream/reauthenticate/$request_uri;
}
location ~ /reauthenticate/(?<original_uri>(.*)) {
    # JWT validation applies here, now against the corrected header
    proxy_pass http://$upstream/$original_uri;
}
While we did not end up using this solution, I think it's worth keeping here in case someone is looking for it in the future. This is the better solution I was after, and it avoids the 499 status code:
map $http_authorization $token {
"~^Bearer\s?(.+)$" $1;
}
...
auth_jwt "test" token=$token;
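For reference, a minimal sketch of how this could sit in a config; the realm, key file, and upstream are assumptions, and auth_jwt/auth_jwt_key_file are NGINX Plus directives:
map $http_authorization $token {
    "~^Bearer\s?(.+)$" $1;
}
server {
    ...
    location /api/ {
        # validate the normalized token instead of the raw header
        auth_jwt "test" token=$token;
        auth_jwt_key_file /etc/nginx/jwk.json;   # assumed key file
        proxy_pass http://backend;               # assumed upstream
    }
}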
I have a web application that wants to access files from a third party site without CORS enabled. The requests can be to an arbitrary domain with arbitrary parameters. I'm sending a request to my domain containing the target encoded as a GET parameter, i.e.
GET https://www.example.com/proxy/?url=http%3A%2F%2Fnginx.org%2Fen%2Fdocs%2Fhttp%2Fngx_http_proxy_module.html
Then in Nginx I do
location /proxy/ {
resolver 8.8.8.8;
set_unescape_uri $dst $arg_url;
proxy_pass $dst;
}
This works for single files, but the target server sometimes returns a Location header, which I want to intercept and modify for the client to retry.
Basically I would like to escape $sent_http_location, append it to https://www.example.com/proxy/?url= and pass that back to the browser to retry.
I've tried doing
set_escape_uri $tmp $sent_http_location;
proxy_redirect $sent_http_header /pass/?v=$tmp;
but this doesn't work. I've also tried saving the Location header, then ignoring the incoming header with
proxy_hide_header
and replacing it with my own
proxy_set_header
but ignoring causes me to lose the variable saving it.
How can I configure Nginx to handle redirects this way, so that an encoded URL is returned to the user whenever the proxied site redirects?
There are several problems with your unsuccessful approach:
proxy_set_header sets the header that goes to the upstream server, not to the client. So even if $sent_http_location hadn't been empty, your configuration couldn't possibly work as you wanted it to.
$sent_http_<header> variables point to exactly the same area of memory as the response headers that will be sent to the client. So when proxy_hide_header takes effect, the specified header is removed from memory along with the value of the corresponding $sent_http_<header>.
set_escape_uri works at a very early stage of request processing, well before proxy_pass is called and the Location header is returned from the upstream server. So it always processes $sent_http_location while it is still empty, and the result is therefore always an empty variable too.
The last problem is the most serious. The only way to make set_escape_uri work after proxy_pass is to force Nginx to leave the current location and start the processing all over again. This can be done with the error_page trick:
location /proxy/ {
    resolver 8.8.8.8;
    set_unescape_uri $dst $arg_url;
    proxy_pass $dst;
    # treat the upstream's 301 as an interceptable response
    proxy_intercept_errors on;
    error_page 301 = @rewrite_301;
}
location @rewrite_301 {
    # at this point the upstream's Location header is available for escaping
    set_escape_uri $location $upstream_http_location;
    return 301 /pass/?v=$location;
}
Note the use of $upstream_http_location instead of $sent_http_location. When Nginx leaves the context of the location, it assumes that the request will be proxied to another upstream, or processed in some other way, and so it clears the headers received from the last proxy_pass to make room for new response headers.
Unlike $sent_http_<header> variables, which represent response headers that will be sent to the client, $upstream_http_<header> variables represent response headers that were received from the upstream. Because of that, they are only replaced with new values when the request is proxied to another upstream server. So, once set, these variables can be used at any moment; they will not be cleared.
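The answer doesn't show the /pass/ endpoint that receives the escaped URL; here is a minimal sketch of one possible handler, assuming it simply decodes the v= argument (using the same set-misc module as above) and proxies again:
location /pass/ {
    resolver 8.8.8.8;
    # decode the escaped URL from the v= query argument and proxy to it
    set_unescape_uri $dst $arg_v;
    proxy_pass $dst;
}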
We have a downstream application which sets some custom headers on requests from the browser before they hit nginx. nginx serves only static content.
ie browser >> application A >> nginx
The requirement is that nginx should return all the headers it receives, as is, to the downstream application, which would pass them back to the browser. By default it returns only the generic headers (cookies, expiry, etc.) and not the custom ones sent by the downstream application.
For instance, there is a header named appnumber which nginx receives with the value app01. I tried to set it explicitly with the following rule, so that it is returned if it exists, but that did not help, as it throws an error that variables are not allowed.
if ($appnumber) {
add_header appnumber $appnumber;
}
Can someone please guide me here?
The request headers are available as $http_<name> variables. You could try something like:
if ($http_appnumber) {
    add_header appnumber $http_appnumber;
}
Refer to http://nginx.org/en/docs/http/ngx_http_core_module.html and "nginx - read custom header from upstream server".
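A minimal sketch of how this could look inside a static-file location; the location and root are assumptions:
location /static/ {
    root /var/www;   # assumed document root
    # echo the custom request header back on the response, if present
    if ($http_appnumber) {
        add_header appnumber $http_appnumber;
    }
}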
I use nginx as a reverse proxy and I would like it to cache POST requests. My back-end is correctly configured to return appropriate cache-control headers for POST requests. In nginx I have configured:
proxy_cache_methods POST;
proxy_cache_key "$request_method$request_uri$request_body";
This works great for small HTTP POST requests. However I started noticing that for large requests (e.g. file uploads) it seems like the $request_body is ignored in the proxy_cache_key. When a form containing a file upload is submitted twice with completely different data, nginx will return the cached result.
What could cause this? How can I configure nginx to use the $request_body (or a hash of $request_body) in the proxy_cache_key even for large POST requests?
So it turns out that when $content_length > client_body_buffer_size, the request body is written to a file and the variable $request_body is empty ("").
See also http://mailman.nginx.org/pipermail/nginx/2013-September/040442.html
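Given that, one workaround is to raise client_body_buffer_size so the body stays in memory and $request_body remains populated; this is a sketch with placeholder sizes and names (a larger buffer costs memory per request, and a proxy_cache_path zone is assumed to exist elsewhere in the config):
location /api/ {
    # keep bodies up to this size in memory so $request_body is available
    client_body_buffer_size 10m;   # assumed limit, tune for your payloads
    client_max_body_size    10m;   # assumed; larger uploads are rejected
    proxy_cache my_cache;          # assumed cache zone
    proxy_cache_methods POST;
    proxy_cache_key "$request_method$request_uri$request_body";
    proxy_pass http://backend;     # assumed upstream
}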
Rather than using $request_body within the proxy_cache_key, you may more simply use $content_length.
Of course, this comes with its own limitations, but if you know which queries you will receive, it can also be a very interesting workaround.
proxy_cache_key "$scheme$request_method$host$request_uri$content_length";
You may alternatively include $request_body as well, to keep the desired behavior for smaller request payloads:
proxy_cache_key "$scheme$request_method$host$request_uri$request_body$content_length";