I have a single-page app and am trying to use the query parameter nocache=true to bypass the Nginx cache for the first response (the HTML file) and for ALL subsequent requests it initiates (CSS, JS, etc.).
According to the documentation, I can bypass the cache using a query parameter, but it is not working as expected.
Steps to reproduce the issue:
Use this minified generic configuration:
http {
    ...
    proxy_cache_path /var/temp/ levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

    server {
        ...
        location / {
            # Using angularjs.org as an example
            proxy_pass https://angularjs.org;
            proxy_set_header Host angularjs.org;
            proxy_cache STATIC;
            proxy_cache_valid 200 10m;
            proxy_cache_bypass $arg_nocache;
            add_header X-Cache-Status $upstream_cache_status always;
        }
    }
}
Expected:
The requests "http://servername" and "http://servername/css/bootstrap" (and any other subsequent request initiated by http://servername?nocache=true) should bypass the cache, i.e. their responses should contain
"X-Cache-Status: BYPASS".
Actual:
The response of "http://servername" contains "X-Cache-Status: BYPASS", but "http://servername/css/bootstrap" does not; instead, the value of "X-Cache-Status" is HIT/MISS/etc., depending on the cache status.
Am I using proxy_cache_bypass the wrong way, or do I need to do more to achieve the expected behavior?
Thanks!
I was able to solve this by using cookie_nocache.
Update the directive proxy_cache_bypass to:
proxy_cache_bypass $cookie_nocache;
If you need to bypass the cache, set a cookie named "nocache" to true (any value that is neither empty nor 0 will work). Since the browser sends its cookies with all subsequent requests, this works.
To quickly test this, open the browser console and add the cookie like this:
document.cookie="nocache=true"
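Note that proxy_cache_bypass accepts several parameters and bypasses the cache if any of them is non-empty and not "0", so you can keep the original query-parameter trigger alongside the cookie. A minimal sketch of the location from the question, under that assumption:

location / {
    proxy_pass https://angularjs.org;
    proxy_set_header Host angularjs.org;
    proxy_cache STATIC;
    proxy_cache_valid 200 10m;
    # Bypass when either the nocache cookie or the nocache query argument is set
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    add_header X-Cache-Status $upstream_cache_status always;
}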
Is it possible to configure NGINX as something like a multiple reverse proxy? So, instead of one proxy_pass:
location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}
to have multiple proxy_pass directives (relays)? I need something like a load balancer, but one that sends each request not to one of the servers but TO ALL of them (see the scheme below).
TO BE ACCURATE: IT'S NOT A REVERSE PROXY AND IT IS NOT LOAD BALANCING EITHER.
The response may be taken from any of them, maybe the last one, or it may even be "hardcoded" with some configuration directives - it does not matter (it's enough for it to be HTTP 200; it will be ignored)... So, the scheme should look like:
.----> server 1
/
<---> NGINX <-----> server 2 (response from 2nd server, but it maybe any of them!)
\
`----> server 3
Maybe some extension for NGINX? Is it possible at all, and if so, how can it be done?
Yes, it is possible.
What you are searching for is called mirroring. nginx has implemented it since version 1.13.4; see the mirror directive for more info.
Example:
location = /mirror1 {
    internal;
    proxy_pass http://mirror1.backend$request_uri;
}

location = /mirror2 {
    internal;
    proxy_pass http://mirror2.backend$request_uri;
}

...

location /some/path/ {
    mirror /mirror1;
    mirror /mirror2;
    proxy_pass http://primary.backend;
}
(You can also enable mirroring for a whole server, or even the whole http block, and disable it in locations where you don't need it.)
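For example, a hedged sketch of that server-wide variant, reusing the /mirror1 and /mirror2 locations above (the /health location is only a hypothetical spot where mirroring is switched off):

server {
    # Mirror every request handled by this server ...
    mirror /mirror1;
    mirror /mirror2;

    location / {
        proxy_pass http://primary.backend;
    }

    # ... except in locations where mirroring is explicitly disabled
    location /health {
        mirror off;
        return 200;
    }
}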
Alternatively, you could try post_action (but that is an undocumented feature and, if I remember correctly, it is deprecated).
I have an nginx server which acts as a load balancer.
nginx is configured to try up to 3 upstream servers per request:
proxy_next_upstream_tries 3;
I am looking for a way to pass the current try number of the request to the backend server - i.e. first, second or last.
I believe it can be done by passing this data in a header; however, how can I configure this in nginx, and where can I take this data from?
Thanks
I sent this question to Nginx support and they provided me this explanation:
As long as you are using the proxy_next_upstream mechanism for
retries, what you are trying to do is not possible. The request
which is sent to next servers is completely identical to the one
sent to the first server nginx tries - or, more precisely, this is the
same request, created once and then sent to different upstream
servers as needed.
If you want to know on the backend if it is handling the first
request or it processes a retry request after an error, a working
option would be to switch proxy_next_upstream off, and instead
retry requests on 502/504 errors using the error_page directive.
See http://nginx.org/r/error_page for examples on how to use
error_page.
So, I did as they advised me:
proxy_intercept_errors on;
location / {
    proxy_pass http://example.com;
    proxy_set_header NlbRetriesCount 0;
    error_page 502 404 = @fallback;
}

location @fallback {
    proxy_pass http://example.com;
    proxy_set_header NlbRetriesCount 1;
}
To bypass the cache when the upstream is up (max-age 1) and use the cache when it is down (proxy_cache_use_stale), I created the following config:
proxy_cache_path /app/cache/ui levels=1:2 keys_zone=ui:10m max_size=1g inactive=30d;

server {
    ...
    location /app/ui/config.json {
        proxy_cache ui;
        proxy_cache_valid 1d;
        proxy_ignore_headers Expires;
        proxy_hide_header Expires;
        proxy_hide_header Cache-Control;
        add_header Cache-Control "max-age=1, public";
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        add_header X-Cache-Status $upstream_cache_status;
        add_header X-Cache-Date $upstream_http_date;
        proxy_pass http://app/config.json;
    }
}
But the cache is not used when the upstream is down, and the client only gets a 504 Gateway Timeout. I've already read the following articles:
https://nginx.org/ru/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale
How to configure NginX to serve Cached Content only when Backend is down (5xx Resp. Codes)?
https://serverfault.com/questions/752838/nginx-use-proxy-cache-if-backend-is-down
It still does not work as I expect. Any help is appreciated.
Let's discuss a really simple setup with two servers: one running apache2, serving a simple HTML page, and the other running nginx, which reverse proxies to the first one.
http {
    [...]
    proxy_cache_path /var/lib/nginx/tmp/proxy levels=2:2 keys_zone=one:10m inactive=48h max_size=16g use_temp_path=off;

    upstream backend {
        server foo.com;
    }

    server {
        [...]
        location / {
            proxy_cache one;
            proxy_cache_valid 200 1s;
            proxy_cache_lock on;
            proxy_connect_timeout 1s;
            proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
            proxy_pass http://backend/;
        }
    }
}
This setup works for me. The most important difference is proxy_cache_valid 200 1s; It means that only responses with HTTP code 200 will be cached, and they will only be valid for 1 second. This implies that the first request for a given resource will be fetched from the backend and put into the cache. Any further request for that same resource will be served from the cache for a full second. After that, the first request will go to the backend again, and so on.
proxy_cache_use_stale is the important part in your scenario. It basically says in which cases nginx should still serve the cached version even though the time specified by proxy_cache_valid has already passed. So here you have to decide in which cases you still want to serve from the cache.
The directive's parameters are the same as for proxy_next_upstream.
You will need these:
error: in case the server is still up but not responding, or is not responding correctly.
timeout: connecting to the server, or the request or the response, times out. This is also why you want to set proxy_connect_timeout to something low. The default is 60s, which is way too long for an end user.
updating: there is already a request for new content on its way. (Not really needed, but better from a performance point of view.)
The http_xxx parameters are not going to do much for you; when that backend server is down, you will never get a response with any of these codes anyway.
In my real-life case, however, the backend server is also nginx, which proxies to different ports on the localhost. So when nginx is running fine but any of those backends is down, the parameters http_502, http_503 and http_504 are quite useful, as these are exactly the HTTP codes I will receive.
The http_403, http_404 and http_500 responses I would not want to serve from cache. When a file is forbidden (403), no longer on the backend (404), or a script goes wrong (500), there is a reason for that. But that is my take on it.
This, like the other similar questions linked to, is an example of the XY Problem.
A user wants to do X, wrongly believes the solution is Y, but cannot do Y and so asks for help on how to do Y instead of actually asking about X. This invariably results in problems for those trying to give an answer.
In this case, the actual problem, X, appears to be that you would like to have a failover for your backend but would like to avoid spending money on a separate server instance, and you would like to know what options are available.
The idea of using a cache for this is not completely off, but you have to approach and set up the cache like a failover server, which means it has to be a totally separate and independent system from the backend. This rules out proxy_cache, which is intimately linked to the backend.
In your shoes, I would set up a memcached server and configure it to cache your stuff but not ordinarily serve your requests, except on a 50x error.
There is a memcached module that comes with Nginx that can be compiled and used, but it does not have a facility to add items to memcached. You will have to do this outside Nginx (usually in your backend application).
A guide to setting up memcached can be found here, or just do a web search. Once it is up and running, this will work for you on the Nginx side:
server {
    location / {
        # You will need to add items to memcached yourself here
        proxy_pass http://backend;
        proxy_intercept_errors on;
        error_page 502 504 = @failover;
    }

    location @failover {
        # Assumes memcached is running on port 11211
        set $memcached_key "$uri?$args";
        memcached_pass host:11211;
    }
}
Far better than the limited standard memcached module is the third-party memc module from OpenResty, which allows you to add items to memcached directly from Nginx.
OpenResty also has the very flexible lua-resty-memcached, which is actually the best option.
In both cases, you will need to compile the module into your Nginx and familiarise yourself with how to set it up. If you need help with this, ask a new question here with the OpenResty tag or try the OpenResty support system.
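As an illustration only, a hedged sketch of a write endpoint using the memc module; the /memc location name, the key argument and the 11211 port are assumptions, not from the question:

location /memc {
    internal;
    # Key and expiry come from the subrequest arguments (hypothetical names);
    # the memcached command is derived from the request method by default
    set $memc_key $arg_key;
    set $memc_exptime 300;
    memc_pass 127.0.0.1:11211;
}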
Summary
What you actually need is a failover server.
This has to be separate and independent of the backend.
You can use a caching system for this, but it cannot be proxy_cache if you cannot live with getting cached results for a minimum time of 1 second.
You will need to extend a typical Nginx installation to do this.
It is not working because the http_500, http_502, http_503 and http_504 codes are expected from the backend. In your case, the 504 is generated by nginx itself, not received from the backend.
So you need to have the following:
proxy_connect_timeout 10s;
proxy_cache_use_stale ... timeout ...
or
proxy_cache_use_stale ... updating ...
or both.
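Putting that advice together with the location from the question, a minimal hedged sketch (zone names and paths as in the question) could look like:

location /app/ui/config.json {
    proxy_cache ui;
    proxy_cache_valid 1d;
    proxy_ignore_headers Expires;
    proxy_hide_header Expires;
    proxy_hide_header Cache-Control;
    add_header Cache-Control "max-age=1, public";
    # Fail fast when the upstream cannot be reached ...
    proxy_connect_timeout 10s;
    # ... and also serve stale content on nginx-side timeouts and while updating
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://app/config.json;
}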
I've created an intranet HTTP site where users can upload their files. I have created a location like this one:
location /upload/ {
    limit_except POST { deny all; }
    client_body_temp_path /home/nginx/tmp;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 10G;
    proxy_set_header X-upload /upload/;
    proxy_set_header X-File-Name $request_body_file;
    proxy_set_body $request_body_file;
    proxy_redirect off;
    proxy_pass_request_headers on;
    proxy_pass http://localhost:8080/;
}
Quite easy, as suggested in the official doc. When the upload is complete, the proxy_pass directive calls the custom URI and performs filesystem operations on the newly created temp file.
curl --request POST --data-binary "@myfile.img" http://myhost/upload/
Here's my problem: I need some kind of custom hook/operation telling me when the upload begins, something nginx can call before starting the HTTP stream. Is there a way to achieve that? I mean, before uploading big files I need to call a custom URL (something like proxy_pass) to inform the server about the upload and execute certain operations.
Is there a way to achieve it? I tried the echo-nginx module but did not succeed with these HTTP POSTs (binary form-urlencoded). I don't want to use external scripts to deal with the upload; I'd rather keep this kind of operation inside nginx (more performant).
Thanks in advance.
Ben
Self-replying.
I found the following directive, which solves my problem:
auth_request <something>
So I can do something like:
location /upload/ {
    ...
    # Pre auth
    auth_request /somethingElse/;
    ...
}

# Newly added section
location /somethingElse/ {
    ...
    proxy_pass ...;
}
This seems to be fine and working, useful for uploads as well as for general auth or basic prechecks.
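For completeness, here is a hedged sketch of how the two locations might fit together in this upload case. The /pre-upload/ path and backend endpoint are assumptions, not from the original setup; the subrequest is configured not to forward the request body (as in the auth_request documentation example), and the upload proceeds only if the subrequest returns a 2xx response:

location /upload/ {
    # Notify the backend before the upload is handed over (hypothetical hook)
    auth_request /pre-upload/;

    limit_except POST { deny all; }
    client_body_temp_path /home/nginx/tmp;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 10G;
    proxy_set_header X-File-Name $request_body_file;
    proxy_set_body $request_body_file;
    proxy_pass http://localhost:8080/;
}

location /pre-upload/ {
    internal;
    # The subrequest should carry headers only, so drop the body and its length
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_pass http://localhost:8080/pre-upload/;
}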
We're using nginx to provide a security layer in front of an AWS Presto cluster. We registered an SSL certificate for nginx-presto-cluster.our-domain.com.
Requests for Presto are passed through nginx with Basic authentication.
A SQL query to Presto results in multiple sequential requests to the server to fetch query results.
We created an nginx.conf that looks like this:
location / {
    auth_basic $auth;
    auth_basic_user_file /etc/nginx/.htpasswd;
    sub_filter_types *;
    sub_filter_once off;
    sub_filter 'http://localhost:8889/' 'https://presto.nginx-presto-cluster.our-domain.com/';
    proxy_pass http://localhost:8889/;
}
Presto's responses contain a nextUri from which to fetch results. The sub_filter rewrites these URIs from localhost:8889 to our secure domain, where they are passed through nginx again.
The problem:
The first response has a body that looks exactly as desired:
{
"id":"20171123_104423_00092_u7hmr"
, ...
,"nextUri":"https://presto.nginx-presto-cluster.our-domain.com/v1/statement/20171123_104423_00092_u7hmr/1"
, ...
}
The second request, however, looks like:
{
"id":"20171123_105250_00097_u7hmr"
, ...
, "nextUri":"http://localhost:8889/v1/statement/20171123_105250_00097_u7hmr/2"
, ...
}
We would have expected the rewrite to always work the same way.
Can you help us?
We resolved the issue by adding
proxy_set_header Accept-Encoding "";
into the config snippet above.
The reason is that the upstream response may be compressed if compression is enabled, and the string replacement done by sub_filter does not work on compressed content.
By telling the upstream that we do not accept any encoding, we prevent this compression.
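With the fix in place, the location block from the question would look like this (a sketch, domain and port as above):

location / {
    auth_basic $auth;
    auth_basic_user_file /etc/nginx/.htpasswd;
    # Ask the upstream for an uncompressed response so sub_filter can rewrite it
    proxy_set_header Accept-Encoding "";
    sub_filter_types *;
    sub_filter_once off;
    sub_filter 'http://localhost:8889/' 'https://presto.nginx-presto-cluster.our-domain.com/';
    proxy_pass http://localhost:8889/;
}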