I use nginx with several FastCGI backends (php-cgi, mod-mono-fastcgi4). Now I need to send an additional HTTP header to the FastCGI backend, basically the same as proxy_set_header does when using nginx as a reverse proxy. But as far as I can tell, there is no such thing as fastcgi_set_header in nginx.
Does anybody have an idea how to do this anyway? I don't want to use additional nginx modules, as the solution must be easily deployable on a wide range of customer systems.
I took a quick look at the manual and I think the closest you will find is passing fastcgi parameters:
The request headers are transferred to the FastCGI-server in the form of parameters. In the applications and the scripts run from the FastCGI-server, these parameters are usually accessible in the form of environment variables. For example, the header "User-agent" is transferred as parameter HTTP_USER_AGENT. Besides the headers of the HTTP request, it is possible to transfer arbitrary parameters with the aid of directive fastcgi_param.
http://wiki.nginx.org/HttpFcgiModule#Parameters.2C_transferred_to_FastCGI-server.
fastcgi_param
syntax: fastcgi_param parameter value
http://wiki.nginx.org/HttpFcgiModule#fastcgi_param
The URLs to the nginx wiki articles above are broken.
nginx exposes request header values via variables prefixed with $http_, so the User-Agent request header is available via $http_user_agent.
Likewise, a request header named Chicken-Soup would be available via $http_chicken_soup.
The example below shows how to pass the Authorization HTTP request header to PHP scripts running under php-fpm (the PHP FastCGI process manager).
location ~ \.php$ {
fastcgi_pass unix:/path/to/socket;
fastcgi_index index.php;
fastcgi_param HTTP_AUTHORIZATION $http_authorization;
... other settings
}
Nginx now has:
fastcgi_pass_header Header-Name;
which can be used in your location rules. Note that it takes a single header field name and works in the response direction: it makes nginx pass an otherwise hidden field (such as Status or X-Accel-*) from the FastCGI server on to the client. For the request direction, nginx uses by default:
fastcgi_pass_request_headers on;
which passes all incoming headers from the request to the FastCGI backend.
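For illustration, a hedged sketch of what each directive controls (the backend address is hypothetical):
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    # on by default: forwards all client request headers to the backend
    fastcgi_pass_request_headers on;
    # response direction: pass the otherwise hidden Status field to the client
    fastcgi_pass_header Status;
}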
You can do this with the third-party module ngx_headers_more. After building nginx with this module included, you can do the following in your configuration:
location / {
more_set_input_headers 'Foo: bar baz';
...
}
I want to set up an NGINX server which provides the following functionality:
When a request is made to NGINX for the page at /path/to/page, it fetches the page at /path/to/page from the upstream server.
If the upstream server is down or NGINX can't connect to it for some reason, NGINX returns a cached version of the page if it has one.
If the cached file is over 6 hours old, don't use it, just return a 502.
If the upstream server is available, never use the cache.
I have an NGINX config here which I think should work based on my understanding of the docs, but it doesn't, and I can't see why. The problem is with point 4: this NGINX server returns the cached version of the file even if the upstream server is online.
daemon off;
error_log /dev/stdout info;
events {
}
http {
proxy_cache_path
"/home/jack/Code/NGINX Caching/Codebase/cache" # Cache path
keys_zone=cache:10m # Name of the cache, max size for keys 10 megabytes
levels=1:2 # Don't store all cached files in a single directory
max_size=500m # Max size of cache
inactive=6h; # Cached file deleted if not used within six hours
proxy_cache_valid 6h;
proxy_cache_key "$request_method$request_uri";
access_log /dev/stdout;
server {
listen 8080;
location ~ ^/(.+)$ {
proxy_pass http://0.0.0.0:8000/$1;
proxy_cache cache;
proxy_cache_valid 6h;
proxy_buffering on;
proxy_cache_use_stale error timeout;
}
}
}
Replace the proxy_cache_path with a path to a directory on your machine, and run another webserver on your machine on port 8000. When I modify a file served by the server on port 8000, NGINX doesn't see the change until I erase the cache. The issue is with NGINX and not my client (Firefox), even if I turn off caching in the browser, NGINX returns a 200 with the old file contents.
Can you please check if these two directives might help you:
proxy_cache_revalidate:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_revalidate
and
proxy_cache_use_stale: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale
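A minimal, untested sketch of combining them with the question's config (the zone name "cache" is taken from it):
location ~ ^/(.+)$ {
    proxy_pass http://0.0.0.0:8000/$1;
    proxy_cache cache;
    # re-check expired entries with the upstream via conditional requests
    proxy_cache_revalidate on;
    # fall back to a stale entry when the upstream errors out or times out
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
}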
There is a video from nginx.conf '17 online describing all the cool things you could achieve with caching: https://www.youtube.com/watch?v=xZrOjmAkFC8. Maybe this is also of interest to you.
So, it seems I misunderstood the NGINX proxy cache directives. The docs are quite confusing on this subject, so I'll lay it out point by point.
This official help page gives a decent overview of the various directives, but it makes no mention of something which, as it turns out, is a very important conceptual building block in understanding how NGINX caching works: the notion of a cached file being stale.
NGINX's default behavior is to always use the cache if it's there, rather than querying the upstream server. With the following minimal caching config, NGINX will query the upstream server the first time a page is accessed, and then use the cached version forever after that:
events {
}
http {
proxy_cache_path
/path/to/cache
keys_zone=my_cache:10m;
proxy_cache_key "$request_method$request_uri";
server {
listen 8080;
location ~ ^/(.+)$ {
proxy_pass http://0.0.0.0:8000/$1;
proxy_cache my_cache;
}
}
}
You can use the proxy_cache_valid directive to tell NGINX when a cached file should be considered "stale". For example, if we set proxy_cache_valid 5m, then 5 minutes after a cache file is created NGINX will stop serving it and will query the upstream server again on the next request. If the upstream is down, NGINX will return a 502. However, during those five minutes NGINX will still use the cache even if the upstream server is available, so this is still not what we want.
NGINX has another directive, proxy_cache_use_stale, which gives NGINX conditions under which it may use cached files even though they're stale. We can combine the two to get a server which caches pages, makes them stale immediately (or almost immediately), and then only uses them if the upstream is down:
events {
}
http {
proxy_cache_path
/path/to/cache
keys_zone=my_cache:10m;
proxy_cache_key "$request_method$request_uri";
server {
listen 8080;
location ~ ^/(.+)$ {
proxy_pass http://0.0.0.0:8000/$1;
proxy_cache my_cache;
proxy_cache_valid 1s;
proxy_cache_use_stale error timeout;
}
}
}
This config has almost the behavior we want, except that if the upstream server goes down for an extended period of time, NGINX will continue to use the cache indefinitely. As far as I know there is no way to tell NGINX to totally invalidate/clear a cached file after a given amount of time. Normally that's what proxy_cache_valid is for, but we're already using that for a different purpose, to make files stale after 1 second so they're only used when the upstream is down. We would need some next level after "stale" that means the file is completely invalidated, but I don't think that exists in NGINX.
So the simplest solution is to just clear the cache manually. It's sufficient to delete all files in the cache directory (and its subdirectories) which were last modified more than 6 hours ago, or whatever you want the expiry time to be. On a Linux system, you can run the following command every 5 minutes, for example:
find /path/to/cache -type f -mmin +360 -delete
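For example, as a crontab entry running every five minutes (assuming cron and GNU find are available; -mmin +360 matches files older than 6 hours):
*/5 * * * * find /path/to/cache -type f -mmin +360 -delete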
To bypass the cache if the upstream is up (max-age 1) and use the cache if it is down (proxy_cache_use_stale), I created the following config:
proxy_cache_path /app/cache/ui levels=1:2 keys_zone=ui:10m max_size=1g inactive=30d;
server {
...
location /app/ui/config.json {
proxy_cache ui;
proxy_cache_valid 1d;
proxy_ignore_headers Expires;
proxy_hide_header Expires;
proxy_hide_header Cache-Control;
add_header Cache-Control "max-age=1, public";
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
add_header X-Cache-Status $upstream_cache_status;
add_header X-Cache-Date $upstream_http_date;
proxy_pass http://app/config.json;
}
}
But the cache is not used when the upstream is down and the client only gets a 504 Gateway Timeout. I've already read the following articles:
https://nginx.org/ru/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale
How to configure NginX to serve Cached Content only when Backend is down (5xx Resp. Codes)?
https://serverfault.com/questions/752838/nginx-use-proxy-cache-if-backend-is-down
And it does not work as I expect. Any help is appreciated.
Let's discuss a really simple setup with two servers: one running apache2, serving a simple HTML page, and the other running nginx, reverse proxying to the first one.
http {
[...]
proxy_cache_path /var/lib/nginx/tmp/proxy levels=2:2 keys_zone=one:10m inactive=48h max_size=16g use_temp_path=off;
upstream backend {
server foo.com;
}
server {
[...]
location / {
proxy_cache one;
proxy_cache_valid 200 1s;
proxy_cache_lock on;
proxy_connect_timeout 1s;
proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
proxy_pass http://backend/;
}
}
}
This setup works for me. The most important difference is proxy_cache_valid 200 1s; which means that only responses with HTTP code 200 will be cached, and they will only be valid for 1 second. This implies that the first request for a certain resource will be fetched from the backend and put in the cache. Any further request for that same resource will be served from the cache for a full second. After that, the first request will go to the backend again, etc., etc.
The proxy_cache_use_stale directive is the important part in your scenario. It basically says in which cases nginx should still serve the cached version although the time specified by proxy_cache_valid has already passed. So here you have to decide in which cases you still want to serve from cache.
The directive's parameters are the same as for proxy_next_upstream.
You will need these:
error: In case the server is still up, but not responding or not responding correctly.
timeout: Connecting to the server, the request, or the response times out. This is also why you want to set proxy_connect_timeout to something low. The default is 60s, which is way too long for an end user.
updating: There is already a request for new content on its way. (Not really needed, but better from a performance point of view.)
The http_xxx parameters are not going to do much for you; when that backend server is down you will never get a response with any of these codes anyway.
In my real-life case, however, the backend server is also nginx, which proxies to different ports on localhost. So when nginx is running fine but any of those backends is down, the parameters http_502, http_503 and http_504 are quite useful, as these are exactly the HTTP codes I will receive.
The http_403, http_404 and http_500 responses I would not want to serve from cache. When a file is forbidden (403) or no longer on the backend (404), or when a script goes wrong (500), there is a reason for that. But that is my take on it.
This question, like the other similar ones linked to, is an example of the XY Problem.
A user wants to do X, wrongly believes the solution is Y, but cannot do Y and so asks for help on how to do Y instead of actually asking about X. This invariably results in problems for those trying to give an answer.
In this case the actual problem, X, appears to be that you would like to have a failover for your backend, would like to avoid spending money on a separate server instance, and would like to know what options are available.
The idea of using a cache for this is not completely off, but you have to approach and set up the cache like a failover server, which means it has to be a totally separate and independent system from the backend. This rules out proxy_cache, which is intimately linked to the backend.
In your shoes, I would set up a memcached server and configure it to cache your stuff, but not to ordinarily serve your requests except on a 50x error.
There is a memcached module that comes with Nginx that can be compiled in and used, but it does not have a facility to add items to memcached. You will have to do this outside Nginx (usually in your backend application).
A guide to setting memcached up can be found here or just do a web search. Once this is up and running, this will work for you on the Nginx side:
server {
location / {
# You will need to add items to memcached yourself here
proxy_pass http://backend;
proxy_intercept_errors on;
error_page 502 504 = @failover;
}
location @failover {
# Assumes memcached is running on Port 11211
set $memcached_key "$uri?$args";
memcached_pass host:11211;
}
}
Far better than the limited standard memcached module is the third-party memc module from OpenResty, which allows you to add stuff to memcached directly from Nginx.
OpenResty also has the very flexible lua-resty-memcached, which is actually the best option.
For both, you will need to compile them into your Nginx and familiarise yourself with how to set them up. If you need help with this, ask a new question here with the OpenResty tag or try the OpenResty support system.
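To give an idea of the lua-resty-memcached route, here is a rough, untested sketch of the @failover location reading straight from memcached (the key scheme and the memcached address are assumptions; your backend must store pages under the same keys):
location @failover {
    content_by_lua_block {
        local memcached = require "resty.memcached"
        local memc, err = memcached:new()
        if not memc then
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        memc:set_timeout(1000)  -- 1 second
        local ok, err = memc:connect("127.0.0.1", 11211)
        if not ok then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end
        -- assumed key scheme: pages stored under "$uri?$args"
        local res, flags, err = memc:get(ngx.var.uri .. "?" .. (ngx.var.args or ""))
        if not res then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end
        ngx.print(res)
    }
}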
Summary
What you actually need is a failover server.
This has to be separate and independent of the backend.
You can use a caching system for this, but it cannot be proxy_cache if you cannot live with getting cached results for the minimum time of 1 second.
You will need to extend a typical Nginx installation to do this.
It is not working because the http_500, http_502, http_503 and http_504 codes are expected from the backend. In your case the 504 is nginx's own code: the connection to the upstream timed out, so no backend response with that code ever arrived.
So you need to have the following:
proxy_connect_timeout 10s;
proxy_cache_use_stale ... timeout ...
or
proxy_cache_use_stale ... updating ...
or both.
Currently I have a proxy cache setup which is very vanilla:
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m;
server {
# ...
location / {
proxy_cache my_cache;
proxy_pass http://my_upstream;
}
}
Now I have the requirement to handle fingerprinted assets. Unfortunately, the fingerprint is in the first part of the URL.
Examples:
http://www.example.com/asd9f87asdf/assets/foobar.jpg
http://www.example.com/oihllk8asdf/assets/foobar.jpg
Both requests should ask for
/assets/foobar.jpg
from proxy_pass, and the first part of the URL (asd9f87asdf or oihllk8asdf) should be added to the key used in the cache.
Is it possible to extract that part of the URL and add it to the proxy cache key?
Could you just redirect any request for /*/assets to /assets? That way you still get the initial request (which does not need to be cached), but the redirect target would be cached.
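A rough sketch of that idea (the regex and the status code are assumptions; the zone and upstream names are reused from the question):
location ~ ^/[^/]+(/assets/.*)$ {
    # strip the fingerprint segment and redirect to the canonical asset path
    return 302 $1;
}
location /assets/ {
    proxy_cache my_cache;
    proxy_pass http://my_upstream;
}
Note that with a redirect the fingerprint is no longer part of the cache key, so this only fits if the fingerprint does not need to bust the cache.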
I am looking for a solution to save data sent via HTTP (e.g. as a POST) as quickly as possible (with the lowest overhead) via nginx (v1.2.9). I tried the following nginx configuration, but am not seeing any files written in the directory:
server {
listen 9199;
location /saveme {
client_body_in_file_only on;
client_body_temp_path /tmp/bodies;
}
}
What am I doing wrong? And/or is there a better way to accomplish this? (The data that is written should ideally be one file per request, and it does not matter if it is fairly "raw" in nature. Post-processing of the files will be done by a separate process via a queue.)
Basically, you need to combine log_format and fastcgi_pass. You can then use the access_log directive to specify where the saved variable should be dumped. Note that log_format is only allowed at the http level, so it has to sit outside the location:
http {
    log_format postdata $request_body;
    server {
        location = /saveme {
            access_log /var/log/nginx/postdata.log postdata;
            fastcgi_pass php_cgi;
        }
    }
}
It could also work with your method, but I think you're missing client_body_buffer_size and client_max_body_size.
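As for the original approach: as I understand it, the body is only read (and thus saved) once the request is handed off to some handler, so a pass directive is still needed. A rough, untested sketch, assuming a backend on 127.0.0.1:9998 that merely acknowledges the upload ($request_body_file is the nginx variable holding the path of the saved file):
location /saveme {
    client_body_temp_path /tmp/bodies;
    client_body_in_file_only on;
    client_body_buffer_size 128k;
    client_max_body_size 50m;
    # forward only the path of the saved file, not the body itself
    proxy_pass_request_body off;
    proxy_set_header X-Body-File $request_body_file;
    proxy_pass http://127.0.0.1:9998/;
}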
Do you mean saving the cache for an HTTP POST, while someone accesses and requests a file, on the hard disk rather than in memory?
I may suggest using proxy_cache_path and proxy_cache. The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
inactive=60m use_temp_path=off;
server {
...
location / {
proxy_cache my_cache;
proxy_pass http://my_upstream;
}
}
The local disk directory for the cache is called /path/to/cache
levels sets up a two‑level directory hierarchy under /path/to/cache/
keys_zone sets up a shared memory zone for storing the cache keys and metadata such as usage timers
max_size sets the upper limit of the size of the cache
inactive specifies how long an item can remain in the cache without being accessed
the proxy_cache directive activates caching of all content that matches the URL of the parent location block (in the example, /). You can also include the proxy_cache directive in a server block; it applies to all location blocks for the server that don’t have their own proxy_cache directive.
I have a server with Nginx installed.
I also have 2 domains pointing to that server (domain1.com and domain2.com). The first domain (domain1.com) is the front website. The other domain (domain2.com) is the CDN for static content like JS, CSS, images and font files.
I set up the domain config files and everything is running fine. The nginx server has PHP running on it.
My question is: how can I disable PHP on the second domain (domain2.com) unless the request has "?param=something" in the GET request?
It will be something like:
// PHP is disabled
if($_GET['param']){
// Enable PHP
}
or should I use:
location ~ /something {
deny all;
}
And keep PHP running?
Note: I need PHP to process the param I pass, to output some JS or CSS.
PHP with nginx is very different from PHP with Apache, since there is no mod_php equivalent for nginx (AFAIK).
PHP is handled by a totally separate daemon (php-fpm, or by passing the request to an Apache server, etc.). As a result, you can bypass PHP completely simply by letting nginx handle the request without passing it off to php-fpm or Apache. There is a good chance that your nginx configuration is already set up to only hand off .php files to php-fpm.
Now, if you're trying to have requests such as /some-style.css?foo=bar get handled by php, then I'd suggest simply segregating static resources from dynamic ones.
You could create a third domain, or simply use two separate directories.
/static/foo.css
vs
/dynamic/bar.css?xyz=pdq
You could then hand off to PHP inside the location blocks.
location ~ ^/static {
try_files $uri =404;
}
location ~ ^/dynamic {
try_files $uri =404;
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
With the above configuration, requests starting with /static will bypass PHP regardless of file extension (even .php), and requests starting with /dynamic will be passed on to php-fpm regardless of file extension (even .css).