I have an app that generates pictures, with Nginx in front of it. I need nginx to cache with a large proxy_cache_valid value, so I set it to 365 days. But the cache expires after 10 minutes, and my app generates the same image again. How do I make the nginx cache last longer? Here is my nginx config:
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=cache:100m max_size=100G;
proxy_temp_path /var/lib/nginx/proxy 1 2;
proxy_ignore_headers Expires Cache-Control;
proxy_cache_use_stale error timeout invalid_header http_502;
proxy_no_cache $cookie_session;
proxy_cache_bypass $cookie_session $http_x_myupdate;
server {
server_name conv.site.com ;
client_max_body_size 32m;
location / {
proxy_cache cache;
proxy_cache_valid 365d;
proxy_cache_valid 404 1m;
proxy_ignore_headers X-Accel-Expires Set-Cookie;
proxy_pass http://127.0.0.1:3021 ;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
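Incidentally, the 10-minute expiry matches the default inactive=10m of proxy_cache_path, which evicts entries that go unrequested for that long regardless of proxy_cache_valid. A sketch of the directive with an explicit inactive, assuming eviction rather than expiry is the cause:

```nginx
# Sketch: proxy_cache_path defaults to inactive=10m, which removes
# entries not accessed for 10 minutes even if proxy_cache_valid is
# much longer. Raising inactive alongside the validity period
# (the 365d value here is an assumption):
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=cache:100m
                 max_size=100G inactive=365d;
```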
My nginx location block is:
location ^~ /get/preview {
add_header X-Proxy-Cache $upstream_cache_status;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_ignore_headers Cache-Control Set-Cookie;
proxy_ssl_protocols TLSv1.3;
proxy_ssl_session_reuse on;
proxy_cache upstream;
proxy_cache_key $scheme$host$uri$is_args$args;
proxy_cache_methods GET HEAD;
proxy_cache_min_uses 0;
proxy_cache_valid 200 301 302 1h;
proxy_cache_use_stale updating;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_pass https://tar.backend.com;
}
This will be a HIT after the 1st request:
https://example.com/get/preview?fileId=17389&x=256&y=256&a=true&v=5fe320ede1bb5
This is always a MISS:
https://example.com/get/preview.png?file=/zedje/118812514_3358890630894241_5001264763560347393_n.jpg&c=5fe3256d45a8c&x=150&y=150
You should check the "Expires" header from your upstream. The documentation says: "parameters of caching may be set in the header fields 'Expires' or 'Cache-Control'."
Another option: maybe you have another location block for .(png|jpg|css|js)$ files with different options.
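As a concrete sketch of the first suggestion: the location above already ignores Cache-Control and Set-Cookie, but not Expires, so a near-term upstream Expires header could still shorten caching. Adding Expires (the assumption being that the upstream sends one on the MISS responses):

```nginx
# Sketch: also ignore Expires so a short upstream expiry cannot
# override the configured proxy_cache_valid 1h. Assumption: the
# upstream sends a near-term Expires header on these responses.
proxy_ignore_headers Cache-Control Set-Cookie Expires;
```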
I am using a public URL and adding it to my Nginx reverse proxy. I get a bad request error when I run with my nginx.conf configuration file. I also have an access token that needs to be added.
Below is my nginx.conf file.
Any recommendations?
worker_processes 1;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name localhost 127.0.0.1;
client_max_body_size 0;
set $allowOriginSite *;
proxy_pass_request_headers on;
proxy_pass_header Set-Cookie;
# External settings, do not remove
#ENV_ACCESS_LOG
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_set_header X-Forwarded-Proto $scheme;
location /test/ {
proxy_pass https://a***.***.com;
}
}
}
403 ERROR
The request could not be satisfied.
I fixed the issue by using the Nginx rewrite module. I have posted a link below.
http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
location /test/ {
rewrite ^/test(.*) https://URL$1 break;
}
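An alternative sketch without the rewrite module: when proxy_pass is given a URI part, nginx replaces the matched location prefix with it. The upstream host below is a hypothetical placeholder:

```nginx
# Equivalent sketch without rewrite: the trailing "/" in proxy_pass
# replaces the matched "/test/" prefix before the request is sent
# upstream (hypothetical upstream host).
location /test/ {
    proxy_pass https://upstream.example.com/;
}
```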
Stack used:
Nginx -> Uwsgi (proxy passed) -> Django
I have an API that takes around 80 seconds to execute a query. Nginx closes the connection with the upstream server after 60 seconds. This was found in the nginx error log:
upstream prematurely closed connection while reading response header from upstream
The uWSGI and django application logs do not show anything weird.
This is my nginx configuration:
server {
listen 80;
server_name xxxx;
client_max_body_size 10M;
location / {
include uwsgi_params;
proxy_pass http://127.0.0.1:8000;
proxy_connect_timeout 10m;
proxy_send_timeout 10m;
proxy_read_timeout 10m;
proxy_buffer_size 64k;
proxy_buffers 16 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
proxy_pass_header Set-Cookie;
proxy_redirect off;
proxy_hide_header Vary;
proxy_set_header Accept-Encoding '';
proxy_ignore_headers Cache-Control Expires;
proxy_set_header Referer $http_referer;
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
How do I increase the timeout? I have tried setting the proxy_pass timeout directives, but they do not seem to be working.
Okay, so I managed to solve this issue by replacing proxy_pass with uwsgi_pass.
This is how my nginx conf looks now:
server {
listen 80;
server_name xxxxx;
client_max_body_size 4G;
location /static/ {
alias /home/rmn/workspace/mf-analytics/public/;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/tmp/uwsgi_web.sock;
uwsgi_read_timeout 600;
}
}
And I had to set the socket parameter in my uWSGI ini file.
For some reason, the proxy_pass timeouts just wouldn't take effect.
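For reference, a minimal uWSGI ini sketch matching the socket path used above; only the socket line is taken from the nginx config, everything else (module, permissions, worker timeout) is an assumption:

```ini
; Minimal sketch of the matching uWSGI config. Only the socket path
; comes from the nginx conf above; module and chmod-socket are
; assumptions, and harakiri is set above the 80s query time so
; uWSGI itself does not kill the long-running worker.
[uwsgi]
socket = /tmp/uwsgi_web.sock
chmod-socket = 664
module = mysite.wsgi:application
harakiri = 600
```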
I have the following nginx configuration for one of my virtual servers:
upstream app_example_https {
server 127.0.0.1:1340;
}
proxy_cache_path /Users/jaanus/dev/nginxcache levels=1:2 keys_zone=S3CACHE:10m;
proxy_cache_key "$scheme$request_method$host$request_uri";
server {
listen 0.0.0.0:1338;
server_name localhost;
ssl on;
ssl_certificate /Users/jaanus/dev/devHttpsCert.pem;
ssl_certificate_key /Users/jaanus/dev/devHttpsKey.pem;
location = / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache S3CACHE;
proxy_cache_valid any 60m;
add_header X-Cached $upstream_cache_status;
proxy_pass http://something.s3-website-us-east-1.amazonaws.com/;
}
location /static/ {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache S3CACHE;
proxy_cache_valid any 60m;
add_header X-Cached $upstream_cache_status;
proxy_pass http://something.s3-website-us-east-1.amazonaws.com/static/;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://app_example_https/;
proxy_redirect off;
}
}
What this does in English:
There’s an nginx frontend which serves requests either from a static Amazon S3 site, or an application server.
All requests to / (site root) and /static are reverse-proxied from Amazon S3. All other requests are reverse-proxied from the application server.
Now, the problem: there are two almost identical location blocks for S3. This was the only way I could make this configuration work, where two specific folders (root and /static) are served from S3 and everything else goes to the application server.
Two almost-identical blocks look clumsy and don't scale: when I add more such folders, I don't want to keep duplicating the blocks.
How do I merge the two locations into one location block while keeping everything working the same way?
You could put the repeating part into an external file and include it.
amazon.inc
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache S3CACHE;
proxy_cache_valid any 60m;
add_header X-Cached $upstream_cache_status;
proxy_pass http://something.s3-website-us-east-1.amazonaws.com;
your config
location = / {
include amazon.inc;
}
location /static/ {
include amazon.inc;
}
location / {
# proxy to you app
}
If you prefer to keep everything in one file, you could use this trick:
error_page 470 = @amazon;
location = / {
return 470;
}
location /static/ {
return 470;
}
location @amazon {
# proxy to amazon
}
You could use a regexp to merge several locations together, but I would not recommend doing that because it is harder to read and understand, and less efficient than simple prefix locations. But, just as an example:
# NOT RECOMMENDED
location ~ ^/($|static/) {
# proxy to amazon
}
I have an nginx cache configured as follows:
location / {
rewrite ^/(.*)$ /$1 break;
proxy_pass http://news;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-Protocol http;
proxy_read_timeout 480;
proxy_connect_timeout 480;
set $cache_key "$uri";
proxy_cache my-cache;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 30s;
proxy_cache_methods GET;
add_header X-Cache-Status $upstream_cache_status;
}
When I check the X-Cache-Status header of the response, the second time its value is HIT. The problem is that after about 20 seconds the response gives MISS again. The HTTP response code is 200. Any ideas?
Be aware of the inactive=time setting of proxy_cache_path in your nginx.conf; proxy_cache_valid does not override this value.
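A sketch of the point above: entries that go unrequested for longer than inactive are evicted from the cache before proxy_cache_valid expires, so the eviction window must be at least as long as the intended validity. The path, zone name, and sizes below are assumptions:

```nginx
# Sketch: if inactive is shorter than proxy_cache_valid (or left at
# a small value), unaccessed entries are evicted early and the next
# request is a MISS. Aligning inactive with the 10m validity used in
# the location block (path/zone/sizes are assumptions):
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my-cache:10m
                 inactive=10m max_size=1g;
```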