NGINX in Docker as Cache - How to keep cache on Container restart? - nginx

I set up NGINX to cache tiles coming from an OpenStreetMap server.
My goal is to save bandwidth and get fast transfers, because the OSM server is very slow.
After filling up my cache with the most-used tiles, I lost them all on container restart.
But I want to keep the cache.
How do I do this?
Here is my config:
proxy_cache_path /TileCacheVol/tile levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
proxy_temp_path /TileCacheVol/tile/tmp;
add_header x-nginx-cache $upstream_cache_status;

upstream openstreetmap_backend {
    server c.tile.opentopomap.org;
    server b.tile.opentopomap.org;
    server a.tile.opentopomap.org;
}

server {
    listen 105;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X_FORWARDED_PROTO http;
        proxy_set_header Host $http_host;
        proxy_cache openstreetmap-backend-cache;
        # Cache duration (2y, 365d, 4m, ...)
        proxy_cache_valid 200 302 2y;
        proxy_cache_valid 404 1m;
        proxy_redirect off;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        if (!-f $request_filename) {
            proxy_pass http://openstreetmap_backend;
            break;
        }
    }
}
I searched for a solution but did not find one. I'm no expert on this... :-)

It seems that your question is not really nginx-related.
I don't know whether you need to run nginx in Docker, but if so, you need to ensure cache persistence using a Docker volume or a bind mount. In your case, you need to make /TileCacheVol persistent.
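As a minimal sketch, a named volume mounted at /TileCacheVol survives container restarts and removals. The service and volume names below are assumptions for illustration, not from the original question:

```yaml
# Hypothetical docker-compose.yml sketch -- adjust image, port, and paths to your setup.
services:
  tile-cache:
    image: nginx:stable
    ports:
      - "105:105"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # the config shown above
      - tile-cache-data:/TileCacheVol          # named volume: cache persists across restarts

volumes:
  tile-cache-data:
```

With plain `docker run`, the equivalent is `-v tile-cache-data:/TileCacheVol` (named volume) or `-v /host/path:/TileCacheVol` (bind mount).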

Related

nginx invalid URL prefix with rewrite

I'm using docker and running nginx alongside varnish.
Because I'm running docker, I've set the resolver manually at the top of the nginx configuration (resolver 127.0.0.11 ipv6=off valid=10s;) so that changes to container IPs will be picked up without needing to restart nginx.
This is the relevant part of the config that's giving me trouble:
location ~ ^/([a-zA-Z0-9/]+)$ {
    set $args ''; # clear out the entire query string
    set $card_name $1;
    set $card_name $card_name_lowercase;
    rewrite ^ /cards?card=$card_name break;
    proxy_set_header x-cache-key card-type-$card_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header REQUEST_URI $request_uri;
    proxy_http_version 1.1;
    set $backend "http://varnish:80";
    proxy_pass $backend;
    proxy_intercept_errors on;
    proxy_connect_timeout 60s;
    proxy_send_timeout 86400s;
    proxy_read_timeout 86400s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    error_page 503 /maintenance.html;
}
When I visit a URL for this, e.g. https://example.com/Test, I get 500 internal server error.
In the nginx error log, I see the following:
2022/04/27 23:59:45 [error] 53#53: *1 invalid URL prefix in "", client: 10.211.55.2, server: example.com, request: "GET /Test HTTP/2.0", host: "example.com"
I'm not sure what's causing this issue -- http:// is included in the backend, so it does have a proper prefix.
If I just use proxy_pass http://varnish:80, it works fine, but the backend needs to be a variable in order to force docker to use the resolver.
I've stumbled across a similar issue. I'm not sure why, but defining the
set $backend "http://varnish:80";
outside of the location block resolved the error for me.
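For reference, moving the variable definition up to the server level would look roughly like this. This is a sketch based on the configuration above, trimmed to the relevant directives, not a verified fix:

```nginx
server {
    # Docker's embedded DNS, so container IP changes are picked up without a reload
    resolver 127.0.0.11 ipv6=off valid=10s;

    # Defined at server level instead of inside the location block
    set $backend "http://varnish:80";

    location ~ ^/([a-zA-Z0-9/]+)$ {
        rewrite ^ /cards?card=$1 break;
        proxy_http_version 1.1;
        proxy_pass $backend;
    }
}
```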

Nginx isn't storing cache

I'm trying to enable nginx caching in its simplest form, but for some reason it's not working. I'm currently using nginx with Gunicorn and Flask on an EC2 instance.
This is my /etc/nginx/nginx.conf file:
user nginx;
...
proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;
proxy_cache_methods GET HEAD POST;

server {
    listen 80;
    access_log /var/log/nginx/agori.access.log main;
    error_log /var/log/nginx/agori.error.log;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache mycache;
        proxy_cache_valid any 48h;
        proxy_buffering on;
        proxy_pass http://unix:/home/ec2-user/src/project.sock;
    }
}
When I check the /var/cache/nginx folder, it's empty. These are the folder's permissions:
drwxrwxrwx 2 nginx root 6 May 13 14:03 nginx
These are the request and response headers:
PS: This is on mobile(ios)
It sounds to me like something in your nginx config might not be correct (a syntax error, or a directive not supported by your nginx version). In most cases I've encountered so far, that was the cause.
You probably know nginx's reverse proxy example, which features the following configuration:
http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

    server {
        location / {
            proxy_pass http://1.2.3.4;
            proxy_set_header Host $host;
            proxy_buffering on;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}
I compared that with your configuration file, and my debugging approach would be:
Does nginx log your requests in the access_log?
Check whether the example configuration works after minimal modifications.
Replace the any with a 200 for a start and see whether that works.
If that works, add the remaining config lines back step by step, e.g. the proxy_cache_methods line.
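One more diagnostic worth adding while testing, not mentioned above: nginx can report the cache result per response via the built-in $upstream_cache_status variable, so you can see MISS/HIT/EXPIRED in the browser instead of inspecting the cache directory:

```nginx
location / {
    proxy_cache mycache;
    proxy_cache_valid 200 48h;
    proxy_buffering on;
    proxy_pass http://unix:/home/ec2-user/src/project.sock;

    # Expose the cache result (MISS, HIT, EXPIRED, BYPASS, ...) as a response header
    add_header X-Cache-Status $upstream_cache_status;
}
```

A first request should return `X-Cache-Status: MISS` and a repeat request `HIT`; if the header never appears, the response is not passing through the cache at all.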

issue with loading React static files from NGINX web server

The NGINX web server takes about 3 minutes to load a React static file of about 3 MB, and that slow download hurts the application's load performance.
I have tried zipping the static files in my React build.
This is my current NGINX configuration:
upstream app1 {
    # Use ip_hash algo for session affinity
    least_conn;
    server 192.168.1.7:3000;
    server 192.168.3.7:3000;
}

location /app1 {
    # Force timeouts if the backend dies
    #proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    # Set timeouts
    proxy_connect_timeout 3600;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    send_timeout 3600;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    #proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass https://app1/;
    proxy_buffering off;
}

location ~* ^/static {
    proxy_pass https://app1$request_uri;
    proxy_buffering off;
}

location ~ ^/build {
    proxy_pass https://app1$request_uri;
}
I want to serve the static content from the NGINX server itself, without fetching it from the application server. I also want to compress the files down to a minimal size, e.g. around 10 kilobytes.
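A common pattern for this is to copy the React build output onto the nginx host and serve it from disk with gzip enabled, instead of proxying /static to the backend. A rough sketch; the root path here is an assumption, adjust it to wherever the build is deployed:

```nginx
location /static/ {
    # Serve files directly from disk instead of proxy_pass to the app servers
    root /var/www/app1/build;

    # Compress text assets on the fly
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_comp_level 6;

    # Long client-side caching; safe if build filenames are content-hashed
    expires 30d;
    add_header Cache-Control "public";
}
```

If the build step already produces pre-compressed .gz files, `gzip_static on;` (from the ngx_http_gzip_static_module) lets nginx serve those directly without compressing per request. Note that compression ratios depend on the content: a 3 MB JavaScript bundle will shrink substantially, but rarely to 10 KB.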

How to set up two Nginx containers as a reverse proxy in an active-passive set up with failover?

I have set up an Nginx container on a Linux EC2 server. My Nginx config file is as follows:
server {
    listen 80;
    server_name client-dev.com;

    location / {
        proxy_pass http://dev-client.1234.io:5001/;
        proxy_redirect off;
        ##proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}

server {
    listen 80;
    server_name client-test.com;

    location / {
        proxy_pass http://test-client-1234.io:5005/;
        proxy_redirect off;
        ##proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
It passes requests on different ports through to port 80. Now I need to make Nginx redundant in an active-passive mode, in case the Nginx container goes down or stops.
To do so, would I need to set up another Nginx container on the same server? If so, how should it be set up to fail over automatically?
I have looked at the "upstream" option, but as far as I can tell, it would not work for this case: the proxy_pass targets I have are external and dynamic, and I fetch them using a script from docker-cloud.
There is another way, named "docker-gen"; however, I'm not sure how useful it would be, and I would prefer another way if there is one.
Any help would be appreciated.
I can think of the following options:
Kubernetes: You can create a deployment for your nginx setup and use liveness probes. Kubernetes will probe the nginx container with the HTTP request/interval you provide; if the pod is not healthy, it will be killed and recreated. Using multiple nodes in your Kubernetes cluster, you can even mitigate node failure.
Docker Swarm: Using Docker swarm mode with multiple nodes, you can mitigate node failure, but nginx health should be checked by an external custom script, which can be done with bash and curl.
Standalone hosts with keepalived: This is the traditional nginx active/passive cluster using keepalived. You can also use this with Docker, but it would be messy, because all of your containers on one host will be passive.
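For the keepalived option, the two hosts share a floating virtual IP via VRRP; clients connect to that IP, and when the active host stops answering, the passive one takes it over. A minimal sketch of /etc/keepalived/keepalived.conf; the interface, router ID, and IP are placeholder assumptions:

```
vrrp_instance nginx_ha {
    state MASTER            # use BACKUP on the passive node
    interface eth0          # NIC carrying the virtual IP
    virtual_router_id 51    # must match on both nodes
    priority 100            # lower value (e.g. 90) on the passive node
    virtual_ipaddress {
        10.0.0.100          # floating IP that clients connect to
    }
}
```

The passive node runs the same config with `state BACKUP` and a lower `priority`; failover happens automatically when VRRP advertisements from the master stop.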

nginx: increase timeout to prevent 404 not found error?

I have a Django server running Gunicorn, and in front of that I have nginx. I serve static files directly from nginx, and pass other things through to Gunicorn.
I have some slow-running back-end queries, and I'm finding that nginx quite often times out before they return, so I see a 404 page.
Is there a way I can increase the timeout level?
This is my nginx conf file:
server {
    listen 443;
    client_max_body_size 4G;
    access_log /webapps/myapp/logs/nginx-access.log;
    error_log /webapps/myapp/logs/nginx-error.log;

    location /media/ {
        alias /webapps/myapp/myapp/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://hello_app_server;
            break;
        }
    }
}
I think perhaps I need proxy_read_timeout, but I'm not sure from the docs.
Try
proxy_read_timeout 120s;
Put that inside your proxy location.
The default is 60s, so try doubling it and go from there.
I'm not too confident about it, but I had something similar with a MySQL timeout on a server at work today, and doubling the value worked. Worth a try; hope it helps.
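Applied to the config shown in the question, that would look roughly like this (a sketch, only the relevant directives shown):

```nginx
location / {
    # Wait up to 2 minutes for a response from the upstream (default is 60s)
    proxy_read_timeout 120s;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://hello_app_server;
}
```

`proxy_read_timeout` bounds the wait between two successive reads from the upstream, which is the one that fires on slow back-end queries; `proxy_connect_timeout` and `proxy_send_timeout` cover the other directions and can be raised separately if needed.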
