How to set up two Nginx containers as a reverse proxy in an active-passive setup with failover? - nginx

I have set up an Nginx container on a Linux EC2 server. My Nginx config file is as follows:
server {
    listen 80;
    server_name client-dev.com;

    location / {
        proxy_pass http://dev-client.1234.io:5001/;
        proxy_redirect off;
        ##proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}

server {
    listen 80;
    server_name client-test.com;

    location / {
        proxy_pass http://test-client-1234.io:5005/;
        proxy_redirect off;
        ##proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
It accepts requests on port 80 and passes them to backends on different ports. Now I need to make Nginx redundant in an active-passive mode, in case the Nginx container goes down or stops.
To do so, would I need to set up another Nginx container on the same server? If so, how should it be set up so that failover happens automatically?
I have looked at the "upstream" option, but as far as I can tell it would not work for this case: the proxy_pass targets I have are external and dynamic, and I fetch them with a script from docker-cloud.
There is another tool named "docker-gen", but I'm not sure how useful it would be, and I would prefer a different approach if there is one.
Any help would be appreciated.

I can think of the following options:
Kubernetes: You can create a Deployment for your nginx setup and use liveness probes. Kubernetes will probe the nginx container with the HTTP request and interval you provide; if the pod is not healthy, it is killed and recreated. With multiple nodes in your Kubernetes cluster you can even survive node failure.
Docker Swarm: Using Docker swarm mode with multiple nodes, you can survive node failure, but nginx health has to be checked by an external custom script, which can be done with bash and curl.
Standalone hosts with keepalived: This is the traditional nginx active/passive cluster using keepalived. You can also use it with Docker, but it would be messy, because all of your containers on one host would be passive.
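For the Docker Swarm option, the external custom script could look roughly like the sketch below. The container names (nginx-active, nginx-passive) and the probe URL are assumptions; adjust them to your setup.

```shell
#!/usr/bin/env bash
# Decide which container should serve traffic, given the current active
# container and the result of its health probe (0 = healthy, non-zero = down).
next_active() {
  local current="$1" probe_rc="$2"
  if [ "$probe_rc" -eq 0 ]; then
    echo "$current"                       # active container is healthy, keep it
  elif [ "$current" = "nginx-active" ]; then
    echo "nginx-passive"                  # fail over to the passive container
  else
    echo "nginx-active"                   # fail back when the passive one dies
  fi
}

# Probe loop (guarded behind RUN_LOOP=1 so the helper above can be sourced
# and tested without starting the loop or requiring docker/curl).
if [ "${RUN_LOOP:-0}" = "1" ]; then
  active="nginx-active"
  while sleep 5; do
    curl -fsS --max-time 2 "http://localhost:80/" > /dev/null
    target="$(next_active "$active" "$?")"
    if [ "$target" != "$active" ]; then
      docker start "$target" && docker stop "$active"
      active="$target"
    fi
  done
fi
```

Run it with RUN_LOOP=1 under a supervisor (cron, systemd, a sidecar container) so the checker itself is restarted if it dies.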

Related

NGINX in Docker as Cache - How to keep cache on Container restart?

I set up NGINX to cache tiles coming from an OpenStreetMap server.
My goal is to save bandwidth and have fast transfers, because the OSM server is very slow.
After filling up my cache with the most-used tiles, I lost them all on container restart.
But I want to keep the cache.
How do I do this?
Here is my config:
proxy_cache_path /TileCacheVol/tile levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
proxy_temp_path /TileCacheVol/tile/tmp;
add_header x-nginx-cache $upstream_cache_status;

upstream openstreetmap_backend {
    server c.tile.opentopomap.org;
    server b.tile.opentopomap.org;
    server a.tile.opentopomap.org;
}

server {
    listen 105;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X_FORWARDED_PROTO http;
        proxy_set_header Host $http_host;
        proxy_cache openstreetmap-backend-cache;
        # Cache duration (2y, 365d, 4m, ...)
        proxy_cache_valid 200 302 2y;
        proxy_cache_valid 404 1m;
        proxy_redirect off;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        if (!-f $request_filename) {
            proxy_pass http://openstreetmap_backend;
            break;
        }
    }
}
I searched for a solution but did not find one. I'm no expert on this... :-)
It seems that your question is not really nginx-related.
I don't know whether you really need to run nginx in Docker, but if so, you need to ensure cache persistence using a Docker volume or a bind mount. In your case you need to make /TileCacheVol persistent.
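For example, with Docker Compose a named volume would keep /TileCacheVol across container restarts. This is a sketch; the service name, volume name, and config path are assumptions:

```yaml
# docker-compose.yml (sketch) - service and volume names are illustrative
services:
  tile-proxy:
    image: nginx:stable
    ports:
      - "105:105"
    volumes:
      - tilecache:/TileCacheVol               # named volume: survives restarts
      - ./nginx.conf:/etc/nginx/nginx.conf:ro # your cache config from above

volumes:
  tilecache:
```

A bind mount (`- /srv/tilecache:/TileCacheVol`) works the same way if you prefer to keep the cache at a known host path.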

Pass websockets and timeout values in nginx ingress controller

I want to configure the following settings in my nginx ingress controller deployment
proxy_socket_keepalive -> on
proxy_read_timeout -> 3600
proxy_write_timeout -> 3600
However, I am unable to find them among the annotations listed here, although they appear in the list of available nginx directives.
Why is that?
There is no proxy_write_timeout. I assume you meant the proxy_send_timeout.
Both:
nginx.ingress.kubernetes.io/proxy-send-timeout
and:
nginx.ingress.kubernetes.io/proxy-read-timeout
Can be found here and here.
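For reference, the two supported annotations are set on the Ingress object like this (a sketch; the Ingress name is a hypothetical placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
```

Note the values are quoted strings, as required for annotation values.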
As for proxy_socket_keepalive, unfortunately this option cannot be set via annotations. You may want to set it in the Nginx config itself, for example:
location / {
    client_max_body_size 128M;
    proxy_buffer_size 256k;
    proxy_buffers 4 512k;
    proxy_busy_buffers_size 512k;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_socket_keepalive on;
}

issue with loading React static files from NGINX web server

The NGINX web server takes about 3 minutes to load a React static file of about 3 MB, and that slow download hurts the application's load performance.
I have tried gzipping the static files in my React build.
This is my current configuration in NGINX:
upstream app1 {
    # Use the least_conn algorithm for load balancing
    least_conn;
    server 192.168.1.7:3000;
    server 192.168.3.7:3000;
}

location /app1 {
    # Force timeouts if the backend dies
    #proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

    # Set timeouts
    proxy_connect_timeout 3600;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    send_timeout 3600;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    #proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass https://app1/;
    proxy_buffering off;
}

location ~* ^/static {
    proxy_pass https://app1$request_uri;
    proxy_buffering off;
}

location ~ ^/build {
    proxy_pass https://app1$request_uri;
}
I want to serve the static content from the NGINX server itself, without having to fetch it from the application server. I also want to compress the files down to a minimal size, for example around 10 kilobytes.
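One way to do both is to copy the build output onto the NGINX host and serve it from disk with gzip enabled. A sketch, assuming the build is copied to a path like /var/www/app1 (that path and the location prefix are assumptions):

```nginx
# Serve the React build straight from disk instead of proxying to the app servers
location /static/ {
    root /var/www/app1;     # assumed location of the copied build output
    expires 7d;
    access_log off;
}

# Compress text-based assets on the fly; precompressing at build time and
# serving the .gz files (gzip_static on) avoids the per-request CPU cost.
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;
gzip_comp_level 6;
```

How small the result gets depends on the content; a 3 MB JavaScript bundle typically gzips to a fraction of its size, but a fixed target like 10 kB is not something compression can guarantee.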

nginx: [emerg] host not found in upstream

I'm trying to set up a new site on my server. I've updated the nginx settings to serve the new site, created the directories, set up the correct permissions and created the DNS entries for my domain.
However when I restart nginx I get the following error:
nginx: [emerg] host not found in upstream "petproject" in
/usr/local/nginx/conf/nginx-vhosts.conf:277 nginx: configuration file
/usr/local/nginx/conf/nginx.conf test failed
Could someone tell me how to fix this? I have identical settings for a different domain and directory and it works fine, so I can't quite pinpoint what's wrong here.
I have included my nginx.conf below.
server {
    listen 217.23.14.107:80;
    server_name petproject.com;
    access_log /var/log/nginx/petproject.com_access.log;
    error_log /var/log/nginx/petproject_error.log;
    root /home/petproject/laravel/public;

    location ~* \.(jpg|jpeg|gif|css|js|ico|rar|gz|zip|pdf|tar|bmp|xls|doc|swf|mp3|avi|png|htc|txt|flv)$ {
        access_log off;
        expires 7d;
    }

    location / {
        index index.php index.html index.htm;
        proxy_pass http://petproject:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }

    # deny access to apache .htaccess files
    location ~ /\.ht {
        deny all;
    }
}
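This error usually means the hostname in proxy_pass ("petproject") cannot be resolved when nginx parses the config at startup. One way to make the backend explicit is to define it as an upstream block (a sketch; the address is a placeholder assumption, not taken from the question):

```nginx
# Define the backend explicitly so nginx does not need DNS for "petproject"
upstream petproject {
    server 127.0.0.1:8080;    # placeholder: put the real backend address here
}
```

Alternatively, add a hosts entry or DNS record for "petproject" so the name resolves when nginx starts.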

nginx 403 Forbidden error

I'm trying to set up Graphite to work with Grafana in Docker, based on this project: https://github.com/kamon-io/docker-grafana-graphite
When I run my Dockerfile I get a 403 Forbidden error from nginx.
My nginx configuration is almost the same as the project's. I run my Dockerfiles on a server and test them from my Windows machine, so the configurations are not exactly the same... for example I have:
server {
    listen 80 default_server;
    server_name _;

    location / {
        root /src/grafana/dist;
        index index.html;
    }

    location /graphite/ {
        proxy_pass http://myserver:8000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host $host;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header Access-Control-Allow-Origin "*";
        add_header Access-Control-Allow-Methods "GET, OPTIONS";
        add_header Access-Control-Allow-Headers "origin, authorization, accept";
    }
}
But I still keep getting 403 Forbidden. The nginx error log says:
directory index of "/src/grafana/dist/" is forbidden
Stopping and running it again gives the same error.
I'm very new to nginx ... was wondering if there's something in the configurations that I'm misunderstanding.
Thanks in advance.
That's because you are hitting the first location block and the index file is not found.
A request to '/' will look for 'index.html' in '/src/grafana/dist'.
Confirm that:
1. 'index.html' exists.
2. You have the right permissions: nginx needs read access to the entire directory tree leading up to 'index.html'. That is, it must be able to read the directories 'src', 'src/grafana' and 'src/grafana/dist', as well as 'index.html' itself.
A hacky quick fix would be 'sudo chmod -R 755 /src', but I don't recommend it.
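A quick way to verify the requirement is to check that every directory on the path has the execute (traverse) bit and the file is readable. The sketch below recreates the layout under /tmp instead of touching the real /src:

```shell
# Recreate the layout under /tmp and check nginx-style access requirements:
# every parent directory needs the execute (traverse) bit, the file needs read.
root=/tmp/permcheck
mkdir -p "$root/src/grafana/dist"
echo '<html></html>' > "$root/src/grafana/dist/index.html"
chmod 755 "$root/src" "$root/src/grafana" "$root/src/grafana/dist"
chmod 644 "$root/src/grafana/dist/index.html"

# Walk the path; print any component that would block nginx
for d in "$root/src" "$root/src/grafana" "$root/src/grafana/dist"; do
  [ -x "$d" ] || echo "not traversable: $d"
done
[ -r "$root/src/grafana/dist/index.html" ] || echo "index.html not readable"
```

Run the same loop against the real '/src', '/src/grafana', '/src/grafana/dist' (as the user nginx runs under) to find exactly which component is blocking access, instead of chmod-ing the whole tree.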
