I have a problem using nginx as a load balancer. I got it working as a load balancer, but I don't know how to make it cache static content from the proxied backend servers, such as HTML, CSS, JS, etc. That is, I want nginx to decide whether to cache based on the backend's response: if the content has changed, bypass the cache and send the request to the backend; if not, serve from the cache. I tried a lot and searched the internet, experimenting with directives such as proxy_cache_bypass and proxy_no_cache, but I couldn't get it to work. Is there any way to do this, if anyone has experience with this topic? These are the configurations:
upstream backend {
    server www.webserver1.com:443 max_fails=3 fail_timeout=15s;
    server www.webserver2.com:443 max_fails=3 fail_timeout=15s;
}

server {
    listen 443 ssl;
    rewrite_log on;
    error_log /var/log/nginx/lb.error.log;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Proxy-Cache $upstream_cache_status;

    ssl_certificate /etc/nginx/client.crt;
    ssl_certificate_key /etc/nginx/client.key;
    ssl on;

    location / {
        proxy_cache backcache;
        #proxy_cache_methods GET HEAD POST;
        #proxy_cache_bypass $cookie_nocache $arg_nocache;
        #proxy_no_cache $cookie_nocache $arg_nocache;
        proxy_cache_min_uses 1;
        #proxy_cache_revalidate on;
        #proxy_cache_valid 200 4m;
        proxy_cache_lock on;
        proxy_cache_background_update on;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_pass https://backend;
    }
}

server {
    listen 80;
    if ($http_x_forwarded_proto != 'https') {
        rewrite ^(.*) https://$host$1 redirect;
    }
}
These are the contents of a config file under /etc/nginx/conf.d/, which is included from the main config file, /etc/nginx/nginx.conf. These two lines are also in the main config file:
proxy_cache_path /var/lib/nginx/cache keys_zone=backcache:20m max_size=100m;
proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args$cookie_user";
Your backend servers could be the root cause of the problem if they are improperly configured, for example by sending restrictive Cache-Control headers on responses to static files.
According to the docs, by default NGINX respects the Cache-Control headers from origin servers: it does not cache responses with Cache-Control set to private, no-cache, or no-store, or with Set-Cookie in the response header.
You can permanently change this behavior by adding these directives:
proxy_ignore_headers Cache-Control;
proxy_cache_valid any 30m;
So the config will look like:
location / {
    proxy_cache backcache;
    proxy_cache_revalidate on;
    proxy_cache_min_uses 3;
    proxy_cache_valid 200 302 10m;
    proxy_cache_lock on;
    proxy_cache_background_update on;
    proxy_ignore_headers Cache-Control;
    proxy_cache_valid any 30m;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass https://backend;
}
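One note on the "send the request to the backend only if the content changed" requirement: that is exactly what proxy_cache_revalidate on (enabled above) provides. Once a cached entry expires, nginx revalidates it with a conditional request (If-Modified-Since / If-None-Match); if the backend answers 304 Not Modified, the entry is refreshed and served from cache without re-downloading the body. This only works if your backends send Last-Modified and/or ETag headers on their static files. The values below are illustrative:

proxy_cache_valid 200 302 10m;  # serve from cache for 10 minutes without asking the backend
proxy_cache_revalidate on;      # after expiry, send a conditional request instead of a full refetch

When revalidation succeeds, the X-Proxy-Cache header added above reports REVALIDATED instead of MISS; you can watch it cycle through MISS, HIT, and REVALIDATED with curl -sI against any static URL.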
Hope this helps you figure it out.
I get the following two errors in the console:
Access to XMLHttpRequest at 'https://api.domain1.com/rest/v1/reviews' from origin 'https://domain1.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
GET https://api.domain1.com/rest/v1/reviews net::ERR_FAILED 200
The Network tab of Chrome's inspector shows a list of requests, but only two are relevant to my issue:
Name: reviews; Method: OPTIONS; Status: 200; Type: preflight
Name: reviews; Method: GET + Preflight; Status: CORS error; Type: xhr
My Nginx config:
proxy_cache_path /tmp/backend_cache levels=1:2 keys_zone=backend_cache:250m max_size=250m inactive=2d use_temp_path=off;
proxy_cache_key "$scheme$request_method$http_domain$host$request_uri";
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_lock_age 30s;
proxy_cache_revalidate on;
proxy_cache_valid 200 302 30m;
proxy_cache_use_stale updating error timeout http_500 http_502 http_503 http_504;
upstream fastapi {
    ip_hash;
    server backend-fastapi:80;
}

server {
    listen 80 default_server;
    server_name api.domain1.com api.domain2.com;
    charset utf-8;
    keepalive_timeout 5;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 1m;
        proxy_connect_timeout 1m;
        proxy_pass http://fastapi;
        proxy_cache backend_cache;
    }
}
The preflight happens because I send a custom header.
If I remove the line proxy_cache backend_cache;, everything works without problems. Is there a way to use the cache and avoid the error?
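A common way to make caching coexist with CORS, sketched here under two assumptions (that the backend varies its Access-Control-Allow-Origin header by the request's Origin, and that the preflights themselves need not be cached), is to make the cache key origin-aware and leave OPTIONS uncached:

proxy_cache_key "$scheme$request_method$http_origin$host$request_uri";  # include the Origin header
proxy_cache_methods GET HEAD;  # the default; OPTIONS preflights are never served from the cache

Otherwise a response cached for a request without an Origin header (or for a different origin) is replayed without the Access-Control-Allow-Origin header the browser expects, which matches the error above.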
I've installed and set up senaite.lims, a Plone extension, on Plone 4.3.18 installed by the Unified Installer, adding senaite.lims to the buildout.cfg eggs.
It's running fine on port 8080, and I can get Nginx to proxy / to :8080, but as soon as I switch to https, the CSS of the site stops working.
I looked at the source, and the produced HTML page links to the stylesheet with http://..., which may or may not be the problem; if I open the .css file directly in the browser, it loads fine.
I tried both redirecting port 80 to https, and serving both an http and an https version, but neither got the page to render with CSS. If anyone has any tips, or sees something misconfigured in the nginx config below, any help would be greatly appreciated.
Here is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    default_type application/octet-stream;
    include /etc/nginx/mime.types;
    sendfile on;
    keepalive_timeout 75;

    upstream plone {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        listen 443 ssl http2;
        server_name 99.99.99.99; # changed for posting on SO
        ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
        ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
        error_log /var/log/nginx/nginx.vhost.error.log;

        location / {
            proxy_pass http://localhost:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_buffer_size 128k;
            proxy_buffers 8 128k;
            proxy_busy_buffers_size 256k;
        }
    }
}
You forgot to rewrite the URL, e.g.:
rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;
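That rewrite is what tells Zope's VirtualHostMonster the external scheme and host, so Plone generates absolute URLs (including the stylesheet links) with https:// instead of http://. Roughly, for an illustrative request:

# an incoming request for
#     https://senaite.mydomain.com/styles.css
# reaches the Zope backend as
#     /VirtualHostBase/https/senaite.mydomain.com/senaite/VirtualHostRoot/styles.css
# (assuming the Plone site id is "senaite"), so every URL Plone renders
# starts with https://senaite.mydomain.com/...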
Here is a complete working config for SENAITE:
server {
    listen 80;
    server_name senaite.mydomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name senaite.mydomain.com;

    # https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
    include snippets/ssl-senaite.mydomain.com.conf;
    include snippets/ssl-params.conf;
    include snippets/well-known.conf;

    access_log /var/log/nginx/senaite.access.log;
    error_log /var/log/nginx/senaite.error.log error;

    # Allow Cross-Origin Resource Sharing from our HTTP domain
    add_header "Access-Control-Allow-Origin" "http://senaite.ridingbytes.com";
    add_header "Access-Control-Allow-Credentials" "true";
    add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS";
    add_header "X-Frame-Options" "SAMEORIGIN";

    if ($http_cookie ~* "__ac=([^;]+)(?:;|$)" ) {
        # prevent infinite recursions between http and https
        break;
    }

    # rewrite ^(.*)(/logged_out)(.*) http://$server_name$1$2$3 redirect;

    location / {
        set $backend http://haproxy;

        # API calls take a different backend w/o caching
        if ($uri ~* "##API") {
            set $backend http://api;
        }

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;

        # proxy_pass $backend;
        proxy_pass http://plone;
    }
}
I'd like to run two instances of Odoo v10 on different URLs.
The first instance will hold multiple databases for our testing purposes, served at mydomain.com.
The second instance will hold demo databases for demonstrating Odoo to our clients, served at clients.mydomain.com.
Both instances should run on the same server.
I did a lot of research on how to achieve this, but I couldn't find any guide to doing it with an Nginx reverse proxy.
Here's my Nginx configuration file:
upstream backend-odoo {
    server 127.0.0.1:8069;
}

upstream backend-odoo-im {
    server 127.0.0.1:8072;
}

server {
    listen 80;
    add_header Strict-Transport-Security max-age=2592000;
    rewrite ^/.*$ https://example.com$request_uri? permanent;
}

server {
    listen 443 default;

    # ssl settings
    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    keepalive_timeout 60;

    # increase the upload file size limit
    client_max_body_size 300M;

    # proxy header and settings
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;

    # odoo log files
    access_log /var/log/nginx/odoo-access.log;
    error_log /var/log/nginx/odoo-error.log;

    # increase proxy buffer size
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    # force timeouts if the backend dies
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

    # enable data compression
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain application/x-javascript text/xml text/css;
    gzip_vary on;

    location / {
        proxy_pass http://backend-odoo;
    }

    location ~* /web/static/ {
        # cache static data
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://backend-odoo;
    }

    location /longpolling {
        proxy_pass http://backend-odoo-im;
    }
}
PS. I tried to set dbfilter = ^%d$ in the Odoo configuration file, but I got nothing.
Try dbfilter = %h$; that works better for me.
You have to rename your databases so that they match the URLs:
yourdomain.com gets yourdomain_com as the DB name.
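For reference, a minimal sketch of the relevant Odoo config (the file path is illustrative). dbfilter is a regular expression in which %h is replaced by the request's full hostname (and %d by its first label) before being matched against the database names; since an unescaped . matches any character, a request for clients.mydomain.com then matches a database named clients_mydomain_com:

# /etc/odoo/odoo.conf (illustrative path)
[options]
# %h expands to the request hostname, e.g. clients.mydomain.com;
# the regex dots match the underscores in the renamed database name
dbfilter = %h$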
I have a Rails app with an NGINX reverse proxy, and I am serving it on AWS. I have NGINX set to direct all traffic to HTTPS. The app and NGINX are on an EC2 behind an Elastic Load Balancer, and I'm terminating SSL on the ELB. When I point the domain at the ELB, the app works great, and the SSL redirecting works as expected.
I'm trying to use CloudFront, with the ELB as the CloudFront origin. When I point the domain at CloudFront, I get one of the following:
A TOO_MANY_REDIRECTS error in Chrome
Some of the assets on the page load and some don't. One page loads the CSS but no images; another loads the images but no CSS. The missing assets come back with a 504 error code.
It's not the certificate (I generated it through Amazon), and I don't think it's my CloudFront setup, since I'm using the exact same setup on a different app with no issues.
I think it's my NGINX setup, though I don't know why it would work behind the ELB but not CloudFront. I'm new to NGINX and may be doing something wrong. Here's my nginx.conf:
worker_processes auto;

events {
}

http {
    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_disable "msie6";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/javascript
               application/x-javascript
               application/json
               application/atom+xml;

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=APP:100m inactive=24h max_size=2g;
    proxy_cache_key "$scheme$host$request_uri";
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    proxy_http_version 1.1;

    upstream rails_app {
        server app:3000;
    }

    server {
        listen 80;
        server_name localhost;
        root /var/www/html;

        location / {
            proxy_redirect off;
            proxy_next_upstream error;
            if ($http_x_forwarded_proto != "https") {
                rewrite ^ https://$host$request_uri? permanent;
            }
            try_files $uri $uri/ @proxy;
        }

        location ~* \.(jpg|jpeg|svg|png|gif|ico|css|js|eot|woff|woff2|map)$ {
            proxy_cache APP;
            proxy_cache_valid 200 1d;
            proxy_cache_valid 404 5m;
            proxy_ignore_headers "Cache-Control";
            expires 1d;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://rails_app;
        }

        location @proxy {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://rails_app;
        }
    }
}
Can you see anything that would make CloudFront choke on this?
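One plausible mechanism for the redirect loop (an assumption about this setup, not a confirmed diagnosis): the https check above relies on the X-Forwarded-Proto header, which the ELB sets according to the connection it received. If CloudFront contacts the origin over plain HTTP, the ELB sends X-Forwarded-Proto: http on every request, nginx redirects to https, CloudFront fetches over http again, and the loop never ends. A sketch of a guard, assuming the distribution is configured to forward the CloudFront-Forwarded-Proto header (CloudFront only sends it to the origin when it is whitelisted):

# in the http block: treat the request as secure if either the ELB
# or CloudFront reports the viewer connection as https
map "$http_x_forwarded_proto,$http_cloudfront_forwarded_proto" $need_https_redirect {
    default  1;
    "~https" 0;  # either header contains "https"
}

# in the location block, in place of the if on $http_x_forwarded_proto:
if ($need_https_redirect) {
    rewrite ^ https://$host$request_uri? permanent;
}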
I have the following nginx configuration for one of my virtual servers:
upstream app_example_https {
    server 127.0.0.1:1340;
}

proxy_cache_path /Users/jaanus/dev/nginxcache levels=1:2 keys_zone=S3CACHE:10m;
proxy_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 0.0.0.0:1338;
    server_name localhost;
    ssl on;
    ssl_certificate /Users/jaanus/dev/devHttpsCert.pem;
    ssl_certificate_key /Users/jaanus/dev/devHttpsKey.pem;

    location = / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
        proxy_set_header Authorization '';
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers Set-Cookie;
        proxy_cache S3CACHE;
        proxy_cache_valid any 60m;
        add_header X-Cached $upstream_cache_status;
        proxy_pass http://something.s3-website-us-east-1.amazonaws.com/;
    }

    location /static/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
        proxy_set_header Authorization '';
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers Set-Cookie;
        proxy_cache S3CACHE;
        proxy_cache_valid any 60m;
        add_header X-Cached $upstream_cache_status;
        proxy_pass http://something.s3-website-us-east-1.amazonaws.com/static/;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://app_example_https/;
        proxy_redirect off;
    }
}
What this does in English:
There's an nginx frontend which serves requests either from a static Amazon S3 site or from an application server.
All requests to / (the site root) and /static are reverse-proxied from Amazon S3. All other requests are reverse-proxied from the application server.
Now, the problem: there are two almost identical location blocks for S3. This was the only way I could make the configuration work so that two specific paths (the root and /static) are served from S3 and everything else goes to the application server.
Two almost-identical blocks look dumb and don't scale: as I add more such folders, I don't want to keep duplicating blocks.
How do I merge the two locations into one location block while keeping everything working the same way?
You could put the repeating part into an external file and include it:
amazon.inc
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache S3CACHE;
proxy_cache_valid any 60m;
add_header X-Cached $upstream_cache_status;
proxy_pass http://something.s3-website-us-east-1.amazonaws.com;
Your config then becomes:

location = / {
    include amazon.inc;
}

location /static/ {
    include amazon.inc;
}

location / {
    # proxy to your app
}
If you prefer to keep everything in one file, you could use this trick: return an otherwise-unused HTTP status code from each location, and let error_page internally redirect those requests to a named location that does the proxying:

error_page 470 = @amazon;

location = / {
    return 470;
}

location /static/ {
    return 470;
}

location @amazon {
    # proxy to amazon
}
You could use a regexp to merge several locations together, but I would not recommend that: it is harder to read and understand, and less efficient than simple prefix locations. But, just as an example:

# NOT RECOMMENDED
location ~ ^/($|static/) {
    # proxy to amazon
}