Nginx Reverse Proxy SSL / Minification (IIS 7)

I am trying to use Nginx as a reverse proxy for a few IIS servers. The goal is to have Nginx sit in front of the IIS / Apache servers, caching static items such as CSS / JS / images. I am also trying to get Nginx to automatically minify JS / CSS files using its perl module.
I found a sample script for minification here:
http://petermolnar.eu/linux-tech-coding/nginx-perl-minify-css-js/
With the script, everything works fine except that the reverse proxy breaks.
Questions:
Is what I am trying to accomplish even possible? I want Nginx to minify the scripts before saving them to cache.
Can Nginx automatically set the proper Expires headers so that static items are cached as long as possible, and only refetched when the query string changes (jquery.js?timestamp=march-2012)?
Can Nginx gzip the resources before sending them out?
Can Nginx forward requests, or serve up a "Down for Maintenance" page, if it cannot connect to the backend server?
Any help would be greatly appreciated.
Here is what I have in my sites-enabled/default so far.
server {
    location / {
        proxy_pass http://mywebsite.com;
        proxy_set_header Host $host;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    }
    location @minify {
        perl Minify::minify_handler;
    }
    location ~ \.css$ {
        try_files $uri.min.css @minify;
    }
    location ~ \.js$ {
        expires 30d;
    }
}

Nginx is an ideal reverse proxy; it also follows the Unix way of "do one thing and do it well". So I'd advise you to split content serving and the minification process apart, instead of using third-party plugins to do many things at once.
Best practice is to minify and obfuscate assets on your local system before you deploy to production; this is easy to say and not hard to do, see the Google way to compress static assets. Once your assets are ready to use, we can set up the nginx configuration.
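For example, a deploy step might minify and pre-compress the assets so nginx only ever serves ready-made files. A minimal sketch, assuming the uglify-js and clean-css npm CLIs and a gzip new enough to support -k; the paths are placeholders:
# minify JS and CSS ahead of deployment
uglifyjs javascripts/app.js -c -m -o javascripts/app.min.js
cleancss -o stylesheets/app.min.css stylesheets/app.css
# pre-compress for nginx's gzip_static; -k keeps the originals beside the .gz files
gzip -k -9 javascripts/app.min.js stylesheets/app.min.css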
Answers:
minify & obfuscate before deploying to production (as sketched above)
you can match assets by regexp (directory name or file extension):
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
    add_header Last-Modified "";
    add_header ETag "";
    break;
}
use gzip on and gzip_static on to serve pre-compressed files instead of compressing them on every request
use try_files to detect whether the maintenance page exists; a wiring sketch follows this snippet
try_files $uri /system/maintenance.html @mywebsite;
if (-f $document_root/system/maintenance.html) {
    return 503;
}
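On its own, return 503 renders nginx's default error page; to actually show the maintenance page you also need an error_page hook. A minimal sketch, assuming the file lives under the server's root (the @503 name matches the full config below):
error_page 503 @503;
location @503 {
    rewrite ^ /system/maintenance.html break;
}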
See the full nginx config for your case:
http {
    keepalive_timeout 70;
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_min_length 1100;
    gzip_buffers 64 8k;
    gzip_comp_level 3;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml;
    upstream mywebsite {
        server 192.168.0.1; # change it to your backend address
    }
    server {
        try_files $uri /system/maintenance.html @mywebsite;
        location @mywebsite {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://mywebsite;
        }
        location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
            add_header Last-Modified "";
            add_header ETag "";
            break;
        }
        if (-f $document_root/system/maintenance.html) {
            return 503;
        }
        error_page 503 @503; # hand 503s to the maintenance handler below
        location @503 {
            error_page 405 = /system/maintenance.html;
            if (-f $document_root/system/maintenance.html) {
                rewrite ^(.*)$ /system/maintenance.html break;
            }
            rewrite ^(.*)$ /503.html break;
        }
    }
}
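One thing this config drops from the question is the proxy cache. If you still want nginx to cache upstream responses, a minimal sketch reusing the question's STATIC zone name (the path and sizes are placeholders):
# in the http block: declare the cache zone
proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=STATIC:10m max_size=1g inactive=1d;

# inside location @mywebsite, next to the existing proxy_* directives
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;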

Related

NGINX configuration for gunicorn and prerender.io

I am currently serving my website using Nginx and Gunicorn.
In particular, Nginx is serving static files and Gunicorn is serving the REST API.
This is my current Nginx configuration:
worker_processes 2;
user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex on; # set to 'on' if nginx worker_processes > 1
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}
http {
    include mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log combined;
    sendfile on;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
    upstream app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response
        # for UNIX domain socket setups
        # server unix:/tmp/gunicorn.sock fail_timeout=0;
        # for a TCP configuration
        server 127.0.0.1:8181 fail_timeout=0;
    }
    server {
        listen 80;
        listen [::]:80;
        server_name www.miralytics.social;
        return 301 https://www.miralytics.social$request_uri;
    }
    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 443 default ssl;
        ssl_certificate /certificates/fullchain1.pem;
        ssl_certificate_key /certificates/privkey1.pem;
        server_name www.miralytics.social;
        gzip on;
        gzip_vary on;
        gzip_types text/plain text/html text/xml text/css application/x-javascript image/png image/jpeg application/javascript application/octet-stream application/json;
        gzip_proxied any;
        gzip_http_version 1.1;
        gzip_min_length 0;
        gzip_comp_level 9;
        gzip_buffers 16 8k;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
        keepalive_timeout 5;
        # path for static files
        root /home/edge7/UIBackend/dist;
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root /home/edge7/UIBackend/dist;
            expires 1d;
        }
        location /auth/register {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            proxy_pass http://localhost:8181;
        }
        location /auth/login {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            proxy_pass http://localhost:8181;
        }
        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }
        location @proxy_to_app {
            proxy_pass http://localhost:8181;
        }
        add_header Cache-Control no-cache; # (no cache for testing reasons)
    }
}
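As a side note, the two identical /auth locations could be collapsed into one regex location; a sketch, not from the original post:
location ~ ^/auth/(register|login)$ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # no URI part on proxy_pass, as required inside a regex location
    proxy_pass http://localhost:8181;
}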
Here is the official prerender configuration for Nginx, but as you can see it does not fit my current configuration, because I already have @proxy_to_app.
Does anyone have experience with this?
You can just modify your config a little bit. Where you have this:
location / {
    # checks for static file, if not found proxy to app
    try_files $uri @proxy_to_app;
}
You would want to change that to:
location / {
    proxy_set_header X-Prerender-Token YOUR_TOKEN;
    set $prerender 0;
    if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator") {
        set $prerender 1;
    }
    if ($args ~ "_escaped_fragment_") {
        set $prerender 1;
    }
    if ($http_user_agent ~ "Prerender") {
        set $prerender 0;
    }
    if ($uri ~* "\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|doc|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff|svg|eot)") {
        set $prerender 0;
    }
    # resolve using Google's DNS server to force DNS resolution and prevent caching of IPs
    resolver 8.8.8.8;
    if ($prerender = 1) {
        # setting prerender as a variable forces DNS resolution since nginx caches IPs and doesn't play well with load balancing
        set $prerender "service.prerender.io";
        rewrite .* /$scheme://$host$request_uri? break;
        proxy_pass http://$prerender;
    }
    # checks for static file, if not found proxy to app
    try_files $uri @proxy_to_app;
}
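To verify that the crawler branch is actually taken, you can compare a spoofed-user-agent request against a normal one (a quick smoke test; /some-route is a placeholder):
# should be proxied to service.prerender.io and come back prerendered
curl -A "googlebot" https://www.miralytics.social/some-route
# should come back as the regular single-page-app shell
curl https://www.miralytics.social/some-route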

Serving static assets for a single-page-app with nginx

I'm trying to figure out how to have nginx serve static assets, have a fallback to index.html, and forward to the API.
Currently, all I have working is the root route /, and the API forwarding.
This is the behavior that I'm after:
GET / -> nginx sends s3/index.html (current)
* /api -> nginx proxies to the puma server (current)
# Yet to figure out (and the reason for this question)
GET /sub-route -> nginx sends s3/index.html, and routing is handled by the js framework
GET *.css|.js|etc -> nginx forwards to the s3 bucket (all relative to index.html)
Here is my nginx.conf. It has some template placeholders in it because, as part of the deploy process, I run:
envsubst '$S3_BUCKET:$NGINX_PORT' < /app/deployment/nginx.template.conf > /app/deployment/nginx.conf
pid /app/tmp/nginx.pid;
events { }
http {
    upstream puma {
        server unix:///app/sockets/puma.sock;
    }
    server {
        listen ${NGINX_PORT} default_server deferred;
        server_name aeonvera.com;
        gzip on;
        gzip_disable "msie6";
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        root /app/public;
        access_log /app/log/nginx.access.log;
        error_log /app/log/nginx.error.log info;
        client_max_body_size 20M;
        keepalive_timeout 5;
        location ~ ^/api {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://puma;
        }
        # Send all other requests to the index.html
        # stored up on s3
        location / {
            # tell all URLs to go to the index.html
            # I got an error with this about having proxy_pass within a location...?
            # location ~* \.(?:ico|css|js|gif|jpe?g|png|woff2|woff|ttf)$ {
            #     proxy_pass "https://s3.amazonaws.com/${S3_BUCKET}/ember/"
            #
            #     gzip_static on;
            #     expires max;
            #     add_header Cache-Control public;
            # }
            # Don't know what this does
            rewrite ^([^.]*[^/])$ $1/ permanent;
            # only ever GET these resources
            limit_except GET {
                deny all;
            }
            # use google as dns
            resolver 8.8.8.8;
            proxy_http_version 1.1;
            proxy_set_header Host 's3.amazonaws.com';
            proxy_set_header Authorization '';
            # avoid passing along amazon headers
            proxy_hide_header x-amz-id-2;
            proxy_hide_header x-amz-request-id;
            proxy_hide_header x-amz-delete-marker;
            proxy_hide_header x-amz-version-id;
            # cookies are useless on these static, public resources
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers "Set-Cookie";
            proxy_set_header cookie "";
            proxy_buffering off;
            # s3 replies with 403 if an object is inaccessible; essentially not found
            proxy_intercept_errors on;
            # error_page 500 502 503 504 /500.html;
            # the actual static files
            proxy_pass "https://s3.amazonaws.com/${S3_BUCKET}/ember/index.html";
        }
    }
}
Update 1
I've added this above location /:
location ~ ^/(assets|fonts) {
    rewrite (.*) $1 break;
    proxy_pass "https://s3.amazonaws.com/${S3_BUCKET}/ember";
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}
but it gives an error:
nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /app/deployment/nginx.conf:53
The thinking behind this change is that, since all my assets are in known locations, I could tell nginx to proxy to that location and then have a rewrite (.*) / permanent; for my location /.
Update 2
I thought maybe I could rewrite the URL for /dashboard so that nginx would proxy_pass the index.html. No success:
rewrite ^/(.*) / break;
proxy_pass "https://s3.amazonaws.com/${S3_BUCKET}/ember/index.html";
That just redirects to S3's marketing page. Response headers:
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Date: Thu, 08 Feb 2018 13:22:49 GMT
Location: https://aws.amazon.com/s3/
Server: nginx/1.12.0
I think I figured it out, but it'd be super helpful if someone could verify, as I'm still fairly new to nginx.
location / {
    # ...
    proxy_set_header Host 's3.amazonaws.com';
    # ...
    rewrite ^/(.*) /${S3_BUCKET}/ember/index.html break;
    proxy_pass "https://s3.amazonaws.com/${S3_BUCKET}/ember/index.html";
}
Having the Host header is important here because, without it, rewrite will change the URL in the user's browser (which we don't want, as that would more than likely mess with the SPA's routing).
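The same trick should also clear up the Update 1 error: proxy_pass with a URI part is not allowed in a regex location, but rewriting onto the bucket path and proxying to the bare host is. A sketch under the same ${S3_BUCKET} assumption, not verified against this deployment:
location ~ ^/(assets|fonts) {
    proxy_set_header Host 's3.amazonaws.com';
    # prepend the bucket path; "break" stops further rewriting
    rewrite ^(.*)$ /${S3_BUCKET}/ember$1 break;
    # no URI part here, so it is legal inside a regex location
    proxy_pass https://s3.amazonaws.com;
    expires max;
    add_header Cache-Control public;
}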

Nginx - How to run multiple instances of Odoo with different subdomain names

I'd like to run two instances of Odoo v10 on different links.
The 1st instance will include multiple databases for our testing purposes, running at mydomain.com.
The second instance will hold demo databases for demonstrating Odoo to our clients, at clients.mydomain.com.
Both instances should be running on the same server.
I did a lot of research to figure out how to achieve this, but I didn't find any guide that could help me do it with an Nginx reverse proxy.
Here's my Nginx configuration file:
upstream backend-odoo {
    server 127.0.0.1:8069;
}
upstream backend-odoo-im {
    server 127.0.0.1:8072;
}
server {
    listen 80;
    add_header Strict-Transport-Security max-age=2592000;
    rewrite ^/.*$ https://example.com$request_uri? permanent;
}
server {
    listen 443 default;
    # ssl settings
    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    keepalive_timeout 60;
    # increase the upload file size limit
    client_max_body_size 300M;
    # proxy header and settings
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    # odoo log files
    access_log /var/log/nginx/odoo-access.log;
    error_log /var/log/nginx/odoo-error.log;
    # increase proxy buffer size
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;
    # force timeouts if the backend dies
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    # enable data compression
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain application/x-javascript text/xml text/css;
    gzip_vary on;
    location / {
        proxy_pass http://backend-odoo;
    }
    location ~* /web/static/ {
        # cache static data
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://backend-odoo;
    }
    location /longpolling {
        proxy_pass http://backend-odoo-im;
    }
}
PS. I tried to set dbfilter = ^%d$ in the Odoo configuration file, but I get nothing.
Try dbfilter = %h$; that works better for me.
You have to rename your databases so that they match the URLs:
yourdomain.com gets yourdomain_com as the DB name.
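If you need two truly separate Odoo processes (for example, different addons) rather than one instance filtered by hostname, the nginx side is just another upstream plus a server block keyed on server_name, with the existing server block getting server_name mydomain.com. A minimal sketch, assuming the second instance is configured with xmlrpc_port = 8169 (and its own longpolling port) in its own Odoo config file:
upstream backend-odoo-demo {
    server 127.0.0.1:8169; # second Odoo instance, assumed port
}
server {
    listen 443 ssl;
    server_name clients.mydomain.com;
    # ...same ssl and proxy header settings as above...
    location / {
        proxy_pass http://backend-odoo-demo;
    }
}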

Nginx "Proxy_Cache" Configuration Help Needed

I'm running into a really weird issue. I only want to enable proxy caching for "new-site.com". However, when doing so, Nginx is proxy caching all of my websites.
I've gone through all my vhost / config files and made sure that all "http" and "server" blocks were opened and closed correctly. It's my understanding that proxy_cache is only enabled for a site when you include (for example) "proxy_cache new-site;" in your website's "server" block.
In my "http" block, I load all of my websites' .conf files, but none of them include any proxy_cache directives.
What am I doing wrong?
Here is a snippet of my config file:
http {
    ...
    ...
    # nginx cache
    proxy_cache_path /www/new-site.com/httpdocs/cache levels=1:2
                     keys_zone=new-site:10m
                     max_size=50m
                     inactive=1440m;
    proxy_temp_path /www/new-site.com/httpdocs/cache/tmp 1 2;
    # virtual hosting
    include /etc/nginx/vhosts/*.conf;
}
Then here is my "new-site.com" vhost conf file:
server {
    listen xxx.xxx.xxx.xxx:80;
    server_name new-site.com;
    root /www/new-site.com/httpdocs;
    index index.php;
    ...
    ...
    proxy_cache new-site;
    location / {
        try_files $uri @backend;
    }
    location ~* \.php {
        include /usr/local/etc/nginx/proxypass.conf;
        proxy_ignore_headers Expires Cache-Control;
        proxy_cache_use_stale error timeout invalid_header updating http_502;
        proxy_cache_bypass $cookie_session $http_secret_header;
        proxy_no_cache $cookie_session;
        add_header X-Cache $upstream_cache_status;
        proxy_cache_valid 200 302 5m;
        proxy_cache_valid 404 1m;
        proxy_pass http://127.0.0.1:80;
    }
    location @backend {
        include /usr/local/etc/nginx/proxypass.conf;
        proxy_ignore_headers Expires Cache-Control;
        proxy_cache_use_stale error timeout invalid_header updating http_502;
        proxy_cache_bypass $cookie_session $http_secret_header;
        proxy_no_cache $cookie_session;
        add_header X-Cache $upstream_cache_status;
        proxy_cache_valid 200 302 5m;
        proxy_cache_valid 404 1m;
        proxy_pass http://127.0.0.1:80;
    }
    location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|css|js)$ {
        ....
    }
}
Once I moved the line "proxy_cache new-site;" into a "location" block, that resolved the issue for me.
I'm not sure why I had this issue when it sat outside a block, though.
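If you'd rather not depend on directive placement, proxy_cache off explicitly disables any caching inherited from an outer level. A sketch with placeholder names and ports:
# in any vhost that must never cache
server {
    listen 80;
    server_name other-site.com; # placeholder
    location / {
        proxy_cache off; # overrides anything inherited from http/server level
        proxy_pass http://127.0.0.1:8080; # placeholder backend
    }
}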

Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx.
I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the JavaScript and CSS files all fail to load: looking in the Network tab in Chrome dev tools, I can see that it is trying to load them via an https URL. E.g., one of the non-working file URLs is
https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216
I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for CSS and JS files.
If I click on this in the Network tab, it takes me to the above URL, which redirects to the http version. So the redirect seems to be working in some sense, but not when they're loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not.
Can anyone see what I'm doing wrong? Thanks, Max
Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block:
NOTE: the URLs in here (elearning.dev, cmw-dev.co.uk, etc.) are all just local host names, i.e. they're all just aliases for 127.0.0.1.
server {
    listen 443 ssl;
    keepalive_timeout 70;
    ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
    ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
    root /home/max/work/charanga/elearn_container/elearn;
    # ensure that we serve css, js, other statics when requested
    # as SSL, but if the files don't exist (i.e. any non /basket controller)
    # then redirect to the non-https version
    location / {
        try_files $uri @non-ssl-redirect;
    }
    # securely serve everything under /basket (/basket/checkout etc)
    # we need general too, because of the email/username checking
    location ~ ^/(basket|general|cmw/account/check_username_availability) {
        # make sure cached copies are revalidated once they're stale
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        # this serves Rails static files that exist without running
        # other rewrite tests
        try_files $uri @rails-ssl;
        expires 1h;
    }
    location @non-ssl-redirect {
        return 301 http://$host$request_uri;
    }
    location @rails-ssl {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_next_upstream off;
        proxy_pass http://127.0.0.1:3000;
        expires 0d;
    }
}
#upstream elrs {
# server 127.0.0.1:3000;
#}
server {
    listen 80;
    server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
    root /home/max/work/charanga/elearn_container/elearn;
    access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
    error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug;
    client_max_body_size 50M;
    index index.html index.htm;
    # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6 (i.e. those *without* SV1 in their user-agent string)
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html
    # make sure gzip does not lose large gzipped js or css files
    # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
    gzip_buffers 16 8k;
    # Disable gzip for certain browsers.
    #gzip_disable "MSIE [1-6].(?!.*SV1)";
    gzip_disable "MSIE [1-6]";
    # blank gif like it's 1995
    location = /images/blank.gif {
        empty_gif;
    }
    # don't serve files beginning with dots
    location ~ /\. { access_log off; log_not_found off; deny all; }
    # we don't care if these are missing
    location = /robots.txt { log_not_found off; }
    location = /favicon.ico { log_not_found off; }
    location ~ affiliate.xml { log_not_found off; }
    location ~ copyright.xml { log_not_found off; }
    # convert urls with multiple slashes to a single /
    if ($request ~ /+ ) {
        rewrite ^(/)+(.*) /$2 break;
    }
    # X-Accel-Redirect
    # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead
    location /zips {
        internal;
        root /var/www/apps/e_learning_resource/shared/assets;
    }
    location /tmp {
        internal;
        root /;
    }
    location /mnt {
        root /;
    }
    # resource library thumbnails should be served as usual
    location ~ ^/resource_library/.*/*thumbnail.jpg$ {
        if (!-f $request_filename) {
            rewrite ^(.*)$ /images/no-thumb.png break;
        }
        expires 1m;
    }
    # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here
    location ~ "lesson viewer.dcr" {
        rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
    }
    # we need this rule so we don't serve the older lessonviewer when the rule below is matched
    location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
    }
    location ~ v6lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
    }
    location ~ lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
    }
    location ~ lgn111.dat {
        empty_gif;
    }
    # try to get autocomplete school names from memcache first, then
    # fallback to rails when we can't
    location /schools/autocomplete {
        set $memcached_key $uri?q=$arg_q;
        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 =200 @rails; # 404 not really! Hand off to rails
    }
    location / {
        # make sure cached copies are revalidated once they're stale
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        # this serves Rails static files that exist without running other rewrite tests
        try_files $uri @rails;
        expires 1h;
    }
    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_next_upstream off;
        proxy_pass http://127.0.0.1:3000;
        expires 0d;
    }
}
EDIT: It just occurred to me that this might be better on Super User or Server Fault, or perhaps both. I'm not sure what the cross-site posting rules are.
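A quick way to see which branch the SSL server takes for the failing asset (a debugging suggestion, not part of the original post): a 200 means location / served the file from disk, while a 301 means it fell through to @non-ssl-redirect.
# -I fetches headers only; -k skips certificate checks for the local self-signed cert
curl -kI https://cmw-local.co.uk/stylesheets/cmw-logged-out.css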
