nginx + gzip_static module

Scenario: I have two files, "style.css" and "style.css.gz", and the gzip_static and gzip modules enabled. Everything works properly: NGINX serves the compressed "style.css.gz". Both files have the same timestamp. I also have a cronjob that creates pre-compressed versions of every *.css file and runs every two hours.
gzip on;
gunzip on;
gzip_vary on;
gzip_static always;
gzip_disable "msie6";
gzip_proxied any;
gzip_comp_level 4;
gzip_buffers 32 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
    text/cache-manifest
    text/xml
    text/css
    text/plain
    ...........
Question: If I edit "style.css" and change a few CSS rules, is it possible to serve the edited "style.css" instead of the stale "style.css.gz" (based on timestamps or something like that)? Or to pre-compress a new "style.css.gz" immediately after I finish editing "style.css"?
Is this possible with NGINX alone? Or what is the best solution?
Thanks

I also have a cronjob that creates pre-compressed files of any file * .css and runs every two hours.
You can just run that script manually after you have made any edits to the CSS file.
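For the timestamp idea specifically, the cron script can recompress only files whose source is newer than the .gz; a minimal sketch, assuming the CSS lives under /var/www (the path is an assumption):
#!/bin/sh
# Recompress only .css files that are newer than their .gz counterpart.
find /var/www -name '*.css' | while read -r f; do
    if [ ! -e "$f.gz" ] || [ "$f" -nt "$f.gz" ]; then
        gzip -9 -c "$f" > "$f.gz"   # write the .gz next to the source
        touch -r "$f" "$f.gz"       # keep both timestamps in sync
    fi
done
Run it from cron as before, or by hand right after editing a file.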
It is possible to operate using NGINX? Or what is the best solution?
Alternatively, another solution I can think of (which is how I roll) is to let nginx serve and gzip-compress files on the fly. Just make sure to disable the use of pre-compressed files via ngx_http_gzip_static_module, otherwise nginx will use the pre-compressed files instead:
gzip_static off;
Enables (“on”) or disables (“off”) checking the existence of precompressed files.
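Putting that together, a minimal sketch of the on-the-fly approach (the location pattern is an assumption; adjust it to your layout):
# compress CSS dynamically; any stale style.css.gz on disk is ignored
location ~* \.css$ {
    gzip on;
    gzip_types text/css;
    gzip_static off;   # never check for a pre-compressed file
}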

Related

Azuracast/Docker/413 Request entity too large

I installed AzuraCast on my VPS and also installed WordPress for my website. Both installed very nicely using Docker and the install instructions AzuraCast's documentation provided. Now my only problem is that I am unable to upload the theme I purchased for the website. I am a total newbie when it comes to tinkering with Docker images/containers. I have tracked the problem down, and it is most likely the nginx-proxy. I need super-simplified instructions on how to add client_max_body_size to the .conf file where it needs to go. Any kind of assistance regarding this problem will be greatly appreciated.
Provided you are using the default AzuraCast Docker setup, one of the simplest ways of doing it is to overwrite the /etc/nginx/azuracast.conf file in your docker-compose.yml file:
version: '2.2'
services:
  nginx_proxy:
    ...
    volumes:
      ...
      - ./azuracast.nginx.conf:/etc/nginx/azuracast.conf
And in your azuracast.nginx.conf (you can name it anything you want), change the 2nd line, the one setting the max body size, like so:
server_tokens off;
client_max_body_size 2050m;
gzip on;
gzip_vary on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
    text/plain
    text/css
    text/js
    text/xml
    text/javascript
    application/javascript
    application/x-javascript
    application/json
    application/xml
    application/xml+rss;
Hope this helps; or, well, hope you already found the answer by now.
For an official answer go to https://github.com/nginx-proxy/nginx-proxy -- but it's a bit more complicated.
Another tip: docker-compose exec nginx_proxy bash and apk add nano and have fun ;)
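After adding the volume, recreate the container and check that the override actually landed; a hedged sketch, assuming the default nginx_proxy service name:
docker-compose up -d --force-recreate nginx_proxy
# dump the full effective config and look for the new limit
docker-compose exec nginx_proxy nginx -T | grep client_max_body_size
# expected: client_max_body_size 2050m;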

Configuring gzip with nginx

I am using nginx for the first time and have some confusion regarding configuration. I have nginx as a load balancer, and the backends are nginx as well. To my understanding, I have configured the mod_security module on the load balancer, as it's the entry point. I have also added the required response headers on the load balancer. Now I have to enable gzip for nginx. The confusion is: where should it be configured? The load balancer or the backend nginx servers?
You can configure gzip globally in /etc/nginx/nginx.conf or just for one site in e.g. /etc/nginx/sites-available/your-site.
The configuration could look like this:
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
It depends.
For dynamic gzipping (e.g. HTML output of your site/app)
If your load balancer is a powerful machine, then you may want to do gzip on the load balancer, in order to reduce CPU usage elsewhere.
If you have some modsecurity rules that require inspecting the response body, and gzipping is done on the nodes, then modsecurity would need to ungzip the backend response, inspect it, and re-gzip it (causing processing overhead), or those rules would simply not work. That is another case where you want to gzip on the load balancer.
In all other cases, I assume gzipping on the nodes would be better.
For static files
... it's best to rely on static gzip (pre-compress your assets). However, since you have many backends, that means pre-compressing the assets on each of them.
If your backends are different websites/apps (meaning you're not doing actual load balancing), that's not an issue.
If your backends are actual nodes of the same app, then you can do max gzip on each node and "proxy cache" the results on the load balancer.
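A minimal sketch of that last setup on the load balancer; the upstream addresses, cache path, and /assets/ prefix are assumptions, not taken from the question:
# http context on the load balancer
proxy_cache_path /var/cache/nginx/static keys_zone=static_cache:10m
                 max_size=1g inactive=60m;
upstream backend_pool {
    server 10.0.0.11;
    server 10.0.0.12;
}
server {
    listen 80;
    location /assets/ {
        proxy_pass http://backend_pool;
        # always ask the nodes for their max-gzipped variant
        proxy_set_header Accept-Encoding gzip;
        proxy_cache static_cache;
        proxy_cache_valid 200 60m;
        # decompress on the fly for clients that can't accept gzip
        gunzip on;
    }
}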

S3 redirect to sub-directory

I have a site served from S3 behind nginx, with the following nginx configuration.
server {
    listen 80 default_server;
    server_name localhost;
    keepalive_timeout 70;
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript;
    location / {
        proxy_pass http://my-bucket.s3-website-us-west-2.amazonaws.com;
        expires 30d;
    }
}
At present, whenever I build a new version, I just delete the target bucket's contents and upload the new frontend files.
Since I am deleting the bucket contents, there is no way I can go back to a previous version of the frontend, even with versioning enabled on the bucket. So I want to upload the new frontend files into a version directory (for example, 15) in the S3 bucket and then set up a redirect from http://my-bucket.s3-website-us-west-2.amazonaws.com/latest to http://my-bucket.s3-website-us-west-2.amazonaws.com/15
Does anyone know how this can be done?
There are multiple ways to do this:
The easiest may be through a symbolic link, provided that your environment allows that (note that -h is the BSD spelling; GNU ln uses -n for the same thing):
ln -fhs ./15 ./latest
Another option is an explicit external redirect issued to the user, where the user would see the new URL. This has a benefit in that multiple versions can be accessed at the same time without any synchronisation issues: for example, if a client decides to do a partial download, everything should still work, because they'll most likely be doing the partial download against the actual target, not the /latest shortcut.
location /latest {
    rewrite ^/latest(.*) /15$1 redirect;
}
The final option is an internal redirect within nginx; this is usually called URL masquerading in some third-party applications; this may or may not be recommended, depending on requirements; an obvious deficiency would be with partial downloads, where a resume of a big download may result in corrupted files:
location /latest {
    rewrite ^/latest(.*) /15$1 last;
}
References:
http://nginx.org/r/location
http://nginx.org/r/rewrite
One of the simple ways to handle this situation is using variables. You can easily include a file that sets the current latest version. You will need to reload your nginx config whenever you update the version with this method.
Create a simple configuration file for setting the latest version
# /path/to/latest.conf
set $latest 15;
Include your latest.conf in the server block, and add a location to proxy to the latest version.
server {
    listen 80 default_server;
    server_name localhost;

    # SET LATEST
    include /path/to/latest.conf;

    location / {
        proxy_pass http://s3host;
        expires 30d;
    }

    # Note the / at the end of the location and the proxy_pass directive.
    # This will strip the "/latest/" part of the request uri and pass the
    # rest like so: /$version/$remaining_request_uri
    location /latest/ {
        proxy_pass http://s3host/$latest/;
        expires 30d;
    }
    ...
}
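Since $latest is only read when the configuration is loaded, a deploy would bump the file and reload nginx; a sketch, where the version number and path are hypothetical:
# after uploading the new build into /16/ in the bucket
echo 'set $latest 16;' > /path/to/latest.conf
nginx -t && nginx -s reload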
Another way to do this dynamically would be to use Lua to script the behavior. That is a little more involved, though, so I will not get into it in this answer.

gzip compression not working with nginx and centos 7

I want to enable gzip compression on my virtual host with nginx. My control panel is Plesk 17, but I have root access to the server. I found the vhost nginx config file in this directory:
/etc/nginx/plesk.conf.d/vhosts
and added this code in the server block to enable gzip:
gzip on;
gzip_disable msie6;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
After all that and restarting nginx, when I check the gzip status, it still looks disabled!
For your information, I also have these comments at the top of my config file:
#ATTENTION!
#
#DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
#SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.
What's wrong? How can I enable gzip?
To enable gzip compression for a particular domain, open Domains > example.com > Apache & nginx Settings > Additional nginx directives and add the directives to that section.
If you want to enable it server-wide, just create a new file /etc/nginx/conf.d/gzip.conf, add the content there, and restart nginx.
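A sketch of that server-wide file, reusing the directives from the question (on most installs the main nginx.conf already contains include /etc/nginx/conf.d/*.conf;, but that is worth verifying):
# /etc/nginx/conf.d/gzip.conf
gzip on;
gzip_disable msie6;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
Then test and restart:
nginx -t && systemctl restart nginx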

Nginx and compressing components with gzip

I'm trying to improve page speed on a site and am using "YSlow" and "Page Speed" to monitor the speeds. I am being told by both to "compress components with gzip" and given a listing of a number of CSS and JavaScript files, for example:
/css/styles.css?v=6.5.5
/jquery.flexslider.js
/4878.js
/6610.js
/homepage.css?v=6.5.5
Our host has informed us that nginx is doing the gzip compression on ALL assets, even when it reverse-proxies back to Apache, and the following values from the nginx sites-enabled files, enabled at the virtual-host level, confirm this:
gzip on;
gzip_disable msie6;
gzip_static on;
gzip_comp_level 9;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
Is there a reason these tools are not picking up the compression, or are the files in fact not being compressed at all, so that we need to get our host to add something extra?
Your hosting provider claims that the responses leave nginx compressed. That leaves these potential problem causes:
there's a proxy/cache/virus scanner somewhere on the network path between the nginx server and your client that strips out the compression;
your browser has saved an uncompressed version of the asset, and YSlow/Page Speed ends up using that (if so, retrying with an empty browser cache should fix it);
your hosting provider's claim is false (but the posted config bit seems OK to me).
Some things to try:
Try checking the URLs with an online gzip checker like http://www.whatsmyip.org/http-compression-test/ or http://www.dnsqueries.com/en/check_http_gzip.php
Check locally what the result of curl --compressed --head <your-asset-url> is (you should see a Content-Encoding: gzip header if the response coming in is compressed).
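A variant that issues a real GET is a bit more reliable, since some servers skip compression on HEAD requests; example.com stands in for your real host, and the output is illustrative:
$ curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' 'https://example.com/css/styles.css?v=6.5.5'
HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Vary: Accept-Encoding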
