gzip compression not working with nginx and CentOS 7

I want to enable gzip compression on my virtual host with nginx. My control panel is Plesk 17, but I have root access to the server. I found the vhost nginx config file in this directory:
/etc/nginx/plesk.conf.d/vhosts
and added these directives in the server block to enable gzip:
gzip on;
gzip_disable msie6;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
After adding them and restarting nginx, when I check the gzip status it still shows as disabled!
For your information, I also have these comments at the top of my config file:
#ATTENTION!
#
#DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
#SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.
What's wrong? How can I enable gzip?

To enable gzip compression for a particular domain, open Domains > example.com > Apache & nginx Settings > Additional nginx directives and add the directives in that section.
If you want to enable it server-wide, create a new file /etc/nginx/conf.d/gzip.conf, add the directives there, and restart nginx.
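For the server-wide route, here is a minimal sketch that writes the directives from the question into a config file. It writes to /tmp so the snippet is safe to try anywhere; on the server the target would be /etc/nginx/conf.d/gzip.conf, followed by a config test and a restart:

```shell
#!/bin/sh
# Sketch: write a server-wide gzip config using the directives from the
# question. /tmp is used here; on the server use /etc/nginx/conf.d/gzip.conf.
conf=/tmp/gzip.conf
cat > "$conf" <<'EOF'
gzip on;
gzip_disable "msie6";
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
EOF
# Then validate and apply on the server:
#   nginx -t && systemctl restart nginx
echo "wrote $conf"
```

Because this file lives outside /etc/nginx/plesk.conf.d/, Plesk's config regeneration will not overwrite it.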

Related

Azuracast/Docker/413 Request entity too large

I installed AzuraCast on my VPS and also installed WordPress for my website. Both installed very nicely using Docker and the install instructions AzuraCast's documentation provided. Now my only problem is that I am unable to upload the theme I purchased for the website. I am a total newbie when it comes to tinkering with Docker images/containers. I have tracked the problem down, and it is most likely the nginx-proxy. I need super-simplified instructions on how to add client_max_body_size to the .conf file where it needs to go. Any assistance regarding this problem will be greatly appreciated.
Provided you are using the default AzuraCast Docker setup, one of the simplest ways of doing it is to overwrite the /etc/nginx/azuracast.conf file in your docker-compose.yml file:
version: '2.2'
services:
  nginx_proxy:
    ...
    volumes:
      ...
      - ./azuracast.nginx.conf:/etc/nginx/azuracast.conf
And in your azuracast.nginx.conf (you can name it anything you want) change the second line, which sets the max body size, like so:
server_tokens off;
client_max_body_size 2050m;
gzip on;
gzip_vary on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
    text/plain
    text/css
    text/js
    text/xml
    text/javascript
    application/javascript
    application/x-javascript
    application/json
    application/xml
    application/xml+rss;
Hope this helps; well, hope you found the answer before now.
For an official answer, see https://github.com/nginx-proxy/nginx-proxy, but it's a bit more complicated.
Another tip: docker-compose exec nginx_proxy bash and apk add nano and have fun ;)
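One rough way to confirm the new limit took effect: generate a file larger than the old limit and POST it to the upload endpoint. The URL below is a placeholder for your real endpoint; a 413 response means the old config is still active.

```shell
#!/bin/sh
# Create a 3 MB test file; with nginx's default 1 MB client_max_body_size
# an upload of this size is rejected with 413 Request Entity Too Large.
dd if=/dev/zero of=/tmp/upload-test.bin bs=1M count=3 2>/dev/null
# Hypothetical endpoint - substitute your actual WordPress/AzuraCast upload URL:
#   curl -s -o /dev/null -w '%{http_code}\n' \
#        -F 'file=@/tmp/upload-test.bin' https://example.com/upload
wc -c < /tmp/upload-test.bin
```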

Configuring gzip with nginx

I am using nginx for the first time and have some confusion regarding configuration. I have nginx as a load balancer, and the backends are nginx as well. With my understanding, I have configured the mod_security module on the load balancer, as it's the entry point. I have also added the required response headers on the load balancer. Now I have to enable gzip for nginx. The confusion is where it should be configured: the load balancer or the backend nginx servers?
You can configure gzip globally in /etc/nginx/nginx.conf or just for one site in e.g. /etc/nginx/sites-available/your-site.
The configuration could look like this:
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
It depends.
For dynamic gzipping (e.g. HTML output of your site/app)
If your load balancer is a powerful machine, then you may want to do gzip on the load balancer, in order to reduce CPU usage elsewhere.
If you have some modsecurity rules that require inspecting the response body, and gzipping is done on the nodes, then modsecurity either has to un-gzip the backend response, inspect it, and re-gzip it (causing processing overhead), or those rules simply won't work. That's another case where you want to gzip on the load balancer.
In all other cases, I assume gzipping on the nodes would be better.
For static files
... it's best to rely on static gzip (pre-compress your assets). However, since you have many backends, that means pre-compressing the assets on each one.
If your backends are different websites/apps (that means, you're not doing actual load balancing), it's not an issue.
If your backends are actual nodes of the same app, then you can do max gzip on each node, and "proxy cache" results on the load balancer.
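On the CPU-cost point: gzip_comp_level is the knob that trades CPU for size, and on typical text the gain above level 6 is small. A quick offline illustration (command-line gzip uses the same zlib levels nginx does):

```shell
#!/bin/sh
# Compress the same repetitive HTML payload at levels 1, 6 and 9 and compare
# sizes; higher levels cost more CPU for diminishing size returns.
payload=/tmp/sample.html
yes '<div class="row">the same markup repeated over and over</div>' \
  | head -n 2000 > "$payload"
for level in 1 6 9; do
  echo "level $level: $(gzip -c -"$level" "$payload" | wc -c) bytes"
done
echo "original: $(wc -c < "$payload") bytes"
```

The exact byte counts depend on the payload, but level 9 rarely beats level 6 by much, which is why 6 is a common middle ground for dynamic responses.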

Connection Refused with nginx and kubernetes

I'm trying to deploy my Angular application to Kubernetes inside a container with nginx.
I created my Dockerfile:
FROM node:10-alpine as builder
COPY package.json package-lock.json ./
RUN npm ci && mkdir /ng-app && mv ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
RUN npm run ng build -- --prod --output-path=dist
FROM nginx:1.14.1-alpine
COPY nginx/default.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /ng-app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
My nginx config:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass https://my-api;
    }
}
If I launch this image locally it works perfectly, but when I deploy the container inside a Kubernetes cluster the site loads fine while every API request fails with ERR_CONNECTION_REFUSED.
I'm deploying to GCP: I build the image and then publish it through the GCP dashboard.
Any ideas about this ERR_CONNECTION_REFUSED?
I found the solution. The problem was with my requests: I was using localhost in the URL, which resolved to the wrong Pod. I changed the requests to use the service IP directly, and that sorted out my problem.
Kubernetes Engine nodes are provisioned as instances in Compute Engine. As such, they adhere to the same stateful firewall mechanism as other instances. Have you configured the firewall rules?
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod#firewalling
Good that you have figured out the issue. But did you try using the service names instead of the Pod IPs? That is the suggested method (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls) of accessing services: by their names within the Kubernetes cluster, and via NodeIP or LoadBalancerIP from outside the cluster.
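To make the proxy_pass https://my-api; line from the config concrete: inside the cluster, a Service is reachable at <service>.<namespace>.svc.cluster.local, so nginx can name the Service instead of hard-coding a Pod or Service IP. A small sketch with hypothetical names (a Service called my-api in the default namespace):

```shell
#!/bin/sh
# Hypothetical names: Service "my-api" in namespace "default".
svc=my-api
ns=default
dns="$svc.$ns.svc.cluster.local"
echo "in-cluster URL: http://$dns"
# In the nginx config from the question this would become:
#   proxy_pass http://my-api.default.svc.cluster.local;
# Confirm the Service exists first with: kubectl get svc my-api
```

Using the DNS name keeps the config working when Pods are rescheduled and their IPs change.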

nginx + gzip_static_module

Scenario: I have two files, "style.css" and "style.css.gz", and the gzip_static and gzip modules are enabled. Everything works properly: nginx serves the compressed "style.css.gz". Both files have the same timestamp. I also have a cronjob that creates pre-compressed versions of any *.css file and runs every two hours.
gzip on;
gunzip on;
gzip_vary on;
gzip_static always;
gzip_disable "msie6";
gzip_proxied any;
gzip_comp_level 4;
gzip_buffers 32 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
    text/cache-manifest
    text/xml
    text/css
    text/plain
    ...........
Question: If I edit "style.css" and change a few CSS rules, is it possible to serve the edited "style.css" instead of "style.css.gz" (based on the timestamp or something like that)? Or to pre-compress a new "style.css.gz" immediately after I finish editing "style.css"?
Is this possible with nginx alone? Or what is the best solution?
Thanks
I also have a cronjob that creates pre-compressed files of any file * .css and runs every two hours.
You can just run this script manually after you have made any edit/changes to the css file.
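The cronjob body can be reduced to a small script that only recompresses stale files and keeps the timestamps of each pair in sync, which is what gzip_static relies on; run it by hand after an edit, or from cron as before. A sketch, using /tmp and a throwaway file so it is safe to try anywhere:

```shell
#!/bin/sh
# Recompress any *.css whose .gz copy is missing or older than the source,
# then copy the source's timestamp onto the .gz so the pair stays in sync.
dir=/tmp/cssdemo                 # in production: your real CSS directory
mkdir -p "$dir"
echo 'body { margin: 0 }' > "$dir/style.css"   # throwaway demo file
for css in "$dir"/*.css; do
  gz="$css.gz"
  if [ ! -f "$gz" ] || [ "$css" -nt "$gz" ]; then
    gzip -c -4 "$css" > "$gz"    # level 4, matching gzip_comp_level above
    touch -r "$css" "$gz"        # give both files the same mtime
  fi
done
```

If "immediately" really matters, the same loop body can be driven by inotifywait -m -e close_write on the CSS directory instead of cron, so the .gz is rebuilt the moment an edit is saved.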
It is possible to operate using NGINX? Or what is the best solution?
Alternatively, another solution I can think of (which is how I roll) is that nginx can actually serve and gzip compress files on the fly. Just make sure to disable use of precompressed files with ngx_http_gzip_static_module otherwise nginx will use the precompressed files instead.
gzip_static off;
Enables (“on”) or disables (“off”) checking the existence of precompressed files.

Nginx and compressing components with gzip

I'm trying to improve page speed on a site and am using YSlow and PageSpeed to monitor it. Both are telling me to "compress components with gzip" and list a number of CSS and JavaScript files, for example:
/css/styles.css?v=6.5.5
/jquery.flexslider.js
/4878.js
/6610.js
/homepage.css?v=6.5.5
Our hosting provider has informed us that nginx does the gzip compression on ALL assets, even when it reverse-proxies back to Apache, and the following values from the nginx sites-enabled files, applied at the virtual host level, confirm this:
gzip on;
gzip_disable msie6;
gzip_static on;
gzip_comp_level 9;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
Is there a reason these tools are not picking up the compression, or are the assets in fact not being compressed at all, meaning we need our hosting provider to add something extra?
Your hosting provider claims that the responses leave nginx compressed. That leaves these potential causes:
there's a proxy/cache/virus scanner somewhere on the network path between the nginx server and your client that strips out the compression;
your browser has cached an uncompressed version of the asset, and YSlow/PageSpeed ends up using that (if so, retrying with an empty browser cache should fix it);
your hosting provider's claim is false (but the posted config bit looks OK to me).
Some things to try:
Check the URLs with an online gzip checker like http://www.whatsmyip.org/http-compression-test/ or http://www.dnsqueries.com/en/check_http_gzip.php
Check locally what the result of curl --compressed --head <your-asset-url> is (you should see a Content-Encoding: gzip header if the response coming in is compressed).
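The curl check can be wrapped in a tiny helper that reports whether a header block advertises gzip. It is fed a canned response here so it runs offline; in practice you would pipe curl -sI -H 'Accept-Encoding: gzip' <url> into it:

```shell
#!/bin/sh
# Print "compressed" or "plain" depending on the Content-Encoding header
# found in the response headers read from stdin.
check_gzip() {
  if grep -qi '^content-encoding:.*gzip'; then
    echo compressed
  else
    echo plain
  fi
}
# Canned example; for a live check use:
#   curl -sI -H 'Accept-Encoding: gzip' https://example.com/css/styles.css | check_gzip
printf 'HTTP/1.1 200 OK\r\nContent-Encoding: gzip\r\n' | check_gzip
```

Running this against each of the flagged assets quickly narrows down whether the stripping happens at the server or somewhere in between.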
