BigCommerce optimization issue - nginx

BigCommerce optimization items:
Accept-Encoding header Nginx
Leverage browser caching
Specify a cache validator
Optimize the order of styles and scripts
In which file can we add code to address these items, using an Nginx configuration file? We have Cyberduck connection access (https://prnt.sc/nmdfjc); which file should we add code to in order to fix the page-load speed issue?

BigCommerce is a SaaS platform, and web-server configuration is not something you can change, so these items cannot be addressed directly. If you require this kind of optimization, place a reverse proxy such as Varnish or Cloudflare in front of your store:
Accept-Encoding header Nginx
Leverage browser caching
Specify a cache validator
Optimize the order of styles and scripts
For the last item, the order of styles and scripts, you have some control in /templates/layout/base.html, although some expressions are fixed and you will not be able to change them:
{{{snippet 'htmlhead'}}}
{{{snippet 'footer'}}}
If you need more control than this, you will have to rewrite the HTML using a reverse proxy.
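A reverse proxy in front of the store is where those headers become yours to control. The following is a rough nginx sketch only: the hostnames are placeholders, and a real deployment also needs TLS and the store's canonical domain pointed at the proxy.

server {
    listen 80;
    server_name store.example.com;

    # Let this proxy handle compression (Accept-Encoding / gzip).
    gzip on;
    gzip_types text/css application/javascript application/json text/plain;

    # Long-lived browser caching for static assets.
    location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
        proxy_pass https://your-store.mybigcommerce.com;
        proxy_set_header Host your-store.mybigcommerce.com;
        proxy_set_header Accept-Encoding "";   # ask the origin for uncompressed content
        proxy_ssl_server_name on;
        expires 30d;                           # sets Expires and Cache-Control: max-age
    }

    location / {
        proxy_pass https://your-store.mybigcommerce.com;
        proxy_set_header Host your-store.mybigcommerce.com;
        proxy_set_header Accept-Encoding "";
        proxy_ssl_server_name on;
    }
}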

Related

My Cloudflare site is taking hours to load CSS

I created a domain on Cloudflare and a website with HTML and CSS, hosted those files in an S3 bucket, and integrated Terraform for deployments. When I kick off terraform apply and run the AWS CLI command to update the S3 bucket, content changes such as text show up immediately, but CSS changes such as font sizes and colors take several hours to become visible. How can I make both types of changes visible quickly?
I tried hard-reloading my browser, clearing the cache, and enabling Auto Minify on Cloudflare. I haven't tried gzipping my css and min.css files; I'm a little afraid I might break something, and I'm unfamiliar with front-end development. Suggestions?
For a proxied ("orange-clouded") DNS record, Cloudflare applies a default caching behaviour, which includes caching certain file extensions by default, CSS among them. The default behaviour and the list of cached extensions are documented here.
When you do a release to your origin (the S3 bucket), you can purge the Cloudflare cache so that the old cached versions are discarded and fresh ones are pulled and cached. You can also override the cache behaviour by using Page Rules. By the way, Page Rules (and other Cloudflare settings) are also manageable via Terraform.
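If you go the Page Rules route, a rough Terraform sketch might look like this. The zone ID, hostname, and rule are placeholders rather than a drop-in config, and it assumes the cloudflare provider is already configured with an API token.

# Bypass Cloudflare's edge cache for CSS while you iterate; alternatively,
# keep caching enabled and purge the zone on each release.
resource "cloudflare_page_rule" "css_cache" {
  zone_id  = var.cloudflare_zone_id
  target   = "www.example.com/*.css"
  priority = 1

  actions {
    cache_level = "bypass"
  }
}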

Putting dynamic CSS URLs in HTTP headers with Fastly CDN

I'm generating dynamic CSS URLs for cache-busting. I.e. they're in the format styles-thisisthecontenthash123.css.
I also want to use HTTP Link headers to load the files slightly faster. I.e. have the header Link: <styles-thisisthecontenthash123.css>; rel=stylesheet
I'm pretty sure it's possible to do this in Fastly using VCL, but I'm not familiar enough with the ecosystem to figure it out. The CSS URL is in index.html, which is cached. I'm thinking I can open index.html and maybe use regex to parse out the CSS URL. How would I do this?
If I'm understanding your question correctly, you want to include a link header for all requests for index.html. You can do that with Fastly, but if the URL for the CSS file is changing you're not going to be able to pull that info out with VCL (you can't inspect the response body).
You could use edge dictionaries and whenever your CSS filename changes, update the reference via the API.
Thing is, if you're going to make an API call whenever the file changes, you might as well keep the filename consistent (styles.css) and send a cache invalidation (purge) whenever you publish a new version. Fastly will clear the cache in ~150 ms, and then all you have to do is add the header, which can be done in the Fastly web portal with a condition.
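If you do want the edge-dictionary route instead, a rough custom-VCL sketch (untested; it assumes a dictionary named css_assets whose "current" key you update via the API whenever the hashed filename changes) would be along these lines:

sub vcl_deliver {
#FASTLY deliver
  # Add the Link header only on responses for index.html.
  if (req.url.path == "/" || req.url.path == "/index.html") {
    set resp.http.Link = "<" table.lookup(css_assets, "current", "/styles.css") ">; rel=stylesheet";
  }
  return(deliver);
}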

nginx caching, mechanic, and Apostrophe

Besides serving static files directly, does mechanic expose any commands/tools for adding in caching in nginx? Additionally, are there any gotchas with using nginx's built-in caching with Apostrophe or specific configurations I should use to make sure I'm not borking up core functionality?
I'm the lead architect of Apostrophe at P'unk Avenue.
Mechanic doesn't specifically expose any caching options. You should be able to set up caching via the /etc/nginx/mechanic-overrides folder though, which provides places to insert custom rules at various points in the nginx configuration file that mechanic builds.
As for Apostrophe, there is definitely an issue for administrators editing the site. If you cache the pages, then logging in won't change the appearance of the site to include editing controls. If you make an edit and the edit is cached, you won't see your work. This kind of thing would lead to inconsistent and confusing behavior.
So what I would recommend is using mechanic to set up a separate subdomain of your site just for editing purposes, pointing to the same backend port. The only difference will be that you will not enable caching for it.
This works well but you do have to be careful not to paste any absolute links to the editing subdomain when editing links with the rich text editor.
Then you can cache to your heart's content for the primary domain, as long as you are comfortable with the caching rules you're setting.
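As a rough illustration only (the zone name, cache path, and TTLs are placeholders, and which mechanic-overrides include file each part belongs in depends on your mechanic version, so check where the includes land in the generated config):

# http-level: define a cache zone
proxy_cache_path /var/cache/nginx/apostrophe levels=1:2 keys_zone=apostrophe:10m max_size=1g inactive=60m;

# server/location-level, for the primary (cached) domain only; the editing
# subdomain's server block simply omits these directives
proxy_cache apostrophe;
proxy_cache_valid 200 301 10m;
proxy_cache_key $scheme$host$request_uri;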
Naturally, if you cache the home page for up to a day and then edit the home page, that change will not be immediately reflected on the primary domain.
However, also keep in mind that mechanic is already set up to deliver static files such as media and CSS/JS/font assets directly via nginx, bypassing the backend node process for these. So it's really only necessary to consider caching at the nginx level if you are concerned about the performance of the pages themselves under heavy load.
Speaking of which, you should definitely be running Apostrophe in our multicore configuration, to improve both scalability and reliability:
Running Apostrophe on multiple cores and/or servers
Hope this is helpful!

2 Servers for website and media files (WordPress Plugin Needed)

I need to host media files on one server (with a different domain name) and keep my website files on the other. All of my sites are WordPress-based, and I need all of the current media files moved to the other domain/server. I cannot do this manually, as there are over 10,000 media files in total. Is there a plugin that can do this, or any other way to do it? I am doing this to reduce the average CPU load and memory requirements. Thanks
If you are having performance issues with WordPress, my first recommendation would be to make sure you are using a caching plugin such as WP Super Cache or W3 Total Cache (I happen to use the latter). For the best performance you will also want a persistent object-caching backend such as Memcached.
I can only speak to W3TC, but it does have an option to serve your static content via a CDN such as Rackspace Cloud Files. When configured properly it will move files from your media library to the CDN and rewrite the links in your content to point at the proper URLs.
If performance is your main interest, you should also look at serving your site via Nginx and php-cgi, managed through something like spawn-fcgi. There are some caveats to using Nginx vs Apache, but once tuned the performance is far superior. You can find a guide for the configuration on the WordPress site.
Lastly, you can set up a reverse proxy on your front-end server that points to static files hosted on a different server; the content just passes through your front-end server. This can be accomplished with either Apache or Nginx, but performance will be better with the latter. Please see my site for an example of using an Nginx reverse proxy; you would just proxy requests for your static-files location to a different back-end server.
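As a rough nginx sketch of that last option (the hostname and path are placeholders, and it assumes the media server exposes the same /wp-content/uploads/ path):

# On the front-end server: pass media requests through to the media host.
location /wp-content/uploads/ {
    proxy_pass https://media.example.com;    # original URI is appended unchanged
    proxy_set_header Host media.example.com;
    proxy_ssl_server_name on;
    expires 30d;                             # long browser caching for media files
}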

mod_pagespeed won't combine CSS and JS

I've installed mod_pagespeed on our server, but it won't combine the CSS and JS on our website, oktoberfest.it. Obviously I've activated combine_css, combine_javascript and PassThrough in the filters in the pagespeed.conf file.
I've also read that mod_pagespeed can't combine CSS files that contain CSS3 directives, but in my Apache log file, after enabling LogLevel debug of course, there are no errors or info messages about failures to combine, neither for CSS nor for JS.
I've tried to:
Re-add CoreFilters
Restart Apache
Delete the mod_pagespeed cache with touch /var/mod_pagespeed/cache/cache.flush
Deactivate all filters except combine_css and combine_javascript
I've also checked that the folders indicated in the config file are chmod 777.
I don't know what to do now; I'm out of ideas. I really want these mod_pagespeed features to work on our website, since we have 40 requests for CSS and JS files coming from plugins that we cannot manage.
What do you suggest I do?
For CSS combining:
As you are using WordPress, you need to add a function to your theme's functions.php:
// Strip the id="...-css" attribute that WordPress adds to stylesheet <link> tags,
// since it prevents mod_pagespeed from combining those files.
function remove_style_id($link) {
    return preg_replace("/id='.*-css'/", "", $link);
}
add_filter('style_loader_tag', 'remove_style_id');
WordPress writes id="" attributes into the CSS <link> tags, which mod_pagespeed doesn't like, so those files get skipped. It could cause problems with a plugin if some JavaScript looks up that ID, but normally nothing does it that way, so you should be safe.
You can permit IDs for CSS combining as of version 1.12.34.1; have a look at the documentation.
As WordPress appends -css to every stylesheet ID, you can just add:
Apache:
ModPagespeedPermitIdsForCssCombining *-css
Nginx:
pagespeed PermitIdsForCssCombining *-css;
There appear to be a few issues preventing mod_pagespeed from combining resources on your site. First of all, many of your CSS files have id attributes, which will prevent the combine_css filter from functioning. HTML elements are expected to have a single id attribute, and it's not clear what that id should be if those CSS files are combined into one.
That doesn't explain why mod_pagespeed does not seem to be rewriting any CSS or JS resources on your page, though. mod_pagespeed is able to rewrite the HTML; for example, www.oktoberfest.it/?ModPagespeedFilters=collapse_whitespace is able to remove whitespace from the page. The issue is likely that mod_pagespeed is not able to fetch these resources internally. This can happen for a number of reasons, but look in your Apache error_log for messages related to SERF.
The best fix for fetch-related failures is to use the ModPagespeedLoadFromFile directive, if your environment allows it. Also have a look at this FAQ entry, which explains the problem. You can also try updating to beta release 1.4.26.1 or later, which includes a workaround for common loopback fetch errors.
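For reference, LoadFromFile maps a public URL prefix to a directory on disk so mod_pagespeed can read the files directly instead of fetching them over HTTP; the URL prefix and path below are examples only, so map your own asset URLs to wherever they live on the server:
Apache:
ModPagespeedLoadFromFile "http://www.example.com/wp-content/" "/var/www/html/wp-content/"
Nginx:
pagespeed LoadFromFile "http://www.example.com/wp-content/" "/var/www/html/wp-content/";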
