We were trying to optimise our website using Google PageSpeed and are now having an issue:
We're using the ngx_pagespeed module.
We're trying to enable the prioritize_critical_css filter.
Since the CSS files are loaded from an external CDN domain, the critical CSS filter is not working.
When run with ?PageSpeedFilters=debug, the following error is generated in the HTML source:
Summary computation status for CriticalCssBeacon
Resource 0 https://mycdndomain.com/styles/screen-2d470013.css: Cannot create resource: either its domain is unauthorized and InlineUnauthorizedResources is not enabled, or it cannot be fetched (check the server logs)
Where mycdndomain is our CDN domain.
Can someone help me fix this issue? What nginx PageSpeed configuration changes are required?
Also, what is InlineUnauthorizedResources?
By default, mod_pagespeed only rewrites resources on the same domain as the HTML. To enable rewriting of resources on other domains, you need to explicitly authorize them and possibly do some further configuration.
Most simply, you can authorize a domain for rewriting with the pagespeed Domain declaration:
pagespeed Domain https://mycdndomain.com;
This will instruct mod_pagespeed to rewrite resources from that domain.
But be careful: this only instructs mod_pagespeed to rewrite the URLs, so you will have to make sure that your CDN can serve the rewritten URLs! If it just pulls the content from your server, this should be fine, but if it's a push CDN, it will break when you change the URLs.
See https://developers.google.com/speed/pagespeed/module/domains for a full description of authorizing and mapping domains.
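Putting it together, a minimal sketch of the relevant ngx_pagespeed directives might look like this (mycdndomain.com is the CDN domain from the question; adjust to your setup):

```nginx
# Authorize the CDN domain so pagespeed may rewrite resources from it.
pagespeed Domain https://mycdndomain.com;

# Alternatively (or additionally), InlineUnauthorizedResources allows
# inlining filters such as prioritize_critical_css to inline resources
# from domains that are not explicitly authorized. Use with care if your
# pages can reference untrusted third-party resources.
pagespeed InlineUnauthorizedResources on;
```

After changing the config, reload nginx and re-test with ?PageSpeedFilters=debug to confirm the beacon error is gone.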
Related
I need to host media files on one server (with a different domain name) and have my website files on the other. All my websites are WordPress-based, and I need all current media files moved to the other domain/server. I cannot do this manually, as there are over 10,000 media files in total. Is there any plugin that can do this? Or any other way to do it? I am doing this to reduce the average CPU load / memory requirement. Thanks
If you are having performance issues with WordPress, my first recommendation would be to make sure you are using a caching plugin such as WP Super Cache or W3 Total Cache (I happen to use the latter). You will need to use a persistent caching option as well for the best performance, such as Memcached.
I can only speak to W3TC, but it does have an option to serve your static content via a CDN such as Rackspace Cloud Files. When configured properly it will move files from your media library to the CDN and replace the links in your content with the proper URLs.
If performance is your main interest, you should also look at serving your site via Nginx and php-cgi, managed through something like spawn-fcgi. There are some caveats to using Nginx vs Apache, but once tuned the performance is far superior. You can find a guide for the configuration on the WordPress site.
Lastly you can set up a reverse proxy from your front end server to point to static files hosted on a different server - the content just passes through your front end server. This can be accomplished using Apache or Nginx, but the performance will be better in the latter. Please see my site for an example of using an Nginx reverse proxy - you would just want to proxy requests for your static files location to a different back-end server.
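The reverse-proxy idea can be sketched in an Nginx server block roughly like this (media.example.com is a hypothetical media server standing in for your second server; the uploads path assumes a default WordPress layout):

```nginx
# Proxy requests for the media library to a separate backend server.
location /wp-content/uploads/ {
    proxy_pass http://media.example.com;
    proxy_set_header Host media.example.com;

    # Optionally cache the proxied static files on the front end so
    # repeat requests never hit the media server. This requires a
    # matching proxy_cache_path declaration in the http{} context.
    proxy_cache static_cache;
    proxy_cache_valid 200 1d;
}
```

With this in place, the HTML can keep pointing at your main domain while the bytes are actually stored on the other server.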
I have a tomcat web application that is using CloudFront, Apache, and S3 for static content delivery. The static content uses a versioned folder structure to allow for zero downtime upgrades:
V1/
    css/
    images/
    javascript/
    fonts/
V2/
    css/
    images/
    javascript/
    fonts/
etc.
Everything but the CSS files is referenced directly from the CDN by the application. However, the existing enterprise web application must still be supported, so the existing /images and /fonts references cannot be changed.
To combat this, I've set up a custom origin on an Apache EC2 instance that serves as a reverse proxy to the S3 bucket. It uses the Referer header from the CSS file to direct the image reference to the correct directory.
This works fine, except for fonts, where Firefox reports that a CORS violation is occurring on the EC2 instance. This doesn't make any sense to me, since all of the content is hosted on S3 itself.
Nonetheless, I have the CORS configuration set on the S3 bucket, a header being set in Apache for Access-Control-Allow-Origin, and CloudFront passing the Origin header through. The CSS files are served fine with no CORS violations, but the fonts will not be served. Is there a configuration in Apache that I'm missing, or is it something else? I'd appreciate any help I can get.
Thanks
On June 26, 2014, AWS added support for CORS in CloudFront. Are you using this support?
This SO answer provides info about enabling it for a particular distribution:
https://stackoverflow.com/a/24459590/3195497
If this is not enabled, then my guess is that CloudFront has cached a response without an Access-Control-Allow-Origin header and is serving that response regardless of what Origin is sent in the request.
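On the Apache origin side, a minimal sketch of the header configuration might look like the following (the font extensions are from the question's setup; whether you can use "*" or must echo specific origins depends on your security requirements):

```apache
# Send the CORS header for font files, and vary on Origin so that a
# cache such as CloudFront does not store a header-less response and
# replay it for cross-origin requests.
<FilesMatch "\.(ttf|otf|eot|woff)$">
    Header set Access-Control-Allow-Origin "*"
    Header append Vary "Origin"
</FilesMatch>
```

You would still need the Origin header whitelisted in the CloudFront distribution, as described above, so that cached variants are keyed per origin.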
I'm using the WordPress HTTPS plugin to force admin mode to run under HTTPS. It's fine for the admin panel.
But once I'm in HTTPS mode, the front-end pages are all broken, because some of the pages' asset files are still coming over normal HTTP (without the 'S') and are therefore blocked from loading onto the page.
That results in the pages rendering in a messy way.
So, to be more clear:
When I load the site in HTTPS/SSL mode, some asset files, like:
http://www.my-another-site.com/something.js
http://www.my-another-site.com/something.css
http://www.my-another-site.com/something.jpg
... etc
.. are BROKEN (because I'm in HTTPS mode and those files are still coming over HTTP).
So how do I make WordPress FORCE LOAD those files? (I DON'T CARE WHETHER IT IS SECURE OR NOT. I just want the site under https://... to render properly.)
You could try using protocol-relative URLs (dropping the http: or https: scheme from the URLs) - see this answer.
According to this answer, you'll need to be on a recent version of WordPress (I'd assume 3.5+) for it to work with wp_enqueue_script.
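For the hand-written asset references, a protocol-relative URL looks like this (the domain is the one from the question; the browser will fetch over whichever scheme the page itself uses):

```html
<!-- Scheme-less URLs: served over https when the page is https,
     and over http when the page is http. -->
<script src="//www.my-another-site.com/something.js"></script>
<link rel="stylesheet" href="//www.my-another-site.com/something.css">
<img src="//www.my-another-site.com/something.jpg" alt="">
```

Note that this only avoids mixed-content blocking if the asset host actually serves those files over HTTPS as well.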
I have a CDN, from Internap, that serves our static content.
To make the CDN urls in my html a bit more palatable, I have a "CNAME" entry in my DNS settings:
cache.mysite.com => CNAME points to Internap
The Internap server is an origin pull server. So my domain has a "/public_html/cache" folder that is pointed to the CDN.
There are files I am putting here that I would like to serve only from my own domains.
Also important is that my site is behind Nginx. That's the front server, and it serves all static files like ttf/woff/eot/css/js/gif, etc. Only the PHP requests are proxied to Apache on the backend.
I came across the "Access-Control-Allow-Origin" header. Nginx has a way to set this too (there is a useful ServerFault article and a useful Stack Overflow article as well), but I want to limit access to certain domains only, which I own.
The reason I'm a little confused is because I have three layers in serving the fonts and managing access:
CDN
Nginx static server
Apache (probably not needed at all as Nginx serves the file to the CDN, and then the CDN takes over?)
My questions:
How should I specify some select domains in Nginx? The "*" is really not what I need. Will this work for my domains, also covering related subdomains:
location ~* \.(eot|ttf|woff)$ {
add_header Access-Control-Allow-Origin *.domain1.com,*.domain2.com
}
Where inside Nginx should I specify this block? In the vhost file related to the specific domain from which I'm serving fonts (cache.mysite.com mentioned earlier), or in the overall Nginx config?
Do I need the Apache stuff at all, if Nginx is already handling the webfont formats and controlling access to them?
Thanks!
If the browser downloads the fonts from your CDN, then there is no way for your server to check the headers. This is because Internap caches the downloaded file; otherwise the CDN would slow things down and your data traffic would remain the same. It could be that Internap provides an option to accept only certain referrers.
The Access-Control-Allow-Origin option might work, but you'll have to check to see if the CDN also forwards this header.
You have to check the HTTP_REFERER header to see whether the files are being accessed from your own domain or not.
If not, you can always redirect them (and maybe throw in a 403 error as well).
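As for the snippet in the question: Access-Control-Allow-Origin takes a single value, so a comma-separated wildcard list won't work. A common sketch is to echo the request's Origin header back only when it matches a whitelist, via a map (domain1.com and domain2.com stand in for your real domains; the map block must live in the http context, the location in your vhost):

```nginx
# Map the incoming Origin header to either itself (allowed) or "" (denied).
map $http_origin $cors_origin {
    default "";
    "~^https?://([a-z0-9-]+\.)?domain1\.com$" $http_origin;
    "~^https?://([a-z0-9-]+\.)?domain2\.com$" $http_origin;
}

server {
    # ... rest of the vhost for cache.mysite.com ...

    location ~* \.(eot|ttf|woff)$ {
        # Empty values are not emitted, so disallowed origins get no header.
        add_header Access-Control-Allow-Origin $cors_origin;
    }
}
```

Whether the CDN preserves this header on cached responses is still up to Internap, as noted above.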
For the past few years, if I've wanted a URL of a page on a site rewritten I've put the rewritten URL into the link on the page.
E.g. If the page is /Product.aspx?filename=ProductA and it's rewritten to /Product/ProductA.aspx then I've put the following in my link:
...
However, with outbound rules I could just put the links in to the actual file paths, and rewrite with an outbound rule.
Is this a bad method? Would it cost the server unnecessary additional resources?
I would not consider this bad practice. In fact, it affords you some additional flexibility, as your mapping from friendly to real URLs is all managed in one central location. If your SEO team decides to change the URL scheme, you don't have to pick through all the links on your site updating them, risking missing one!
One important limitation of the current version of the IIS URL Rewrite module is that you cannot use outbound rewriting in conjunction with static compression; however, you can still use dynamic compression. Static compression is nice because it caches the compressed version of the page. See this article for instructions on getting URL Rewrite working with dynamic compression: http://forums.iis.net/p/1165899/1950572.aspx
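For the scenario in the question, a hypothetical outbound rule in web.config might look like this (the rule name is illustrative; the paths are the ones from the question):

```xml
<!-- Sketch: rewrite real product links emitted in the HTML
     into their friendly form on the way out. -->
<system.webServer>
  <rewrite>
    <outboundRules>
      <rule name="FriendlyProductLinks">
        <!-- Only inspect href attributes of <a> tags. -->
        <match filterByTags="A" pattern="^/Product\.aspx\?filename=(.+)$" />
        <action type="Rewrite" value="/Product/{R:1}.aspx" />
      </rule>
    </outboundRules>
  </rewrite>
</system.webServer>
```

You would pair this with the usual inbound rule mapping /Product/{name}.aspx back to /Product.aspx?filename={name}, so both directions stay in one place.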