I have a tomcat web application that is using CloudFront, Apache, and S3 for static content delivery. The static content uses a versioned folder structure to allow for zero downtime upgrades:
V1/
    css/
    images/
    javascript/
    fonts/
V2/
    css/
    images/
    javascript/
    fonts/
etc.
For everything but CSS files, the application references the CDN directly. However, we have to keep supporting the existing enterprise web application, so the existing /images and /fonts references cannot be changed.
To work around this, I've set up a custom origin on an Apache EC2 instance that acts as a reverse proxy to the S3 bucket. It uses the Referer header sent with requests coming from the CSS file to direct the image reference to the correct versioned directory.
This works fine, except for fonts, where Firefox reports that a CORS violation is occurring on the EC2 instance. This doesn't make any sense to me, since all of the content is hosted on S3 itself.
Nonetheless, I have the CORS configuration set on the S3 bucket, a Header directive in Apache setting Access-Control-Allow-Origin, and CloudFront passing the Origin header through. The CSS files are served fine with no CORS violations, but the fonts will not be served. Is there a configuration in Apache that I'm missing, or is it something else? I'd appreciate any help I can get.
Thanks
On June 26, 2014, AWS added support for CORS in CloudFront. Are you using this support?
This SO answer provides info about enabling it for a particular distribution:
https://stackoverflow.com/a/24459590/3195497
If this is not enabled, then my guess is that CloudFront has cached a response without an Access-Control-Allow-Origin header and is serving that response regardless of what Origin is sent in the request.
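One way to guard against that is to make sure every response CloudFront can cache already carries the header, by setting it unconditionally in Apache for font files. A minimal sketch (the extension list and the wildcard origin are assumptions; you may want to restrict the origin to your own domains):

```apache
# Send the CORS header on every font response, so CloudFront never
# caches a copy that lacks it. Requires mod_headers to be enabled.
<FilesMatch "\.(ttf|otf|eot|woff)$">
    Header set Access-Control-Allow-Origin "*"
</FilesMatch>
```

With the header set unconditionally, it no longer matters whether CloudFront forwards the Origin header or varies its cache on it.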
Related
I've got a Next.js site where I'm serving some self-hosted fonts using @font-face. Locally, the fonts come through fine with no errors and the correct content-type.
But once I get it up on a production server, the fonts are getting 500 errors and coming through with the content-type text/html.
Not sure what is going on here. I've served fonts through the public folder before on other sites without a problem. This is on DigitalOcean with a custom Express server handling a few extra things. I've tried different font file types, moving the font directory, etc.
What's even stranger is that I only get the 500 errors in Chrome. Both Safari and Firefox serve the fonts with status 200, but their content-type is still text/html.
Here is the staging server.
It turned out to be a CORS issue. I had to update my CORS policy through Express.js to allow the remote server. This resolved the issue for me.
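For anyone hitting the same thing, a minimal sketch of what that policy update can look like. This is a plain middleware function you would register with `app.use(corsForFonts)` in Express (many setups use the `cors` npm package instead); `https://www.example.com` is a placeholder for your real site's origin:

```javascript
// Allowed origin is an assumption -- replace with your production domain.
const ALLOWED_ORIGIN = 'https://www.example.com';

// Express-style middleware: echo the Origin back only when we trust it.
function corsForFonts(req, res, next) {
  if (req.headers.origin === ALLOWED_ORIGIN) {
    res.setHeader('Access-Control-Allow-Origin', req.headers.origin);
  }
  next();
}
```

Echoing the request's Origin back (rather than `*`) keeps the policy restricted to the domains you actually control.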
We were trying to optimise the website using Google PageSpeed and are now having an issue:
We're using the ngx_pagespeed module
We're trying to enable the prioritize_critical_css filter
Since the CSS files are loaded from an external CDN domain, the critical CSS filter is not working.
When run with ?PageSpeedFilters=debug, the following error is generated in the HTML source:
Summary computation status for CriticalCssBeacon
Resource 0 https://mycdndomain.com/styles/screen-2d470013.css: Cannot create resource: either its domain is unauthorized and InlineUnauthorizedResources is not enabled, or it cannot be fetched (check the server logs)
Where mycdndomain is our CDN domain.
Can someone help me fix this issue? What ngx_pagespeed configuration changes are required?
Also, what is InlineUnauthorizedResources?
By default, mod_pagespeed only rewrites resources on the same domain as the HTML. To enable rewriting resources on other domains, you need to explicitly authorize them, and possibly do some extra configuration.
Most simply, you can authorize a domain for rewriting with the pagespeed Domain declaration:
pagespeed Domain https://mycdndomain.com;
This will instruct mod_pagespeed to rewrite resources from that domain.
But be careful: this only instructs mod_pagespeed to rewrite the URLs, so you will have to make sure that your CDN can actually serve the rewritten URLs. If it pulls content from your server on demand, this should be fine, but if it's a push CDN, it will break when the URLs change.
See https://developers.google.com/speed/pagespeed/module/domains for a full description of authorizing and mapping domains.
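As a sketch, the relevant ngx_pagespeed directives might look like the following (using mycdndomain.com from your error message). `InlineUnauthorizedResources` is the option your error message refers to: it lets filters inline resources from domains you have not authorized, as an alternative to the Domain declaration:

```nginx
# Authorize the CDN domain so filters (including prioritize_critical_css)
# may rewrite resources served from it:
pagespeed Domain https://mycdndomain.com;

# Alternatively, allow filters to inline resources even from
# unauthorized domains (the InlineUnauthorizedResources option
# mentioned in the debug output):
pagespeed InlineUnauthorizedResources on;
```

You generally want the Domain declaration; InlineUnauthorizedResources is the broader (and less safe) switch.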
I need to host media files on one server (with a different domain name) and my website files on the other. All of my websites are WordPress-based, and I need all current media files moved to the other domain/server. I cannot do this manually, as there are over 10,000 media files in total. Is there a plugin that can do this, or some other way? I am doing this to reduce the average CPU load / memory requirement. Thanks
If you are having performance issues with WordPress, my first recommendation would be to make sure you are using a caching plugin such as WP Super Cache or W3 Total Cache (I happen to use the latter). You will need to use a persistent caching option as well for the best performance, such as Memcached.
I can only speak to W3TC, but it does have an option to serve your static content via a CDN such as Rackspace Cloud Files. When configured properly it will move files from your media library to the CDN, and replace the links in your content with the proper URL.
If performance is your main interest, you should also look at serving your site via Nginx and php-cgi, managed through something like spawn-fcgi. There are some caveats to using Nginx vs Apache, but once tuned the performance is far superior. You can find a guide for the configuration on the WordPress site.
Lastly you can set up a reverse proxy from your front end server to point to static files hosted on a different server - the content just passes through your front end server. This can be accomplished using Apache or Nginx, but the performance will be better in the latter. Please see my site for an example of using an Nginx reverse proxy - you would just want to proxy requests for your static files location to a different back-end server.
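For example, a minimal Nginx reverse-proxy block for the WordPress media library might look like this (media.example.com is a placeholder for your second server):

```nginx
# Pass requests for uploaded media through to a separate back-end
# server; the front-end server streams the response to the client.
location /wp-content/uploads/ {
    proxy_pass http://media.example.com;
    proxy_set_header Host media.example.com;
}
```

Note that with a plain reverse proxy the bytes still flow through your front-end server; it offloads disk I/O and storage from the main box, not front-end bandwidth.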
I'm running a Rails 3.2 app on Heroku. For about a week we were serving assets via CloudFront by setting config.action_controller.asset_host to our CloudFront URL in our config/production.rb file. This was working successfully.
This past weekend, however, I noticed that after a deploy to production our website looked very off, and the reason was that it was serving stale CSS. I looked at the css file it was serving (using inspect element in Chrome), and the CSS was an md5-hashed application.css file coming from CloudFront. I removed the asset_host line (so that assets would be served directly from our app) and deployed again (without changing any css), and noticed that the site, which now looked fine, was serving application.css with a different md5 hash.
So it appears that for some reason, CloudFront was serving an old version of application.css, and I'm guessing this is because our application was telling users' browsers to serve an old version of application.css.
To add one more variable: we do cache the home page and part of our layouts/application.html.erb file (which contains the stylesheet tag), but on each deploy we clear the cache via Rails.cache.clear.
So my best guess is that Rails.cache.clear might not be properly invalidating the cache. We use the dalli memcache client, if that helps.
Any insights or suggestions would be greatly appreciated!
Update:
I tried moving the CSS out of the cached block and re-enabling CloudFront, but the CSS still appears broken. So it doesn't appear to be related to caching the header.
Update 2:
It looks like this is a CloudFront issue, because when I inspect the element and change the CSS URL to our root domain (instead of the CDN domain), the CSS renders correctly.
Since an md5-hash collision is extremely unlikely, it seems like CloudFront is serving the wrong CSS file even though I'm requesting the correct md5-fingerprinted one. Any ideas?
Assets are not automatically synced to your CDN when you deploy an app on Heroku. Heroku's deployment script does run the rake assets:precompile task, but it does not then push the compiled assets to your CDN. You'll have to create some mechanism of your own to do this when you deploy your app.
Somebody else asked a similar question and you might want to have a look at what the suggestions were there: Rails 3 automatic asset deployment to Amazon CloudFront?
I have a CDN that serves our static content. From Internap.
To make the CDN urls in my html a bit more palatable, I have a "CNAME" entry in my DNS settings:
cache.mysite.com => CNAME points to Internap
The Internap server is an origin pull server. So my domain has a "/public_html/cache" folder that is pointed to the CDN.
There are files I am putting here that I would like to serve only from my own domains.
Also important: my site is behind Nginx. That's the front-end server, and it serves all static files (ttf/woff/eot/css/js/gif, etc.). Only PHP requests are proxied to Apache on the backend.
I came across the "Access-Control-Allow-Origin" header. Nginx has a way to set this too (there's a useful Server Fault article and a useful Stack Overflow article on it), but I want to limit access to only some domains, which I own.
The reason I'm a little confused is because I have three layers in serving the fonts and managing access:
CDN
Nginx static server
Apache (probably not needed at all as Nginx serves the file to the CDN, and then the CDN takes over?)
My questions:
How should I specify select domains in Nginx? The "*" is really not what I need. Will this work for my domains, and does it also cover related subdomains?
location ~* \.(eot|ttf|woff)$ {
    add_header Access-Control-Allow-Origin *.domain1.com,*.domain2.com;
}
Where inside Nginx should I specify this block. In the vhost file related to the specific domain from which I'm serving fonts (cache.mysite.com mentioned earlier) or in the overall Nginx config?
Do I need the Apache configuration at all, if Nginx is already handling the webfont formats and controlling access to them?
Thanks!
If the browser downloads the fonts from your CDN, your own server never sees those requests, so it cannot check any headers on them. This is because Internap caches the downloaded file; otherwise the CDN would slow things down, and your data traffic would remain the same. It could be that Internap provides an option to accept only certain referrers.
The Access-Control-Allow-Origin header might work, but you'll have to check whether the CDN also forwards this header.
You have to check the Referer header (HTTP_REFERER) to see whether the files are being accessed from your own domain or not.
If not, you can always redirect the request (and maybe throw in a 403 error as well).
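On the Nginx side: Access-Control-Allow-Origin takes a single value, so a comma-separated list like the one in the question won't work. A common pattern is to echo the request's Origin back only when it matches a whitelist, via a `map` (a sketch; domain1.com and domain2.com are placeholders for your own domains):

```nginx
# In the http {} context: map the incoming Origin header to either
# itself (if whitelisted, including subdomains) or an empty string.
map $http_origin $cors_origin {
    default                                 "";
    ~^https?://([a-z0-9-]+\.)?domain1\.com$ $http_origin;
    ~^https?://([a-z0-9-]+\.)?domain2\.com$ $http_origin;
}

# In the server {} block that serves the fonts:
location ~* \.(eot|ttf|woff)$ {
    add_header Access-Control-Allow-Origin $cors_origin;
}
```

When `$cors_origin` is empty, Nginx omits the header entirely, so unlisted origins get no CORS grant.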