AWS CloudFront + CloudFlare: Using Alternate Domain Names (CNAMEs) - wordpress

I have the following setup:
- A WordPress Site hosted on hostgator.com
- DNS managed through cloudflare.com
- Amazon CloudFront used as CDN for Media, CSS & JS files (configured through W3 Total Cache WordPress Plugin)
On the pingdom.com website speed test I get the message "This page makes 43 parallelizable requests to mydomain.com. Increase download parallelization by distributing these requests across multiple hostnames." So this is what I'm trying to accomplish.
Amazon CloudFront is already working and serving files at URLs like "1234567.cloudfront.net/wp-content/.../image.png". Now I want to parallelize across hostnames and have something like "static1.mydomain.com", "static2.mydomain.com", etc.
I added those CNAMEs to my CloudFront Distribution and also to CloudFlare like this:
"static1.mydomain.com is an alias of 1234567.cloudfront.net"
I assume it should be working now or am I missing something?

CloudFront will serve files for any domain alias that is set up in the distribution's alias list (the Alternate Domain Names / CNAMEs field). Read the fine print there to see how multiple aliases should be separated. Once you set that up and save, add the same aliases as DNS records in CloudFlare and you should get the same file response on every alias.
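For illustration, a minimal sketch of the two sides of that setup, using the hostnames from the question and zone-file style notation (the CloudFlare UI uses form fields rather than a zone file):

    ; CloudFront distribution -> Alternate Domain Names (CNAMEs):
    ;   static1.mydomain.com
    ;   static2.mydomain.com
    ;
    ; CloudFlare DNS records pointing those hostnames at the distribution:
    static1.mydomain.com.  CNAME  1234567.cloudfront.net.
    static2.mydomain.com.  CNAME  1234567.cloudfront.net.

Once the distribution update has deployed, both hostnames should return the same responses as 1234567.cloudfront.net itself.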
Note: in the last 12 months or so there have been significant improvements in how modern browsers fetch files, and domain sharding is now considered a thing of the past.

Related

Change server in all URLs

I am using Paw and have set up a bunch of test URLs for my API server. Now I want to repeat all those URLs on my dev and staging servers.
Is it possible to do a search and replace on the server domain name to recreate the same tests for each server?
Unfortunately, there's no way yet in Paw to edit requests in batch.
The recommended way for fields that are likely to change (e.g. server hostname / base URL) is to set them in environment variables. See this documentation article: https://paw.cloud/docs/environments/environments-reusable-presets
Note: batch editing is for sure something that will come in future versions.
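Outside of Paw itself, the underlying idea is simply to keep the base URL in one place per environment; a rough shell equivalent (hostnames are made up):

    # development environment
    export API_BASE="https://dev.example.com"
    # staging would instead use: export API_BASE="https://staging.example.com"
    curl "$API_BASE/v1/users"

In Paw the environment variable plays the role of $API_BASE, so switching environments re-targets every request at once.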

2 Servers for website and media files (Wordpress Plugin Needed)

I need to host media files on one server (with a different domain name) and have my website files on the other. All of my websites are WordPress-based, and I need all current media files moved to the other domain/server. I cannot do this manually as there are over 10,000 media files in total. Is there any plugin that allows this, or any other way to do it? I am doing this to reduce the average CPU load / memory requirement. Thanks
If you are having performance issues with WordPress, my first recommendation would be to make sure you are using a caching plugin such as WP Super Cache or W3 Total Cache (I happen to use the latter). You will need to use a persistent caching option as well for the best performance, such as Memcached.
I can only speak to W3TC, but it does have an option to serve your static content via a CDN such as RackSpace CloudFiles. When configured properly it will move files from your media library to the CDN and replace the links in your content with the proper URLs.
If performance is your main interest, you should also look at serving your site via Nginx and php-cgi, managed through something like spawn-fcgi. There are some caveats to using Nginx vs Apache, but once tuned the performance is far superior. You can find a guide for the configuration on the WordPress site.
Lastly you can set up a reverse proxy from your front end server to point to static files hosted on a different server - the content just passes through your front end server. This can be accomplished using Apache or Nginx, but the performance will be better in the latter. Please see my site for an example of using an Nginx reverse proxy - you would just want to proxy requests for your static files location to a different back-end server.
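As a minimal sketch of that reverse-proxy idea in Nginx (the back-end address 10.0.0.2 and the uploads path are assumptions; this goes inside the relevant server { } block):

    # pass requests for uploaded media through to a separate static-file server
    location /wp-content/uploads/ {
        proxy_pass http://10.0.0.2;
        proxy_set_header Host $host;
    }

Requests for anything under /wp-content/uploads/ are fetched from the second server, while the rest of the site is still served locally.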

How do I purge my root domain from Amazon Cloudfront CDN - and should I?

I am just realizing that I may have erred in setting up Amazon CloudFront (origin pull), not S3 buckets.
When navigating to the homepage, http://www.occupyhln.org (the WordPress domain), the browser tries to connect to the A record I set up, which is http://cdn.occupyhln.org ... and eventually loads as www.occupy in the browser address bar.
However, when I type in http://cdn.occupyhln.org, that loads in the address bar as well. I was under the impression that this isn't recommended either.
Am I correct in assuming this is adding an unnecessary redirect and slowing down page load times? I thought I only wanted my static files to be hosted by Amazon (.js, .css, .jpg, .png, etc.).
What can I do to remedy this error -- assuming it is one -- and prevent it from happening in the future? Any guidance would be appreciated!
I just saw cdn.occupyhln.org when I went to that page... so your mistake is that you should have created a CNAME (aka an alias) rather than an A record that resolves to the domain... in this case the subdomain cdn.occupyhln.org. You should use the W3 Total Cache plugin. It takes a lot of the guesswork out of optimizing your site. In addition, the other optimizations W3 Total Cache can help you with are just as important as (if not more important than) the CDN.
If you're not using AWS Route 53 for your name servers, I would recommend doing that.
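For illustration only (the IP address and distribution name below are placeholders), the difference being described, in zone-file style:

    ; what was set up: an A record resolving the subdomain to an IP address
    cdn.occupyhln.org.  A      203.0.113.10
    ; what the answer recommends instead: a CNAME aliasing the subdomain to the CloudFront distribution hostname
    cdn.occupyhln.org.  CNAME  d1234567.cloudfront.net.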

Dependencies that must be done away with for using CDN

I wanted to know: is there some special requirement for a website to make use of a CDN?
I mean, is there some special scheme (or at least some considerations) on which your website must be built right from the start to make use of a CDN (content delivery network)?
Is there anything that can stop a website from making use of a CDN, for example the way it references content files, static file paths, or anything else conceivable?
Thanks
It depends.
You have two kinds of CDN services:
Services like AWS CloudFront that require you to upload the files to some special place that they read from (e.g. AWS S3) - in this case you need to have a step in your build process that uploads the files and handles the resulting addresses somewhere inside your application
Services like Akamai that just need you to tweak your DNS records so they serve requests to your users instead of you - in this case you would have two domains (image.you.com and image2.you.com), with image.you.com pointing to Akamai and image2.you.com pointing to the original source of the files. Whenever a user requests an image that Akamai does not yet have, it comes to you through the "back door", fetches the file, and from then on serves it itself (see the DNS sketch below).
If you use the second approach it's really simple to have a CDN supporting your application.
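A rough sketch of the DNS arrangement described in the second option, reusing the hostnames from the answer (the CDN edge hostname and origin IP are placeholders):

    ; user-facing hostname, delegated to the CDN
    image.you.com.   CNAME  something.edge.cdn-provider.example.
    ; origin hostname the CDN pulls from when it does not yet have a file
    image2.you.com.  A      198.51.100.7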
There are a whole bunch of concerns when dealing with CDN solutions.
The first one is that a CDN can't serve a dynamic page - i.e. a page that is unique to every user. Typically, that includes PHP, ASPX, JSP, RubyOnRails etc. - so if you're hoping to support lots of users for a dynamic site, you have to come up with another solution. Some CDN providers support "Edge Side Includes" - this allows you to glue dynamic pages together with cached content on the CDN, but this creates quite a complex application.
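For reference, Edge Side Includes are just markup that the CDN expands at the edge; a fragment reference looks roughly like this (the fragment path is hypothetical):

    <html>
      <body>
        <!-- cached page shell, served from the CDN -->
        <esi:include src="/fragments/user-greeting" />
      </body>
    </html>

The CDN caches the shell but fetches /fragments/user-greeting from the origin on each request, which is where the added application complexity comes from.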
Of course, even on a dynamic application, a CDN can still serve static files - images, stylesheets, javascript files, videos etc.
@Tucaz explains the two major options above (actually, Akamai also provides a "filestore" CDN option). If you select the second option - effectively, the CDN becomes a caching reverse proxy in front of your website - it makes sense to tweak the cache headers on your HTTP server, and tell the CDN to honour those. Make sure you set your .ASPX files to not cache!
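As an illustration of the header tweaking mentioned above (the values are arbitrary examples, not recommendations):

    Static asset (safe for the CDN to cache):
        Cache-Control: public, max-age=86400
    Dynamic .ASPX page (must not be cached):
        Cache-Control: private, no-cache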

How to make Drupal's multisite algorithm ignore the domain name part

I currently develop Drupal web sites using its multi-site feature that allows me to have a single code base and support multiple distinct settings per each site.
I set up a dev server and I was quite happy with my arrangement of domains like example.com.local (not that happy because I had to perform a small conversion before entering production, but still quite happy) and the thing used to work well. Too bad I recently started to work at places outside the LAN in which my dev server resides--mostly at clients' places where I need to demo their sites. First of all I set up a dyndns.org account and the server is accessible through the Internet.
Unfortunately the whole domain-based multi-site setup ungracefully fell down, since I'm now accessing the server via myservername.dyndns.org and Drupal's algorithm takes the domain name into account, so I'm forced to use at least the TLD as part of the directory name (namely sites/local.example.com). So I decided to switch to directory-based multi-site, and now I'm able to access my server from inside the LAN using myservername.local/example.com (having renamed the sites/ subdirectories accordingly). You should easily see why this is suboptimal, since when I browse to myservername.dyndns.org/example.com Drupal looks for sites/org.example.com. I temporarily ended up making a link from sites/org.example.com to sites/local.example.com but again, this does not scale well if and when I have to drop dyndns.org for, say, dev.mycorporatesite.com...
Is there any other possibility? I have full access to the server, I can change Apache2's configs, .htaccess and all the stuff.
I would recommend against referencing Drupal multisites by folder; instead, set up your server with a fixed domain name and give each site its own subdomain.
So your dev server is at mydevserver.com
and then each site could be
client1.mydevserver.com
client2.mydevserver.com
etc.
If, at the same time as creating these, you also move the files folder from the default location to whatever the live site will be, i.e.
sites/livesite.com/files
then when you have to go live, all the references will already be correct (if you are on Drupal 7 this might not be an issue).
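A sketch of the resulting sites/ layout under this scheme, using the names from the answer (livesite.com stands in for whatever the production domain will be):

    sites/
        client1.mydevserver.com/
            settings.php
        client2.mydevserver.com/
            settings.php
        livesite.com/
            files/    <- uploaded files already live under the production name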
