We are thinking of using CloudFront to host our website images/CSS/JS.
I realise the CDN would have excellent uptime,
but what if public internet DNS goes down temporarily
(as it does sometimes; here in New Zealand we occasionally can't reach international sites for a brief period)? Does CloudFront rely on such a service?
The website itself would obviously still show, but it wouldn't be able to fetch the CDN assets.
You could probably use Route 53 in conjunction with CloudFront to get high-availability DNS. It's possible that CloudFront already uses Route 53, since they're both Amazon products, but I don't know for sure.
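If you do pair them, pointing a hostname at a distribution is a one-off DNS change. A minimal sketch with the AWS CLI, assuming a hypothetical hosted zone and distribution (Z2FDTNDATAQYW2 is the fixed hosted-zone ID Route 53 uses for CloudFront aliases):

    # Alias static.example.com to a CloudFront distribution (names/IDs are placeholders).
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z1D633PJN98FT9 \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "static.example.com",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "Z2FDTNDATAQYW2",
              "DNSName": "d111111abcdef8.cloudfront.net",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }'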
I know from experience that with L7 load balancers we can redirect traffic to other endpoints if applications become unhealthy.
Since Firebase Hosting uses a CDN to deliver static content, is there a way to redirect traffic if that static content becomes unavailable?
I thought of using DNS-based redirection, but I am concerned that the propagation time may be too long (it could even take longer than Google resolving the incident).
Is there a way to manage that with this service, or should I switch to another kind of architecture?
This depends on your DNS manager itself. Firebase Hosting is delivered via Fastly, and if Fastly is down, a good ~30% of the internet is down in general, as was seen a few weeks ago when multiple social media sites and apps broke. The first condition, then, is to make sure your backup host is not also on Fastly.
From there, you would have to manage some basic logic through a third-party DNS manager. A popular one is Cloudflare, which lets you configure load balancer pools so that if your primary pool is unavailable, traffic is sent to your backup pool (costs do apply):
https://developers.cloudflare.com/load-balancing/create-load-balancer-ui
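For illustration only, here is roughly what that looks like through the Cloudflare API instead of the dashboard; the token, account/zone IDs, pool IDs, and origin address are placeholders, and the exact fields should be checked against the docs linked above:

    # Create a pool whose origin is the Firebase-hosted site (placeholder address).
    curl -X POST "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/load_balancers/pools" \
      -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
      --data '{"name":"primary","origins":[{"name":"firebase","address":"myapp.web.app","enabled":true}]}'

    # After creating a backup pool the same way, create the load balancer with a fallback pool.
    curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/load_balancers" \
      -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
      --data '{"name":"www.example.com","default_pools":["<primary-pool-id>"],"fallback_pool":"<backup-pool-id>","proxied":true}'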
Here is the situation:
I have a static website hosted on AWS S3 (www.mysite.com); however, I also want to attach a blog at a sub-path within my domain (www.mysite.com/blog) which uses WordPress on an EC2 instance.
How would I go about it?
Yes, this can be done, but you need to understand why the solution works the way it does.
A domain name points to a single logical endpoint, which handles all requests for the domain. If certain paths are handled by one system and other paths are handled by another one, then a single system must be in charge of routing those requests, and routing them through itself.
You cannot configure how paths are handled using DNS.
In my answer to Can I use CloudFront to serve a WordPress blog from the same domain, but a different server? (at Server Fault), I gave an overview of how this can be done in an AWS-centric solution. CloudFront, in addition to providing caching, is a reverse proxy, and that's fundamentally what you need here: a reverse proxy to examine each request and select the correct back-end server. There are other solutions, such as using HAProxy in EC2 to handle request routing, but they are more complex and may not perform as well in all cases.
You configure a CloudFront distribution with two origin servers (your bucket's website endpoint and the WordPress server), and use two cache behaviors so that /blog* goes to WordPress and everything else goes to the bucket. Then you configure your domain name as an alternate domain name on the CloudFront distribution, and point your domain name to CloudFront in DNS.
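As a rough sketch of the relevant parts only (the bucket, server names, and IDs are placeholders, and a real DistributionConfig needs many more required fields, such as viewer protocol policy, cache settings, origin configs, and a caller reference, before CloudFront will accept it):

    # Abridged: default behavior -> S3 website endpoint, /blog* -> the WordPress origin.
    cat > distribution.json <<'EOF'
    {
      "Aliases": { "Quantity": 1, "Items": ["www.mysite.com"] },
      "Origins": { "Quantity": 2, "Items": [
        { "Id": "s3-site",   "DomainName": "www.mysite.com.s3-website-us-east-1.amazonaws.com" },
        { "Id": "wordpress", "DomainName": "blog-origin.mysite.com" }
      ]},
      "DefaultCacheBehavior": { "TargetOriginId": "s3-site" },
      "CacheBehaviors": { "Quantity": 1, "Items": [
        { "PathPattern": "/blog*", "TargetOriginId": "wordpress" }
      ]}
    }
    EOF
    aws cloudfront create-distribution --distribution-config file://distribution.json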
The slightly tricky part here is that the WordPress site must be rooted at /blog and not at /, because CloudFront will actually send the /blog (at the beginning of the path) to the WP machine, so it needs to expect that. CloudFront uses the path prefix to select the origin server (by finding the matching cache behavior for that path), but it can't remove the path prefix.¹
WordPress is not my specialty, but this appears to require only some WP configuration changes that look fairly straightforward.
¹ CloudFront can't remove the path prefix. This will change when Lambda@Edge launches, which is an integration of CloudFront and Lambda functions, allowing server-side examination and manipulation of request and response headers from within the CloudFront infrastructure, but even when that is available, it will still be advisable to configure WP to not be at the root, for simplicity.
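For the WordPress side of this, assuming WP-CLI is available on the instance (the URL is a placeholder), updating the home and site URLs to include the /blog prefix is usually the starting point:

    # Tell WordPress it lives under /blog so generated links include the prefix.
    wp option update home    'https://www.mysite.com/blog'
    wp option update siteurl 'https://www.mysite.com/blog'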
By a different subdomain, I'm assuming that you want the blog to be at blog.mysite.com. I think that is the only way to go (you cannot use a /blog path; you will have to use a subdomain). In that case, the following steps should work (a rough CLI sketch follows the list):
Create an Elastic IP and attach it to the EC2 instance
Configure the EC2 WP instance to respond to blog.mysite.com
In the DNS provider of www.mysite.com, point the A record for blog to the Elastic IP of the EC2 instance
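A rough AWS CLI sketch of those steps (instance, allocation, and zone IDs are placeholders; the last command assumes the DNS provider happens to be Route 53, otherwise create the A record in whatever panel the provider offers):

    # 1. Allocate an Elastic IP and attach it to the WordPress EC2 instance.
    aws ec2 allocate-address --domain vpc        # note the returned AllocationId and PublicIp
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

    # 3. Point blog.mysite.com at the Elastic IP (step 2 is WordPress/Apache virtual-host config).
    aws route53 change-resource-record-sets --hosted-zone-id Z111111QQQQQQQ --change-batch '{
      "Changes": [{ "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "blog.mysite.com", "Type": "A", "TTL": 300,
        "ResourceRecords": [{ "Value": "203.0.113.10" }] } }]
    }'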
I am just wondering if this is possible/how I could go about doing this.
I work for a company that has their domain name registered on Site A while their hosting is on Site B. This is no issue as we just have the Registrar at Site A point the Name Servers to Site B. Easy.
Where I get a little confused is when I want to use a CDN such as CloudFlare (Site C). In a basic case, I would go to the registrar/host and just change my name servers to the ones given by CloudFlare. However, since my registrar and web host are different, it seems something could get lost in the mix: if Site A points to Site C, how does the host at Site B supply Site C with all the information it needs to serve and control the CDN?
Thanks for the insight!
I reached out to CloudFlare and got this very simple, perfect answer.
Here's how it will work for you:
At your domain registrar, you will set your authoritative name servers to the ones that CloudFlare will assign to you when you sign up.
Within CloudFlare, using our dashboard, you will configure your DNS zone file to point your domain to the IP address of the server assigned to you by your hosting provider.
The CloudFlare reverse proxy does the rest!
Hope this helps!
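For reference, the second step (pointing the zone at your hosting provider's IP) looks roughly like this when done via the Cloudflare API instead of the dashboard; the zone ID, token, and IP are placeholders:

    # Create a proxied A record for www pointing at the IP your host (Site B) assigned you.
    curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
      -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
      --data '{"type":"A","name":"www","content":"203.0.113.10","proxied":true}'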
We have a website deployed on AWS EC2 running Ubuntu, Apache, and MySQL. We have been getting continuous requests from the IPs below:
"195.154.105.219"
"88.150.242.243", requesting the xmlrpc.php file using the POST method. As a result our website has become really slow and our clients' work has been affected. For now we have blocked these IPs by dropping them in iptables. We would like to know how to safeguard our site from any future attacks like this.
The question is very general, and depending on your application's requirements, your budget, and other factors, there are several techniques you can use, separately or together, to mitigate DDoS and spam attacks.
Use Auto Scaling and an Elastic Load Balancer to let AWS scale your infrastructure with traffic: http://aws.amazon.com/autoscaling/
Use S3 to serve static content. S3 is designed to scale automatically with incoming traffic, and any content served directly from S3 offloads your EC2-based web server: http://aws.amazon.com/s3/
Use CloudFront to distribute and serve your content from AWS edge locations. This mitigates DDoS by spreading attackers' requests across the network of edge locations instead of sending the traffic to your web server: http://aws.amazon.com/cloudfront/
All three options have an associated cost; be sure to understand the pricing structure before deciding to implement any of them.
If you have a relatively short and stable list of IP addresses you want to block, you can add explicit deny rules to your VPC subnet's network ACL (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html); note that your EC2 instance's security group (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html) only allows traffic and cannot deny specific sources. This approach is not very scalable and, most of the time, you will be playing a cat-and-mouse game trying to catch up with whatever new addresses your attackers use.
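For example, an explicit deny for one of the offending addresses might look like this with the AWS CLI (the ACL ID and rule number are placeholders):

    # Deny all traffic from 195.154.105.219 at the subnet's network ACL.
    aws ec2 create-network-acl-entry \
      --network-acl-id acl-0123456789abcdef0 \
      --ingress --rule-number 90 \
      --protocol=-1 --rule-action deny \
      --cidr-block 195.154.105.219/32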
Also, using plain old Apache configuration to block certain URLs or restrict access to them by IP address is very effective (http://httpd.apache.org/docs/current/en/mod/mod_authz_core.html#require and the <Files> directive).
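For instance, to keep xmlrpc.php reachable for you but not for the offending addresses from the question, a sketch for Apache 2.4 on Ubuntu (the file name is arbitrary):

    # Restrict xmlrpc.php by source IP using mod_authz_core (Apache 2.4 syntax).
    cat > /etc/apache2/conf-available/xmlrpc-restrict.conf <<'EOF'
    <Files "xmlrpc.php">
        <RequireAll>
            Require all granted
            Require not ip 195.154.105.219 88.150.242.243
        </RequireAll>
    </Files>
    EOF
    a2enconf xmlrpc-restrict && systemctl reload apache2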
Last but not least, I would encourage everyone to watch this re:Invent talk about DDoS resiliency on AWS: https://www.youtube.com/watch?v=V7vTPlV8P3U
Seb
xmlrpc.php is from WordPress. Install the Disable XML-RPC Pingback plugin, or better yet, deny access to the xmlrpc.php file in the WordPress site's .htaccess ;) that will fix it. Also check wp-admin and the site's scripts for anything weird, or just run find /var/www/ -type f -mtime -10 to list the most recently modified files and check for any weird PHP scripts.
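The .htaccess part of that advice looks something like this, assuming WordPress lives in /var/www/html and Apache 2.4 (older Apache 2.2 uses the Order/Deny directives instead):

    # Deny xmlrpc.php to everyone via the WordPress site's .htaccess.
    cat >> /var/www/html/.htaccess <<'EOF'
    <Files "xmlrpc.php">
        Require all denied
    </Files>
    EOF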
I'm just learning about CDNs, so please forgive if this is a dumb question.
Would implementing a CDN involve moving images and changing paths?
Yes. A CDN (Content Delivery Network) is at its basis nothing more than a set of web servers.
If you want to host files on a CDN, you must copy your files to the CDN servers and then, on your own web page, use the full CDN address that points to those files on those servers.
You can use a CDN-style setup on the same server but with a different hostname. For instance, serving your page from www.example.com with assets on cdn.example.com (with cdn.example.com as a vhost alias) should be faster than getting all data from www.example.com alone; I think this is because of the limit on the number of HTTP connections a browser will open per hostname.
Of course it's best if you have it on another server; in that case you have to copy everything over.
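A minimal Apache sketch of the vhost-alias idea above (names and paths are placeholders):

    # Serve the same document root under both hostnames; assets are then referenced
    # in the page markup via https://cdn.example.com/... instead of www.example.com.
    cat > /etc/apache2/sites-available/example.conf <<'EOF'
    <VirtualHost *:80>
        ServerName  www.example.com
        ServerAlias cdn.example.com
        DocumentRoot /var/www/example
    </VirtualHost>
    EOF
    a2ensite example && systemctl reload apache2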
Not necessarily. You can use a service such as CloudFlare, which requires only a modification of some of your DNS settings. In short, the service determines which files being served are static and caches those in its network, generally reducing overall traffic to your servers. You also get the benefit of any geographical distribution the service provides that your own hosting service might not.