I know how to Auto Scale. Now I need to know how to configure a web server so that I can add it to an ELB and have it trigger Auto Scaling (up and down).
I have an EC2 instance running a web server. I have mounted an EBS volume and am using it as the web root. Now I want to make an AMI based on this server and tell Auto Scaling to launch new servers based on this AMI.
Every day my WordPress site gets updated with new posts. If I make the AMI today and two days later a traffic spike causes the ELB to scale up to meet demand, how will my EBS data be reflected in the AMI?
I want to understand the role of the AMI in Auto Scaling. How will a newly launched server in the scaling group have the www data that is on the attached EBS volume? I know that an EBS volume can only be attached to one server.
Also, when the AMI is used to launch a new server, will it grab the latest data from the source server at the moment it is launched, so that the new server has the most recent changes?
Can someone guide me through this?
Related
I have installed a LAMP server on an EC2 instance. Then I created an AMI so that I can easily spin up instances in the future.
Today I went back to spin up one such instance, and to my surprise the IP in the configuration is wrong. Basically when I first installed the LAMP server, Wordpress detected the IP and configured accordingly. Now on the instance that I launched today the IP is different, but the configuration for the previous IP is still there.
Now, I know how to change Wordpress IP. My question is: How can I make this step automatic when I launch an EC2 instance from an AMI?
Thanks
Instance Metadata will give you a lot of information about the current EC2 instance. You can use that plus some hand-crafted shell scripts, triggered on boot, to update the configuration.
An alternative is to use a configuration management tool (Chef, Ansible, ...) to help you configure the application.
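For example, here is a minimal boot-time sketch (assuming WP-CLI is installed and WordPress lives under /var/www/html; both are assumptions, adjust to your layout) that reads the instance's current public IP from the metadata service and rewrites the WordPress URLs:

```bash
#!/bin/bash
# Run at boot, e.g. via user data, /etc/rc.local, or a systemd unit.
# Fetch this instance's public IP from the instance metadata service.
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Update the WordPress site/home URLs to match. Assumes WP-CLI and a
# WordPress install in /var/www/html -- adjust for your setup.
wp option update siteurl "http://${PUBLIC_IP}" --path=/var/www/html --allow-root
wp option update home "http://${PUBLIC_IP}" --path=/var/www/html --allow-root
```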
I've found out that Instagram shares their technology implementation with other developers through their blog. They have some great solutions for the problems they run into. One of those solutions is an Elastic Load Balancer on Amazon with 3 nginx instances behind it. What is the task of those nginx servers? And what is the task of the Elastic Load Balancer, and what is the relation between them?
Disclaimer: I am no expert on this in any way and am in the process of learning about AWS ecosystem myself.
The ELB (Elastic Load Balancer) has no functionality on its own except receiving requests and routing them to the right server. The servers can run Nginx, IIS, Apache, lighttpd, you name it.
I will give you a real use case.
I had one Nginx server running one WordPress blog. This server was, like I said, powered by Nginx serving static content and "upstreaming" .php requests to PHP-FPM running on the same server. Everything was going fine until, one day, the blog was featured on a TV show. I had a ton of users and the server could not keep up with that much traffic.
My first reaction was to just use the AMI (Amazon Machine Image) to spin up a copy of my server on a more powerful instance like m1.large. The problem was that I knew traffic would keep increasing over the next couple of days. Soon I would have to spin up an even more powerful machine, which would mean more downtime and trouble.
Instead, I launched an ELB (Elastic Load Balancer) and updated my DNS to point website traffic to the ELB instead of directly to the server. The user doesn't know the server IP or anything; they only see the ELB, and everything else goes on inside Amazon's cloud.
The ELB decides which server the traffic goes to. You can have an ELB with only one server behind it (if your traffic is low at the moment), or hundreds. Servers can be created and added to the server array (server group) at any time, or you can configure Auto Scaling to spawn new servers and add them to the ELB server group automatically, using the Amazon command line.
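For example, manually adding an existing instance to a classic ELB is a one-liner (the load balancer name and instance ID below are placeholders):

```bash
# Register an existing instance with a classic ELB.
aws elb register-instances-with-load-balancer \
    --load-balancer-name blog-elb \
    --instances i-0123456789abcdef0
```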
Amazon CloudWatch (another product and an important part of the AWS ecosystem) is always watching your servers' metrics. The ELB's health checks determine which servers receive traffic, while CloudWatch is the agent that knows when all the servers are becoming too loaded and gives the order to spawn another server (using your AMI). When the servers are not under heavy load anymore, they are automatically terminated.
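A sketch of that wiring with the AWS CLI (group and policy names are placeholders): a scale-out policy on the Auto Scaling group, plus the CloudWatch alarm that fires it.

```bash
# Scale-out policy: add one instance when triggered.
POLICY_ARN=$(aws autoscaling put-scaling-policy \
    --auto-scaling-group-name blog-asg \
    --policy-name scale-out \
    --scaling-adjustment 1 \
    --adjustment-type ChangeInCapacity \
    --query PolicyARN --output text)

# CloudWatch alarm: average CPU above 70% for 10 minutes fires the policy.
aws cloudwatch put-metric-alarm \
    --alarm-name blog-high-cpu \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=blog-asg \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 70 --comparison-operator GreaterThanThreshold \
    --alarm-actions "$POLICY_ARN"
```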
This way I was able to serve all users at all times, and when the load was light I would have the ELB and only one Nginx server behind it. When the load was high, I would let it decide how many servers I needed (according to server load). Minimal downtime. Of course, you can set limits on how many servers you can afford at the same time, and so on, so you don't get billed more than you can pay.
You see, the Instagram guys said the following: "we used to run 2 Nginx machines and DNS Round-Robin between them". This is inefficient, IMO, compared to an ELB. DNS round robin has DNS route each request to a different server: the first goes to server one, the second goes to server two, and so on.
An ELB actually watches the servers' health (CPU usage, network usage) and decides where traffic goes based on that. Do you see the difference?
And they say: "The downside of this approach is the time it takes for DNS to update in case one of the machines needs to get decommissioned."
DNS round robin is a form of load balancing. But if one server goes kaput and you need to update DNS to remove it from the server group, you will have downtime (DNS takes time to propagate to the whole world), and some users will get routed to the bad server. With an ELB this is automatic: if a server is in bad health, it does not receive any more traffic - unless, of course, the whole group of servers is in bad health and you do not have any kind of auto scaling set up.
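The health check that drives this is configurable; here is a sketch for a classic ELB (the name and values are illustrative):

```bash
# Stop routing to an instance after 2 consecutive failed HTTP checks,
# and bring it back into rotation after 2 consecutive successes.
aws elb configure-health-check \
    --load-balancer-name blog-elb \
    --health-check Target=HTTP:80/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```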
And now the guys at Instagram: "Recently, we moved to using Amazon's Elastic Load Balancer, with 3 NGINX instances behind it that can be swapped in and out (and are automatically taken out of rotation if they fail a health check)."
The scenario I illustrated is fictional. Reality is actually more complex than that, but nothing that cannot be solved. For instance, if users upload pictures to your application, how do you keep consistency between all the machines in the server group? You would need to store the images on an external service like Amazon S3. In another post on Instagram Engineering they say: "The photos themselves go straight to Amazon S3, which currently stores several terabytes of photo data for us." If they have 3 Nginx servers on the load balancer and all servers serve HTML pages whose image links point to S3, there is no problem. If the images were stored locally on each instance, there would be no way to keep them consistent.
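As a rough illustration of keeping instances stateless (the bucket name and path are placeholders), uploaded media can be pushed off the instance to S3:

```bash
# Push locally uploaded media to S3 so any instance can serve the links.
# Bucket name and local path are placeholders.
aws s3 sync /var/www/html/wp-content/uploads s3://my-media-bucket/uploads
```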
All servers behind the ELB would also need an external database. For that, Amazon has RDS: all machines can point to the same database, and data consistency is guaranteed.
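In WordPress terms, that just means every instance points at the same RDS endpoint, e.g. via WP-CLI (the endpoint and web root below are placeholders):

```bash
# Point this instance's WordPress install at the shared RDS endpoint
# so all instances behind the ELB read and write the same database.
wp config set DB_HOST mydb.abc123xyz.us-east-1.rds.amazonaws.com --path=/var/www/html
```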
RDS also offers a "read replica" feature - that is RDS's way of load balancing reads. I don't know much about that at this time, sorry.
Try and read this: http://awsadvent.tumblr.com/post/38043683444/using-elb-and-auto-scaling
Can you please point the blog entry out?
Load balancers balance load. They monitor the web servers' health (response time, etc.) and distribute the load between the web servers. In more complex implementations it is possible to have new servers spawn automatically if there is a traffic spike. Of course, you need to make sure there is consistency between the servers; they can share the same database, for instance.
So I believe the load balancer gets hit and decides which server to route the traffic to, according to server health.
Nginx is a Web server that is extremely good at serving a lot of static content for simultaneous users.
Requests for dynamic pages can be offloaded to a different server using CGI/FastCGI, or the same servers that run nginx can also run PHP-FPM.
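A minimal nginx server block illustrating that split (the web root and PHP-FPM socket path are assumptions; adjust to your install):

```nginx
server {
    listen 80;
    root /var/www/html;
    index index.php index.html;

    # nginx serves static files directly.
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # .php requests are "upstreamed" to PHP-FPM.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```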
A lot of possibilities. I am on my cell phone right now; tomorrow I can write a little more.
Best regards.
I am aware that I am late to the party, but I think the NGINX instances behind the ELB in the Instagram blog post are there to provide a highly available load balancer, as described here.
The NGINX instances do not seem to be used as web servers in the blog post.
For that role they mention:
"Next up comes the application servers that handle our requests. We run Django on Amazon High-CPU Extra-Large machines."
So the ELB is used just as a replacement for their older solution of DNS Round-Robin between NGINX instances, which did not provide high availability.
I have hosted a WordPress blog on AWS using a t1.micro EC2 instance (Ubuntu).
I am not an expert in Linux administration. However, after going through a few tutorials, I managed to get WordPress running successfully.
I noticed a warning on the AWS console: "If your EC2 instance terminates, you will lose your data, including WordPress files and data stored by the MySQL service."
Does that mean I should use the S3 service for storing data to avoid any accidental data loss? Or will my data remain safe in an EBS volume even if my EC2 instance terminates?
By default, the root volume of an EC2 instance will be deleted if the instance is terminated. An instance is only terminated automatically if it's running as a spot instance; otherwise, it is only terminated if you do it yourself.
Now with that in mind, EBS volumes are not failure-proof; they have a small chance of failing. To recover from this, you should either create regular snapshots of your EBS volume or back up the contents of your instance to S3 or another storage service.
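For instance, a one-off snapshot from the CLI (the volume ID is a placeholder):

```bash
# Snapshot the instance's EBS volume; the ID is a placeholder.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "wordpress-root-backup-$(date +%F)"
```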
You can set up a snapshot lifecycle policy to create scheduled volume snapshots:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
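A rough sketch of such a policy via the Data Lifecycle Manager CLI (the account ID, role name, and tag values are placeholders; it snapshots every tagged volume daily and keeps 7 copies):

```bash
# Daily snapshots of all volumes tagged Backup=true, retaining 7 each.
aws dlm create-lifecycle-policy \
    --description "Daily EBS snapshots" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details '{
      "ResourceTypes": ["VOLUME"],
      "TargetTags": [{"Key": "Backup", "Value": "true"}],
      "Schedules": [{
        "Name": "daily",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
        "RetainRule": {"Count": 7}
      }]
    }'
```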
I am a DevOps guy, and presently I am running my Ruby on Rails application on an Ubuntu EC2 instance where the app and the web server reside in the same box, though we are using a MySQL RDS cluster. I can see a lot of spikes due to increased traffic to the website, so I am planning to change the setup: I want to put the nginx web server in one instance and the web app in a separate instance. This needs a load balancer, which would reside in the nginx box, and once traffic goes up, the nginx instance can be configured to auto scale. But what about the app server instance? It can be configured to auto scale, but it needs to attach itself to the web server, and the web server needs to discover the newly created app server. How can I achieve this? Kindly help me out.
Since you are currently using a single web server, a transition to nginx as a static web server and proxy for a backend web server on another instance really makes sense and will give you a performance boost.
However, I am not sure you really need auto scaling. Auto scaling mostly makes sense if you want to react to fast traffic spikes. If you have a more or less continuous workload that might increase over time, it is easier to manually launch another backend server and add it to the nginx config. If this does not work for you, you can still look at Amazon's Elastic Load Balancers and Auto Scaling afterwards.
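Adding another backend manually is then just one more line in the nginx upstream block (the addresses and port below are placeholders):

```nginx
# Hypothetical backend pool; add or remove app instances here.
upstream app_servers {
    server 10.0.1.10:3000;  # first Rails app instance
    server 10.0.1.11:3000;  # second one, added manually as load grows
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```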
I'm new to AWS and cloud computing in general. For a personal project I've created a micro instance on Amazon EC2 and installed and configured a WordPress multisite. For the database, I use an RDS instance.
My question is: how can I create a second micro instance that serves the same content, and use a load balancer to spread the traffic across the two instances? I want to do this so that if the first EC2 instance crashes, the site is served from the second instance and doesn't go down.
Thanks for your help, and sorry for any English mistakes.
As far as the WordPress installation is concerned, there are 2 main components:
1. The WordPress database
2. The WordPress files (application files, including plugins, themes, etc.)
The Database
For auto scaling to work and to ensure consistency, you need to have the database outside the auto-scaled EC2 instances. As mentioned in your question, yours is in RDS, so this won't be a problem.
The second EC2 instance
Step 1: First, create an AMI of your WordPress instance from the existing one.
Step 2: Launch a new EC2 instance from the AMI you created. This will result in 2 EC2 instances: Instance 1 (the original) and Instance 2 (a copy of Instance 1).
However, any changes that you make on Instance 1 won't reflect on Instance 2.
If you want to get rid of this problem, consider using the EFS service to create a shared volume across the 2 EC2 instances and configure the WordPress installation to work from that EFS volume. This way, your installation files and other content will be on a shared EFS volume commonly accessed by both EC2 instances.
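A sketch of mounting such a shared EFS volume as the web root on each instance (the filesystem ID and region are placeholders; assumes an NFS client on Ubuntu):

```bash
# Install the NFS client and mount the shared EFS filesystem as the
# WordPress web root. fs-12345678 and us-east-1 are placeholders.
sudo apt-get install -y nfs-common
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/html

# Persist the mount across reboots.
echo "fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/html nfs4 nfsvers=4.1,_netdev 0 0" \
    | sudo tee -a /etc/fstab
```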
You will have to move your database off localhost (I guess that you have it on the same micro instance), either to another EC2 instance or, preferably, to an RDS instance.
Afterwards, you need to create a copy of your EC2 instance on another micro instance and put them both behind a load balancer.
First, create an image of your existing micro EC2 instance on which you have configured WordPress.
Second, create a classic load balancer.
Third, create a launch configuration (LC) with the AMI you created above.
Fourth, create an auto scaling group with the above LC and ELB, and set the group size to 2.
This will make sure you have 2 instances running at all times; if any instance goes down, the ASG will create a new instance from the AMI and terminate the failed one. Reference:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-register-lbs-with-asg.html
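Sketched with the AWS CLI (all names, IDs, and zones are placeholders):

```bash
# Step 3: launch configuration from the AMI of the WordPress instance.
aws autoscaling create-launch-configuration \
    --launch-configuration-name wp-lc \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro

# Step 4: ASG pinned at 2 instances and attached to the classic ELB,
# replacing any instance that fails the ELB health check.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name wp-asg \
    --launch-configuration-name wp-lc \
    --load-balancer-names wp-elb \
    --health-check-type ELB --health-check-grace-period 300 \
    --min-size 2 --max-size 2 --desired-capacity 2 \
    --availability-zones us-east-1a us-east-1b
```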
Or
If you want, you can use Elastic Beanstalk instead:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html
Thanks