I have hosted a WordPress blog on AWS using an EC2 t1.micro instance (Ubuntu).
I am not an expert in Linux administration. However, after going through a few tutorials, I managed to get WordPress running successfully.
I noticed a warning on the AWS console that "if your EC2 instance terminates, you will lose your data, including WordPress files and data stored by the MySQL service."
Does that mean I should use the S3 service for storing data to avoid any accidental data loss? Or will my data remain safe on an EBS volume even if my EC2 instance terminates?
By default, the root volume of an EC2 instance is deleted when the instance is terminated. An instance is only terminated automatically if it's running as a Spot Instance; otherwise, it is only terminated if you do it yourself.
With that in mind, EBS volumes are not failure-proof: they have a small chance of failing. To recover from a failure, you should either create regular snapshots of your EBS volume or back up the contents of your instance to S3 or another storage service.
You can set up a snapshot lifecycle policy to create scheduled volume snapshots:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
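As a sketch, the policy details you hand to Data Lifecycle Manager could look something like this (the tag key/value, schedule name, and times here are placeholders; see the linked guide for the full format):

```json
{
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [{ "Key": "Backup", "Value": "wordpress" }],
  "Schedules": [
    {
      "Name": "DailySnapshots",
      "CreateRule": { "Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"] },
      "RetainRule": { "Count": 7 }
    }
  ]
}
```

A policy like this would snapshot every volume carrying the tag once a day at 03:00 UTC and keep the most recent 7 snapshots.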
I have a WordPress instance running on an EC2 instance (t2.micro). MySQL and WordPress are both hosted on this instance, and an EBS volume is attached to it.
The EC2 instance occasionally crashes and becomes inaccessible (accessing it in a browser times out), and sometimes I get the following error, which means the database is somehow not working:
Error establishing a database connection
If you look at the monitoring graphs, you can see a number of peaks in EC2 "CPU Utilization" and, at the same time, peaks in EBS "Read Bytes". These are examples of occasions when the site became inaccessible.
What could be causing this issue?
Should I increase the memory of the EC2 instance, or add more storage on EBS (it has 35 GB at the moment; I've already increased it once)?
I was wondering whether Airflow's scheduler and webserver daemons could be launched on different server instances.
And if that's possible, why not use a serverless architecture for the Flask web server?
There are plenty of resources about multi-node clusters for workers, but I found nothing about splitting the scheduler and the webserver.
Has anyone already done this? What difficulties might I face?
I would say the minimum requirement would be that both instances have:
Read(-write) access to the same AIRFLOW_HOME directory (for accessing DAG scripts and the shared config file)
Access to the same database backend (for accessing shared metadata)
Exactly the same Airflow version (to prevent any potential incompatibilities)
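As a sketch, the shared parts of airflow.cfg on both machines might look like this (the path, hostname, and credentials are placeholders; in Airflow 1.x the metadata database connection lives under [core]):

```ini
[core]
# Must point at the same directory (e.g. an NFS/EFS mount) on both machines
dags_folder = /mnt/shared/airflow/dags
# Both daemons must talk to the same metadata database
sql_alchemy_conn = postgresql+psycopg2://airflow:secret@db-host:5432/airflow
```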
Then just try it out and report back (I am really curious ;) ).
I have successfully deployed a Flask application to AWS Elastic Beanstalk. The application uses an SQLAlchemy database, and I am using Flask-Security to handle login/registration, etc. I am using Flask-Migrate to handle database migrations.
The problem here is that whenever I use git aws.push, it pushes my local database to AWS and overwrites the live one. I guess what I'd like is to only ever "pull" the live database from AWS EB, and push it only in rare circumstances.
Will I be able to access the SQLAlchemy database I have pushed to AWS? Or is this not possible? Perhaps there is some combination of .gitignore and .elasticbeanstalk settings that could work?
I am using SQLite.
Yes: your database should not be in version control. It should live on persistent storage (most likely an Elastic Block Store (EBS) volume), and you should handle schema changes (migrations) using something like Flask-Migrate.
The AWS help article on EBS should get you started, but at a high level, what you are going to do is:
Create an EBS volume
Attach the volume to a running instance
Mount the volume on the instance
Expose the volume to other instances using a Network File System (NFS)
Ensure that newly launched instances mount the NFS share
Alternatively, you can:
Wait until Elastic File System (EFS) is out of preview (or request access), and mount the EFS on all of your EB-started instances once EB supports it.
Switch to the Relational Database Service (RDS) (or run your own database server on EC2) and run an instance of (PostgreSQL|MySQL|Whatever you choose) locally for testing.
The key is hosting your database outside of your Elastic Beanstalk environment. Otherwise, as load increases, different instances of your Flask app will each write to their own local DB, and there won't be a "master" database containing all the commits.
The easiest solution is using the AWS Relational Database Service (RDS) to host your DB as an outside service. A good tutorial that walks through this exact scenario:
Deploying a Flask Application on AWS using Elastic Beanstalk and RDS
SQLAlchemy/Flask/AWS is definitely not a waste of time! Good luck.
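As a minimal sketch of the RDS route: when you attach an RDS instance to an Elastic Beanstalk environment, EB exposes the connection details as RDS_* environment variables, so the app can build its database URI at runtime and fall back to a local database for testing (the fallback filename here is just an example):

```python
import os

def database_uri():
    """Build a SQLAlchemy database URI from the environment.

    Uses the RDS_* variables that Elastic Beanstalk sets when an RDS
    instance is attached; falls back to local SQLite for development.
    """
    host = os.environ.get("RDS_HOSTNAME")
    if not host:
        return "sqlite:///local-dev.db"
    user = os.environ["RDS_USERNAME"]
    password = os.environ["RDS_PASSWORD"]
    port = os.environ.get("RDS_PORT", "5432")
    name = os.environ["RDS_DB_NAME"]
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"

# In the Flask app factory:
# app.config["SQLALCHEMY_DATABASE_URI"] = database_uri()
```

This keeps the database out of your git repository entirely: deploys only ship code, and each environment picks up its own database from the environment variables.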
I know how to set up Auto Scaling. Now I need to know how to configure a web server so that I can add it to an ELB and have it trigger Auto Scaling (up and down).
I have an EC2 server running a web server. I have mounted an EBS volume and am using it as the web root. Now I want to make an AMI based on this server and tell Auto Scaling to launch new servers from that AMI.
Every day my WordPress site gets updated with new posts. If I make the AMI today and two days later a traffic spike causes the ELB to scale up to meet demand, how will my EBS data be kept up to date on the AMI?
I want to understand the role of the AMI in Auto Scaling. How will a newly launched server in the scaling group have the www data that is on the attached EBS volume? I know that an EBS volume can only be attached to one server at a time.
Also, when the AMI is used to launch a new server, will it grab the latest data from the source server at launch time, so that the new server has the most recent changes?
Can someone guide me through this?
I'm new to AWS and cloud computing in general. For a personal project, I've created a micro instance on Amazon EC2 and installed and configured a WordPress multisite. For the database, I use an RDS instance.
My question is: how can I create a second micro instance that serves the same content, and use a load balancer to spread traffic across the two instances? I want to do this so that if the first EC2 instance crashes, the site is served from the second instance and doesn't go down.
Thanks for your help, and sorry for any English-related errors.
As far as the WordPress installation is concerned, there are two main components:
WordPress database
WordPress files (application files, including plugins, themes, etc.)
The Database
For enabling an auto scaling setup and ensuring consistency, you will need to have the database outside the auto-scaled EC2 instances. As mentioned in your question, yours is already in RDS, so it won't be a problem.
The second EC2 instance
Step 1: Create an AMI of your WordPress instance from the existing one.
Step 2: Launch a new EC2 instance from the AMI you created. This will result in two EC2 instances: Instance 1 (the original) and Instance 2 (a copy of Instance 1).
However, any changes you make on Instance 1 won't be reflected on Instance 2.
To get around this problem, consider using the EFS service to create a shared volume across the two EC2 instances, and configure the WordPress installation to work from that EFS volume. This way, your installation files and other content will live on a shared EFS volume commonly accessed by both EC2 instances.
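As a sketch, once the amazon-efs-utils mount helper is installed on both instances, the shared volume could be mounted via an /etc/fstab entry like this (the file system ID and web-root mount point are placeholders for your own):

```
fs-12345678:/ /var/www/html efs _netdev,tls 0 0
```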
You will have to move your database off localhost (I guess you have it on the same micro instance), either to another EC2 instance or, preferably, to an RDS instance.
Afterwards, you need to create a copy of your EC2 instance as another micro instance and put both behind a load balancer.
First, create an image of your existing micro EC2 instance on which you have configured WordPress.
Second, create a Classic Load Balancer.
Third, create a launch configuration (LC) with the AMI you created above.
Fourth, create an Auto Scaling group with the above LC and ELB, and set the group size to 2.
This will make sure you have 2 instances running at all times; if any instance goes down, the ASG will launch a new instance from the AMI and terminate the failed one. Reference:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-register-lbs-with-asg.html
Or
If you want, you can also use Elastic Beanstalk:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html
Thanks