I have a site running Wordpress on an EC2 instance. It is now down and AWS is telling me the instance will be retired in a couple weeks. The instance retirement docs say that I need to create an AMI from my instance and restore from that AMI to another instance. This process has failed me so far on three attempts (with the three AMI creation attempts still pending after 24 hours).
While backup via AMI creation is recommended in this situation, is it necessary? If I just stop/start my instance, will my whole Wordpress installation (including posts, content, and other data stored in MySQL) come right back up once it's started on a healthy host?
Yes. Stopping the instance and starting it again will work fine.
Any data stored on an EBS volume will be preserved. Data on an Instance Store device will be lost. (It is unlikely you would be using instance store, but worth checking.)
When started, the instance will be provisioned on a different host.
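A minimal AWS CLI sketch of the check and the stop/start cycle (the instance ID below is a placeholder; substitute your own, and assume the CLI is already configured with credentials):

```shell
# Check whether the root device is EBS-backed ("ebs") — instance-store data
# would be lost on a stop, so verify this first.
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].RootDeviceType'

# Stop the instance, wait until it is fully stopped, then start it again.
# On start it will be provisioned on a different (healthy) host.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

Note that unless the instance has an Elastic IP attached, its public IP address will change after a stop/start, so you may need to update DNS.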
I'm learning Kubernetes and trying to set up a cluster that could handle a single Wordpress site with high traffic. Reading multiple examples online from both Google Cloud and Kubernetes.io, they all set the accessMode to ReadWriteOnce when creating the PVCs.
Does this mean that if I scale the Wordpress Deployment to multiple replicas, they will all use the same single PVC for persistent read/write data (just as they use the single DB instance)?
The Google example here only uses a single replica and a single DB instance: https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
My question is how do you handle persistent storage on a multiple-replica instance?
ReadWriteOnce means all replicas will use the same volume and therefore they will all run on one node. This can be suboptimal.
You can set up a ReadWriteMany storage class (backed by NFS, GlusterFS, CephFS, or others) that allows multiple nodes to mount the same volume.
Alternatively, you can run your application as a StatefulSet with a volumeClaimTemplate, which ensures that each replica mounts its own ReadWriteOnce volume.
If you are on AWS (and therefore limited by EBS volumes being mountable on only a single instance at a time), another option is setting up Pod Affinity so that all replicas schedule on the same node. Not ideal from an HA standpoint, but it is an option.
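A sketch of the StatefulSet approach (image tag, sizes, and names are illustrative, not from the original examples):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress
spec:
  serviceName: wordpress
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:5.8-apache
        volumeMounts:
        - name: wp-content
          mountPath: /var/www/html
  volumeClaimTemplates:        # one ReadWriteOnce PVC per replica
  - metadata:
      name: wp-content
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

One caveat with this layout: each replica gets its own independent copy of the volume, so uploads and plugin installs on one pod will not automatically appear on the others unless you sync them or keep media in object storage.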
If after setting that up you start running into any wonky issues (e.g. being unable to log in to the admin, redirect loops, disappearing media), I wrote a guide on some of the more common issues people run into when running Wordpress on Kubernetes; it might be worth a look!
I have a Wordpress application running on EC2 on AWS. I haven't decided between Amazon RDS and my own database on different hosting. Let's say I have my own MySQL database at Lunarpages or Bluehost hosting, and I let my Wordpress on the EC2 instance connect remotely to that database instead of to Amazon RDS. Which one is cheaper? I have heard people say that Amazon RDS is very expensive, so I thought that to save costs I could point my Wordpress at my own database rather than RDS. I don't know whether that is true, or how well it would perform. Which one is best? Any suggestions appreciated. Thank you.
I don't agree with that. In Amazon AWS, the first thing you do is set up a virtual private cloud (VPC) and create the corresponding network interfaces. My experience working with heavy CMSs is that the architecture is much more stable with EC2 + RDS, each in its own instance. In addition, RDS has automated version maintenance and is much less likely to fail or suffer a crash than a MySQL server running on the same virtual machine.
Also in terms of speed and performance: working with this scheme, for example with Wordpress, the system flies; the speed is much higher, and noticeably so even with small machines.
Running the database on different hosting will add extra latency.
Let's do the math on AWS RDS for the smallest instances (taking the eu-west-1 region as an example):
Running on RDS: db.t2.micro at $0.018 per hour, or $12.96 per month. Free for the first year under the AWS free tier.
Running on EC2: t2.micro (you configure MySQL, backups, ...) at $0.0126 per hour, or $9.07 per month. Free for the first year under the AWS free tier.
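The arithmetic behind those monthly figures, assuming a 720-hour (30-day) month; the hourly prices are the eu-west-1 examples above and will drift over time:

```python
HOURS_PER_MONTH = 720  # 30-day month

rds_hourly = 0.018   # db.t2.micro, eu-west-1
ec2_hourly = 0.0126  # t2.micro, eu-west-1

print(f"RDS db.t2.micro: ${rds_hourly * HOURS_PER_MONTH:.2f}/month")
print(f"EC2 t2.micro:    ${ec2_hourly * HOURS_PER_MONTH:.2f}/month")
```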
If your application is small enough, you could host both your database and your application on the same machine (solution 2)
Performance-wise, it is not good to have the database on a totally different network from the website itself: every query adds latency, and if you have a lot of calls, that delay multiplies.
You can host a local database on the EC2 instance itself; this would be the best choice.
This is probably an inane question for those of you experienced in AWS, but I've been googling for a few hours and really need a straightforward guide. I have configured my site running Bitnami Wordpress on one t2.micro EC2 instance.
I'm going to launch the site soon but would like it to elastically scale with demand. This might be an oversimplified question, but how do I set this up? Do I make a second instance from the same EBS volume and balance load between them? I'm just a little lost. Any guidance on where to start configuring scalability for a single EC2 instance would be very very helpful. Thank you.
To load balance your traffic you will need to configure an ELB. To auto-scale your instances you will need to configure an auto-scaling group. Once you are done with the Wordpress configuration on your EC2 instance, follow the steps below:
1. Create an AMI of your EC2 instance. Ref: https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/tkv-create-ami-from-instance.html
2. Create a classic load balancer. Ref: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-getting-started.html
3. Create a launch configuration using the AMI created in step 1. Ref: https://docs.aws.amazon.com/autoscaling/latest/userguide/create-launch-config.html
4. Create an auto-scaling group using the launch configuration created in step 3. Ref: https://docs.aws.amazon.com/autoscaling/latest/userguide/create-asg.html (at step 6.f in this link you will need to use the ELB that you created in step 2).
All the AWS docs/guides provide step-by-step explanations of how to configure things, which is why I haven't added more detail to the steps above, only the sequence in which you need to perform them.
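The same sequence can be done from the AWS CLI. A sketch, in which every name, instance ID, AMI ID, and availability zone is a placeholder to substitute with your own:

```shell
# 1. Create an AMI from the configured Wordpress instance
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "wordpress-base" --description "Wordpress golden image"

# 2. Create a classic load balancer listening on port 80
aws elb create-load-balancer --load-balancer-name wp-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones eu-west-1a eu-west-1b

# 3. Create a launch configuration from the AMI created in step 1
aws autoscaling create-launch-configuration \
    --launch-configuration-name wp-lc \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro

# 4. Create an auto-scaling group tied to the launch configuration and the ELB
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name wp-asg \
    --launch-configuration-name wp-lc \
    --min-size 1 --max-size 4 \
    --availability-zones eu-west-1a eu-west-1b \
    --load-balancer-names wp-elb
```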
I am running a Wordpress blog on an AWS t2.micro EC2 instance running Amazon Linux. However, most days I wake to an email saying that my blog is offline. When this happens I cannot SSH into the EC2 instance, yet on the AWS dashboard it is shown as online and none of the metrics look too suspicious.
The time I was notified about the blog being down was just after the start of the first plateau on the CPU Utilization graph - 4:31am.
A restart from the AWS control panel/app fixes things for a day or two, however I would like to have a more permanent fix.
Can anyone suggest any changes I can make to my instance to get it running more reliably?
[Edit - February 2018]
This has started happening again, after being fine for a few months. Each morning this week I have woken up to an alert that my blog is offline; a reboot of the server brings it back online. This morning I was able to investigate and could SSH in. Running top gave the following (I noticed the lack of http/mysqld processes):
My CloudWatch metrics for the last 72 hours are:
The bigger spikes are where I rebooted the instance. As you can see, although there are CPU spikes, they aren't huge, and the CPU Credit Balance metric barely dips.
As this question has had so many views, I thought I would post about the workaround I have used to overcome this issue.
I still do not know why my blog goes offline, but knowing that rebooting the EC2 instance recovered it, I decided to automate that reboot.
There are three parts to this solution:
Detect the "blog offline" email from Jetpack and get it to AWS. I created a rule in my Gmail to handle this, forwarding the email to an address monitored by AWS SES.
SES publishes the incoming email to an SNS topic, which triggers an AWS Lambda function.
The Lambda function reboots the EC2 instance.
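The Lambda function itself can be very small. A sketch in Python, assuming the instance ID is supplied via an INSTANCE_ID environment variable (boto3 is preinstalled in the Lambda Python runtime); the ec2 parameter is only there so the handler can be exercised locally with a stub client:

```python
import os

def lambda_handler(event, context, ec2=None):
    """Reboot the EC2 instance named in the INSTANCE_ID env var."""
    if ec2 is None:
        # boto3 is available by default inside the Lambda runtime;
        # imported lazily so the function can be tested without it.
        import boto3
        ec2 = boto3.client("ec2")
    instance_id = os.environ["INSTANCE_ID"]
    ec2.reboot_instances(InstanceIds=[instance_id])
    return {"rebooted": instance_id}
```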
Now I usually get a "blog back online" email within a few minutes of the original "blog offline" email.
After successfully installing devstack and launching instances, once I reboot the machine I have to start all over again and lose all the instances that were launched. I tried rejoin-stack but it did not work. How can I get the instances back after a reboot?
You might set resume_guests_state_on_host_boot = True in nova.conf. The file should be located at /etc/nova/nova.conf
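In the config file that looks like the following (the option lives in the [DEFAULT] section):

```ini
# /etc/nova/nova.conf — restart guests that were running when the host rebooted
[DEFAULT]
resume_guests_state_on_host_boot = True
```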
I've found some old discussion http://www.gossamer-threads.com/lists/openstack/dev/8772
AFAIK, at the present time OpenStack (Icehouse) is still not fully aware of the environments inside it, so it can't restore them completely after a reboot. The instances will still be there (as virsh domains), but even if you start them manually or via nova flags, I'm not sure the other facilities will handle this correctly (e.g. whether neutron will correctly reconfigure all L3 rules according to its DB records, etc.). Honestly, I'm pretty sure they won't.
The answer depends on what you need to achieve:
If you need a template environment (e.g. a similar set of instances and networks each time after reboot), you may just script everything. In other words, write a bash script that creates everything you need and run it each time after stack.sh. Make sure you're starting with a clean environment, since the OpenStack DB state persists between ./unstack and ./stack.sh or ./rejoin-stack.sh (you might try to just clean the DB, or delete it; stack.sh will build it back).
If you need a persistent environment (e.g. you don't want to lose the VMs and the whole infrastructure state after a reboot), I'm not aware of a way to do this with OpenStack. For example, neutron agents (which configure iptables, DHCP, etc.) do not save state and are driven by events from the Neutron service. They will not restore after a reboot, so the network will be dead. I'll be very glad if someone shares a method for such a recovery.
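For the first (template) approach, a recreation script might look roughly like this; the network name, instance names, image, and flavor are all placeholders from a typical devstack install, so adjust them to your environment:

```shell
#!/bin/bash
# Recreate a known set of instances after each fresh stack.sh run.
set -e
source openrc admin admin          # devstack credentials

# Hypothetical network; substitute your own CIDR and names.
neutron net-create demo-net
neutron subnet-create demo-net 10.0.10.0/24 --name demo-subnet
NET_ID=$(neutron net-list | awk '/demo-net/ {print $2}')

for name in web-1 web-2 db-1; do
    nova boot "$name" \
        --image cirros-0.3.2-x86_64-uec \
        --flavor m1.tiny \
        --nic net-id="$NET_ID"
done
```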
In general, I think OpenStack is not focusing on this and will not during the upcoming release cycles. The common approach is to have a multi-node environment where each node is replaceable.
See http://docs.openstack.org/high-availability-guide/content/ch-intro.html for reference
Devstack is an ephemeral environment; it is not supposed to survive a reboot. This is not supported behavior.
That being said, you might find success in re-initializing the environment by running
./unstack.sh
followed by
./stack.sh
again.
Again, devstack is an ephemeral environment. Its primary purpose is to run gate testing for OpenStack's CI infrastructure.
Or try ./rejoin-stack.sh to re-join the previous screens.