EC2 Instance Requires Daily Restart - WordPress

I am running a WordPress blog on an AWS t2.micro EC2 instance running Amazon Linux. Most days, however, I wake to an email saying that my blog is offline. When this happens I cannot SSH into the EC2 instance, yet the AWS dashboard shows it as online and none of the metrics look particularly suspicious.
I was notified that the blog was down at 4:31am, just after the start of the first plateau on the CPU Utilization graph.
A restart from the AWS control panel/app fixes things for a day or two, but I would like a more permanent fix.
Can anyone suggest any changes I can make to my instance to get it running more reliably?
[Edit - February 2018]
This has started happening again after being fine for a few months. Each morning this week I have woken up to an alert that my blog is offline - a reboot of the server brings it back online. This morning I was able to SSH in and investigate. Running top gave the following (I noticed the lack of httpd/mysqld):
My CloudWatch metrics for the last 72 hours are:
The bigger spikes are where I rebooted the instance. As you can see, although there are CPU spikes, they aren't huge, and the CPU Credit Balance metric barely dips.

As this question has had so many views, I thought I would post about the workaround I have used to overcome this issue.
I still do not know why my blog goes offline, but knowing that rebooting the EC2 instance recovered it, I decided to automate that reboot.
There are three parts to this solution:
Detect the "blog offline" email from Jetpack and send it to AWS. I created a Gmail rule to handle this, forwarding the email to an address monitored by AWS SES.
SES publishes a notification to an SNS topic, which triggers an AWS Lambda function.
The Lambda function reboots the EC2 instance.
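A minimal sketch of what such a reboot Lambda could look like with boto3 (this is not necessarily the exact function used here; the INSTANCE_ID environment variable is an assumption, and the Lambda's execution role needs ec2:RebootInstances permission):

```python
# Sketch of a Lambda handler that reboots the blog's EC2 instance.
# Assumes the instance ID is supplied via an INSTANCE_ID environment variable
# and that the execution role allows ec2:RebootInstances.
import os
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    instance_id = os.environ["INSTANCE_ID"]
    ec2.reboot_instances(InstanceIds=[instance_id])
    return {"rebooted": instance_id}
```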
Now I usually get a "blog back online" email within a few minutes of the original "blog offline" email.

Related

Netflix/Prime not able to log in/connect after some time

When I first start pfSense, both Netflix and Prime work fine: I can log in and watch content. But after a day or so of pfSense being online, I can no longer log in to those video streaming services.
For Netflix I get the "NW-2-5" error, and for Prime I receive a message saying there are connectivity issues. Then I have to reboot pfSense, and after that everything works fine again for about a day.
My guess is this has nothing to do with the firewall rules, since everything works for a day or so, but just in case I took a screenshot of them; in the Block separator I isolate my other VLANs (Home, VPN BR and Guest):
I'm still learning about how to configure my pfSense correctly and I hope this is just a silly configuration mistake.
Any suggestion about what I should change or check in the configuration?

Will my WordPress installation persist across an EC2 retirement?

I have a site running WordPress on an EC2 instance. It is now down, and AWS is telling me the instance will be retired in a couple of weeks. The instance retirement docs say that I need to create an AMI from my instance and restore from that AMI to another instance. This process has failed me so far on three attempts (with all three AMI creation attempts still pending after 24 hours).
While backup via AMI creation is recommended in this situation, is it necessary? If I just stop/start my instance, will my whole WordPress installation (including posts, content, and other data stored in MySQL) come right back up once it's started on a healthy host?
Yes. Stopping the instance and starting it again will work fine.
Any data stored on an EBS volume will be preserved. Data on an Instance Store device will be lost. (It is unlikely you would be using instance store, but worth checking.)
When started, the instance will be provisioned on a different host.
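If you want to double-check before stopping, a small boto3 sketch like the following can confirm the root device is EBS-backed (the region and instance ID are placeholders; it assumes AWS credentials are already configured):

```python
# Confirm an instance is EBS-backed before stopping it (placeholder ID and region).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = resp["Reservations"][0]["Instances"][0]

# "ebs" means the root volume survives a stop/start; "instance-store" does not.
print("Root device type:", instance["RootDeviceType"])

# Only EBS volumes appear in this mapping; instance store volumes will not.
for mapping in instance.get("BlockDeviceMappings", []):
    print(mapping["DeviceName"], "->", mapping["Ebs"]["VolumeId"])
```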

AWS EC2 Instance w/ WordPress keeps crashing from 25% CPU utilization spikes

I have an EC2 t2.medium instance i-0bf4623a779064e0a with a WordPress installation which keeps crashing (it can't be accessed via HTTP or SSH). It seems that whenever CPU utilization reaches about 25% or more (which I would think isn't very much), the server crashes. I have an alert set up to restart the server whenever Network Out is <=50,000 bytes for 5 minutes, and tonight it has had to restart 10 times. It has been doing this nearly every day for weeks. Here is a screenshot of the monitoring http://i.imgur.com/zQQ4oiy.png
What can I do to stop this crashing? Can I do some sort of server config optimization? I hope I do not need a larger instance, since I am already paying quite a bit for AWS, and I was previously on $10/mo shared hosting that rarely went down.
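For reference, the restart alert described above can be expressed as a CloudWatch alarm with an EC2 reboot action; a rough boto3 sketch, assuming the us-east-1 region (the instance ID, threshold, and period mirror the question):

```python
# Rough sketch of the "reboot when NetworkOut is low" alarm described above.
# Region is an assumption; instance ID and threshold come from the question.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="wordpress-network-out-low",
    Namespace="AWS/EC2",
    MetricName="NetworkOut",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0bf4623a779064e0a"}],
    Statistic="Sum",
    Period=300,                      # 5 minutes
    EvaluationPeriods=1,
    Threshold=50000,                 # <= 50,000 bytes
    ComparisonOperator="LessThanOrEqualToThreshold",
    # Built-in CloudWatch EC2 action that reboots the instance.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)
```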

How come my app dies on AppHarbor?

I have a web app that will run forever (at least for a few days) on my local machine using the technique (hack?) described in Jeff Atwood's post: https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
However, when I run it on AppHarbor my app doesn't run for more than an hour or so (I'm not sure exactly when it dies). As long as I hit the site it stays up, so I'm assuming it is being killed after an idle period, but I'm not sure why.
My app doesn't save any state or persist anything. It makes web service calls and survives errors in any calls.
I turned on a ping service to keep my app alive, but I'm curious why this works on my local machine and not on AppHarbor.
The people behind AppHarbor pay for EC2 instances for all running apps, so they naturally want to limit CPU usage as much as possible. One way to achieve this is to shut down unused applications quickly and only restart them when someone actually tries to access them. Paid hosting should not be limited in this way.
(As far as I have been informed, they are able to host around 100k sites on fewer than twenty medium instances, which is certainly impressive and calls for very economical use of resources.)
To overcome the limitation you would need a cron job to ping your AppHarbor site. But this is of course a somewhat recursive problem, since you would need AppHarbor to act as the cron job ;)
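For what it's worth, the external ping can be as simple as something like the following, run from any machine that stays on (the URL and the 5-minute interval are placeholders):

```python
# Minimal external keep-alive pinger (placeholder URL); run it anywhere
# outside AppHarbor, e.g. on a machine or scheduler that stays on.
import time
import urllib.request

SITE_URL = "https://your-app.apphb.com/"  # placeholder

while True:
    try:
        with urllib.request.urlopen(SITE_URL, timeout=30) as resp:
            print("pinged", SITE_URL, "status", resp.status)
    except Exception as exc:
        print("ping failed:", exc)
    time.sleep(300)  # every 5 minutes, well under a typical idle timeout
```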
AppHarbor recycles the Application Pool frequently to keep sleeping websites from using idle CPU time. This is simply the price you pay for using a shared website hosting plan.
If you really want to run a background job then you should be using AppHarbor's background workers, since this is exactly the type of task they were built to run.
http://support.appharbor.com/kb/getting-started/background-workers
Simply build a new console application that runs your logic and include it in your solution. When you push the code, the workers will be started automatically. If you happen to already have other .exe projects in your solution, make sure to edit the app.config and set the 'deploy background worker' value to false.

Need help with Drupal bulk mail low open rate for legitimate mailing list

I've moved from Constant Contact to Drupal Simplenews/Mimemail/SMTP. Previously the open rate was around 50% with Constant Contact, but now it's 4-5% for the same list with the new setup. Mail is getting out from the server, but something is clearly going wrong.
Here's the setup:
-The e-mail list consists of approximately 80,000 addresses, queued at 10,000 e-mails per cron run (cron runs hourly).
-The server is a dual Core2Quad machine with 2GB of RAM.
-When mail is being sent, the mail queue usually grows to ~1000 at the beginning of the hour before dropping to ~250 by the time the next cron run occurs.
-The newsletter is themed to apply a custom style on send.
-The newsletter is received by some recipients, but appears to be bounced by many (based on the low open rate).
-I've added SPF, DomainKeys, and a PTR record to the DNS.
-The server hostname (listed in the PTR record) is different from the hosted domain.
-Very low spam score via SpamAssassin.
-IP and domain are not blacklisted.
-Mail goes out via the SMTP module on delivery.
Any ideas?
The best way to satisfy your needs is to download a module. I recommend this one: http://drupal.org/sandbox/E-goi/1110712
I've never had a good experience relying on Simplenews. The issue with Drupal cron here is that your mail queue is competing with everything else Drupal is trying to process during cron.
I moved everything to Campaign Monitor.
