WordPress on Lightsail running slowly when uploading through FTP - wordpress

I recently migrated my site to Lightsail. My DB has about 2 million records and is slightly under 1 GB. I connected to the DB through an external client. While I was connected but not running any queries, the site became slow.
Then I tried uploading some images through FTP; at that point the site came to a halt again and wouldn't even open.
Looking at the metrics, I got into the burstable zone here and there, but it's not sustained.
Are there any tools I can use to diagnose the problem?

What size instance did you deploy? Also, is this Linux or Windows? It would be good to look at the metrics from the Lightsail dashboard, but it would also be good to know what's running inside your instance. I'd be curious to know whether your instance is overburdened (undersized) or not.
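To see what is actually running inside the instance while the dashboard only shows burst-credit metrics, a few standard Linux commands give a quick picture. This is a hedged sketch assuming a Linux instance, as asked above; iostat comes from the sysstat package and may need installing:

# overall load and the busiest processes right now
top -b -n 1 | head -20
# memory and swap usage
free -m
# which services are eating memory (Apache, MySQL/MariaDB, PHP-FPM, etc.)
ps aux --sort=-%mem | head -10
# disk activity, in case the FTP upload is saturating I/O
iostat -x 5 3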

SFTP is generally slower than FTP due to the security built into the protocol. The data is encrypted, which takes time, but perhaps more importantly, the protocol itself functions differently; it's not "streamed" like FTP.

Related

Amazon AWS EC2 - Bitnami - WordPress configuration - extremely slow loading assets

I am trying to test the feasibility of moving my website from GoDaddy to AWS.
I used a WordPress migration plugin, which seems to have moved the complete site; at least superficially, it appears to have been moved properly.
However, when I try to access the site, it is extremely slow. Using developer tools, I can tell that some of the CSS and JPG images are acting as blocking requests.
However, I cannot tell why this is the case. The site loads in less than 3 seconds on GoDaddy, but it takes over a minute to load fully on AWS, and at least a few requests time out. The waterfall view in Chrome developer tools shows a lot of waiting on multiple requests, and I cannot figure out why these requests wait seemingly forever and time out.
Any guidance is appreciated.
I have pointed the current instance to www. blind beliefs .com
I cannot seem to figure out whether it is an issue with the Bitnami WordPress AMI or whether I am doing something wrong. Maybe I should go the traditional route of spinning up an EC2 instance, running a server on it, connecting it to a DB, and then installing WordPress on that server myself. I just felt the available AMI took care of all of that tailoring without me having to do it manually.
Still, it is difficult to debug why certain assets get blocked, load extremely slowly, or time out without loading.
Thank you.
Some more details:
The domain is still at GoDaddy and I have not moved it to AWS yet; I am not sure if that is having an impact.
I still feel it has to do with the AMI, though I cannot prove it.
Your issue sounds like a free memory problem. You did not go into details on the instance size, whether MySQL is installed on the instance, etc.
This article will show you how to determine memory usage on your instance. When free memory is low OR you start using SWAP space, your machine will become very slow. Your goal should be 0 bytes used in SWAP space and at least 25% free memory during normal operations.
Other factors to check are percent CPU utilization and free disk space on your file systems.
Linux Memory Check Commands
If you have a free memory problem, increase the instance size. If you have a CPU usage problem, either change the instance size or switch to another instance type. If you have a free disk space problem, create a new instance with a larger EBS volume, or move your website, etc., to a new EBS volume sized correctly.
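As a rough sketch of the checks described above, using the thresholds stated (0 bytes of swap in use and at least 25% free memory):

# free memory and swap: aim for 0 used swap and at least ~25% free memory
free -m
swapon --show
# CPU utilization sampled every 5 seconds, 3 times
vmstat 5 3
# free disk space per file system
df -h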

Amazon EC2 Bitnami WordPress Extremely Slow

I've encountered several issues with Amazon EC2 & the Bitnami WordPress AMI (RedHat) on a small instance, and honestly I don't know who to ask :) I'm not a SysAdmin/Linux expert, but I've learned basic SSH commands and other things required to keep going for a basic start.
So here's what is happening:
The WordPress website is loading extremely slowly - the PageSpeed & YSlow score is 27 out of 100.
I think this is caused by memory_limit in php.ini. When I installed the Bitnami WordPress AMI, imported the WP users, and set the theme and other basic things, I wasn't able to access the WordPress website at all - just a blank page showed up. After trying a few solutions, I increased the php.ini memory_limit from 32M to 128M (the max), and I increased the WP memory limit to 64M.
Website loaded properly and users were able to access it - but it's extremely slow.
When I try decreasing the php.ini memory limit to 64M, the website shows a blank page again.
The only thing that I can think of currently is increasing the EC2 instance from .small to .large or similar. Please let me know your thoughts on this issue, and many thanks!
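For reference, the two limits being juggled here are normally set like this; the php.ini path varies by stack (on a Bitnami install it is typically /opt/bitnami/php/etc/php.ini, which is an assumption here):

; php.ini - raise the per-request PHP memory ceiling
memory_limit = 128M

// wp-config.php - raise WordPress's own limit
define('WP_MEMORY_LIMIT', '64M');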
We had a similar problem with a PHP/MySQL application which we moved to an EC2 instance connecting to an RDS database instance. Pages were taking 10x longer to load than on our previous server, even though all the specs were the same, i.e. number of CPUs, RAM, and clock speed, and the versions of PHP/Apache were identical.
We finally found the cause of the problem: the default query cache size for an RDS database is 0, which causes the database to run extremely slowly. We changed query_cache_size to 1000000000 (1G), as the RDS instance had 4G of RAM, and immediately the application performance was as good as on our previous (non-AWS) server.
Secondarily, we found that an EC2 server with MySQL installed locally on the server did not perform well on the Amazon Linux build. We tried the same thing on an EC2 instance running Ubuntu, and with a local MySQL database the performance was great.
Obviously, for scalability reasons we went with an RDS instance, but we found it interesting that moving the MySQL database onto the EC2 instance radically improved performance on an Ubuntu Linux EC2 server but made no difference with the Amazon build of Linux.
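For anyone reproducing this: on RDS the setting lives in the instance's attached DB parameter group, while on a self-managed MySQL it goes in my.cnf. A sketch follows; the parameter group name is a placeholder, and note that the query cache was removed in MySQL 8.0, so this only applies to older versions:

# RDS: change the value in the attached DB parameter group (name is a placeholder)
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-wordpress-params \
  --parameters "ParameterName=query_cache_size,ParameterValue=1073741824,ApplyMethod=immediate"

# self-managed MySQL: my.cnf
[mysqld]
query_cache_type = 1
query_cache_size = 1G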
Since you have not received an answer yet, allow me to summarize my comments into something that is hopefully useful:
Profile your application to understand where the time is being spent.
Some areas you can affect are:
PHP needs RAM, but so does your database (I know nothing about Bitnami, but WordPress uses a SQL database for storage).
Allocate enough RAM to PHP. It seems that's somewhere between 64MB and 128MB.
If you are using MySQL, edit my.ini. If you're using the default configuration file for MySQL, the memory allocation parameters are dialed way too low (see the sketch after this list). If you post your my.ini file, I can give suggestions (or, if you're using a different database, state which one it is).
Consider striping multiple EBS volumes for your data partition.
Use an EBS backed instance if you are not already.
You can make a more informed decision about where to tune if you have profiling results in hand.
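As a hedged illustration (not your actual config) of the MySQL memory parameters that usually need raising from the stock defaults, sized here on the assumption of a small instance with roughly 1.7 GB of RAM:

[mysqld]
# main InnoDB cache; the single most important memory setting for InnoDB tables
innodb_buffer_pool_size = 256M
# index cache for MyISAM tables
key_buffer_size = 64M
# per-connection buffers; keep these modest so many connections can't exhaust RAM
sort_buffer_size = 2M
read_buffer_size = 1M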
I would suggest using a cache tool. The first one you can try is APC (Alternative PHP Cache). It is easy to install on Red Hat: yum install php-pecl-apc. You can get much better results with a WordPress-specific cache plugin like W3 Total Cache or WP Super Cache. I use the latter, and it is easy to install in a WordPress application:
Install Super Cache from the WordPress admin panel
Change the .htaccess permissions: sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess
Enable the plugin and follow the configuration steps. You can see how this plugin modifies the .htaccess file
Configure the cache options according to your preferences and test it. You can do performance tests using a service like blitz.io.
Change the .htaccess permissions back to 600 when everything is OK (the full sequence is sketched below).
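Put together, the permission changes from the steps above look like this (the path is the Bitnami layout quoted above):

# make .htaccess writable so the plugin can write its rewrite rules
sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess
# ... enable and configure Super Cache from the WordPress admin panel ...
# lock the file down again once everything works
sudo chmod 600 /opt/bitnami/apps/wordpress/htdocs/.htaccess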
I hope it helps.
We saw something similar. For us, the opportunity cost of our time fiddling with optimization settings was much higher than just going with a dedicated Wordpress hosting provider.
The leaders in this space (dedicated WordPress hosting) appear to be WP Engine and a few others like Synthesis.
http://trends.builtwith.com/hosting/wordpress-hosting
I had my personal site on DreamHost, but they got worse and worse over the years, so I moved to Bluehost, which has been OK.
Overall, I think EC2 is great but it requires a lot of fiddling. Depending on the cost of your time and area of expertise, you might choose to switch to a more specialized provider.
I have no affiliation with any of these companies other than my personal experience as an individual shared hosting customer at both DreamHost and Bluehost.

Replacement for Hamachi for SVN access

My company has been using Hamachi to access our SVN repository for a number of years. We are a small yet widely distributed development team with each programmer in a different country working from home. The server is hosted by a non-techie in our central office. Hamachi is useful here since it has a GUI and supports remote management.
This system worked well for a while, but recently I have moved to a country with poor internet speeds. Hamachi will no longer connect 99% of the time - instead I get a "Probing..." message that doesn't resolve. It's certain to be a latency issue, as the same laptop will connect without problems when I cross the border and connect using a different ISP with better speeds.
So I really need to replace Hamachi with some other VPN/protocol that handles latency better. The techie managing the repository is not comfortable installing and configuring Apache or IIS, so it looks like HTTP is out. I tried to convince my boss to go for a web hosting company, but he doesn't trust a 3rd party with our source.
Are there any other recommended options / experiences out there for accessing our SVN repos that would be as simple to set up as Hamachi, but more tolerant of network latency issues?
Perhaps it's a bit much to ask of your team, but since you have a distributed team, you could switch to a distributed version control system (e.g. Mercurial or Git). These don't need to use the network as much, so you won't suffer from latency problems. It is an entirely new paradigm, though, and your team's development processes will have to change, so you might not consider it appropriate in your case.
First I should ask why you need a VPN in the first place. Subversion can operate over HTTPS, so as long as you open the proper port on the server there shouldn't be any security or connectivity issues.
Assuming that you do need a VPN, I find it difficult to believe that an administrator uncomfortable with Apache would be more comfortable installing a whole new VPN system (much more complicated and tricky, in my estimation).
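To illustrate the first point: once the repository is exposed over HTTPS, each developer only needs a plain checkout and the usual update/commit cycle, with no VPN in the path. The hostname and path below are placeholders:

svn checkout https://svn.example.com/repos/project project
svn update
svn commit -m "describe the change"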

Is OpenAtrium really slow?

I am a user of an OpenAtrium site, but not the admin. On average it takes anywhere from 7 - 11 seconds for the front page to load. Going from page-to-page takes about 7 seconds.
I am not really a Drupal admin and I definitely do not have access to the host control panel or anything for this particular site.
The admin has mentioned something about caching and has cleared the cache to make it faster, but it is still very slow (see above). This site is not on its own dedicated server and probably won't be moved to one in the near future. That being said, is there anything that can be done (i.e. anything I can recommend to the admin) that would improve its speed in the near future?
If you're running it on a shared host with no memory, don't have a PHP code cache (e.g. APC or similar), and don't have Apache tuned, then it's probably slow.
If, on the other hand, you are running it on a Mercury optimized VPS image, it's going to be fast.
We also have it on an internal CentOS server (LAMP, with XCache), and it's an improvement over an Amazon CentOS LAMP VPS with no XCache.
http://www.cyberciti.biz/faq/howto-rhel-install-xcahce-php-opcode-cacher/
We have it installed on our internal server (CentOS) on a tweaked LAMP setup. It's rather nice. I have not experienced any slowness in it.

How to avoid pauses when editing code on a network drive?

I'm planning on doing more coding from home but in order to do so, I need to be able to edit files on a Samba drive on our dev server. The problem I've run into with several editors is that the network latency causes the editor to lock up for long periods of time (Eclipse, TextMate). Some editors cope with this a lot better than others, but are there any file system or other tweaks I can make to minimize the impact of lag?
A few additional points:
There's a policy against having company data on personal machines, so I'd like to avoid checking out the code locally.
The mount is over a PPTP VPN connection.
Mounting to Linux or OS X client
Use a source control system — Subversion, Perforce, Git, Mercurial, Bazaar, etc. — so you're never editing code on a shared server. Instead you should be editing a local work area and committing changes to a repository located on the network.
Also, convince your company to adapt their policy such that company code is allowed on personal machines if it's on an encrypted volume. Encrypted disk images that you can use for this are trivial to create using Disk Utility, and can use strong cryptography. You can get even more security by not storing your encryption passphrase in your keychain, and instead typing it every time you mount the encrypted volume; this means that even if your local user account is compromised, as long as you don't have the volume mounted, nobody else will be able to mount it.
I did this all the time when I was consulting and none of my clients — some of whom had similar rules about company code — ever had a problem with it once I explained how things worked. (I think some of them even started using encrypted disk images even within their offices.)
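On OS X, the encrypted image described above can also be created from the command line. This is a minimal sketch; the size, volume name, and repository URL are assumptions:

# create and mount an AES-encrypted sparse image (you'll be prompted for a passphrase)
hdiutil create -size 2g -type SPARSE -fs HFS+ -encryption AES-256 -volname Work work.sparseimage
hdiutil attach work.sparseimage
# check the code out into the encrypted volume and work there
svn checkout https://svn.example.com/repos/project /Volumes/Work/project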
The Remate plugin for TextMate simply disables the dreadful refresh-on-focus feature.
Download, unpack, double-click, and choose "Disable Refresh on Regaining Focus" from the "Window" menu (you can refresh manually by right-clicking the project in the drawer). Voila!
If you are accessing the data from your personal computer, it is in your RAM, so we will assume that you just can't store it on your hard drive, floppy, USB stick, etc.
Your solution is a RAM drive. Copy the files you need to edit there using whatever method you prefer (I would suggest source control) and then you can edit them without lag. When you are done commit them back to the server.
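A minimal sketch of such a RAM drive on a Linux client; the mount point, size, and repository URL are assumptions, and everything in it disappears on unmount or power-off:

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
# check the files out into RAM, edit locally without network lag, commit back when done
svn checkout https://svn.example.com/repos/project /mnt/ramdisk/project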
As was pointed out, your editor may be caching changes to your temp directory, or maybe even your swap file (if it is in memory, it can get swapped out). The solution to that is to get a much larger RAM drive and run a virtual machine in the RAM drive. I'm not sure what OS you are running, but you can get a pretty slim install of most OSes if all you are doing is editing source code.
If you don't have enough RAM, then get a Gigabyte i-RAM solid state drive and remove the battery, that way it will lose everything when you power down.
Set your VMware to not allow the host OS to swap any of the virtual machine's memory. Keep a baseline VM on your hard drive and copy it to your RAM drive before booting it up. Then you can use the hard drive in the VM like a hard drive, even though it is RAM.
Might be a good idea to run a secure erase on your RAM drive before powering down. Also keep in mind that they have found if you super cool a RAM chip before removing it from a functioning computer, and place it in a new computer quick enough, the data may still be intact.
I guess it all comes down to how detailed that policy is, and how it is interpreted.
Good luck!
Short answer: there is no trick. CIFS is really geared towards a LAN with reasonably calm traffic, so you have zero chance of avoiding intermittent lag when accessing a share through a VPN. At some point the editor needs to access the file with blocking I/O, because it makes no real sense to do otherwise.
You could switch editors and use Emacs + TRAMP, which is geared towards working on remote files.
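With TRAMP, a remote file is opened with a path like the one below, and Emacs fetches and saves it over SSH on demand instead of blocking on the CIFS mount (user, host, and path are placeholders):

C-x C-f /ssh:you@devserver:/home/you/project/main.c RET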
