Wordpress average RAM usage - wordpress

Over the last few days, I've been facing some issues with WordPress simply using too much memory (running on a local network). "Too much" is a subjective term because the usage also depends on the specifications of the web server. But since I couldn't find any information about average WP RAM usage, I want to ask for recommendations or any useful information about this.

Most sites should have more than enough running at 128MB; some run fine on less.
You can change this by adding
define('WP_MEMORY_LIMIT', '128M');
to the wp-config.php file.
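Note that the define has to sit above the /* That's all, stop editing! */ line in wp-config.php to take effect. If the admin side still runs out of memory, there is a separate constant for that (a small addition, assuming a reasonably current WordPress version):
define('WP_MAX_MEMORY_LIMIT', '256M'); // applies to admin-side requests; 256M is WordPress's own default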


How do I upload changed files via FTP? (wordpress)

I'm used to git and command-line tools, but I'm freelancing on a WordPress site. I have FTP access, but the site I'm working on has about 16,000 files just in wp-content. Is there a way to automatically upload only the changed files? I'm using FileZilla and there's an option to do that, but going through 16,000 files takes hours anyway. I know I could use git and do things manually, but that's a pain.
I'm open to suggestions outside of FTP if there's any easier way in general for WordPress development.
Since you're bound to FTP¹, your options are quite limited. There are free (in limited capacity) services to deploy to SFTP² via git. Some examples: DeployBot, Buddy.works, DeployHQ, etc. There is also Beanstalk, which I've used in the past and it worked rather well, but the free account is limited to 100MB (which would obviously not work for your situation, and it sounds like the client is too cheap to buy a paid account). It is a bit odd to me to store media library in git, but that is another topic and I understand your dilemma.
¹I would highly recommend using the insecurities of FTP as an argument to try to convince the client to switch to... literally anything else.
²Not certain if these services support FTP (as opposed to SFTP). You would probably need to ask, but they may not given the insecurity of FTP.
EDIT - There may also be some open source options like this (albeit old) solution: https://github.com/mehedi101/ftploy (purely as an example; there are others, but they appear to vary in complexity and I have not tried them)
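If you end up staying with plain git + FTP, one low-tech approach is to let git list what changed and push only those files with curl. A rough sketch; the host, credentials and remote path are placeholders, and it assumes the site already lives in a git repo:
# list the files changed by the last commit and upload each one over FTP
git diff --name-only HEAD~1 HEAD | while read -r f; do
  curl --ftp-create-dirs -T "$f" "ftp://USER:PASSWORD@ftp.example.com/public_html/$f"
done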

Amazon AWS EC2 - Bitnami - WordPress configuration - extremely slow loading assets

I am trying to test out the feasibility of moving my website from GoDaddy to AWS.
I used a WordPress migration plugin which seems to have moved the complete site, and at least superficially it appears to have been moved properly.
However, when I try to access the site, it is extremely slow. Using developer tools, I can tell that some of the CSS and JPG images are acting like blocking requests.
However, I cannot tell why this is the case. The site loads in less than 3 seconds on GoDaddy, but it takes over a minute to load fully on AWS, and at least a few requests time out. The waterfall view in Chrome developer tools shows a lot of waiting on multiple requests, and I cannot figure out why these requests wait forever and time out.
Any guidance is appreciated.
I have pointed the current instance to www. blind beliefs .com
I cannot figure out if it is an issue with the Bitnami WordPress AMI or if I am doing something wrong. Maybe I should go the traditional route of spinning up an EC2 instance, running a web server on it, connecting it to a database, and then installing WordPress on my server. I just felt the available AMI took care of all of that tailoring without me having to do it manually.
However, it is difficult to debug why certain assets get blocked, load extremely slowly, or time out without loading.
Thank you.
Some more details:
The domain is still at GoDaddy and I have not moved it to AWS yet; I'm not sure if that is having an impact.
I still feel it has to do with the AMI though - cannot prove it.
Your issue sounds like a free-memory problem. You did not go into detail on the instance size, whether MySQL is installed on the instance, etc.
This article will show you how to determine memory usage on your instance. When free memory is low or you start using swap space, your machine will become very slow. Your goal should be 0 bytes used in swap space and at least 25% free memory during normal operations.
Other factors to check are CPU utilization and free disk space on your file systems.
Linux Memory Check Commands
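The usual commands for this kind of check look roughly like this (a quick sketch, not specific to the article above):
free -m       # RAM and swap usage in MB; watch the free/available columns
swapon -s     # swap devices in use; any significant usage here is a warning sign
vmstat 5      # memory, swap and CPU activity sampled every 5 seconds
df -h         # free disk space per file system
top           # per-process CPU and memory usage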
If you have a free memory problem, increase the instance size. If you have a CPU usage problem, either change the instance size or switch to another instance type. If you have a free disk space problem, create a new instance with a larger EBS volume, or move your website, etc., to a new EBS volume sized correctly.

HostGator shared hosting CPU usage problems

I have shared web hosting and sometimes I go over the maximum allowed CPU usage, once a day, sometimes two or three times, but I can't really narrow it down to anything specific.
I have the following scripts installed:
WordPress, Joomla, ownCloud, DokuWiki and Feng Office.
Before, I was just running Joomla on this hosting package and everything was fine, but I upgraded to have more domains available and also hosted other scripts, like WordPress, ownCloud and so on.
But no site has high traffic or hits; most of the stuff is only used by me anyway.
I talked to the HostGator support team and they told me there is an SSH command to monitor or watch the server and see what's causing the problem.
The high CPU load only happens as a very short peak, because every time I check the percentage of CPU usage in cPanel it's super low. The graph shows me the spike, but it looks worse than it really is, because the graph only gets updated every hour, and that makes it hard to narrow it down...
I am new to all this. Can somebody help me figure this out?
BTW:
I hope this question is fine here now; I don't really understand this platform yet...
Just so you have more information: I too host many websites with HostGator using a reseller/shared account. The performance of your site is most likely not the issue; it is related more to HostGator's new servers and their poor MySQL performance. None of my WordPress sites had issues for years, despite high traffic, plugins, etc. Fast forward to late 2013, after EIG purchased HostGator (and others like BlueHost), and the performance on the "new, more powerful" servers is anything but. Limits on CPU and processes are more aggressive, and while outright downtime isn't an issue, the performance during peak hours is exceedingly poor. Sites which rely on MySQL databases all suffer from poor performance, and no amount of caching or plugin optimization will help (I should know, as I spent months reviewing my sites and trying many optimizations).
My advice: Find another web host and/or upgrade your hosting to a VPS that can scale based on your needs.
I moved my higher traffic/important clients to WPEngine. The speed difference and quality support is massive.

Amazon EC2 Bitnami WordPress Extremely Slow

I've encountered several issues with the Amazon EC2 & Bitnami WordPress AMI (Red Hat) on a small instance... and honestly I don't know who to ask :) I'm not a sysadmin/Linux expert, but I've learned basic SSH commands and the other things needed to keep going at a basic level.
So here's what is happening:
The WordPress website is loading extremely slowly - the PageSpeed & YSlow score is 27 out of 100.
I think this is caused by the memory_limit in php.ini. When I installed the Bitnami WordPress AMI, imported the WP users, set the theme and did other basic things, I wasn't able to even access the WordPress website - just a blank page showed up. After trying a few solutions, I increased the php.ini memory_limit from 32M to 128M (max), and I increased the WP memory limit to 64M.
The website then loaded properly and users were able to access it - but it's extremely slow.
When I try decreasing the php.ini memory limit to 64M, the website shows a blank page again.
The only thing I can think of currently is increasing the EC2 instance from .small to .large or similar. Please let me know your thoughts on this issue... and many thanks!
We had a similar problem with a PHP/MySQL application which we moved to an EC2 instance connecting to an RDS database instance. Pages were taking 10x longer to load than on our previous server, even though all the specs were the same, i.e. number of CPUs, RAM and clock speed, and the versions of PHP/Apache were identical.
We finally found the cause of the problem: the default query cache size for an RDS database is 0, which causes the database to run extremely slowly. We changed query_cache_size to 1000000000 (1G) (as the RDS instance had 4G of RAM) and immediately the application performance was as good as on our previous (non-AWS) server.
Secondly, we found that an EC2 server with MySQL installed locally did not perform well on the Amazon Linux build. We tried the same thing on an EC2 instance running Ubuntu, and with a local MySQL database the performance was great.
Obviously, for scalability reasons we went with an RDS instance, but we found it interesting that moving the MySQL database onto the EC2 instance radically improved the performance for an Ubuntu Linux EC2 server but made no difference with the Amazon build of Linux.
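For reference, this is roughly what the change looks like. On RDS you set query_cache_size in the DB parameter group attached to the instance; on a self-managed MySQL it goes in my.cnf/my.ini. The values below are simply the ones from our case, not a general recommendation:
[mysqld]
query_cache_type = 1     # enable the query cache (MySQL 5.x; the query cache was removed in MySQL 8.0)
query_cache_size = 1G    # what we used on an instance with 4G of RAM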
Since you have not received an answer yet, allow me to summarize my comments into something that is hopefully useful:
Profile your application to understand where the time is being spent.
Some areas you can affect are:
PHP needs RAM, but so does your database (I know nothing about Bitnami, but Wordpress uses a SQL database for storage).
Allocate enough RAM to PHP. Seems like that's somewhere between 64MB and 128MB.
If you are using MySQL, edit my.ini. If you're using a default configuration file for MySQL, the memory allocation parameters are dialed way too low (see the sketch after this list). If you post your my.ini file, I can give suggestions (or if you're using a different database, state which that is).
Consider striping multiple EBS volumes for your data partition.
Use an EBS backed instance if you are not already.
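As a starting point, these are the memory parameters I would look at first in my.ini/my.cnf. The values are purely illustrative for a small instance; tune them against the RAM you actually have:
[mysqld]
innodb_buffer_pool_size = 256M   # biggest single win if your tables are InnoDB
key_buffer_size         = 32M    # index cache for MyISAM tables
tmp_table_size          = 32M    # in-memory temporary tables
max_heap_table_size     = 32M    # keep equal to tmp_table_size
table_open_cache        = 400    # avoids constantly reopening tables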
You can make a more informed decision about where to tune if you have profiling results in hand.
I would suggest using a cache tool. The first one you could try is APC (Alternative PHP Cache). It is easy to install on Red Hat: yum install php-pecl-apc. You can get much better results with a WordPress-specific cache plugin like W3 Total Cache or Super Cache. I use the latter, and it is easy to install in a WordPress application:
Install Super Cache from the WordPress admin panel
Change the .htaccess permissions: sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess
Enable the plugin and follow the configuration steps. You can see how this plugin modifies the .htaccess file.
Configure the cache options according to your preferences and test them. You can do performance tests using a service like blitz.io
Change the .htaccess permissions back to 600 when everything is OK (the two chmod commands are summarized right after these steps).
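For reference, the two permission changes as shell commands; the path is Bitnami's default install location, so adjust it if yours differs:
sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess   # writable while the plugin edits it
# ...enable and configure the plugin from the WordPress admin panel...
sudo chmod 600 /opt/bitnami/apps/wordpress/htdocs/.htaccess   # lock it down again when done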
I hope it helps.
We saw something similar. For us, the opportunity cost of our time fiddling with optimization settings was much higher than just going with a dedicated Wordpress hosting provider.
The leaders in this space (dedicated Wordpress hosting) appear to be WP-Engine and a few others like Synthesis
http://trends.builtwith.com/hosting/wordpress-hosting
I had my personal site on dreamhost but they ended up becoming worse and worse over the years so I moved to bluehost, which has been ok.
Overall, I think EC2 is great but it requires a lot of fiddling. Depending on the cost of your time and area of expertise, you might choose to switch to a more specialized provider.
I have no affiliation with any of these companies other than my personal experience being an individual shared hosting customer at both dreamhost and bluehost.

How do you set up a large scale Alfresco CIFS server?

Alfresco provides a CIFS connector so it can act just like a normal file server on your intranet.
Compared with a "normal" (Windows/Samba) file server, certain operations can really hurt the system, e.g. listing a folder with a few thousand files in Windows Explorer. I'm not quite sure, but I think permission checking is the primary reason in this case. Anyway, now assume you have a big filesystem hierarchy exposed and many users using CIFS, stressing the system and effectively "knocking it down".
What is the suggested approach to scale/improve performance?
In my experience, Windows Explorer is part of the CIFS performance issue. I don't have exact numbers, but I remember working on an instance with roughly 500GB of data, mostly composed of small images and a few text files in a not-well-balanced folder tree, for which listing a folder with a thousand children took around a minute to display in Explorer. The same operation took around 3s in the Chrome browser.
We never had time to investigate the issue thoroughly, but we saw an impressive amount of traffic generated by Explorer due to prefetching information about the subfolders of the currently open folder.
I've been revisiting the issue a little, and I guess the best answer I can give for now is: tweak the cache(s).
I used a space with 5k children and default cache values, and benchmarked by executing "ls -alrt" on the CIFS mount, running Alfresco 4.0.d.
The first execution took roughly two minutes, bombarding the (lightning-fast) MySQL database with approx. 200k queries.
The second execution took "only" around 40 seconds, but the number of queries did not change significantly.
After increasing the CIFS fileinfo cache, I got the second run down to 30 seconds, but I still see 160k DB queries firing. I'm fairly sure the lion's share has to do with permissions/ACLs, and it should be possible to improve the situation a lot.
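If you want to reproduce this kind of measurement, a rough sketch (the mount point is a placeholder; the query count comes from sampling MySQL's global statement counter before and after the listing and taking the difference):
# time the directory listing on the CIFS mount
time ls -alrt /mnt/alfresco/bigspace > /dev/null
# total number of statements the server has executed so far
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Questions'"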
PS: Windows Explorer definitely behaves a little unexpectedly, but I cannot confirm that it makes a significant difference regarding user experience.
PPS: https://issues.alfresco.com/jira/browse/ALFCOM-2951
PPPS: I'll look into this further when I find the time - should be this year. ;)
Update: The massive number of queries is not a permission issue.
Permission checks definitely ARE part of the problem. I can't link to anything specific, but browsing the Alfresco forums and the net for the last few years, I've learned that permissions can hurt performance.
I've read about (and experienced) several scenarios in which Alfresco spaces with large numbers of children (1000+) can be painfully slow. One part you noticed yourself: it takes a while to go through 100-200k queries. But hook something into Alfresco to watch what it's doing and you'll see that massive amounts of time go into serialization/deserialization (e.g. webscripts for Share) and also node traversal (hence the thousands of queries and averages of 400-500 qps when nobody is logged on).
So you're on the right track with your cache optimizations.
Do you have dedicated hardware for your installation? I've had big issues with performance, but I moved the MySQL server to a separate box (server-grade hardware: 4 cores, 8GB RAM, an SSD for the MySQL server and SAS for the Tomcat server, etc.) and I gained a lot. So get on with begging for the new hardware too :)
I think you're on the right path here.
