I have a WordPress install with GoDaddy. I often see high memory and I/O usage on the site.
I am not an expert when it comes to web servers but I do get by with some level of knowledge.
I have not installed any new plugins that might have caused this.
I have the following questions:
Is there a way to monitor what is consuming memory and I/O through cPanel?
I have the Google Authenticator plugin installed, which blocks more than three failed password attempts. Is that plugin sufficient to prevent brute-force attacks?
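cPanel's own graphs only show aggregates, but if your GoDaddy plan includes SSH access, a few standard commands can show which processes are responsible. A rough sketch (command availability varies on shared hosts):

```shell
# Top 5 of your processes by resident memory (RSS, in KB):
ps -eo pid,rss,comm --sort=-rss | head -n 6

# Total resident memory across your processes, in MB:
ps -eo rss= | awk '{s += $1} END {printf "%.0f MB\n", s / 1024}'

# Per-process I/O, if iotop happens to be installed (often it is not
# on shared hosts; sar or cPanel's "Resource Usage" page are fallbacks):
# iotop -bon 1 | head -n 15
```

On many shared plans the "Resource Usage" section of cPanel (CloudLinux LVE stats) is the only window into I/O limits, so check there first.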
That's not good. Here's a tip: use Cloudflare (free or premium). It will reduce your problems, with free SSL, speed improvements, security, DNS management, caching (which will cut your load), IP blacklisting/whitelisting, and more.
Another tip:
Excessive load times can harm your website in more ways than one. There are quite a few ways to improve your site’s speed but caching has the greatest impact.
With the above in mind, I'm going to evaluate the performance of the top five caching solutions for WordPress to help you determine which one is truly the best (not just the fastest).
Here’s the lineup: WP Rocket, W3 Total Cache, WP Super Cache, WP Fastest Cache, and ZenCache.
The last resort: upgrade your hosting.
I hope this is helpful.
My company is considering installing WordPress as an intranet blogging platform - nothing really complex, just a clean installation without any plugins or custom layouts (we have a small team of web developers who will handle layout customization). WordPress will be installed on one of our servers that is not exposed to the internet.
You can read a lot about vulnerabilities in the WordPress platform, but do they really matter if the platform is available only within the company's intranet? What would be the potential dangers?
Risk is limited quite a bit (as you won't be subject to the random drive-by attacks from bots scanning the entire public internet that anyone with a public server sees many times daily), but not non-existent.
A malicious employee, or a piece of malware designed to attack WordPress would still be possible.
You wouldn't want to neglect patching the WordPress install and its plugins/themes.
If an attacker is also inside the intranet, then yes, they matter; in general, though, I'd say no. Either way, just install Wordfence.
What setup would you recommend for a WordPress website with average traffic of ~250,000 sessions per day (~130K unique users)? In peak hours we can get ~25K users per hour, and off-peak ~10-17K per hour.
Monthly bandwidth is ~14TB.
I'll be happy to hear suggestions on what is the best setup:
Note: it should be a cPanel server (Apache)
Server - cloud or dedicated (all except google cloud and amazon)
CPU/Memory/etc ?
CDN ?
Apache/MySQL specific setup?
High availability?
Any other suggestion
Any advice is much appreciated.
It depends on what type of traffic you have.
Is it mostly single-page traffic (referrals from sources like social media, forums, blogs, etc.)? Why am I asking this?
Because it really matters.
Traffic:
Traffic from referral sources usually hits one landing page and bounces, with few repeat views, so your cache plugins can't do much for performance. If users are giving you a good number of page views per visit, the cache plugin will manage performance and give you the best results.
Hosting:
You definitely cannot run your website on shared hosting or managed WordPress hosting with this much volume. I would also avoid a VPS/dedicated server from a third-party hosting company, no matter how big that company is: third-party vendors rarely give prompt support, and they will never guarantee that the server stays stable under that much traffic. Instead, get a VPS or dedicated server hosted directly in a data center, and try for a cloud VPS or a cloud solution as a service if you can.
CDN:
If you have a good budget, consider Amazon; on an average budget, use Cloudflare or MaxCDN.
Hardware: 16 GB RAM, an 8-core CPU, 60 GB of disk (if you are not planning many updates to the site), a 20 Gbps network, and 25 TB of bandwidth. A VPS like that will do the job and can handle the traffic you're describing; I don't think you need to go dedicated.
Setup & Configuration:
Install Debian 8 and Virtualmin (free) with Nginx, and optimize them for high traffic. Do not install WHM - don't make that mistake, or you might need premium support to fix issues every single day. Virtualmin is a lightweight panel and WordPress is its specialty. Nginx can deliver a high-traffic website, with MySQL optimization and cache management, and can do what you're looking for.
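As an illustration of the Nginx tuning this refers to, here is a minimal config sketch. The values and paths are assumptions to adapt, not benchmarks, and the WORDPRESS cache zone name is simply whatever you point the Nginx Helper plugin at:

```nginx
# /etc/nginx/nginx.conf -- illustrative high-traffic settings
worker_processes auto;              # one worker per CPU core
events {
    worker_connections 4096;        # raised from the usual 768/1024 default
}
http {
    keepalive_timeout 15;
    gzip on;
    gzip_types text/css application/javascript application/json;

    # FastCGI full-page cache for WordPress, purged by the Nginx Helper plugin
    fastcgi_cache_path /var/cache/nginx levels=1:2
                       keys_zone=WORDPRESS:100m inactive=60m;
}
```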
Themes & Plugins:
Try to go with a light WordPress theme and install minimal plugins. Must-have plugins are Nginx Helper and W3 Total Cache.
There's a lot more to talk about here, but I think these are the important points and should be helpful. I hope my explanation helps you understand! If you have any doubts, feel free to ask.
Attached is proof of what I explained: that server is a cloud VPS with 4 GB RAM and a 4-core CPU.
I have shared web hosting, and sometimes I go over the maximum allowed CPU usage once a day, sometimes two or three times, but I can't really narrow it down to anything specific.
I have the following scripts installed:
WordPress, Joomla, ownCloud, DokuWiki, Feng Office
Before, I was just running Joomla on this hosting package and everything was fine, but I upgraded to have more domains available and hosted other scripts too, like WordPress, ownCloud, and so on.
But no site has high traffic or hits; most of it is only used by me anyway.
I talked to the HostGator support team, and they told me there is an SSH command to monitor the server and see what's causing the problem.
The high CPU load only happens in very short peaks: every time I check the CPU usage percentage in cPanel it's super low. The graph shows me the spike, but it looks worse than it really is, because the graph is only updated every hour, and that makes it hard to narrow down...
I am new to all this. Can somebody help me figure this out?
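The SSH command HostGator support meant was probably top or ps. To catch a spike that cPanel's hourly graph smears out, you can log the busiest processes every few seconds and then match the log against the spike time. A sketch (the log path and 10-second interval are arbitrary choices):

```shell
# Append a timestamp plus your 4 busiest processes (by CPU) to a log file.
log_top_cpu() {
  {
    date
    ps -eo pcpu,pid,user,comm --sort=-pcpu | head -n 5
    echo
  } >> "${1:-$HOME/cpu-spike.log}"
}

# Usage: leave this running in a screen/tmux session, then inspect the
# log around the time cPanel reports a spike:
#   while true; do log_top_cpu; sleep 10; done
```

On a shared host, ps only shows your own processes, which is actually what you want here: the spike has to come from one of your scripts' PHP or cron jobs.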
BTW: I hope this question is fine here now; I don't really understand this platform yet...
Just so you have more information: I too host many websites with HostGator using a reseller/shared account. Your site itself is most likely not the issue; the problem is more likely HostGator's new servers and their poor MySQL performance. None of my WordPress sites had issues for years, despite high traffic, plugins, etc. Fast forward to late 2013, after EIG purchased HostGator (and others like BlueHost), and the performance of the "new more powerful" servers is anything but. Limits on CPU and processes are more aggressive, and while outright downtime isn't an issue, performance during peak hours is exceedingly poor. Sites that rely on MySQL databases all suffer, and no amount of caching or plugin optimization will help (I should know, as I spent months reviewing my sites and trying many optimizations).
My advice: Find another web host and/or upgrade your hosting to a VPS that can scale based on your needs.
I moved my higher traffic/important clients to WPEngine. The speed difference and quality support is massive.
I have a site that runs on DotNetNuke with a lot of customization. In production, the site runs fine and speed is close to optimal. In development, it's painfully slow (anywhere from 10-30 seconds per action). Can anyone recommend tools or ideas for diagnosing this? The environments are very similar (the dev database server is less powerful than the production one, but not enough to explain this kind of delay). I'm looking for something that can help map out all the points of contact for each request.
Any thoughts?
Try out the following tools:
YSlow: YSlow analyzes web pages and why they're slow based on Yahoo!'s rules for high performance web sites
PageSpeed: The PageSpeed family of tools is designed to help you optimize the performance of your website. PageSpeed Insights products will help you identify performance best practices that can be applied to your site, and PageSpeed optimization tools can help you automate the process.
Firebug and Network Monitoring: Look at detailed measurements of your site's network activity.
Fiddler
YSlow, PageSpeed, and Firebug are great tools you should definitely use, but the fact that you're only seeing the issue in the development environment suggests the problem isn't the site itself but something in that environment. In these cases I find most slowness is related to disk and/or RAM. Use Task Manager to verify the machine has enough RAM for its current load, and make sure there's enough free disk space for proper caching to occur. You may need a faster hard drive.
Run the site locally in release mode and see if that changes anything.
If you can, run the live site in debug mode and see if it slows down as much as it does in the local environment.
I've encountered several issues with Amazon EC2 and the Bitnami WordPress AMI (RedHat) on a small instance, and honestly I don't know who to ask :) I'm not a sysadmin/Linux expert, but I've learned the basic SSH commands and other things required for a basic start.
So here's what is happening:
The WordPress website is loading extremely slowly - the PageSpeed & YSlow score is 27 out of 100.
I think this is caused by memory_limit in php.ini. When I installed the Bitnami WordPress AMI, imported the WP users, and set the theme and other basics, I wasn't able to even access the site - just a blank page showed up. After trying a few solutions, I increased the php.ini memory_limit from 32M to 128M (the max), and I increased the WP memory limit to 64M.
Website loaded properly and users were able to access it - but it's extremely slow.
When I try decreasing the php.ini memory limit to 64M, the website shows a blank page again.
The only thing I can think of is moving the EC2 instance up from small to large or similar. Please let me know your thoughts on this issue. Many thanks!
We had a similar problem with a PHP/MySQL application we moved to an EC2 instance connecting to an RDS database instance. Pages were taking 10x longer to load than on our previous server, even though all the specs were the same - number of CPUs, RAM, clock speed - and the PHP/Apache versions were identical.
We finally found the cause: the default query cache size for an RDS database is 0, which makes the database run extremely slowly. We changed query_cache_size to 1000000000 (1G), as the RDS instance had 4G of RAM, and the application's performance was immediately as good as on our previous (non-AWS) server.
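For reference, the equivalent my.cnf fragment looks like the sketch below. Note that these directives apply to MySQL 5.x only (the query cache was removed entirely in MySQL 8.0), and on RDS the same values go into a DB parameter group rather than a config file:

```ini
[mysqld]
query_cache_type  = 1            # ON
query_cache_size  = 1000000000   # ~1G, as used above on a 4G RAM instance
query_cache_limit = 2097152      # don't cache individual results larger than 2M
```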
Secondarily, we found that an EC2 server with MySQL installed locally did not perform well on the Amazon Linux build. We tried the same thing on an EC2 instance running Ubuntu, and with a local MySQL database the performance was great.
Obviously for scalability reasons we went with using an RDS instance but we found it interesting that moving the MySQL database onto the EC2 instance radically improved the performance for an Ubuntu linux EC2 server but made no difference with the Amazon Build of Linux.
Since you have not received an answer yet, allow me to summarize my comments into something that is hopefully useful:
Profile your application to understand where the time is being spent.
Some areas you can affect are:
PHP needs RAM, but so does your database (I know nothing about Bitnami, but Wordpress uses a SQL database for storage).
Allocate enough RAM to PHP. Seems like that's somewhere between 64MB and 128MB.
If you are using MySQL, edit my.ini. If you're using a default configuration file for MySQL, the memory allocation parameters are dialed way too low. If you post your my.ini file, I can give suggestions (or if you're using a different database, state which that is).
Consider striping multiple EBS volumes for your data partition.
Use an EBS backed instance if you are not already.
You can make a more informed decision about where to tune if you have profiling results in hand.
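To illustrate the point above about default MySQL memory parameters being dialed too low, here is a my.ini/my.cnf sketch. The sizes are assumptions for a box with roughly 1-2 GB of RAM to spare for MySQL, not tuned recommendations - profiling should drive the real numbers:

```ini
[mysqld]
innodb_buffer_pool_size = 512M   # the main InnoDB cache; usually the big win
key_buffer_size         = 64M    # MyISAM index cache
tmp_table_size          = 64M    # in-memory temp tables before spilling to disk
max_heap_table_size     = 64M    # effective temp-table limit is the smaller of the two
```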
I would suggest using a cache tool. The first one to try is APC (Alternative PHP Cache); it is easy to install on Red Hat: yum install php-pecl-apc. You can get much better results with a WordPress-specific cache plugin like W3 Total Cache or WP Super Cache. I use the latter, and it is easy to install in a WordPress application:
Install Super Cache from the WordPress admin panel
Change the .htaccess permissions: sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess
Enable the plugin and follow the configuration steps. You can see how this plugin modifies the .htaccess file
Configure the cache options according to your preferences and test them. You can do performance tests using a service like blitz.io
Change the .htaccess permissions to 600 when everything is ok.
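The permission dance in steps 2 and 5 can be wrapped in two small helpers; the .htaccess path below is the Bitnami default from step 2 and may differ on your install:

```shell
# Bitnami's default WordPress .htaccess location (adjust if needed):
HTACCESS="/opt/bitnami/apps/wordpress/htdocs/.htaccess"

# Step 2: make .htaccess writable so WP Super Cache can add its rewrite rules.
make_writable() {
  chmod 666 "$1"
}

# Step 5: restore safe permissions once configuration is done.
lock_down() {
  chmod 600 "$1"
}

# Usage (run as the file owner, or prefix chmod with sudo):
#   make_writable "$HTACCESS"
#   ...enable and configure the plugin in wp-admin...
#   lock_down "$HTACCESS"
```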
I hope it helps.
We saw something similar. For us, the opportunity cost of our time spent fiddling with optimization settings was much higher than just going with a dedicated WordPress hosting provider.
The leaders in this space (dedicated WordPress hosting) appear to be WP Engine and a few others like Synthesis:
http://trends.builtwith.com/hosting/wordpress-hosting
I had my personal site on DreamHost, but they got worse and worse over the years, so I moved to BlueHost, which has been OK.
Overall, I think EC2 is great but it requires a lot of fiddling. Depending on the cost of your time and area of expertise, you might choose to switch to a more specialized provider.
I have no affiliation with any of these companies beyond my personal experience as a shared-hosting customer at both DreamHost and BlueHost.