I am using PHP 7.4 and it worked well for weeks. Suddenly each PHP-FPM child started taking too much memory. In the initial weeks each PHP-FPM child used around 40% CPU; now a few children are hitting 90% CPU. Can anybody offer guidance on this?
I allowed a maximum of 5 child processes, but that made no difference.
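For reference, one way to see what each child is actually using, and the kind of pool settings involved in keeping children bounded. This is only a sketch assuming a Debian-style PHP 7.4 layout, so the service name, process name, and pool file path are assumptions:

    # Show per-child memory (RSS) and CPU, largest first (process name assumed)
    ps -o pid,rss,pcpu,cmd -C php-fpm7.4 --sort=-rss | head -n 10

    # Illustrative pool settings in /etc/php/7.4/fpm/pool.d/www.conf (values are examples only)
    #   pm = dynamic
    #   pm.max_children = 5
    #   pm.max_requests = 500              ; recycle children periodically to contain slow leaks
    #   php_admin_value[memory_limit] = 256M

    # Reload after changing the pool config (service name assumed)
    sudo systemctl reload php7.4-fpm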
We are running a regional news website (https://www.galwaydaily.com/) on an AWS EC2 instance (t3.medium).
The problem is that page load times have crept up and up over the past few months, and a few days ago the site stopped working altogether for a few hours. In the past we would simply have scaled up the instance, but I'm not sure that is best practice.
Here is a screenshot of our CPU utilization for the past 2 weeks at 1-hour intervals:
I'd love some advice on how best to host and serve this site!
At a quick glance, your best option for the least amount of effort is to add a CDN. Your top 7 longest-loading assets are a couple of JS/CSS files and then some images, none of which seem large enough to be taking as long as they do. Use a tool like GTmetrix.com to check whether you are using the resources you already have effectively before resizing your instance and/or database.
Other options include AWS features such as ElastiCache (Memcached or Redis; I tend to use Redis), Auto Scaling groups, and RDS.
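If you want an objective measure before reaching for a CDN or a bigger instance, timing one of the slow assets from outside AWS shows where the delay is (DNS, connection, server think time, or transfer). A sketch only; the asset path below is a placeholder:

    # Time DNS, connect, time-to-first-byte and total download for a single asset
    curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s  size: %{size_download} bytes\n' \
      https://www.galwaydaily.com/path/to/asset.js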
I am trying to test the feasibility of moving my website from GoDaddy to AWS.
I used a WordPress migration plugin, which seems to have moved the complete site; at least superficially everything appears to have been moved properly.
However, when I try to access the site it is extremely slow. Using developer tools, I can see that some of the CSS and JPG images are effectively acting as blocking requests.
I cannot tell why this is the case. The site loads in under 3 seconds on GoDaddy, but it takes over a minute to load fully on AWS, and at least a few requests time out. The waterfall view in Chrome developer tools shows a lot of waiting on multiple requests, and I cannot figure out what these requests are waiting on or why they hang and eventually time out.
Any guidance is appreciated.
I have pointed the current instance at www.blindbeliefs.com.
I cannot figure out whether it is an issue with the Bitnami WordPress AMI or whether I am doing something wrong. Maybe I should go the traditional route of spinning up an EC2 instance, running a web server on it, connecting it to a database, and then installing WordPress myself. I just felt the AMI took care of all that tailoring without me having to do it manually.
However, it is difficult to debug why certain assets get blocked, load extremely slowly, or time out without ever loading.
Thank you.
Some more details:
The domain is still at GoDaddy and I have not moved it to AWS yet; I'm not sure whether that is having an impact.
I still feel it has to do with the AMI, though I cannot prove it.
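For reference, one way to separate the DNS question from server slowness is to hit the new instance directly while pretending to be the real hostname, and compare the time to first byte against the copy the domain currently points at. A sketch only: 203.0.113.10 is a placeholder for the EC2 instance's public IP, and plain HTTP is used here to sidestep certificate mismatches:

    # TTFB of whatever the DNS currently points at (GoDaddy)
    curl -o /dev/null -s -w 'current DNS ttfb: %{time_starttransfer}s\n' http://www.blindbeliefs.com/

    # TTFB of the AWS instance, forced via --resolve
    curl -o /dev/null -s -w 'aws instance ttfb: %{time_starttransfer}s\n' \
      --resolve www.blindbeliefs.com:80:203.0.113.10 http://www.blindbeliefs.com/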
Your issue sounds like a free-memory problem. You did not give details on the instance size, whether MySQL is installed on the same instance, etc.
The article linked below shows how to determine memory usage on your instance. When free memory is low or you start using swap space, the machine becomes very slow. Your goal should be 0 bytes of swap used and at least 25% free memory during normal operation.
Other factors to check are CPU utilization and free disk space on your file systems.
Linux Memory Check Commands
If you have a free-memory problem, increase the instance size. If you have a CPU usage problem, either change the instance size or switch to another instance type. If you have a free disk space problem, create a new instance with a larger EBS volume, or move your website, etc., to a new EBS volume sized correctly.
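For reference, the usual commands for checking memory, swap, CPU, and disk on a Linux instance (nothing AWS-specific; flags assume a reasonably modern distribution):

    free -m            # RAM and swap usage in MB
    swapon --show      # which swap devices exist and how much is in use
    vmstat 5 5         # si/so columns show swap-in/swap-out activity over time
    top -o %MEM        # which processes (e.g. mysqld, httpd, php-fpm) hold the memory
    df -h              # free disk space per file system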
I have a GoDaddy VPS with 4 GB RAM. It hosts only one website, which gets nearly 800-1000k hits per day. Upon investigation I found that CPU usage sometimes goes above 100%. The MySQL process below is the culprit.
Can someone help me? I have tried increasing the open file limits, but no luck.
Enable large page support for the VPS instance:
http://dev.mysql.com/doc/refman/5.5/en/large-page-support.html
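For reference, enabling large (huge) page support involves both the kernel and MySQL. A rough sketch, assuming a Linux VPS where you can edit sysctl settings and my.cnf; the page count below is illustrative only:

    # 1. Reserve huge pages in the kernel (e.g. 256 x 2 MB pages = 512 MB)
    echo 'vm.nr_hugepages = 256' | sudo tee -a /etc/sysctl.conf

    # 2. Allow the mysql user's group to use huge-page shared memory
    echo "vm.hugetlb_shm_group = $(id -g mysql)" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p

    # 3. Enable large pages in my.cnf under [mysqld], then restart MySQL
    #    large-pages
    sudo service mysql restart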
We are moving about 100 websites from an IIS 7-based solution to an IIS 8.5 solution. Both are load balanced and based around two shared virtual machines running IIS.
All the sites ran fine on the old solution, with page loads taking anywhere from 0.2 to 2 seconds. We upgraded to a supposedly faster solution so that we would have more room to expand and add more sites, but now that the load is on the new solution we are having massive performance issues. The issues only started once we repointed about 40 of the sites to the new servers (we have not repointed the remaining sites).
After an IIS reset or a change to an app pool setting, all the sites run nice and fast for about 10-20 minutes, but then they hit a brick wall and requests intermittently take 10-20 times as long for any page on a site (though they still return with no errors). We have a mix of .NET 3.5 WebForms and .NET 4.5 MVC sites. All sites are affected, but it seems to affect the MVC sites more frequently.
I think it's load related, but the CPU shows hardly anything (it barely goes over 30%), RAM is around 80% used with 3 GB free, and even when nothing else is running on the server the sites are still slow after the initial period. It's as if the app pools are blocking or running out of memory or something, but we can't see any actual blocks anywhere and it all seems random.
Things we've tried, none of which make any difference after the 20-minute post-reset period:
- Bypassing the load balancer
- Checking for differences between the new web servers (both have the same issue, but can hit it at different times)
- Checking Perfmon: the active thread count hovers between 0 and 5, sometimes spiking to 10; requests/sec peaks at 20
- Increasing MaxPoolThreads to 20 and MaxConcurrentRequestsPerCPU to 5000 as per https://msdn.microsoft.com/en-us/library/ee377050%28v=bts.70%29.aspx (though it is hard to tell whether that actually took effect)
- Event Viewer shows no unexpected app pool recycling
- Changing worker processes to 4 (we have a session state server)
- The slowdown occurs even when "%windir%\system32\inetsrv\appcmd list request" shows no blocked items, and appcmd can show blocking on .css and .jpg files as well as on any web page (a small JPG can show a response time of up to 60 seconds)
- Turning off file indexing for the C drive and the shared IIS files
- Changing session state back to InProc
- FailedReqLogFiles on the applications just show a time-taken of (for example) 2969 ms, but the only significant time in the breakdown is 719 ms for VIRTUAL_MODULE_UNRESOLVED on FormsAuthentication
Other info
- There are no memory limits or request limits on the app pools, which recycle every day at 3 am
- One site was running fine; we added another new website/app pool, and the previous site slowed down 20x while the new site also ran slowly
- Is there a limit to the number of websites or app pools? We are currently up to ID 109 for the websites, but some have been removed, so there are about 90 sites on the servers, with about 40 of them actually running
Any ideas on what to do or where else to look would be greatly appreciated! We are programmers by trade, but we are lumbered with trying to sort this out because the host is saying it's not their problem!
The screenshot below shows the change in the cache-related state of my Zope instance over time (3 months so far).
We've increased the cache size several times over that period, from 3,000 all the way up to 6,000,000. With the exception of one recent blip, we have hit a ceiling of 30 million (not sure which parameter that is; see the 'by year' graph). This happened at a cache size of about 1,000,000, after which changes to the cache size seemed to have no effect on the number of cached objects or on Zope's memory usage.
The Zope/Plone process went from using about 500 MB of memory to using 3 GB (we have 8 GB on this server).
What I expected was that sliding the cache size upwards would allow Zope to take advantage of more of the available server memory, but it is stuck at 3 GB (out of a potential 8 GB on the server).
Is there another setting that might be "capping" things at 3 GB?
At a guess, your OS is limiting per-process memory size.
In a bash shell, check ulimit -v to see if a virtual memory limit is set. See man bash for the full list of options for that command.
See Limit memory usage for a single Linux process for more information on how to use ulimit.
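For example, a quick way to see the limits in effect, both for a shell and for the running Zope process (the process name matched below is an assumption; adjust it for your setup):

    ulimit -v          # virtual memory limit for this shell: 'unlimited' or a value in KB
    ulimit -a          # all limits currently in effect
    cat /proc/$(pgrep -f runzope | head -n 1)/limits   # limits applied to the running Zope process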
I don't know what's going on with the memory of your server, but you are approaching this the wrong way: you simply can't have 6 million objects in memory; that's impossible. On a typical Plone 4.x installation you would need somewhere between 50 GB and 150 GB of memory for that (which works out to roughly 8-25 KB per cached object).
First, to solve this we need more information: which Plone version are you using? How many objects do you have in your database? How many threads? What is the architecture of your server, 32-bit or 64-bit?
Second, be sure to install the latest version of the munin.zope plugin to collect reliable information about your server (hat tip to Leo Rochael).
Third, read this thread on the core developers list to understand how to calculate a more realistic number for your cache size (hat tip to Hanno Schlichting).
Fourth, move the number up slowly and take time to monitor the results; check the total number of objects in memory and avoid swapping at all costs. You can stop increasing the cache size if you see that the number of cached objects stays below your target value. Remember: you're never going to have all the objects in memory; people tend to visit only a subset of your content.
Fifth, if you are on Plone 4.x, test DateTime 3.0.3 (on a staging server before putting it into production); this could further decrease your memory consumption by up to 40% (somebody told me it now also works on Plone 3.x, but I haven't checked it myself).
Sixth, share your results on the Plone setup list!
Good night and good luck!
A 32-bit platform (I don't know if this is limited to Intel) is limited to 3 GB per process. That is because a process can only address 4 GB of virtual memory, and 1 GB of that address space is reserved for the kernel. PAE does allow the machine to use up to 64 GB of physical RAM, but the per-process limit still applies, and that is what you are running into here. You really cannot run a high-traffic Plone site on a 32-bit platform anymore. Quite often the simplest solution is to upgrade your OS to the 64-bit version; unless you have seriously ancient hardware, it should already be capable of running x86-64.
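If in doubt, a quick way to check whether the kernel and CPU are 32-bit or 64-bit before planning the upgrade:

    uname -m                 # x86_64 means a 64-bit kernel; i686/i386 means 32-bit
    getconf LONG_BIT         # 32 or 64 for the running userland
    grep -qw lm /proc/cpuinfo && echo "CPU supports x86-64 (long mode)"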