How to decrease the memory consumption of a WordPress site?

The test site requires a lot of memory when viewing some category/archive pages.
I ran into the following error messages yesterday:
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 48 bytes) in /var/www/t/wp-includes/load.php on line 552
Fatal error: Allowed memory size of 209715200 bytes exhausted (tried to allocate 40 bytes) in /var/www/t/wp-includes/meta.php on line 307
The problem was solved by adding the "define('WP_MEMORY_LIMIT', '210M');" line to the wp-config.php file.
But this is not good enough. The production site will have much more data than the test site, which means I would have to raise it to "define('WP_MEMORY_LIMIT', '2100M');" in wp-config.php. And 2100M may not be large enough as time goes by.
How can I dramatically decrease the memory consumption of the WordPress site? Any help is appreciated.

Memory consumption can be reduced by many things. Instead of posting everything here, I'll just point out some good resources.
http://codex.wordpress.org/WordPress_Optimization/
http://storecrowd.com/blog/wordpress-optimisation/
http://www.earnersblog.com/digproof-your-wordpress/
http://beerpla.net/2009/06/09/how-to-make-your-site-lightning-fast-by-compressing-deflategzip-your-html-javascript-css-xml-etc-in-apache/
Also check out this plugin to monitor your memory usage.
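If you'd rather see hard numbers without a plugin, here's a minimal sketch in plain PHP (PHP 5.3+; drop it near the top of wp-config.php, and it logs the peak memory of every request to wherever error_log is configured to write):

// Minimal sketch: log per-request peak memory (PHP 5.3+).
register_shutdown_function(function () {
    error_log(sprintf('%s peak memory: %.1f MB',
        isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli',
        memory_get_peak_usage(true) / 1048576));
});

Comparing the logged peaks per URL tells you which category/archive pages to optimize first.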

Related

Allowed memory size exhausted in FlattenException Symfony

I get this error at least 20 times per minute in my production server logs.
My website goes down when the visitor count reaches ~50.
Any suggestions?
[Fri Dec 14 23:52:32.339692 2018] [:error] [pid 12588] [client 81.39.153.171:55104] PHP Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 32 bytes) in /vendor/symfony/symfony/src/Symfony/Component/Debug/Exception/FlattenException.php on line 269
In production you don't need the Debug component; to reduce memory, install dependencies with composer using --no-dev --no-interaction --optimize-autoloader.
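For example, when deploying (run from the project root):

# deploy-time install: skips dev packages and dumps an optimized autoloader
composer install --no-dev --no-interaction --optimize-autoloader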
If you can access your server via SSH, check what is consuming memory.
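For example, with standard Linux tools:

# overall memory picture, in megabytes
free -m
# top ten processes by memory usage
ps aux --sort=-%mem | head -n 10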
My suggestion: if you have 50 visitors at the same time, this is a good time to upgrade the server.
Also, you can try reducing max_execution_time so that runaway requests release their memory sooner.
The question is very vague, so this won't be accurate...
Your limit is 512 MB and still not enough, so that only leaves a few possibilities.
First check the logs to see if these errors are tied to any specific URL.
(If you don't have adequate logging, I recommend Rollbar: it has a Monolog handler, takes only a few minutes to wire up, and is free.)
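For a quick look without extra tooling, something like this works (the log path is an assumption; adjust for your distribution, and correlate timestamps with your access log to find the URL):

# log path is an assumption; adjust for your setup
grep 'Allowed memory size' /var/log/apache2/error.log | tail -n 50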
You mentioned the visitor count... I'm not sure it has anything to do with this. What kind of web server are you using?
Check for the usual suspects:
Infinite loops and recursion without an exit condition.
Large files (mostly uploads and downloads).
Statistical modules with complex queries and a high limit are also a good place to check.

ORA-04030: out of process memory when trying to allocate 2024 bytes (kxs-heap-c,kghsstk)

I applied the solution for the ORA-01795 error from this link (d-live's answer).
After building the SQL statement and running it in SQL Developer, I got this error:
ORA-04030: out of process memory when trying to allocate 2024 bytes (kxs-heap-c,kghsstk)
04030. 00000 - "out of process memory when trying to allocate %s bytes (%s,%s)"
*Cause: Operating system process private memory has been exhausted
*Action:
Please help me with this issue; I need to handle it and get the statement to execute.
Note: I am processing 5,341 records in total, using a JOIN instead of an IN list, as per the linked solution.
This ORA-04030 can be caused by a shortage of RAM in a dedicated (non-shared-server) environment, by a too-small PGA, or by kernel parameters not set large enough to allow enough RAM. The ORA-04030 is also common when running an import.
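If you suspect the PGA, a sketch for checking and raising it (run in SQL*Plus with DBA privileges; the 2G target is illustrative, size it to your workload):

-- sketch: DBA privileges assumed, 2G is illustrative
SHOW PARAMETER pga_aggregate_target
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;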

write the uploaded files on the disk

Look at this page of the web.py cookbook:
http://webpy.org/cookbook/storeupload/
Pay attention to how it writes the file to disk.
The current situation is:
I launched a server in VirtualBox with 256 MB of memory and 512 MB of swap.
Whenever I upload a file larger than 200 MB, I get an error ("the page is temporarily not available").
I think the Python file-writing code reads the whole file into memory, and it then crashes due to the limited memory.
Am I right?
If so, is there any solution?
Thank you for your time.
Try not to read the whole file into memory: create a loop and transfer the file in 1024-byte chunks, as in the sketch below.
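A minimal sketch in Python (the names are illustrative; with web.py, the source would be the .file attribute of the upload field):

def save_upload(src, dst_path, chunk_size=1024):
    # Copy a file-like object to disk without loading it all into memory.
    # With web.py, `src` would be web.input(myfile={})['myfile'].file.
    with open(dst_path, 'wb') as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:  # empty read means end of file
                break
            dst.write(chunk)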
I take it you have set up nginx correctly, especially the client_max_body_size directive.
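For reference, it goes in the http, server, or location context (300m here is illustrative; the nginx default is only 1m):

# nginx default is 1m; 300m is illustrative
client_max_body_size 300m;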
I think you're right: your problem is linked to bad memory usage, and it probably comes from the read() method.
Used without a size argument, read() returns the entire contents of the file. Since the file is almost as large as the machine's memory, the program runs out of memory and crashes.
What you should do is investigate better ways to copy a file in Python.
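For instance, the standard library already copies in fixed-size chunks (a sketch; src stands for the uploaded file-like object, and the destination path is illustrative):

import shutil

# `src` is the uploaded file-like object; the destination path is illustrative
with open('/tmp/upload.bin', 'wb') as dst:
    shutil.copyfileobj(src, dst)  # copies fixed-size chunks, never the whole file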

eheap_alloc: Cannot allocate 8414160 bytes of memory (of type "heap") in windows system?

While load testing my Erlang server with an increasing number (100, 200, 300, ...) of clients, which are also written in Erlang, I get the following message on the Windows console once the number of clients exceeds 200:
"Crash dump was written to: erl_crash.dump.
eheap_alloc: Cannot allocate 8414160 bytes of memory (of type "heap"). Abnormal termination"
This problem only occurs on Windows. If I run the same load test on a Linux system, it works for any number of clients until the system load reaches saturation.
Can anyone help me overcome this problem?
Thank you.
Simply put, your app ran out of memory. Probably the easiest way to monitor this is to check which process is eating up the memory. You can check with os_mon, or, easier still:
etop:start()
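To sort the table by per-process memory, etop accepts a sort option (connecting to the target node may need further options, depending on your setup):

% sort the table by per-process memory usage
etop:start([{sort, memory}])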

aspnet_wp keeps recycling because of high memory consumption. How can I fix it?

I have a small WCF service which runs on an XP box with 256 MB of RAM in a VM.
When I make a request to that service (with a request size of approximately 5 MB), I always get the following message in the event log:
aspnet_wp.exe was recycled because memory consumption exceeded the 153 MB (60 percent of available RAM).
and the call fails with error 500.
I've tried increasing the memory limit to 95%, but it still uses up all the available memory and fails in the same manner.
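For reference, that limit is the memoryLimit attribute of the <processModel> element in machine.config (a sketch; the value is a percentage of physical RAM):

<!-- sketch: memoryLimit is a percentage of physical RAM -->
<processModel memoryLimit="95" />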
It looks like something is wrong with my app (I do not reuse byte[] buffers, and maybe something else), but I cannot find the root cause of the memory overuse.
Profiling showed that all the CLR objects I have in memory together do not take up that much space.
A dump analysis with WinDbg showed the same picture: nothing that big on the managed heap.
How can I find out what is contributing to such memory overuse?
Is there any way to make a dump right before the process is recycled (during peak memory usage)?
Tess Ferrandez's blog "If broken it is, fix it you should" has lots of hints, tips and recommendations for sorting out exactly this sort of problem.
Of particular use to you would be Lab 3: Memory, where she walks you through working out what has caused all the memory on your machine to disappear.
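As for capturing a dump at peak usage: the Debugging Tools for Windows include adplus, which can snapshot a running process on demand (a sketch; the output directory is illustrative):

rem output directory c:\dumps is illustrative
adplus -hang -pn aspnet_wp.exe -o c:\dumps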
Could be a lot of things; this one is hard to diagnose. Have you watched perfmon to see whether the memory usage peaks in the ASP.NET process or on the server itself? 256 MB is pretty low, but it should still be able to handle it. Do you have a swap file on this machine? At what point do you take the memory dump? Have you stepped through the code, and does it work on other machines? Perhaps it is getting stuck in a loop and leaking memory until it crashes?
