Allowed memory size exhausted in FlattenException (Symfony)

I get this error at least 20 times per minute in my production server logs.
My website goes down when the visitor count reaches ~50.
Any suggestions?
[Fri Dec 14 23:52:32.339692 2018] [:error] [pid 12588] [client
81.39.153.171:55104] PHP Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 32 bytes) in
/vendor/symfony/symfony/src/Symfony/Component/Debug/Exception/FlattenException.php
on line 269

In production you don't need the Debug component. To reduce memory, install dependencies with Composer using --no-dev --no-interaction --optimize-autoloader.
If you can access your server via SSH, check what is consuming the memory.
My suggestion: if you have 50 visitors at the same time, this is a good time to upgrade the server.
You can also try reducing max_execution_time so that stuck requests release their memory sooner.
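The deploy and inspection steps above can be sketched as shell commands (a sketch only; adjust paths and flags to your setup):

```shell
# Install without dev packages (drops the Debug component's dev tooling)
# and with an optimized autoloader:
composer install --no-dev --no-interaction --optimize-autoloader

# Over SSH, see what is eating memory:
free -m                        # overall RAM and swap usage
ps aux --sort=-%mem | head     # top memory consumers
php -r 'echo ini_get("memory_limit"), PHP_EOL;'   # current PHP limit
```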

The question is very vague, so this won't be accurate...
Your limit is 512 MB and it is still not enough, so that leaves only a few possibilities.
First check the logs to see if these errors are tied to any specific URL.
(If you don't have adequate logging, I recommend using Rollbar, it has a monolog handler, and takes only a few minutes to wire up. It is also free.)
You mentioned the visitor count... I'm not sure it has anything to do with this. What kind of web server are you using?
Check for the usual suspects:
Infinite loops, or recursion without an exit condition.
Large files (mostly uploads and downloads).
Statistics modules with complex queries and a high limit are also a good place to check.
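Before hunting those suspects, it helps to see whether the fatals cluster on one client or URL. A minimal sketch against a sample Apache-style error log (the sed pattern assumes the `[client IP:port]` field shown in the question; adapt it to your LogFormat):

```shell
# Build a small sample log mirroring the format from the question:
cat > /tmp/error_sample.log <<'EOF'
[Fri Dec 14 23:52:32 2018] [:error] [pid 12588] [client 81.39.153.171:55104] PHP Fatal error: Allowed memory size of 536870912 bytes exhausted
[Fri Dec 14 23:52:40 2018] [:error] [pid 12591] [client 81.39.153.171:55110] PHP Fatal error: Allowed memory size of 536870912 bytes exhausted
[Fri Dec 14 23:52:41 2018] [:error] [pid 12600] [client 10.0.0.7:40112] PHP Fatal error: Allowed memory size of 536870912 bytes exhausted
EOF

# Extract the client IP from each fatal and count occurrences per client:
by_client=$(grep 'Allowed memory size' /tmp/error_sample.log \
  | sed -E 's/.*\[client ([0-9.]+):[0-9]+\].*/\1/' \
  | sort | uniq -c | sort -rn)
echo "$by_client"
```

If one IP or URL dominates, you are likely looking at a single heavy page (or a crawler) rather than a site-wide leak.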

Related

MariaDB has stopped responding - [ERROR] mysqld got signal 6

The MariaDB service stopped responding all of a sudden. It had been running continuously for more than 5 months without any issues. When we checked the MariaDB service status at the time of the incident, it showed as active (running) (service mariadb status). But we could not log into the MariaDB server; each login attempt just hung without any response. All our web applications also failed to communicate with the MariaDB service. We also checked max_used_connections, and it was below the maximum value.
When going through the logs, we saw the error below (it had been triggered at the time of the incident).
210623 2:00:19 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.2.34-MariaDB-log
key_buffer_size=67108864
read_buffer_size=1048576
max_used_connections=139
max_threads=752
thread_count=72
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1621655 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7f4c008501e8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f4c458a7d30 thread_stack 0x49000
2021-06-23 2:04:20 139966788486912 [Warning] InnoDB: A long semaphore wait:
--Thread 139966780094208 has waited at btr0sea.cc line 1145 for 241.00 seconds the semaphore:
S-lock on RW-latch at 0x55e1838d5ab0 created in file btr0sea.cc line 191
a writer (thread id 139966610978560) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file btr0sea.cc line 1145
Last time write locked in file btr0sea.cc line 1218
We could not even stop the MariaDB service using the usual commands (service mariadb stop), but we were able to forcefully kill the MariaDB process and then bring the service back online.
What could be the reason for this failure? If you have already faced similar issues, please share your experience and what actions you took to prevent such failures in the future. Your feedback is much appreciated.
Our Environment Details are as follows
Operating system: Red Hat Enterprise Linux 7
Mariadb version: 10.2.34-MariaDB-log MariaDB Server
I also face this issue on an AWS instance (c5a.4xlarge) hosting my database.
Server version: 10.5.11-MariaDB-1:10.5.11+maria~focal
It has already happened 3 times, occasionally. Like you, there was no way to stop the service; only rebooting the machine got it working again.
The logs at restart suggest some tables crashed and should be repaired.
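One hedged observation on the trace in the question: btr0sea.cc is InnoDB's B-tree search / adaptive hash index code, and long semaphore waits there are often worked around by disabling that feature. The commands below are a sketch, not a confirmed fix for this incident; benchmark before making the change permanent:

```shell
# Assumption: the btr0sea.cc semaphore waits implicate the adaptive hash index.
# Disable it at runtime (also set it in my.cnf to survive restarts):
mysql -u root -p -e "SET GLOBAL innodb_adaptive_hash_index = OFF;"

# After a forced kill, check and repair any tables the restart log flagged:
mysqlcheck -u root -p --all-databases --check --auto-repair
```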

Is OpenJ9 gc log asynchronous?

Does OpenJ9 write the GC log asynchronously?
When using Eclipse OpenJ9 in a Docker container, can I put gc.log on NFS or Ceph?
I've read that OpenJDK writes the GC log synchronously: Is gc.log writing asynchronous? safe to put gc.log on NFS mount?.
Verbose GC logs can be directed to a file with the -Xverbosegclog option (mentioned at https://www.eclipse.org/openj9/docs/gc/, although at the moment most of the verbose GC documentation is still only on the IBM website).
If you suspect that the storage medium may block I/O operations, you can try -Xgc:bufferedLogging. This option isn't really documented (there has been no strong interest), but you are welcome to try it and let us know if you find it valuable.
Note, however, that buffered logging introduces a delay: on sudden termination of the JVM process, the log may be missing a few lines that were still in the internal buffer and not yet flushed to the file.
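Putting those options together, a hypothetical launch command might look like this (app.jar and the log path are placeholders; -Xgc:bufferedLogging is the undocumented option mentioned above, so treat its exact behavior as an assumption):

```shell
# Write verbose GC output to a file; buffer writes so a slow NFS/Ceph mount
# does not block GC logging (at the cost of losing unflushed lines on a crash):
java -Xverbosegclog:/var/log/app/gc.log \
     -Xgc:bufferedLogging \
     -jar app.jar
```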

ORA-04030: out of process memory when trying to allocate 2024 bytes (kxs-heap-c,kghsstk)

I followed the solution to the ORA-01795 error from this link (the answer by d-live).
After successfully forming the SQL statement and running it in SQL Developer, I hit this issue:
ORA-04030: out of process memory when trying to allocate 2024 bytes (kxs-heap-c,kghsstk)
04030. 00000 - "out of process memory when trying to allocate %s bytes (%s,%s)"
*Cause: Operating system process private memory has been exhausted
*Action:
Please help me with this issue; I need to handle and execute it anyway.
Note: the total number of records I am processing is 5341, in a JOIN statement instead of an IN statement, based on the linked solution.
This ORA-04030 can be caused by a shortage of RAM in a dedicated (non-shared-server) environment, by a PGA that is too small, or by kernel parameters not set large enough to allow enough RAM. The ORA-04030 is also common when running an import.
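If a too-small PGA is the suspect, a DBA could start by checking the current sizing. A sketch (requires SYSDBA access; the 2G target below is purely illustrative, not a recommendation):

```shell
# Inspect PGA target vs. actual peak allocation before deciding to raise it:
sqlplus -S / as sysdba <<'SQL'
SHOW PARAMETER pga_aggregate_target;
SELECT name, ROUND(value/1024/1024) AS mb
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter', 'maximum PGA allocated');
-- If the target is clearly undersized, a DBA might raise it, e.g.:
-- ALTER SYSTEM SET pga_aggregate_target = 2G;
SQL
```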

How to decrease the memory consumption of a WordPress site?

The test site requires a lot of memory when viewing some category/archive pages.
I ran into the following error messages yesterday:
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to
allocate 48 bytes) in
/var/www/t/wp-includes/load.php on
line 552
Fatal error: Allowed memory size of 209715200 bytes exhausted (tried to
allocate 40 bytes) in
/var/www/t/wp-includes/meta.php on
line 307
The problem was solved by adding the define('WP_MEMORY_LIMIT', '210M'); line to the wp-config.php file.
But this is not good enough. The production site will have much more data than the test site, which means I would have to use define('WP_MEMORY_LIMIT', '2100M'); instead, and even 2100M may not be large enough as time goes by.
How can I dramatically decrease the memory consumption of the WordPress site? Any help is appreciated.
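For context, the byte counts in the two fatals above convert to round megabyte limits, which is why bumping WP_MEMORY_LIMIT to 210M was just enough to clear the second one:

```shell
# 33554432 bytes and 209715200 bytes are exactly 32M and 200M:
mb1=$(( 33554432 / 1024 / 1024 ))
mb2=$(( 209715200 / 1024 / 1024 ))
echo "first limit: ${mb1}M, second limit: ${mb2}M"
```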
Memory consumption can be reduced by many things. Instead of posting everything here, I'll just point out some good resources.
http://codex.wordpress.org/WordPress_Optimization/
http://storecrowd.com/blog/wordpress-optimisation/
http://www.earnersblog.com/digproof-your-wordpress/
http://beerpla.net/2009/06/09/how-to-make-your-site-lightning-fast-by-compressing-deflategzip-your-html-javascript-css-xml-etc-in-apache/
Also check out this Plugin to monitor your memory usage.

aspnet_wp keeps recycling because of high memory consumption. How can I fix it?

I have a small WCF service that runs on an XP box with 256 MB of RAM, inside a VM.
When I make a request (approximately 5 MB in size) to that service, I always get the following message in the event log:
aspnet_wp.exe was recycled because memory consumption exceeded the 153 MB (60 percent of available RAM).
and the call fails with error 500.
I've tried increasing the memory limit to 95%, but the process still takes up all the available memory and fails in the same manner.
It looks like something is wrong with my app (I do not reuse byte[] buffers, and maybe something else), but I cannot find the root cause of the memory overuse.
Profiling showed that all the CLR objects I have in memory together do not take up that much space.
A dump analysis with windbg showed the same situation: nothing that big on the object heap.
How can I find out what is contributing to the memory overuse?
Is there any way to take a dump right before the process is recycled (during peak memory usage)?
Tess Ferrandez's blog "If broken it is, fix it you should" has lots of hints, tips and recommendations for sorting out exactly this sort of problem.
Of particular use to you would be Lab 3: Memory, where she walks you through working out what has caused all the memory on your machine to disappear.
It could be a lot of things; this one is hard to diagnose. Have you watched perfmon to see whether memory usage peaks in the ASP.NET process or on the server itself? 256 MB is pretty low, but it should still be able to handle it. Do you have a swap file on this machine? At what point do you take the memory dump? Have you stepped through the code, and does it work on other machines? Perhaps it is getting stuck in a loop and leaking memory until it crashes?
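On capturing a dump right before the recycle: the adplus script that ships with Debugging Tools for Windows (the same toolset windbg comes from, and the one used in Tess Ferrandez's labs) can attach and write a full dump when the process crashes. A sketch, with the output path as a placeholder:

```shell
# Run from a Windows command prompt with Debugging Tools for Windows installed;
# writes a full memory dump of aspnet_wp.exe to C:\dumps when it crashes:
adplus -crash -pn aspnet_wp.exe -o C:\dumps
```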
