Fatal memory error in Sylius production environment - symfony

Trying to access my Sylius site in the production environment, but I'm running into a memory error:
FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in directoryname/app/bootstrap.php.cache on line 2798"
I found I can bypass the problem by setting AppKernel('prod', true), but I read that this is bad practice for some reason.
Anyone run into a similar problem?

There can be multiple reasons, as 128 MB of RAM is not that hard to exhaust.
1. Cache is not warmed up
If your cache is not warmed up and the app needs to generate all the cache files, it can eat up a lot of RAM on the first request.
Most of the time this shouldn't be the case in production, if you run composer install or use some deploy mechanism (Capistrano, Deployer) that calls php app/console cache:clear during deployment, as this will also prime the cache.
Try running composer install manually on the server, as this should regenerate the cache after building the autoload files.
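For reference, this is roughly how the prod cache can be cleared and warmed by hand on a Symfony 2 project (assuming the standard app/ layout):

    # Clear and re-warm the prod cache; --no-debug matches AppKernel('prod', false)
    php app/console cache:clear --env=prod --no-debug
    # Or warm it up explicitly without clearing first
    php app/console cache:warmup --env=prod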
2. GD image resize on images with big dimensions (not file size)
If you resize images with GD, it always decompresses the image first. That means a 3000x3000 JPEG with a file size of 1 KB, where every single pixel is #fff, will still need around 34-54 MB of RAM:
(3000 x 3000 x 4 bytes (RGBA)) / 1024^2 ≈ 34.3 MB
Add some processing overhead on top, and it can easily exceed 128 MB. People also tend to upload images with dimensions up to 10k pixels these days.
Solution: switch to Imagick/Gmagick, or increase the memory limit substantially.
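For illustration, a minimal Imagick resize sketch (file paths and target width are hypothetical); unlike GD, which always holds the full decoded bitmap in RAM, ImageMagick can fall back to a disk-backed pixel cache when it hits its memory limit:

    <?php
    // Hedged sketch: requires the imagick PHP extension.
    // Cap Imagick's memory before loading; excess spills to a disk-backed
    // pixel cache (the unit is bytes; check your ImageMagick build).
    Imagick::setResourceLimit(Imagick::RESOURCETYPE_MEMORY, 64 * 1024 * 1024);

    $image = new Imagick('/path/to/upload.jpg'); // hypothetical path
    $image->thumbnailImage(800, 0);              // width 800px, 0 keeps aspect ratio
    $image->writeImage('/path/to/thumb.jpg');
    $image->clear();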
3. Your app logic really uses up that much RAM
Profile your code and see how much memory is used.
Check whether you load or build thousands of objects somewhere.
Or simply increase the memory limit to 256/512 MB and see if that helps.
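A crude way to check, before reaching for a real profiler like Xdebug or Blackfire, is to bracket the suspect code with memory measurements (a rough sketch; placement is up to you):

    <?php
    $before = memory_get_usage(true);
    // ... suspect code, e.g. loading/building a large collection ...
    $after = memory_get_usage(true);
    printf("Delta: %.1f MB, peak so far: %.1f MB\n",
        ($after - $before) / 1048576,
        memory_get_peak_usage(true) / 1048576);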
4. You have a hidden infinite loop because of
old session data - clear your sessions
a configuration difference between your prod and dev machines (or config files)

Related

Tar Compaction activity on Adobe AEM repository is stuck

I am trying to perform a Revision Cleanup activity on an AEM repository to reduce its size via Tar Compaction. The repository size is 730 GB and the Adobe version is 6.1, which is old. The estimated time for the activity is 7-8 hours, but we ran it for 24 hours straight and it is still running with no output. We have also tried running all the commands to speed up the process, but it is still taking time.
Kindly suggest an alternative to reduce the size of the repository.
Adobe does not provide support to older versions, so we cannot raise a ticket.
Try checking the memory assigned to your machine, I mean the RAM allocated to the JVM. Maybe if you increase it, the compaction will take less time and actually finish.
The repository size is not big at all; mine is more than 1 TB and works fine.
In order to clean your repo, you can try running the Garbage Collector directly from AEM via the JMX console.
The only ways to reduce the data storage are to compact the repository or to delete content such as big assets or big packages. Create some queries to see which assets/packages are huge, and delete them.
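If the online Tar Compaction keeps stalling on 6.1, offline compaction with the matching oak-run version is the usual fallback. A hedged sketch only, since the oak-run version, heap size, and paths here are assumptions, and AEM must be stopped first:

    # Stop AEM before running offline compaction against the segment store.
    java -Xmx8g -jar oak-run-1.2.x.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
    java -Xmx8g -jar oak-run-1.2.x.jar compact crx-quickstart/repository/segmentstore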
Hope you can fix your issue.
Regards,

What could be preventing SQLite WAL files from being cleared?

I've had three cases of WAL files growing to massive sizes of ~3 GB on two different machines with the same hardware and software setup. The database file itself is ~7 GB. Under normal runtime, WAL is either non-existent or a few kilobytes.
I know it's normal for WAL to grow as long as there's constantly a connection open, but even closing the application doesn't get rid of the WAL file or the shared memory file. Even restarting the machine, without ever starting my application, and attempting to open the database with DB Browser for SQLite doesn't help. It takes ages to launch (I don't know how long exactly, but definitely over 5 minutes) and then WAL and shared memory files remain, even after closing the browser.
Bizarrely, here's what did help. I zipped up the three files to investigate locally, but then on a hunch I deleted the originals and extracted the files back again. Launching the app took seconds and the extra files were gone just like that. The same thing happened on my laptop with the unzipped files. I ran the integrity check on my machine and the database seems to be fine.
Here's the rough description of the app, if it means anything.
EF Core 2.1.14 (it uses SQLite 3.28.0)
.NET Framework 4.7.1
except for some rare short-lived overlap with read-only access, only one thread accesses the database
entire connection (DbContext) gets closed roughly every 2 seconds max
shared memory is used with PRAGMA mmap_size = 268435456 (256 MiB)
synchronous mode is NORMAL
the OS is Windows 7
Any ideas what might be causing this... quirk?
I temporarily switched to journal mode, which helpfully reported errors instead of silently failing.
The error I was having was SQLITE_IOERR_WRITE. After more digging, it turned out the root Windows error causing all this was 665 - hitting into an NTFS file system limitation. An answer here then led me to the actual cause: extreme file fragmentation of the database.
Copying a file reduces its fragmentation, which is why the bizarre zip-and-extract fix I mentioned temporarily worked. The actual fix was to schedule defragmentation of the database file using Sysinternals' Contig.
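For reference, a scheduled defragmentation job with Contig might look roughly like this (paths are hypothetical; see Contig's built-in help for the exact switches):

    REM Analyze fragmentation first (-a), then defragment the DB and sidecar files.
    contig -a C:\data\app.db
    contig C:\data\app.db
    contig C:\data\app.db-wal
    contig C:\data\app.db-shm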

Required space for symfony cache - system crash

Hi,
I just had a big server crash because the cache folder needed more and more disk space, at least around 400 GiB.
Is that normal? How can I set a limit? I just tested the system after removing the old folder: now the folder grows roughly every 5 minutes and needs around 5 MiB more space each time.
Any ideas?

Symfony2 slow page loads despite quick initialization/query/render time?

I'm working on a Symfony2 project that is experiencing a slow load time for one particular page. The page in question does run a pretty large query that includes 16 joins, and I was expecting that to be the culprit. And maybe it is, and I am just struggling to interpret the figures in the debug toolbar properly. Here are the basic stats:
Peak Memory: 15.2 MB
Database Queries: 2
Initialization Time: 6ms
Render Time: 19ms
Query Time: 490.57 ms
TOTAL TIME: 21530 ms
I get the same basic results, more or less, in three different environments:
php 5.4.43 + Symfony 2.3
php 5.4.43 + Symfony 2.8
php 5.6.11 + Symfony 2.8
Given that the initialization + query + render time is nowhere near the TOTAL TIME figure, I'm wondering what else comes into play and what other methods I could use to identify the bottleneck. Currently, the query is set up to pull ->getQuery()->getResult(). From what I've read, this can carry huge overhead, as returning full result objects means that each of the X objects needs to be hydrated. (For the sake of context, we are talking about fewer than 50 top-level/parent objects in this case.) Consequently, many folks suggest using ->getQuery()->getArrayResult() instead, to return simple arrays as opposed to hydrated objects and drastically reduce the overhead. This sounded reasonable enough to me, so, despite it requiring some template changes for the page to render the alternate type of result, I gave it a shot. It did reduce the TOTAL TIME, but by a generally unnoticeable amount (from 21530 ms to 20670 ms).
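For reference, a minimal sketch of the two Doctrine hydration modes being compared (the entity and relation names here are hypothetical):

    <?php
    $qb = $em->createQueryBuilder()
        ->select('p', 'c')
        ->from('AppBundle:Parent', 'p')   // hypothetical entity
        ->leftJoin('p.children', 'c');

    // Object hydration: every row becomes a managed entity (heavier).
    $objects = $qb->getQuery()->getResult();

    // Array hydration: plain nested arrays, much cheaper to build.
    $arrays = $qb->getQuery()->getArrayResult();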
I have been playing with Docker as well, and decided to spin up a minimal Docker environment that uses the original getResult() query in Symfony 2.8 code running on php 7. This environment uses php's internal webserver, as opposed to Apache, and I am not sure if that should/could have any effect. While the page load is still slow, it seems to be markedly improved on php 7. The other interesting part is that, while the TOTAL TIME was reduced a good deal, most of the other developer toolbar figures went up:
Peak Memory: 235.9 MB
Queries: 2
Initialization Time: 6 ms
Render Time: 53 ms
Query Time: 2015 ms
TOTAL TIME: 7584 ms
So, the page loads on php 7 in 35% of the amount of time that it takes to load on php 5.4/5.6. This is good to know, and provides a compelling argument for why we should upgrade. That being said, I am still interested in figuring out what are the common factors that explain large discrepancies between TOTAL TIME and the sum of [initialization time + query time + render time]. I'm guessing that I shouldn't expect these numbers to line up exactly, but I notice that, while still off, they are significantly closer in the php 7 Docker environment than they are in the php 5.4/5.6 environments.
For the sake of clarity, the docker container naturally spins up with a php.ini memory_limit setting of -1. The other environments were using 256M, and I even dialed that up to 1024M, but saw no noticeable change in performance, and the "Peak Memory" figure stayed low. I tried re-creating the Docker environment with 1024M and also did not notice a difference there.
Thanks in advance for any advice.
EDIT
I've tested loading the page via the php 5.6 / Symfony 2.8 environment using php's internal webserver, and it loads in about half the time. Still not as good as php 7 + the internal server, but at least it gives me a solid lead that something about my Apache setup is significantly related (though not necessarily the sole culprit). Any/all advice/suggestions welcome!

MySQL keeps running out of memory with Wordpress, how much memory do I need?

I have been experiencing MySQL crashing recently and really need to figure out what I need to do to get this to stop.
I have a 2GB Digital Ocean server running the following:
Ubuntu 14.04
PHP v5.5.9
Apache v20120211
MySQL v5.5.43
Wordpress v4.2
I also have 2GB of swap.
The last time MySQL crashed, this was in my error log:
http://laravel.io/bin/E304E
The important part seems (to me) to be this
InnoDB: Fatal error: cannot allocate memory for the buffer pool
I am getting about 2000 page views per day. I thought this should easily be enough memory to run the site.
Can anyone give me some ideas what I can do or what I definitely need to do to stop this happening?
Thanks
2000 page views per day is well within the range of what your server can handle. It's possible you're getting hit by bots and/or Apache isn't configured well for your server size.
Apache2Buddy is a quick diagnostic tool to help with your Apache configuration. Run $ curl -L http://apache2buddy.pl/ | perl. It'll print out a report with suggested configuration adjustments given your available RAM and application size. My guess is that you'll need to lower MaxRequestWorkers (located in /etc/apache2/mods-available/mpm_prefork.conf).
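For context, a hedged example of what a tuned mpm_prefork.conf might look like on a small box (the numbers depend entirely on your per-process memory footprint, so treat them as placeholders):

    # /etc/apache2/mods-available/mpm_prefork.conf
    <IfModule mpm_prefork_module>
        StartServers              2
        MinSpareServers           2
        MaxSpareServers           5
        MaxRequestWorkers        40
        MaxConnectionsPerChild 1000
    </IfModule>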
I'm also guessing that you have bots hitting your site, which would explain the volume of traffic that is crashing Apache. Check your access logs: $ cat /var/log/apache2/access.log.
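A quick way to spot bots is to count the top requesting IPs (assuming the default combined log format):

    awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head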
I wrote an article on this situation if you want a deeper explanation, a method to stress test, or ideas on how to block some of the bot traffic: http://brunzino.github.io/blog/2016/05/21/solution-how-to-debug-intermittent-error-establishing-database-connection/
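Separately, since the fatal error is InnoDB failing to allocate its buffer pool, it can also help to cap the pool explicitly so MySQL and Apache don't fight over the same 2 GB. A hedged my.cnf sketch (the value is a guess for a shared 2 GB box):

    # /etc/mysql/my.cnf
    [mysqld]
    innodb_buffer_pool_size = 256M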
