Required disk space for the Symfony cache - system crash

Hi,
I just had a big server crash because the cache folder needed more and more disk space, at least 400 GiB in the end.
Is that normal? How can I set a limit? I tested the system after removing the old folder, and now the folder is growing again: every 5 minutes it needs around 5 MiB more space.
Any ideas?
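Not an answer to the limit question, but one way to narrow this down is to find out which cache subdirectory is actually ballooning. A minimal PHP sketch; the app/cache/prod path is an assumption for a standard Symfony layout, so adjust it to yours:

<?php
// Sum up each cache subdirectory to see which one is growing.
$cacheDir = __DIR__.'/app/cache/prod'; // assumed layout - adjust to your app
foreach (new DirectoryIterator($cacheDir) as $entry) {
    if ($entry->isDot() || !$entry->isDir()) {
        continue;
    }
    $bytes = 0;
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($entry->getPathname(), FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        $bytes += $file->getSize();
    }
    printf("%-20s %8.1f MiB\n", $entry->getFilename(), $bytes / 1048576);
}

Run it every few minutes and compare: whichever directory tracks the 5 MiB steps is the one to investigate (profiler data and session files, which Symfony stores under the cache dir by default, are common offenders).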

Related

My Apache2 server is slow to serve some CSS and JS files. Any idea why?

I'm not sure if this is a server or a programming problem. I searched for a similar problem but couldn't find anything like this.
I have a server running Debian Buster, serving sites on Apache2.
This week, one of my sites turned very slow, taking more than 25 seconds to render a page that usually took between 2 and 4 seconds.
At first I checked the PHP program, but it finishes processing everything in less than 1 second, occasionally 2.
As I have to place some menus depending on the size of the page, I build everything up in a PHP variable, then decide whether or not to add the extra menus.
In the end, I "echo" the variable to the browser.
That said, after a lot of checking, I found that:
when I open a page, PHP takes no time to produce the HTML, but after it is written out, the browser sits "waiting for www.mydomainname.tld" for 20+ seconds;
running top on the server, I see 2 or 3 Apache processes at 100% CPU during that time;
one of my CSS files was missing on the server - after restoring it, one of the 100% CPU Apache processes disappeared (it probably ran and closed);
another CSS file is present on the server, but with that file referenced in the HTML page, the same 100% CPU problem appears. If I remove the reference from the HTML, everything runs as quickly as expected and the browser renders in less than 4 seconds.
I know this is not easily reproducible, and for now I have disabled that second CSS file so my site is running, but it would be great if anyone could give me an idea of where to look for a solution.
I even checked the hard disks; the SMART flags seem OK. I haven't stopped the server to test the disks thoroughly yet.
I have a copy of this server in VirtualBox running the same system, and locally it is amazingly fast.
My worry is whether there is a problem in the server that needs maintenance.
Just to add: the server is an AMD octa-core with 32 GB of RAM and two 3 TB disks in a RAID1 set, so the server specs are not the culprit (I think).
I ran a badblocks check in the RAID, and found no errors.
Then I asked the support staff to replace the older of the two disks (going by manufacturing date), and surprisingly (since I had no proof of damage to that disk) they accepted.
After rebuilding the RAID set, the slowness disappeared somehow.
So, this is not a good answer, but may serve as a hint for anyone having this problem, too.

What could be preventing SQLite WAL files from being cleared?

I've had three cases of WAL files growing to massive sizes of ~3 GB on two different machines with the same hardware and software setup. The database file itself is ~7 GB. Under normal runtime, WAL is either non-existent or a few kilobytes.
I know it's normal for WAL to grow as long as there's constantly a connection open, but even closing the application doesn't get rid of the WAL file or the shared memory file. Even restarting the machine, without ever starting my application, and attempting to open the database with DB Browser for SQLite doesn't help. It takes ages to launch (I don't know how long exactly, but definitely over 5 minutes) and then WAL and shared memory files remain, even after closing the browser.
Bizarrely, here's what did help. I zipped up the three files to investigate locally, but then on a hunch I deleted the originals and extracted the files back again. Launching the app took seconds and the extra files were gone just like that. The same thing happened on my laptop with the unzipped files. I ran the integrity check on my machine and the database seems to be fine.
Here's the rough description of the app, if it means anything.
EF Core 2.1.14 (it uses SQLite 3.28.0)
.NET Framework 4.7.1
except for some rare short-lived overlap with read-only access, only one thread accesses the database
entire connection (DbContext) gets closed roughly every 2 seconds max
shared memory is used with PRAGMA mmap_size = 268435456 (256 MiB)
synchronous mode is NORMAL
the OS is Windows 7
Any ideas what might be causing this... quirk?
I temporarily switched from WAL to rollback-journal mode, which helpfully reported errors instead of failing silently.
The error I was getting was SQLITE_IOERR_WRITE. After more digging, it turned out the underlying Windows error causing all this was 665 - hitting an NTFS file system limitation. An answer here then led me to the actual cause: extreme file fragmentation of the database.
Copying a file reduces its fragmentation, which is why the bizarre zip-and-extract fix I mentioned temporarily worked. The actual fix was to schedule regular defragmentation of the database file with Sysinternals' Contig.
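If you want to poke at the same things by hand, here is a minimal sketch using PHP's PDO SQLite driver (any SQLite client would do; the database path is a placeholder). It asks SQLite to checkpoint and truncate the WAL, then switches to the rollback journal, as described above, so write errors surface:

<?php
$pdo = new PDO('sqlite:/path/to/app.db'); // placeholder path
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Flush all WAL frames into the main database file and truncate the
// WAL to zero bytes; returns (busy, wal_frames, checkpointed_frames).
list($busy, $walFrames, $done) = $pdo->query('PRAGMA wal_checkpoint(TRUNCATE)')->fetch(PDO::FETCH_NUM);
printf("busy=%d wal_frames=%d checkpointed=%d\n", $busy, $walFrames, $done);

// Temporarily leave WAL mode so I/O errors are reported, not swallowed.
echo 'journal_mode: ', $pdo->query('PRAGMA journal_mode = DELETE')->fetchColumn(), "\n";

If the checkpoint reports busy=1 or leaves frames behind, some other connection still holds the database open, which would also explain a WAL file that never shrinks.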

Symfony2 slow page loads despite quick initialization/query/render time?

I'm working on a Symfony2 project that is experiencing a slow load time for one particular page. The page in question does run a pretty large query that includes 16 joins, and I was expecting that to be the culprit. And maybe it is, and I am just struggling to interpret the figures in the debug toolbar properly. Here are the basic stats:
Peak Memory: 15.2 MB
Database Queries: 2
Initialization Time: 6ms
Render Time: 19ms
Query Time: 490.57 ms
TOTAL TIME: 21530 ms
I get the same basic results, more or less, in three different environments:
php 5.4.43 + Symfony 2.3
php 5.4.43 + Symfony 2.8
php 5.6.11 + Symfony 2.8
Given that the initialization + query + render time is nowhere near the TOTAL TIME figure, I'm wondering what else comes into play, and what other methods I could use to identify the bottleneck. Currently the query is set up to pull ->getQuery()->getResult(). From what I've read, this can carry huge overhead, since returning full result objects means each of the X objects needs to be hydrated (for the sake of context, we are talking about fewer than 50 top-level/parent objects here). Consequently, many folks suggest using ->getQuery()->getArrayResult() instead, which returns plain arrays rather than hydrated objects and can drastically reduce that overhead. This sounded reasonable enough to me, so, despite it requiring some template changes for the page to render the alternate result type, I gave it a shot. It did reduce the TOTAL TIME, but by a generally unnoticeable amount (from 21530 ms to 20670 ms).
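For reference, the two hydration modes differ only in the final call; a minimal sketch (the entity alias and join are placeholders - the real query has 16 joins):

<?php
// Hypothetical entity/association names - substitute your own.
$qb = $em->getRepository('AppBundle:Product')
    ->createQueryBuilder('p')
    ->leftJoin('p.children', 'c')->addSelect('c');

// Object hydration: every row graph is materialized as managed entities
// and registered with the UnitOfWork.
$objects = $qb->getQuery()->getResult();

// Array hydration: the same data as plain nested arrays, skipping entity
// instantiation and change-tracking bookkeeping.
$arrays = $qb->getQuery()->getArrayResult();

With fewer than 50 parent objects, the fact that switching modes only saved ~900 ms is itself a useful data point: it suggests hydration is not where the other ~20 seconds go.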
I have been playing with Docker as well, and decided to spin up a minimal Docker environment that uses the original getResult() query in Symfony 2.8 code running on PHP 7. This environment uses the internal PHP webserver, as opposed to Apache, and I am not sure whether that should/could have any effect. While the page load is still slow, it seems markedly improved on PHP 7. The other interesting part is that, while the TOTAL TIME was reduced a good deal, most of the other developer toolbar figures went up:
Peak Memory: 235.9 MB
Queries: 2
Initialization Time: 6 ms
Render Time: 53 ms
Query Time: 2015 ms
TOTAL TIME: 7584 ms
So, the page loads on PHP 7 in about 35% of the time it takes on PHP 5.4/5.6. This is good to know, and provides a compelling argument for upgrading. That said, I am still interested in figuring out the common factors that explain large discrepancies between TOTAL TIME and the sum of [initialization time + query time + render time]. I'm guessing these numbers shouldn't line up exactly, but I notice that, while still off, they are significantly closer in the PHP 7 Docker environment than in the PHP 5.4/5.6 environments.
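One way to localize the unaccounted time is to measure the whole request at the edges of the front controller, outside the profiler. A rough sketch for a stock Symfony 2 web/app.php (only the timing lines are additions):

<?php
$start = microtime(true);

require_once __DIR__.'/../app/bootstrap.php.cache';
require_once __DIR__.'/../app/AppKernel.php';

use Symfony\Component\HttpFoundation\Request;

$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();

$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);

// Compare this wall-clock total with the toolbar's numbers.
error_log(sprintf('full request: %.0f ms', (microtime(true) - $start) * 1000));

If the logged figure is close to the 21-second TOTAL TIME, the time is being spent inside PHP; if it is much lower, the wait is in front of or behind PHP (Apache configuration, hostname lookups, output delivery), which would fit the Apache observation in the EDIT below.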
For the sake of clarity, the docker container naturally spins up with a php.ini memory_limit setting of -1. The other environments were using 256M, and I even dialed that up to 1024M, but saw no noticeable change in performance, and the "Peak Memory" figure stayed low. I tried re-creating the Docker environment with 1024M and also did not notice a difference there.
Thanks in advance for any advice.
EDIT
I've tested loading the page in the PHP 5.6 / Symfony 2.8 environment via PHP's internal webserver, and it loads in about half the time. Still not as good as PHP 7 + the internal server, but at least it gives me a solid lead that something about my Apache setup is significantly involved (though not necessarily the sole culprit). Any/all advice/suggestions welcome!

Fatal memory error in Sylius production environment

Trying to access my Sylius site in the production environment, but I'm running into a memory error:
FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in directoryname/app/bootstrap.php.cache on line 2798"
I found I can bypass the problem by booting with AppKernel('prod', true), but I read that this is bad practice for some reason.
Anyone run into a similar problem?
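(For context: the second argument to AppKernel in a standard Symfony 2 front controller is the debug flag. A minimal sketch of the normal production boot in web/app.php:

<?php
// Normal production boot: environment 'prod', debug disabled.
$kernel = new AppKernel('prod', false);
// The workaround above flips the second argument to true.

Running prod with debug on is considered bad practice because debug mode keeps extra profiling state, checks for configuration changes on every request, and can leak error detail to visitors.)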
There can be multiple reasons; 128 MB of RAM is not that hard to exhaust.
1. Cache is not warmed up
If your cache is not warmed up and the app needs to generate all the files, it can eat up a lot of RAM on the first request.
Most of the time this shouldn't happen in production mode if you run composer install, or use a deploy mechanism (Capistrano, Deployer) that calls php app/console cache:clear during deployment, as this also primes the cache.
Try running composer install manually on the server, as this should generate the cache after building the autoload files.
2. GD image resize on images with big dimensions (not file size)
If you use image resizing with GD, it will always decompress the image. That means a 3000x3000 JPEG that is only 1 KB on disk (say every pixel is #fff) will still need around 35-54 MB of RAM once decoded:
(3000 x 3000 x 4 (rgba)) / 1024^2 = 34.3 MB
Add to this some processing overhead, and it can easily go over 128 MB. Also, people tend to upload images with dimensions up to 10k pixels these days.
Solution: switch to imagick/gmagick, or increase RAM enormously. A quick way to estimate the cost up front is sketched below.
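You can sanity-check an upload before handing it to GD by reading only the image header; a hedged sketch (the 4-bytes-per-pixel figure and the safety factor are approximations, and the path is hypothetical):

<?php
// Estimate the RAM GD will need to decode an image, without decoding it.
// getimagesize() reads only the header, so it is cheap even for huge files.
function estimatedDecodedBytes($path, $fudge = 1.7)
{
    list($width, $height) = getimagesize($path);
    return (int) ($width * $height * 4 * $fudge); // ~4 bytes/pixel (RGBA)
}

$needed = estimatedDecodedBytes('/tmp/upload.jpg'); // hypothetical path
if ($needed > 64 * 1024 * 1024) {
    // Too big to resize inline - reject it, or queue it for an
    // imagick/gmagick worker with a higher memory ceiling.
    throw new RuntimeException(sprintf('image needs ~%.0f MB to decode', $needed / 1048576));
}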
3. Your app logic really uses that much RAM
Profile your code and see where the memory goes.
Check that you aren't loading or building thousands of objects somewhere.
Or simply raise memory_limit to 256M / 512M and see if it helps.
4. You have a hidden infinite loop caused by
old session data - clear the sessions
a configuration difference between your prod and dev machines (or config files)

PgAdmin III not responding when trying to restore a database

I'm trying to restore a database from a backup file in pgAdmin III (PostgreSQL 9.1).
After choosing the backup file, a window indicates that pg_restore.exe is running, and then pgAdmin stops responding; it has been a few hours now (it is not a RAM shortage issue).
It might be due to the backup file size (500 MB), but I already restored a database from a 300 MB backup file a few days ago, and that went smoothly.
By the way, the backup file (created via pg_dump) is in the "tar" format.
Please let me know if anything comes to mind or if you need any more information. I appreciate any help or pointers anyone has. Thanks in advance.
I had the same problem and solved it by following a tutorial.
My backup file had been generated at a size of 78 MB; I generated it again using:
Format: Custom
Ratio:
Encoding: UTF8
Role name: postgres
I tried the restore again, and then it worked fine.
On OS X I had similar issues. After selecting the .backup file and clicking restore in pgAdmin, I got the spinning beachball. Everything, however, was working in the background; the UI was just hanging.
One way to check that everything is still working is to look at Activity Monitor on OS X or Resource Monitor on Windows. In the 'Disk' tab you should see activity from postgres, with the value in the 'Bytes Read' column slowly going up. I was restoring a ~15 GB backup and watched that counter climb; when it reached 15 GB, the restore finished without errors and the pgAdmin UI became responsive again.
(BTW, my 15 GB restore took about 1.5 hours on a Mac with an i7 and 8 GB of RAM.)
