Symfony2 slow page loads despite quick initialization/query/render time?

I'm working on a Symfony2 project that is experiencing a slow load time for one particular page. The page in question does run a pretty large query that includes 16 joins, and I was expecting that to be the culprit. And maybe it is, and I am just struggling to interpret the figures in the debug toolbar properly. Here are the basic stats:
Peak Memory: 15.2 MB
Database Queries: 2
Initialization Time: 6ms
Render Time: 19ms
Query Time: 490.57 ms
TOTAL TIME: 21530 ms
I get the same basic results, more or less, in three different environments:
php 5.4.43 + Symfony 2.3
php 5.4.43 + Symfony 2.8
php 5.6.11 + Symfony 2.8
Given that initialization + query + render time is nowhere near the TOTAL TIME figure, I'm wondering what else comes into play, and what other methods I could use to identify the bottleneck. Currently, the query is set up to pull ->getQuery()->getResult(). From what I've read, this can carry huge overhead, since returning full result objects means that each returned object (and its joined relations) has to be hydrated. (For context, we are talking about fewer than 50 top-level/parent objects in this case.) Consequently, many folks suggest using ->getQuery()->getArrayResult() instead, which returns plain arrays rather than hydrated objects and drastically reduces the overhead. This sounded reasonable enough to me, so, despite it requiring some template changes for the page to render the alternate result type, I gave it a shot. It did reduce the TOTAL TIME, but by a generally unnoticeable amount (from 21530 ms to 20670 ms).
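For reference, the two hydration modes differ only in the final call. A sketch, assuming $em is the Doctrine EntityManager; the entity and association names are placeholders:

    // Build the query once; the real one has 16 joins.
    // 'AppBundle:Parent' and 'p.children' are placeholder names.
    $qb = $em->getRepository('AppBundle:Parent')->createQueryBuilder('p')
        ->leftJoin('p.children', 'c')->addSelect('c');

    // Full object hydration: every row becomes a managed entity graph.
    $objects = $qb->getQuery()->getResult();

    // Array hydration: the same data as nested PHP arrays, much cheaper to build.
    $arrays = $qb->getQuery()->getArrayResult();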
I have been playing with Docker as well, and decided to spin up a minimal Docker environment that uses the original getResult() query in Symfony 2.8 code running on php 7. This environment uses the internal php webserver, as opposed to Apache, and I am not sure whether that should/could have any effect. While the page load is still slow, it seems markedly improved on php 7. The other interesting part is that, while the TOTAL TIME was reduced a good deal, most of the other developer toolbar figures went up:
Peak Memory: 235.9 MB
Queries: 2
Initialization Time: 6 ms
Render Time: 53 ms
Query Time: 2015 ms
TOTAL TIME: 7584 ms
So, the page loads on php 7 in about 35% of the time it takes on php 5.4/5.6. This is good to know, and provides a compelling argument for upgrading. That said, I am still interested in what common factors explain large discrepancies between TOTAL TIME and the sum of [initialization time + query time + render time]. I'm guessing I shouldn't expect these numbers to line up exactly, but I notice that, while still off, they are significantly closer in the php 7 Docker environment than in the php 5.4/5.6 environments.
For the sake of clarity, the Docker container naturally spins up with a php.ini memory_limit setting of -1. The other environments were using 256M, and I even dialed that up to 1024M, but saw no noticeable change in performance, and the "Peak Memory" figure stayed low. I tried re-creating the Docker environment with 1024M and did not notice a difference there either.
Thanks in advance for any advice.
EDIT
I've tested loading the page via the php 5.6 / Symfony 2.8 environment using php's internal webserver, and it loads in about half the time. Still not as fast as php 7 + the internal server, but it at least gives me a solid lead that something about my Apache setup is significantly involved (though not necessarily the sole culprit). Any/all advice/suggestions welcome!
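For anyone reproducing the comparison: Symfony 2.8 ships a console wrapper around php's built-in server; the port and docroot shown below are the usual defaults, not something confirmed from this setup:

    php app/console server:run
    # roughly equivalent to:
    php -S 127.0.0.1:8000 -t web/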

Related

My Apache2 server slow to serve some css and js files. Any idea why?

I'm not sure if this is a server or a programming problem. I searched for a similar problem, but couldn't find anything like this.
I have a server running Debian Buster, serving sites on Apache2.
This week, one of my sites became very slow, taking more than 25 seconds to render a page that usually took between 2 and 4 seconds.
At first, I checked the PHP program, but it finishes processing everything in less than 1 second, occasionally 2 seconds.
As I have to place some menus depending on the size of the page, I save everything in a PHP variable, then decide whether or not to add the extra menus.
In the end, I "echo" the variable to the browser.
That said, after checking a lot, I found that:
when I open a page, it takes no time to process the html in PHP, and after writing it to the browser, the browser starts "waiting for www.mydomainname.tld" for 20+ seconds.
by running top, I see 2 or 3 Apache processes running at 100% CPU on the server during that time.
One of my CSS files was missing from the server. After restoring it, one of the Apache processes at 100% disappeared (it probably ran and closed);
Another CSS file is on the server, but with that file referenced in the html page, the same (100% CPU) problem appears. If I disable it in the html, everything runs as quickly as expected, rendering in the browser in less than 4 seconds.
I know this is not easily reproducible, and for now I have disabled that second CSS file so my site keeps running, but it would be great if anyone could give me an idea of where I should look for a solution.
I even checked the hard disks; the SMART attributes seem OK. I haven't stopped the server to check the disks yet.
I have a copy of this server in a VirtualBox VM running the same system, and locally it is amazingly fast.
My worry is whether there is a problem with my server that I should get maintenance for.
Just to add: the server is an AMD octa-core with 32 GB of RAM and 2 x 3 TB disks in a RAID1 set, so the server specs aren't the culprit (I think).
I ran a badblocks check in the RAID, and found no errors.
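(The check was along these lines; the md and disk device names are assumptions:)

    badblocks -sv /dev/md0    # read-only surface scan; -s shows progress, -v reports errors
    cat /proc/mdstat          # RAID array health overview
    smartctl -a /dev/sda      # full SMART report for a member disk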
Then I asked the support staff to replace the older of the two disks (by manufacturing date), and surprisingly (since I had no proof of damage to that disk) they accepted.
After rebuilding the RAID set, the slowness disappeared somehow.
So, this is not a good answer, but may serve as a hint for anyone having this problem, too.

Tar Compaction activity on Adobe AEM repository is stuck

I am trying to perform a Revision Cleanup activity on an AEM repository to reduce its size via Tar Compaction. The repository size is 730 GB and the Adobe version is 6.1, which is old. The estimated time for the activity to complete was 7-8 hours. We ran it for 24 hours straight, but it is still running with no sign of progress. We have also tried running all the commands meant to speed up the process, but it is still taking too long.
Kindly suggest an alternative to reduce the size of the repository.
Adobe does not provide support to older versions, so we cannot raise a ticket.
Check the memory assigned to your machine, meaning the RAM given to the JVM. If you increase it, the compaction may take less time and actually finish.
The repository size is not big at all; mine is more than 1 TB and compaction works.
In order to clean your repo, you can also run the garbage collector directly from AEM via the JMX console.
The only ways to reduce the data storage are to compact the repository, or to delete content such as big assets or big packages. Run some queries to see which assets/packages are huge, and delete them. (A sketch of the offline compaction commands follows below.)
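For TarMK offline compaction (with AEM stopped first), the usual pattern looks like this; the oak-run version must match the Oak version of your instance, and the heap size and paths here are assumptions:

    java -Xmx8g -jar oak-run.jar checkpoints crx-quickstart/repository/segmentstore
    java -Xmx8g -jar oak-run.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
    java -Xmx8g -jar oak-run.jar compact crx-quickstart/repository/segmentstore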
Hope you can fix your issue.
Regards,

How to track a process's time and disk usage in Unix?

I'm trying to compile and run simulations using the tcsh shell on Unix. How can I track when the compilation started and stopped, and how much disk it used?
Using time you can track the internal CPU time spent on a process. You also can view what top (or similar tools like ps) displays for your process. Typically the CPU minutes spent on that process are shown. You can also use date before and after the process, maybe with option +%s to show the date as seconds since 1970-01-01T00:00:00Z to allow easy arithmetic with it (difference). Keep in mind that CPU time can be larger (more than one CPU used) or smaller (CPU also working on other tasks) than real time. Using date will always show real time. time will try to show both.
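(In tcsh, which the question mentions, the date arithmetic looks like this; the compile command is a placeholder:)

    set start = `date +%s`
    make simulation                  # placeholder for the real compile/run command
    set end = `date +%s`
    @ elapsed = $end - $start
    echo "elapsed: $elapsed seconds"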
Disk usage, however, is more complex. By design, files are not dedicated to a specific process. You can, however, run df before and after the process and compare the two values. You cannot be sure that the difference was created by the process, but if you do several runs, that might help. Also you can use du to find out how much storage is used under a particular path. This only works if you happen to know (or have a good guess) where the process stores its results in files. In your case, this might be the best way to go, as you are talking about a compilation job.
You can also have a look at the /proc/<PID>/fd/ directory to see the currently held open file descriptors of a running process.
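(Putting the disk-usage pieces together, with placeholder paths and PID:)

    df -k .                  # free space before the run
    make simulation          # placeholder for the real job
    df -k .                  # free space after; compare the two values
    du -sk build/            # total size under the directory the job writes to
    ls -l /proc/12345/fd     # open file descriptors of a running process (PID 12345)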

Fatal memory error in Sylius production environment

Trying to access my Sylius site in the production environment, but I'm running into a memory error:
FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 72 bytes) in directoryname/app/bootstrap.php.cache on line 2798"
I found I can bypass the problem by instantiating AppKernel('prod', true) (i.e. enabling debug mode in production), but I read that this is bad practice for some reason.
Anyone run into a similar problem?
There can be multiple reasons, as 128 MB of RAM is not that hard to exhaust.
1. Cache is not warmed up
If your cache is not warmed up and the app needs to generate all the files, it can eat up a lot of RAM on the first request.
Most of the time this shouldn't be the case in production mode if you run composer install, or use some deploy mechanism (Capistrano, Deployer) that calls php app/console cache:clear during deployment, as this will also prime the cache.
Try running composer install manually on the server, as this should generate the cache after building the autoload files. (The console commands below do the same thing directly.)
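A manual warm-up, using the standard Symfony 2 console commands:

    php app/console cache:clear --env=prod --no-debug
    php app/console cache:warmup --env=prod --no-debug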
2. GD image resize on images with big dimensions (not size)
If you use image resizing with GD, it will always decompress the image. That means a 3000x3000 jpeg with a file size of 1 KB, where every single pixel is #fff, will still need around 35-54 MB of RAM:
(3000 x 3000 x 4 bytes (RGBA)) / 1024^2 = 34.3 MB
Add to this some requirements for processing, and it can easily reach over 128MB. Also people tend to upload images with dimensions up to 10k these days.
Solution: switch to Imagick/Gmagick, or increase RAM enormously.
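You can also estimate the decode cost before loading the image; getimagesize() reads only the header, so the check itself is cheap. A sketch using the same 4-bytes-per-pixel figure as above; the 64 MB threshold is an arbitrary example:

    // Estimate the RAM GD would need to decode the image, without decoding it.
    list($width, $height) = getimagesize($path);
    $estimatedBytes = $width * $height * 4;   // 4 bytes per pixel (RGBA)
    if ($estimatedBytes > 64 * 1024 * 1024) { // arbitrary cutoff
        // refuse the upload, or hand it to Imagick instead of GD
    }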
3. Your app logic really uses up that much RAM
Profile your code and see how much memory is used (a quick sketch follows below)
Check that you don't load or build thousands of objects somewhere
Simply increase the memory limit to 256M / 512M and see if it helps
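A crude way to bracket a suspect section with PHP's built-in counters:

    // Log how much memory a suspect block of code consumes.
    $before = memory_get_usage(true);
    // ... suspect code here ...
    $after = memory_get_usage(true);
    error_log(sprintf('block used %.1f MB, peak %.1f MB',
        ($after - $before) / 1048576,
        memory_get_peak_usage(true) / 1048576));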
4. You have a hidden infinite loop because of
old session data - clear the sessions
a configuration difference between your prod and dev machines (or config files)

What causes high-latency page loads after a build?

As we try to make more frequent builds (and as our site traffic has increased), we've noticed that during higher traffic, the initial post-build page loads spike to 20 or 30 seconds. Our builds are published by copying DLLs and PDBs to a single server, which is synced to one other server. That file copy generally takes a few seconds.
What are some contributing factors to this sort of initial latency spike? Are there any commonly taken steps to avoid this problem? (I can't imagine that high-traffic sites performing multiple builds a day, if not multiple builds an hour, tolerate this sort of thing.)
The main cause of this delay is ASP.NET compiling the pages during first load, transforming the aspx markup into code.
You can solve this by doing a pre-compile during your build (this is actually listed as the first advantage at the link below). Of course, the trade-off is longer build times. More information is here: http://msdn.microsoft.com/en-us/library/bb398860(v=vs.100).aspx
If you're using MSBuild to handle your CI builds, you can do this with the AspNetCompiler task in MSBuild: http://msdn.microsoft.com/en-us/library/ms164291.aspx
Another advantage of this (and why I tend to use it even in development builds): if you integrate it into your build process and you end up with syntax errors on a page, the build will fail, instead of your users being the first to catch them.
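A minimal sketch of the AspNetCompiler task wired into a build; the target name, paths, and project layout are assumptions:

    <!-- Precompile the site as part of the build; page syntax errors now fail the build. -->
    <Target Name="PrecompileWeb" AfterTargets="Build">
      <AspNetCompiler
          VirtualPath="/"
          PhysicalPath="$(MSBuildProjectDirectory)\WebSite"
          TargetPath="$(MSBuildProjectDirectory)\Precompiled"
          Force="true" />
    </Target>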
In response to your comment (my response was getting too long for a comment):
Actually, I wasn't even aware of the batch settings myself until now. It looks like setting batch to false makes sense during development to reduce initial load times there. But it seems that doing so makes ASP.NET work in assembly-per-page mode, which can slow things down on larger apps. So probably the best compromise is: in development environments, set batch to false to speed up development, then use web.config transforms to set it back to true for production, and use the pre-compiler during the production build. Then you'll only pay the pre-compilation cost once for both servers, and in a way that's not visible to users.
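The setting in question lives in web.config; a sketch of the development-side value, which a transform would flip back to true for production:

    <!-- development: compile each page on demand instead of batching -->
    <system.web>
      <compilation batch="false" />
    </system.web>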
