Symfony 4 rebuilds the whole cache after a translation change

I'm using Symfony 4 with the translations feature in an open-source project: https://gitlab.com/Seballot/gogocarto
In the development environment, every time I change one of my translation files (or actually just press Ctrl+S without changing anything), the next request takes quite a long time to load. It seems Symfony is rebuilding the whole cache. I say the whole cache because if I do another operation that touches the container, like adding a new service to a controller using autowiring (which I assume reloads the cache), I get the same performance issue with the same metrics.
From the Symfony toolbar, here is what I get for a specific page on the first load after a translation change (or an autowiring change):
Total time: 14992 ms
Initialization time: 14316 ms
Peak memory usage: 45.0 MiB
Cache calls: 4055
Total time: 195.93 ms
Cache hits: 4054 / 4055 (99.98%)
Cache writes: 0
And when reloading the page:
Total time: 713 ms
Initialization time: 133 ms
Peak memory usage: 4.5 MiB
Cache calls: 2325
Total time: 115.04 ms
Cache hits: 2324 / 2325 (99.96%)
Cache writes: 0
Does my idea that a translation change makes Symfony recalculate everything make sense? I cannot understand how re-reading ten small translation YAML files could take so much time!
Any help will be welcome, thanks a lot :)
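For reference, a rough way to reproduce the slow path outside the browser (a sketch: the file name is just an example from my app, and cache:warmup is the standard Symfony 4 console command):

touch translations/messages.en.yaml   # any save, even with no change
time php bin/console cache:warmup --env=dev

If the warmup takes about as long as the slow request, the time is going into rebuilding the cache rather than into handling the request itself.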

Related

Azure Cosmos DB Emulator slow (100 ms / request)

I am trying to set up the Azure Cosmos DB Emulator to work locally with integration tests but I found that it is very slow.
I am reading a ~1 KB JSON document with the container.ReadItemAsync<T> method and awaiting the result. I call this method in a loop, 100 times.
The execution time is consistently around 9.5-10 seconds, so one request takes around 100 milliseconds which is very slow compared to the fact that this service is running locally.
Why is this so slow and how can I make it faster?
I expect at most 1 ms / request considering it is all disk I/O.
I tried the following, but none of it helped:
turning Rate Limiting on/off
creating the database/collection with various provisioning settings; it had zero effect on performance (even 100k RU)
creating the db and collection manually vs. with the client SDK
the "Reset Data" option in the emulator tray menu
Further information:
The emulator version is 2.14.6.0 (68d4ca59)
I start the emulator from the start menu, but starting it from the command line doesn't change anything
I am using the Microsoft.Azure.Cosmos nuget package, version 3.22.1
my CPU is i7-8565U, but it isn't even fully used while the test is running
my system has 16 GB RAM
my system is running on a fast enough SSD: "NVMe SK hynix BC501 H", but while running the test the SSD usage is between 0 and 2%.
the performance is the same if I increase the document size to 100 KB or even 1 MB.
Creating your CosmosClientOptions with the AllowBulkExecution = true setting can cause this.
the SDK will construct batches and group operations; when the batch is full, it will get dispatched, but if the batch doesn't fill up, there is a timer that will dispatch it to make sure the operations complete. This timer is currently 100 milliseconds. So if the batch does not get filled up (for example, you are just sending 50 concurrent operations), the overall latency might be affected.
Source: Introducing Bulk support in the .NET SDK
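A minimal sketch of the fix, assuming the Microsoft.Azure.Cosmos 3.x API (the endpoint, database/container names, and ids are placeholders):

using Microsoft.Azure.Cosmos;

// Leave AllowBulkExecution at its default (false) for latency-sensitive
// point reads, so each ReadItemAsync is dispatched immediately instead
// of waiting on the ~100 ms bulk batch timer.
var client = new CosmosClient(
    "AccountEndpoint=https://localhost:8081/;AccountKey=...",
    new CosmosClientOptions { AllowBulkExecution = false });

Container container = client.GetContainer("testdb", "items");
ItemResponse<dynamic> response =
    await container.ReadItemAsync<dynamic>("id-1", new PartitionKey("pk-1"));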

Why is Symfony3 so slow?

I installed the Symfony3 framework-standard-edition. I'm trying to open the home page (app.php, prod environment) and it takes 300-400 ms to load.
This is my profiler information (profiler screenshot not preserved).
I also use PHP 7.
Why does it take so long?
You can try to optimize Zend OPcache.
Here are some recommended settings:
opcache.revalidate_freq
Simply put: how often (in seconds) should the code cache expire and check whether your code has changed. 0 means it checks your PHP code on every single request (which adds lots of stat syscalls). Set it to 0 in your development environment. In production it doesn't matter, because of the next setting.
opcache.validate_timestamps
When this is enabled, PHP will check file timestamps per your opcache.revalidate_freq value.
When it's disabled, opcache.revalidate_freq is ignored and PHP files are NEVER checked for updated code. So if you modify your code, the changes won't actually run until you restart or reload PHP (you can force a reload with kill -SIGUSR2).
Yes, this is a pain in the ass, but you should use it. Why? While you're updating or deploying code, new code files can get mixed with old ones, and the results are unknown. It's unsafe as hell.
opcache.max_accelerated_files
Controls how many PHP files, at most, can be held in memory at once. It's important that your project has FEWER files than whatever you set this to. For a codebase of ~6000 files, I use the prime number 8000 for max_accelerated_files.
You can run find . -type f -name '*.php' | wc -l to quickly count the PHP files in your codebase.
opcache.memory_consumption
The default is 64 MB. You can use the function opcache_get_status() to tell how much memory OPcache is consuming and whether you need to increase the amount.
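For example (a sketch; run it through the web SAPI, since the CLI has its own separate cache, and the array keys are the ones opcache_get_status() returns):

<?php
$status = opcache_get_status(false); // false = omit per-script details
$mem = $status['memory_usage'];
printf(
    "used: %.1f MB, free: %.1f MB, wasted: %.1f MB (%.1f%% wasted)\n",
    $mem['used_memory'] / 1048576,
    $mem['free_memory'] / 1048576,
    $mem['wasted_memory'] / 1048576,
    $mem['current_wasted_percentage']
);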
opcache.interned_strings_buffer
A pretty neat setting with basically zero documentation. PHP uses a technique called string interning to improve performance: for example, if you have the string "foobar" 1000 times in your code, internally PHP will store one immutable copy of the string and just use a pointer to it for the other 999 uses. Cool.
This setting takes it to the next level: instead of having a pool of these immutable strings for each SINGLE php-fpm process, this setting shares the pool across ALL of your php-fpm processes. It saves memory and improves performance, especially in big applications.
The value is set in megabytes, so set it to "16" for 16 MB. The default is a low 4 MB.
opcache.fast_shutdown
Another interesting setting with no useful documentation. "Allows for faster shutdown".
Oh okay, like that helps me. What this actually does is provide a faster mechanism for calling the destructors in your code at the end of a single request, to speed up the response and recycle PHP workers so they're ready for the next incoming request faster.
Set it to 1 and turn it on.
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=8000
opcache.validate_timestamps=0
opcache.revalidate_freq=0
opcache.fast_shutdown=1
I hope this helps improve your performance.
[EDIT]
You might also want to look at this answer:
Are Doctrine relations affecting application performance?
TheMrbikus, try some optimization with the following elements:
Use APC
Use Bootstrap files
Reference: http://symfony.com/doc/current/performance.html
Use OPcache (PHP 7)
Use Apache with PHP-FPM.
Check the e-mail sending process; it may slow things down during form rendering operations. Create a blank test controller to compare against.

Symfony2 - Random Failed to start the session

We have a Symfony2 site with some traffic.
Every day the site begins to fail with this error for 1 or 2 minutes (15-20 errors). This occurs at random hours; we could not find a pattern. It does not even line up with peak hours.
2015-10-09 02:23:57.635 [2015-10-09 06:23:38] request.CRITICAL: Uncaught PHP Exception RuntimeException: "Failed to start the session" at /var/www/thing.com/httpdocs/app/cache/prod/classes.php line 121 {"exception":"[object] (RuntimeException(code: 0): Failed to start the session at /var/www/thing.com/httpdocs/app/cache/prod/classes.php:121)"} []
Doesn't seem to be a double header problem or double start problem.
Site does not interact with any PHP legacy code that could be messing with the sessions.
Sessions are stored in the database, so a file problem is ruled out.
We lowered the session duration so the session table does not get too big; the problem persists.
We think it could be a problem with HWIOAuthBundle and its Facebook login, but we cannot find where the conflict is.
The site also uses a lot of render_esi for caching with Symfony2's internal cache system.
Update -------------------------------------------------
Emptied the /var/lib/php/sessions folder of old session files that were no longer being used.
Lowered the session lifespan; SQL entries in the sessions table went from ~3 million to ~1.3 million.
Seems that the problem is gone but this is not a real solution.
My guess is that the pdo_handler in Symfony2 has a performance problem.
Maybe someone with more knowledge in this area (pdo_handler, table optimization) can point to a real solution for high traffic.
Where does your PHP installation save sessions to?
[You can find this in your php.ini file in the session.save_path setting, assuming you have CLI access]
It is very likely PHP uses your server's /tmp folder. If this folder is full at any point, then PHP can't create new sessions.
You can see the current size of your /tmp folder with:
du -ch /tmp/ | grep total
If, as is common, the /tmp folder is on its own partition, you can see its maximum size with:
df -h
Some programs can suddenly guzzle gigabytes of this folder for their own purposes.
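To find the configured path without hunting through php.ini (a sketch, assuming CLI PHP reads the same configuration as the web SAPI):

# Print the session save path (empty means the compiled-in default, usually /tmp)
php -r 'echo ini_get("session.save_path") ?: "/tmp", PHP_EOL;'
# Check how much space that path is currently using
du -sh "$(php -r 'echo ini_get("session.save_path") ?: "/tmp";')"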

ASP.NET MVC: why does my app keep restarting?

I have an ASP.NET MVC website that gets about 6500 hits a day, on a shared hosting platform at Server Intellect. I keep seeing app restarts in the logs and I cannot figure out why.
I've read Scott Gu's article here: http://weblogs.asp.net/scottgu/archive/2005/12/14/433194.aspx
and implemented the technique, and here's what shows up in my log:
Application Shutdown:
_shutDownMessage=HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown
_shutDownStack=
at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
at System.Environment.get_StackTrace()
at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
at System.Web.Hosting.PipelineRuntime.StopProcessing()
It seems to occur about every five minutes.
Are there any other ways to debug this?
UPDATE: Here are the application pool settings mentioned by Softion:
CPU
Limit : 0
Limit Action : no action
Limit Interval : 5 Minutes
Process Model
Idle Timeout : 20 Minutes
Ping Maximum Response Time : 90 Seconds
Startup Time Limit : 90 Seconds
Rapid-Fail Protection
Enabled : True
Failure Interval : 5 Minutes
Recycling
Private Memory Limit : 100 MB
Regular Time Interval : 1740 Minutes (29 Hours)
Request Limit : 0
Specific Times : none
Virtual Memory Limit : 0
You can easily grab the reason for the shutdown from HostingEnvironment.
You read Scott Gu's article, but you missed its comments.
var shutdownReason = HostingEnvironment.ShutdownReason;
If the reason is HostingEnvironment, check the IIS application pool parameters that control recycling (the ones listed in the update below). Check the description in the bottom help box of your own IIS console for full info.
You can ask your provider to give you the applicationHost.config file where all these parameters are set; they will find it in C:\Windows\System32\inetsrv\config. I'm sure you can also read them using some .NET API.
For 6500 hits a day, which is a very low hit rate, I'm betting the "Idle Timeout" is set to 5 minutes.
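A minimal sketch of wiring that property up so every shutdown gets recorded (Global.asax.cs; the trace destination is illustrative):

using System;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_End(object sender, EventArgs e)
    {
        // ShutdownReason reports why ASP.NET is tearing the app domain
        // down (IdleTimeout, ConfigurationChange, HostingEnvironment, ...).
        ApplicationShutdownReason reason = HostingEnvironment.ShutdownReason;
        System.Diagnostics.Trace.WriteLine(
            DateTime.UtcNow.ToString("o") + " shutdown: " + reason);
    }
}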
Update (moved comments to here //jgauffin)
CPU Limit 0 = disabled.
Process Model Idle Timeout: 20 minutes (20 minutes without a request recycles your app).
Rapid-Fail Protection enabled (5 minutes). You need to know the maximum failure count. If your app throws more than that many exceptions in 5 minutes, it will be recycled.
Private Memory Limit: 100 MB. Yes, you should profile; this is a low limit.
Regular Time Interval: 1740 minutes (29 hours): it will recycle every 29 hours.
Request Limit: 0 (disabled).
Virtual Memory Limit: 0 (disabled).
Rapid-Fail Protection enabled (5 minutes). You need the maximum failure count. If your app throws more than that many exceptions in 5 minutes, it recycles, so if it recycles every 5 minutes this is the first thing to check. There should be zero unhandled exceptions in secondary worker threads; wrap your code in a try/catch there, as in the sketch below.
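A minimal sketch of that advice (the helper and the work delegate are hypothetical):

using System;
using System.Threading;

static class SafeJobs
{
    // Queue background work behind a catch-all handler: since .NET 2.0,
    // an unhandled exception on any worker thread kills the whole
    // process, which IIS counts toward rapid-fail protection.
    public static void Queue(Action work)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { work(); }
            catch (Exception ex)
            {
                System.Diagnostics.Trace.WriteLine("background job failed: " + ex);
            }
        });
    }
}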
Re update:
The settings you got from the provider help, but it is much better to ask them for information on the reason for the restarts, i.e. the actual log entries, as I mentioned in my original answer. From those you can know specifically what was triggered; I've seen a single app hit several different limits.
You really have to profile your application with a realistic amount of test data.
My money is on hitting resource limits set by your hosting provider.
Before going crazy with optimization without a target, contact your provider and ask them to give you information on the restarts.
Typical recycles:
idle x amount of time / like 15 mins
more than x amount of memory / like 200 MB
more than x % processor over y time / like 70 over 1 minute
a daily recycle
Once you know the case, you have to find out what's taking those resources. For this you have to profile your application with a realistic amount of test data. Knowing if it is memory or processor can help on knowing what to look for.
Is IIS set to recycle the app pool frequently?
Is there some kind of runaway memory leak in the app pool?
It requires a bit of know-how about what your app does. Here's a list of things that can cause the app to restart, reset, or even shut down:
StackOverflowException
OutOfMemoryException
Any unhandled exception that crashes a thread
CodeContracts use Environment.FailFast when a contract violation occurs
Exceptions are quite easy to track. If you can reproduce the issue with a debugger attached, you can go into Visual Studio and enable breaking on all exceptions when they are thrown, not just those unhandled by user code. This will sometimes reveal interesting stuff that is otherwise hidden away.

Optimize APC Caching

Here is a link to how my APC is running: [removed]
As you can see, it fills up pretty quickly, and my Cache Full Count sometimes goes over 1000.
My website uses Wordpress.
I notice that every time I make a new post or edit a post, 2 things happen.
1) APC Memory "USED" resets
2) I get a whole lot of Fragments
I've tried giving more memory to APC (512 MB), but then it sometimes crashes; 384 MB seems best. I also have a cron job that restarts Apache every 4 hours, clearing APC of fragments and used memory. Again, my Apache crashes if APC runs for a long period of time, I think due to the fragment buildup.
Should I use apc.filters to exclude some stuff that should not be cached?
I am a real beginner at this sort of thing, so if someone can explain with full instructions, thank you very much!
I work as a Linux systems admin; my WordPress server runs 5 different WordPress installs. If you are running just one, the comments below note the configuration values to consider.
APC / PHP versions: 3.1.9 / 5.3.7
Here is my complete apc.conf:
apc.enabled=1
apc.shm_segments=1
; I would try 32M per WP install, go from there
apc.shm_size=128M
; Relative to approx cached PHP files,
apc.num_files_hint=512
; Relative to approx WP size W/ APC Object Cache Backend,
apc.user_entries_hint=4096
apc.ttl=7200
apc.use_request_time=1
apc.user_ttl=7200
apc.gc_ttl=3600
apc.cache_by_default=1
apc.filters
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.file_update_protection=2
apc.enable_cli=0
apc.max_file_size=2M
;Use this when you are finished making PHP file changes,
;as you must clear the APC cache to recompile already cached files.
;If you are still developing, set this to 1.
apc.stat=0
apc.stat_ctime=0
apc.canonicalize=1
apc.write_lock=1
apc.report_autofilter=0
apc.rfc1867=0
apc.rfc1867_prefix=upload_
apc.rfc1867_name=APC_UPLOAD_PROGRESS
apc.rfc1867_freq=0
apc.rfc1867_ttl=3600
;This MUST be 0, WP can have errors otherwise!
apc.include_once_override=0
apc.lazy_classes=0
apc.lazy_functions=0
apc.coredump_unmap=0
apc.file_md5=0
apc.preload_path
@Chris_O, your configuration is not optimal in a few respects.
1. apc.shm_segments=3
If you run a modern Linux distro, your SHM should be sufficiently large.
If it is too small, look up how to set the relevant sysctl.conf entries. You can check like this:
# Check max segment size
cat /proc/sys/kernel/shmmax
Exceptions: certain BSDs, other Unixes, or managed hosts you don't control. There are disadvantages to not having a contiguous segment; read the APC details for that info.
2. apc.enable_cli=1
BAD BAD BAD, this is for debug only! Every time you run php-cli, it clears the APC cache.
3. apc.max_file_size=10M
Unnecessary and ridiculous! If you had a file that big, it would eat a third of that small 32M SHM. Even though you specify 3 segments, they don't act like one big segment in three pieces. Regardless, WP doesn't have a single PHP file anywhere near that size.
'hope I helped people with their apc.conf.
The APC TTL should take care of fragment build-up. I usually set it to 7200. I am running it on a small VPS with WordPress, and my settings are:
apc.enabled=1
apc.shm_segments=3
apc.shm_size=32
apc.ttl=7200
apc.user_ttl=7200
apc.num_files_hint=2048
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1
apc.max_file_size=10M
You will also get a lot more benefit from it by using WordPress's built-in object cache; Mark Jaquith wrote a really good drop-in plugin that should also help with some of your fragmentation issues when saving or editing a post. A sketch of the object-cache API follows.
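Roughly how that looks in plugin or theme code (a sketch; the cache group and key are made up; wp_cache_get / wp_cache_set are WordPress core functions that the drop-in backs with APC):

<?php
// With an object-cache drop-in installed (wp-content/object-cache.php),
// these calls hit APC shared memory instead of the database.
$posts = wp_cache_get('recent_posts', 'my_theme');
if (false === $posts) {
    $posts = get_posts(array('numberposts' => 10)); // expensive query runs on a miss only
    wp_cache_set('recent_posts', $posts, 'my_theme', 300); // cache for 5 minutes
}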
You really should set apc.stat=0 on your production server; it prevents APC from going to the disk to check whether each file has changed.
Check out documentation first: http://php.net/manual/en/apc.configuration.php
