I am running a Drupal Commerce site where the first request takes around 3 seconds to load, which is far too high. On profiling, I found that form_set_cache is taking around 2.5 seconds to do its insert.
How can I solve this? I have read that it is not recommended to move form_set_cache to Memcache, since it is not static. How can we improve this?
I test my PageSpeed several times every day. My page usually receives a grade between 94 and 98, with the main problems being:
Eliminate render-blocking resources - Save 0.33 s
Defer unused CSS - Save 0.15 s
And in Lab data, all values are green.
Since yesterday, my PageSpeed score has suddenly dropped to the 80-91 range,
with the problems being:
Eliminate render-blocking resources - Save ~0.33 s
Defer unused CSS - Save ~0.60 s
It is also saying my First CPU Idle is slow (~4.5 s),
and so is Time to Interactive (~4.7 s).
And sometimes the Speed Index is flagged as slow as well.
It also started showing the "Minimize main-thread work" advice, which didn't appear earlier.
The thing is, I did not change anything on the page. It is still the same HTML, CSS and JS. This is also not a server issue; I don't have a CPU overuse problem.
On GTmetrix I'm still getting the same 100% PageSpeed score and the same 87% YSlow score, with the page fully loaded somewhere between 1.1 s and 1.7 s, making 22 HTTP requests with a total size of 259 KB, just like before.
On Pingdom I also get the same grade of 91 as before, with page load time around 622 ms to 750 ms.
Therefore, I can't understand this sudden change in the way Google analyzes my page.
I'm worried, of course, that it will affect my rankings.
Any idea what is causing this?
It seems that this is a problem with PageSpeed Insights itself, as is now being reported on the pagespeed-insights-discuss Google Group:
https://groups.google.com/forum/#!topic/pagespeed-insights-discuss/luQUtDOnoik
The point is that if you test your performance directly with another Lighthouse-based web test, for example:
https://www.webpagetest.org/lighthouse
you will see your previous scores.
In our case, this site always scored 90+ on mobile, but the Google PageSpeed score has now dropped to around 65:
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.test-english.com%2F&tab=mobile
but it still remains 90+ in webpagetest.org: https://www.webpagetest.org/result/190204_2G_db325d7f8c9cddede3262d5b3624e069/
This bug was acknowledged by Google and has now been fixed. Refer to https://groups.google.com/forum/#!topic/pagespeed-insights-discuss/by9-TbqdlBM
From Feb 1 to Feb 4 2019, PSI had a bug that led to lower performance
scores. This bug is now resolved.
The headless Chrome infrastructure used by PageSpeed Insights had a
bug that reported uncompressed (post-gzip) sizes as if they were the
compressed transfer sizes. This led to incorrect calculations of the
performance metrics and ultimately a lower score. The mailing list
thread titled "[BUG] [compression] Avoid enormous network payloads /
Defer unused CSS doesn't consider compression" covered this issue in
greater detail. Thanks to Raul and David for their help.
As of Monday Feb 4, 5pm PST, the bug is completely addressed via a new
production rollout.
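To illustrate the compression issue described above: if an audit treats a resource's uncompressed size as its transfer size, the estimated savings get inflated. Here is a rough sketch of that arithmetic in Python, with made-up sizes and an assumed throughput figure, not actual PageSpeed Insights internals:

import gzip

# Hypothetical stylesheet: highly repetitive, so it compresses very well.
css = ("body{margin:0;padding:0} .card{display:flex} " * 2000).encode()

uncompressed_bytes = len(css)               # what the buggy audit effectively used
compressed_bytes = len(gzip.compress(css))  # what actually travels over the wire

# Assumed throughput for the estimate (illustrative only; Lighthouse's
# simulated throttling uses its own figures).
throughput_bytes_per_s = 1.6e6 / 8          # 1.6 Mbit/s expressed as bytes per second

print("uncompressed:", uncompressed_bytes, "bytes")
print("compressed:  ", compressed_bytes, "bytes")
print("'savings' estimated from the uncompressed size: %.2f s" % (uncompressed_bytes / throughput_bytes_per_s))
print("'savings' estimated from the compressed size:   %.2f s" % (compressed_bytes / throughput_bytes_per_s))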
We topped 20 million pages/hour and I truly appreciate the speed; however, I'm afraid I may be putting too much pressure on target sites. Is there any way to decrease the speed at which websites are crawled?
Not sure why you'd want to decrease the speed, as the documentation clearly states:
There is a minimum delay of 10 seconds between requests sent to the same website. If robots.txt directives of a website require a longer delay, Mixnode will follow the delay duration specified by the robots.txt directives.
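For what it's worth, the behaviour quoted above (a fixed minimum delay per site, lengthened if robots.txt asks for more) can be pictured as a simple per-host rate limiter. A minimal Python sketch of that idea, not Mixnode's actual implementation; the 10-second default and the robots.txt override are taken from the quote:

import time
from urllib.parse import urlparse

DEFAULT_DELAY = 10.0   # minimum seconds between requests to the same website
last_request_at = {}   # host -> time of the previous request to that host

def wait_for_slot(url, robots_delay=None):
    """Sleep until the per-host delay has elapsed, then record this request.

    robots_delay is an optional robots.txt Crawl-delay value; if it asks for
    a longer pause than the default, it wins."""
    host = urlparse(url).netloc
    delay = max(DEFAULT_DELAY, robots_delay or 0)
    elapsed = time.monotonic() - last_request_at.get(host, float("-inf"))
    if elapsed < delay:
        time.sleep(delay - elapsed)
    last_request_at[host] = time.monotonic()

# wait_for_slot("https://example.com/a")                    # first request: no wait
# wait_for_slot("https://example.com/b")                    # waits ~10 seconds
# wait_for_slot("https://example.com/c", robots_delay=30)   # waits ~30 seconds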
My website loads most elements within 1-2 seconds.
However, some of the images take 6 seconds (or even up to 20 seconds) to load. There seems to be no pattern to which images will take a long time to load. Most load in under 1 second, but 2 or 3 will "wait" for 6+ seconds. This is obviously hurting the page load time.
I serve the images from a CDN (I know they don't show that here), but that makes no difference, and sometimes the images that take the longest to load are under 1 KB in size.
The website is hosted on an AWS EC2 t2.micro instance. I am using W3TC and Cloudfront CDN. The images have been optimised.
I have included my CPU Credit Balance. This is also low. Might this be a problem?
Any ideas as to why random images will take a long time to be served?
The website that I have run for a long time sometimes has speed issues, but after we clean up the MSSQL data it usually works fine again.
This time, however, that doesn't work any more; we always get timeout errors, and IIS drives the CPU very high.
We took some features out and the site runs again, but it is slow, just without errors.
For example, when we do a search, if we have fewer than 10 results, the page output is really fast.
When we have more than 200 results, the page is very slow; it takes about 15 to 20 seconds to output the whole page.
I know that the more data you output, the longer it takes, but we used to have more than 500 results and the output was still very fast.
Do you know where I should look to solve this speed problem?
You need to look at the code to see what is executed when displaying those results. Implement some logging, or step through the execution of a result with a debugger.
If you have a source control system, now is the time to review what changes have been made between the fast code and the now slow code.
Ten results could be taking 1 second to display, which is barely tolerable, but as you say, 200 results take 20 seconds. So the problem is some bad code somewhere; I suspect someone has made a code change.
I would start by breaking down the issue, for example into SQL Server time and IIS time. You can separate different parts of the code and measure their execution times, etc.
SQL Server Profiler is a good tool to start with, and for ASP.NET you can start with simple trace and page tracing.
Some more info about testing and performance
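To make the "measure each part separately" advice concrete, here is a small sketch of the idea in Python (the actual stack here is ASP.NET/MSSQL, so treat this as pseudocode; run_search_query and render_results_page are placeholder names). Time the query on its own, then the rendering of the results, and see which one grows with the result count:

import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print("%s: %.3f s" % (label, time.perf_counter() - start))
    return result

# Hypothetical stand-ins for the real search page:
# rows = timed("SQL query", run_search_query, term, max_results=200)
# html = timed("Render results", render_results_page, rows)
#
# If the SQL step dominates, look at the query plan and indexes; if the
# rendering step dominates, look at the per-row code (for example an extra
# query issued for every row, or string concatenation inside a loop).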
I have an ASP.NET 3.5 website running in IIS 6 on Windows Server 2003 R2. It is a relatively small internal application that probably serves fewer than ten users at any given time. The server has 4 GB of memory and shows that 3+ GB are available while the site is active.
Just minutes after restarting the web application, Performance Monitor shows a whopping 4,294,967,293 sessions active! I am fairly certain that this number is incorrect; at the time of this reading there had only been about 100 requests to the website.
Has anyone else experienced this kind of odd behavior from Perfmon? Any ideas on how to get an accurate reading?
UPDATE: After running for about an hour the number of active sessions has dropped by 4. So it does seem to be responding to sessions timing out.
Could be an overflow, but my money's on an underflow. I think that the program started with 0 people, someone logged off, and then the number of sessions went negative.
Well, 2^32 = 4,294,967,296, so it sounds like there's some kind of overflow occurring. Can't say exactly why.
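A quick way to see why a counter that dips below zero reads as roughly 4.29 billion: if the counter is stored as a 32-bit unsigned value (which the numbers strongly suggest), subtracting past zero wraps around to 2^32 - 1. A small Python simulation of that wrap-around (Python integers don't wrap on their own, so the mask forces 32-bit behaviour):

MASK = 0xFFFFFFFF  # 2**32 - 1, the largest 32-bit unsigned value

def dec32(counter):
    """Decrement a counter with 32-bit unsigned wrap-around."""
    return (counter - 1) & MASK

sessions = 0                 # fresh counter after an application restart
sessions = dec32(sessions)   # one more logoff than logon has been counted
print(sessions)              # 4294967295

sessions = dec32(dec32(sessions))
print(sessions)              # 4294967293 -- the value from the question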
We have the same problem. It looks like MS has a Hotfix available: http://support.microsoft.com/kb/969722
Update 9/10/2009: Our IT department contacted MS for the Hotfix. It fixed our issue. We are running .NET 2.0 if it matters any.
I am also showing a high number, currently 4,294,967,268.
Every time I abandon a session, the Sessions Abandoned count goes up by 1, and the Sessions Active count decreases by 1. Currently my abandoned session count is 16, so this number probably started at 4,294,967,284.
Is there a fix for this?
My counters were working fine, but one morning I logged in remotely to the production server and the counter was at this huge number (which, as somebody mentioned, is very close to 2^32, indicating an underflow). The only difference from the day before, when everything worked, was that during the night Windows had installed updates.
So for some reason these updates caused this pretty annoying error.
Observing the counter a little more, I found out that whenever the application is restarted - after some time with no traffic, the counter starts correctly at zero. When users start logging on, it increments fine. When they start logging off again, it still decrements fine until it reaches what is supposed to be zero. At that point it goes bananas...
Sigh...
If you have to use your existing statistics: I opened the log file in Excel and used a formula to derive a more accurate value. I cannot guarantee its accuracy, but the results did look okay.
If B2 is the (aspnet_wp)\Sessions Active value and the formula sits in C2:
/* This one is quicker as it doesn't have to do the extra calculations */
=IF(B2>1073741824,4294967296-B2,B2)
Or
/* This one is clearer what is going on */
=IF(B2>power(2,30),(4*power(2,30))-B2,B2)
P.S. I feel your pain: I have to explain why they have 4.2 billion sessions open when a second earlier they had 0!
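The same correction can also be done programmatically by reinterpreting the raw counter as a signed 32-bit value; note that, unlike the Excel formula above (which returns the positive magnitude 2^32 - B2), this gives the actual negative value. A small Python sketch:

import struct

def as_signed32(raw):
    """Reinterpret a raw 32-bit counter reading as a signed integer."""
    return struct.unpack("<i", struct.pack("<I", raw))[0]

print(as_signed32(4294967293))  # -3  (the counter has gone three below zero)
print(as_signed32(16))          # 16  (normal readings pass through unchanged)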