Why is load time for images so slow and variable? - WordPress

My website loads most elements within 1-2 seconds.
However, some of the images take 6 seconds (or even up to 20 seconds) to load. There seems to be no pattern to which images take a long time: most load in under 1 second, but 2 or 3 will "wait" for 6 seconds or more. This is obviously hurting the overall page load time.
I serve the images from a CDN (I know they don't show that here), but that makes no difference, and sometimes the images that take the longest to load are under 1 KB in size.
The website is hosted on an AWS EC2 t2.micro instance. I am using W3TC and the CloudFront CDN. The images have been optimised.
I have included my CPU Credit Balance, which is also low. Might this be a problem?
Any ideas as to why random images will take a long time to be served?
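For reference, the CPUCreditBalance metric can be pulled from CloudWatch to check whether the t2.micro is being throttled; a burstable instance whose credit balance sits near zero is limited to its baseline CPU, which can produce exactly this kind of erratic serving time. A minimal sketch, assuming boto3 with configured AWS credentials (the region and instance ID are placeholders):

    import datetime

    import boto3  # assumption: boto3 is installed and AWS credentials are configured

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

    now = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
        StartTime=now - datetime.timedelta(hours=6),
        EndTime=now,
        Period=300,                # one datapoint every 5 minutes
        Statistics=["Average"],
    )

    # A balance that hovers near zero means the instance is pinned to its baseline CPU.
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"].isoformat(), point["Average"])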

Related

Google Page Speed Drop, says I can save more loading time, even though nothing changed

I'm testing my PageSpeed every day, several times. My page often receives a grade between 94 and 98, with the main problems being:
Eliminate render-blocking resources - Save 0.33 s
Defer unused CSS - Save 0.15 s
And in the Lab data, all values are green.
Since yesterday, my page speed score has suddenly dropped to the 80-91 range,
with the problems being:
Eliminate render-blocking resources - Save ~0.33 s
Defer unused CSS - Save ~0.60 s
It is also saying my First CPU Idle is slow, ~4.5 s,
and so is Time to Interactive, ~4.7 s.
And sometimes Speed Index is slow as well.
It also started to show the Minimize main-thread work advice, which didn't show earlier.
The thing is, I did not change anything on the page. It is still the same HTML, CSS and JS. This is also not a server issue; I don't have a CPU overuse problem.
On GTmetrix I'm still getting the same 100% score and the same 87% YSlow score, with the page fully loaded somewhere between 1.1 s and 1.7 s, making 22 HTTP requests with a total size of 259 KB, just like before.
On Pingdom I also get the same 91 grade as before, with page load time around 622-750 ms.
Therefore, I can't understand this sudden change in the way Google analyzes my page.
I'm worried of course it will affect my rankings.
Any idea what is causing this?
It seems that this is a problem with the PageSpeed Insights tool itself, as is now being reported on the pagespeed-insights-discuss Google Group:
https://groups.google.com/forum/#!topic/pagespeed-insights-discuss/luQUtDOnoik
The point is that if you test your performance directly with another Lighthouse-based web test, for example:
https://www.webpagetest.org/lighthouse
you will see your previous scores.
In our case, this site always had 90+ on mobile, but the Google PageSpeed score has now been reduced to 65+:
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.test-english.com%2F&tab=mobile
but it still remains 90+ on webpagetest.org: https://www.webpagetest.org/result/190204_2G_db325d7f8c9cddede3262d5b3624e069/
This bug was acknowledged by Google and has now been fixed. Refer to https://groups.google.com/forum/#!topic/pagespeed-insights-discuss/by9-TbqdlBM
From Feb 1 to Feb 4 2019, PSI had a bug that led to lower performance
scores. This bug is now resolved.
The headless Chrome infrastructure used by PageSpeed Insights had a
bug that reported uncompressed (post-gzip) sizes as if they were the
compressed transfer sizes. This led to incorrect calculations of the
performance metrics and ultimately a lower score. The mailing list
thread titled [BUG] [compression] Avoid enormous network payloads /
Defer unused CSS doesn't consider compression covered this issue in
greater detail. Thanks to Raul and David for their help.
As of Monday Feb 4, 5pm PST, the bug is completely addressed via a new
production rollout.
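For context on what that bug confused: a resource's transfer size (the gzip-compressed bytes on the wire) can be far smaller than its uncompressed size (what the browser actually parses), and PSI was briefly using the latter where the former belonged. A rough sketch of the difference, assuming the Python requests library (the URL is a placeholder):

    import gzip

    import requests  # assumption: requests is installed

    url = "https://www.example.com/style.css"  # placeholder URL
    resp = requests.get(url, headers={"Accept-Encoding": "gzip"}, stream=True)

    raw = resp.raw.read(decode_content=False)   # bytes as actually transferred
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(raw)             # size after the browser un-gzips it
    else:
        body = raw

    print(f"transfer size:     {len(raw):>9} bytes")
    print(f"uncompressed size: {len(body):>9} bytes")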

How do I make the Mixnode Crawler crawl slower?

We topped 20 million pages/hour and I truly appreciate the speed; however, I'm afraid I may be putting too much pressure on target sites. Is there any way we can decrease the speed at which websites are crawled?
Not sure why you'd want to decrease the speed, as the documentation clearly states that:
There is a minimum delay of 10 seconds between requests sent to the same website. If robots.txt directives of a website require a longer delay, Mixnode will follow the delay duration specified by the robots.txt directives.
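As an illustration of how that policy interacts with robots.txt, a site's own Crawl-delay directive can be read with Python's standard-library robotparser, and the effective delay is whichever is larger, the directive or the documented 10-second floor (the URL is a placeholder, and the 10-second constant simply mirrors the quote above):

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")  # placeholder site
    rp.read()

    crawl_delay = rp.crawl_delay("*") or 0            # None if no Crawl-delay directive
    effective_delay = max(crawl_delay, 10)            # documented 10-second minimum
    print(f"delay between requests to this site: {effective_delay} s")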

Sudden degradation of network file-transfer rate

I've been using jcifs-1.3.17 for over a year now, transferring thousands of files from place A to place B without problems. The file transfers have always taken 15-30 ms. Yesterday (7/13/16) something changed, and the transfers are now taking about 90 seconds - not milliseconds, but seconds. I can identify a 15-minute window in which the change happened. Operations insists that nothing in the network or on the servers changed. My code didn't change, nor did the character or size of the files.
Has anyone else experienced something like this? Any ideas on what I might look at to determine root cause?
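One way to start narrowing down the root cause is to time the same transfer repeatedly from the affected host and compare it against a copy over local disk; if the per-file time tracks the network path, the problem likely sits outside the application code. The original setup uses the Java jcifs library, so this is only a language-agnostic timing sketch, shown here in Python against hypothetical locally mounted copies of the two shares:

    import shutil
    import time
    from pathlib import Path

    SRC = Path("/mnt/placeA/sample.dat")  # placeholder paths for the two shares
    DST = Path("/mnt/placeB/sample.dat")

    size_mb = SRC.stat().st_size / 1e6
    for i in range(5):
        start = time.perf_counter()
        shutil.copyfile(SRC, DST)         # same file, copied repeatedly
        elapsed = time.perf_counter() - start
        print(f"run {i + 1}: {elapsed * 1000:.1f} ms  ({size_mb / elapsed:.2f} MB/s)")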

Drupal form_set_cache Too Slow

I am running a Drupal Commerce site where the first request takes around 3 seconds to load, which is far too high. On profiling, I found out that it's because form_set_cache is taking around 2.5 seconds to insert.
How do I solve this? I have read that it is not recommended to move form_set_cache to memcache, since it is not static. How can we improve this?

Scaling an Azure website

I have a Standard website in Azure with a Small instance (1 core and 1.75 GB memory). It seems to be coping fine and handling the requests smoothly, although I am expecting a lot more within the week.
It is unclear, though, under what circumstances I should be looking to scale the instance size to the next level, i.e. to Medium (besides MemoryWorkingSet of course, which is rather obvious :)).
I.e., will moving up to a Medium instance resolve high CPU time?
What other telltale signs should I be watching for?
I am NOT comfortable scaling the number of instances to more than one at the moment, until I resolve some cache issues.
I think the key point I am trying to understand is the link between the metrics provided and the means of scaling available, regardless of whether it is scaled horizontally or vertically.
I am trying to keep the average response time as low as possible as the number of users interacting with the website increases.
Which of the other metrics will alert me when the load on the server is reaching its limits and I will need to scale vertically?
The idea behind scaling in Azure is to scale horizontally, i.e. add more instances. Azure can do this for you automatically. If you can't add more instances, Azure can't do the scaling for you automatically.
You can move to a Medium instance and overall capacity will increase, but it is impossible to say what your application will require under heavy load. I suggest you run a profiler and a load test to find the weak parts of your app and improve them before you see an actual increase in usage.
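As a starting point for that load test, a minimal sketch that hits one endpoint with concurrent requests and reports average and 95th-percentile response times; the URL, concurrency and request count are placeholders, and a dedicated load-testing tool would be the next step:

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # assumption: requests is installed

    URL = "https://your-site.azurewebsites.net/"  # placeholder endpoint
    CONCURRENCY = 20                               # simultaneous clients
    TOTAL_REQUESTS = 200

    def timed_get(_):
        start = time.perf_counter()
        requests.get(URL, timeout=30)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_get, range(TOTAL_REQUESTS)))

    print(f"avg: {statistics.mean(latencies) * 1000:.0f} ms")
    print(f"p95: {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")

Watching how these numbers move as CONCURRENCY rises, alongside the CPU and memory metrics the portal already exposes, shows which resource saturates first and therefore whether a bigger instance (vertical) or more instances (horizontal) is the right lever.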
