Does the PageSpeed test score take CDN into consideration?

I set up a CDN for global acceleration, but after checking I find that the mobile speed score is still low. So I wonder: does the score take the CDN into consideration?

Yes, it takes the CDN into consideration.
Firstly, you need to understand that the Google PageSpeed mobile test applies network throttling and probably a slowed-down CPU. The network limits for the mobile test are something like this:
Latency: 150ms
Throughput: 1.6Mbps down / 750 Kbps up.
Packet loss: none
These exact figures are used as Lighthouse's throttling default and represent roughly the bottom 25% of 4G connections and top 25% of 3G connections. They are identical to the WebPageTest "Mobile 3G - Fast" preset and, due to a lower latency, slightly faster for some pages than the WebPageTest "4G" preset.
Source: https://github.com/GoogleChrome/lighthouse/blob/master/docs/throttling.md
Because of these network limits, if your CDN is only a bit faster in general than your server, you probably won't notice much difference in the Google PageSpeed mobile score, as the tests are already throttled to better reflect the real mobile world.
Secondly, if you are looking at the Field Data, you should give it some time: it comes from real users and is based on the previous 30 days of data. If this data is available for your website, I would recommend taking a screenshot and comparing it after 30 days to see whether the CDN helped.
In the meantime, there are many other things to do with higher priority.
Bonus tip to improve CDN speed: use resource hints (DNS prefetch and/or preconnect).
As the CDN is a third-party domain and host, it's recommended to preconnect to the CDN server so your resources can be loaded sooner: once your basic HTML has loaded and the client starts to fetch the first resource from your CDN network, the DNS lookup and connection setup are already done.
The code, which you should add to your <head> as high up as possible, should look like this:
<!-- Prefetch DNS for external assets -->
<link rel="dns-prefetch" href="//cdn.example.com">
<!-- Preconnect for external assets -->
<link rel="preconnect" href="//cdn.example.com" crossorigin>

Related

Do I need a CDN or can I just go with an nginx load balancer (cache)?

I have a system that will handle image optimization and resizing for a client who has a news portal with lots of pageviews. We will provide only the images to this portal, and the users are all in the same country as our server. The question is, what is the best strategy in terms of cost-benefit:
Route all (or most) image traffic via a paid CDN
Set up an internal image server using nginx and a load balancer
We estimate a monthly bandwidth of 11 TB, with millions of requests (images only).
It is not a question of whether it is possible. Which option is more cost-efficient is something you need to calculate based on many factors: the actual sizing of your servers, the number of servers, bandwidth, where the servers are located, and much more.
It will probably be a lot of work to set up, maintain, and monitor your own CDN, but you can certainly do it.
I don't think anybody can do this calculation for you. See the comment from Rob. It is not really a question for Stack Overflow.

Is there any change in the browser paint if you use HTTP/2?

We are thinking about moving a server with many websites to HTTP/2. One concern was that if you use HTTP/2 and download all resources in parallel, it could take longer for the browser to begin painting/rendering the page than with plain HTTP/1.1, since it would be waiting for all resources to be downloaded instead of just beginning with what is already there and repainting as more gets downloaded.
I think this is wrong, but I have found no article or good explanation with which I could prove it to those who think this could be the case.
The browser will paint when it has the resources needed to paint and this will mostly not change under HTTP/2.
I am not sure why you think a browser would wait to download all the resources under HTTP/2 but not under HTTP/1.1?
Certain resources (e.g. CSS and JavaScript, unless set with the async attribute) are render-blocking and must be downloaded before the initial paint will happen. In theory HTTP/2 is faster for multiple downloads, so all that should happen if you move to HTTP/2 is that these download sooner and the page paints earlier.
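As a minimal sketch (the file names here are hypothetical), the difference looks like this in the page's <head>:
<!-- Render-blocking: the initial paint waits for this stylesheet -->
<link rel="stylesheet" href="styles.css">
<!-- Not render-blocking: downloads in parallel and runs whenever ready -->
<script src="analytics.js" async></script>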
Now, the limited number of connections that browsers used under HTTP/1.1 (typically 6-8) created a natural queuing mechanism: the browser had to prioritize critical resources over non-critical resources like images and request them first. With HTTP/2 there is a much higher limit (typically 100-120 parallel downloads, depending on the server), so the browser no longer holds requests back, and there is a concern that if all the resources are downloaded in parallel they could slow each other down. For example, downloading 50 large print-quality images will use up a lot of bandwidth and might make a more critical CSS resource downloading at the same time take longer to arrive. In fact, some early movers to HTTP/2 saw exactly this scenario.
This is addressed with prioritization and dependencies in HTTP/2, where the server can send some resource types (e.g. CSS, JavaScript) with a higher priority than others (e.g. images), rather than sending everything with the same priority. So even though all 51 resources are in flight at the same time, the CSS data should be sent first, with the images after. The client can also suggest a prioritization, but it is the server that ultimately decides. This does depend on the server implementation having a good prioritization strategy, so it is good to test before switching over.
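On the client side, the hints a page can give look something like this sketch (file names are hypothetical; the fetchpriority attribute, known as Priority Hints, is a later browser addition and not part of the HTTP/2 protocol itself, and the server may still override these suggestions):
<!-- Ask for the critical stylesheet early and at high priority -->
<link rel="preload" href="critical.css" as="style">
<!-- Mark a below-the-fold image as low priority and lazily loaded -->
<img src="print-quality-photo.jpg" fetchpriority="low" loading="lazy" alt="Large photo">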
The other thing worth bearing in mind is that how you measure this changes under HTTP/2. If a low-priority image is queued for 4 seconds under HTTP/1.1, waiting for one of the limited number of connections to become free, and then downloads in 2 seconds, you may previously have measured that as a 2-second download time (which is technically not correct, as you weren't including the queuing time, so it was actually 6 seconds). If the same image shows as 5 seconds under HTTP/2, because it is requested immediately, you may think it is 3 seconds slower when in fact it is a full second faster. Just something to be aware of when analysing the impact of any move to HTTP/2. Because of this, it is much better to look at the overall key metrics (first paint, document complete, etc.) rather than individual requests when measuring the impact.
Incidentally, this is a very interesting topic that goes beyond what can reasonably be covered in a Stack Overflow answer. It's a shameless plug, but I cover a lot of this in a book I am writing on the topic, if you are interested in finding out more.
What you mentioned should ideally not happen if the web server obeys the priorities that the browser requests with. Over HTTP/2, the browser typically requests CSS with the highest priority, and async JS and images with lower priority. This should ensure that even if your images, JS, and CSS are requested at the same time, the server sends the CSS back first.
The only case where this should not happen is if the server is not configured correctly.
You can watch the priority of the various resources for any page within Chrome DevTools.

How can you get a fast video live stream from your iPhone to a server?

I've tried the Wowza streaming engine, but even at low video quality there's a 3-second delay. Is there any standard way to set this up with minimal delay?
The technology and standards certainly exist. Look at video conferencing: minimal delay, perfect A/V sync, and great at handling changing network conditions.
Apple's FaceTime is a prime example.
I doubt the delay is in the uplink from the phone to Wowza; it is more likely in the transcoding and packaging.
Every re-streamer (Wowza and others) uses DASH or HLS to get to the client, which makes your video stream look like lots of little files with a 1-3 second duration each. This leverages existing cache and CDN infrastructure but introduces seconds of delay.
If your target delay is below a few hundred milliseconds, you have to do something like RTP/RTSP.

Traffic Performance Testing Webpages Under Specified Conditions

As the title implies, I would like to be able to simulate traffic to a collection of webpages that I have created, to test for load-balancing and bottleneck issues. I would like to mimic typical HTTP requests relative to the upload/download speed of the user. Furthermore, I would like to be able to perform extreme tests assuming a certain amount of storage and bandwidth on a server (or servers).
How should I go about doing this?
Look at Apache Flood: http://httpd.apache.org/test/flood/
Good description: http://www.clove.org/flood-presentation/flood.pdf

Which is the fastest way to load images on a webpage?

I'm building a new site, and during the foundation stage I'm trying to assess the best way to load images. Browsers have a limit of 2-6 items they can load concurrently (images/CSS/JS). Through the grapevine I've heard of various methods, but no definitive answer on which is actually faster.
Relative URLs:
background-image: url(images/image.jpg);
Absolute URLs:
background-image: url(http://site.com/images/image.jpg);
Absolute URLs (with sub-domains):
background-image: url(http://fakecdn.site.com/images/image.jpg);
Will a browser recognize my "fakecdn" subdomain as a different domain and load images from it concurrently in a separate thread?
Do images referenced in an @import-ed CSS file load in a separate thread?
The HTTP/1.1 spec suggests that browsers should not open more than two connections to a given domain:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.
So, if you are loading many medium-sized images, it may make sense to put them on separate FQDNs so that the two-connection limit is not the bottleneck. For small images, the cost of a new socket connection to each FQDN may outweigh the benefits. Similarly, for large images, the client's network bandwidth may be the limiting factor.
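As a minimal sketch (the shard hostnames are hypothetical; both must serve the same files), domain sharding just means spreading image URLs across FQDNs in your CSS:
/* Each hostname gets its own connection limit, at the cost of extra DNS lookups */
.header { background-image: url(http://img1.site.com/images/header.jpg); }
.footer { background-image: url(http://img2.site.com/images/footer.jpg); }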
If the images are always displayed, then using a data URI may be faster still, since no separate connection is required and the images can be included in the stream in the order they are needed.
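For example, a small icon can be embedded directly in the stylesheet as a data URI (a minimal sketch using an inline SVG; the class name is hypothetical, and the '#' in the fill color is percent-encoded as %23):
/* No extra request: the image travels inside the CSS itself */
.icon-dot { background-image: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"><circle cx="8" cy="8" r="8" fill="%23c00"/></svg>'); }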
However, as always with optimizing for performance, profile first!
See: Wikipedia - data URI
For lots of small images, social media icons being a good example, you'll also want to look into combining them into a single sprite map. That way they'll all load in the same request, and you just have to do some background-positioning when using them.
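A minimal sprite sketch (the file name, icon order, and 32x32 icon size are assumptions):
/* One shared sprite image; each icon shows a 32x32 window onto it */
.icon { display: inline-block; width: 32px; height: 32px; background: url(images/social-sprite.png) no-repeat; }
.icon-twitter  { background-position: 0 0; }
.icon-facebook { background-position: -32px 0; }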
