VS2008 Load Testing - Page Response Time - asp.net

I am running a load test from VS 2008 on my ASP.NET web application. The thing I notice is that for some of my pages the Average Page Time is around 20.
Does this mean it takes 20 seconds for the server to render the page before it sends the response? Or is it simply 20 seconds until the whole page is fully loaded in the client's browser?
Does this statistic take the Network Type into account? Say I change from 52 kbps to 1.5 Mbps; is this statistic supposed to change?
Another thing: my Average Response Time is 0.21, whilst some pages have an Average Page Time of 20. Why are they so different, and what does each mean?
Thank you.

Average Page Time usually just includes the time to receive all of the bytes for the page over the network, so yes, that figure may change with a different network bandwidth setting.
EDIT: As for your second question, Average Response Time is the statistic averaged across ALL requests issued during the test, not per page, which is why it can be much lower than the Average Page Time of an individual slow page.

Related

Different TTFB value on Chrome vs Web Vitals

I am noticing different TTFB values in the Chrome network tab vs those logged by WebVitals. Ideally they should be exactly the same value, but I sometimes see a large difference, as much as 2-3 seconds, for certain scenarios.
I am using Next.js and using reportWebVitals to log respective performance metrics.
Here is a sample repo, app url and screenshots for reference.
Using performance.timing.responseStart - performance.timing.requestStart returns a more appropriate value than relying on the WebVitals TTFB value.
Any idea what could be going wrong? Is it a bug in WebVitals, meaning I shouldn't be using it, or a mistake at my end in consuming/logging the values?
The number provided by reportWebVitals (and the underlying library web-vitals) is generally considered the correct TTFB in the web performance community (though to be fair, there are some differences in implementation across tools).
I believe DevTools labels that smaller number "Waiting (TTFB)" as an informal hint to give the user context on what that "waiting" is, and because it usually makes up the large majority of the TTFB time.
However, from a user-centric perspective, time-to-first-byte should really include all the time from when the user starts navigating to a page to when the server responds with the first byte of that page--which will include time for DNS resolution, connection negotiation, redirects (if any), etc. DevTools does include at least some information about that extra time in that screenshot, just separated into various periods above the ostensible TTFB number (see the "Queueing", "Stalled", and "Request Sent" entries).
Generally the Resource Timing spec can be used as the source of truth for talking about web performance. It places time 0 as the start of navigation:
Throughout this work, all time values are measured in milliseconds since the start of navigation of the document [HR-TIME-2]. For example, the start of navigation of the document occurs at time 0.
And then defines responseStart as
The time immediately after the user agent's HTTP parser receives the first byte of the response
So performance.timing.responseStart - performance.timing.navigationStart by itself is the browser's measure of TTFB (or performance.getEntriesByType('navigation')[0].responseStart in the newer Navigation Timing Level 2 API), and that's the number web-vitals uses for TTFB as well.
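For reference, here is a minimal sketch (run in the browser console on the page in question) of the two browser-level measurements described above; both are relative to the start of navigation:

```javascript
// Legacy Navigation Timing (Level 1) API: both timestamps are absolute epoch
// milliseconds, so TTFB is the difference from the start of navigation.
const t = performance.timing;
const ttfbLegacy = t.responseStart - t.navigationStart;

// Navigation Timing Level 2 API: responseStart is already relative to the
// start of navigation (time 0), so it can be read directly.
const navEntry = performance.getEntriesByType('navigation')[0];
const ttfbLevel2 = navEntry.responseStart;

console.log({ ttfbLegacy, ttfbLevel2 }); // this is the figure web-vitals reports as TTFB
```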

How do I remedy the Pagespeed Insights message "pages served from this origin does not pass the Core Web Vitals assessment"?

In Pagespeed insights, I get the following message in Origin Summary: "Over the previous 28-day collection period, the aggregate experience of all pages served from this origin does not pass the Core Web Vitals assessment."
screenshot of the message in PageSpeed Insights
Does anyone know what % of URLs have to pass the test in order to change this? Or what the criteria are?
Explanation
Let's use Largest Contentful Paint (LCP) as an example.
Firstly, the pass/fail is not based on the percentage of URLs; it is based on the average time/score.
This is an important distinction, as you could have 50% of the data fail, but if it only fails by 0.1s (2.6s) and the other 50% of the data passes by 1 second (1.5s), the average will be a pass (an average of 2.05s, which is a pass).
Obviously this is an over-simplified example, but hopefully you get the idea that in theory you could have 50% of your site in the red and still pass, which is why the percentages in each category are more for diagnostics.
If the average time for LCP across all pages in the CrUX dataset is less than 2.5 seconds ("Good") then you will get a green score and that is a pass.
If the time is less than 4 seconds the score will be orange ("Needs improvement") but this will still count as a fail.
Over 4 seconds and it fails and will be red ("Poor").
Passing criteria
So you need the following to be true to pass the web vitals (at time of writing):-
Largest Contentful Paint (LCP) average is less than 2.5 seconds
First Input Delay (FID) is less than 100ms
Cumulative Layout Shift is less than 0.1
If any one of those is over the threshold you will fail, even if the other two are within the green / passes.
FID - when running lighthouse (or Page Speed Insights) on a page you do not get the FID as part of the synthetic test (Lab Data).
Instead you get Total Blocking Time (TBT) - this is a close enough approximation for FID in most circumstances so use that (or run a performance trace).
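As an illustration only, here is a minimal sketch of that passing criteria as a simple threshold check; the thresholds are the "Good" limits listed above, and the metric values fed in are made-up examples rather than anything PageSpeed Insights exposes directly:

```javascript
// "Good" thresholds for the three Core Web Vitals (at time of writing).
const thresholds = { lcp: 2500, fid: 100, cls: 0.1 }; // LCP/FID in ms, CLS unitless

// All three metrics must be inside their threshold; a single miss fails the assessment.
function passesCoreWebVitals({ lcp, fid, cls }) {
  return lcp < thresholds.lcp && fid < thresholds.fid && cls < thresholds.cls;
}

console.log(passesCoreWebVitals({ lcp: 2050, fid: 80, cls: 0.05 }));  // true
console.log(passesCoreWebVitals({ lcp: 2050, fid: 150, cls: 0.05 })); // false: FID over 100ms
```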

K6 Load Testing - How to calculate accurate response times when using the rps option

I am recording how long each request takes by capturing Date.now() before and after the request.
I am doing this because the built-in metric for the response time only records the time taken for the FIRST REQUEST and not for any redirects that it follows.
My method was working fine until I started using the rps option.
The rps option throttles how many requests per second are sent.
The problem this is causing is that my manual calculations are going up even though http_req_duration stays roughly the same.
I presume this is because of the RPS throttle, i.e. the request is WAITING, and this waiting is what makes my Date.now() calculation go up, which is not an accurate reflection of what is happening.
How can I calculate the total time taken for a response to a request including all redirects when I am using the rps option?
I'd advise against using the RPS option; use an arrival-rate executor instead, for example constant-arrival-rate.
Alternatively, you can set the maxRedirects option to 0 so k6 doesn't handle redirects itself. When you follow the redirects yourself, you get the Response object for each of the requests, not just the last one. You can then sum their Response.timings.duration (or whatever timing you care about) and add the result to your custom metric; it will not contain any artificial delays caused by --rps.
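A minimal sketch of that second approach, assuming absolute Location headers and illustrative names (the full_response_time metric and the start URL are placeholders, not from the original post):

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

export const options = {
  maxRedirects: 0, // let the script follow redirects itself
};

// Custom time metric that only accumulates actual request durations.
const fullResponseTime = new Trend('full_response_time', true);

export default function () {
  let url = 'https://example.com/start'; // placeholder start URL
  let total = 0;

  for (let hop = 0; hop < 10; hop++) { // guard against redirect loops
    const res = http.get(url);
    total += res.timings.duration; // time spent on this hop only, no --rps waiting

    const isRedirect = res.status >= 300 && res.status < 400 && res.headers['Location'];
    if (!isRedirect) {
      break;
    }
    url = res.headers['Location']; // assumes an absolute URL in the Location header
  }

  fullResponseTime.add(total);
}
```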

Calculate time offset using HTTP header `date`

I have a program that needs to do something exactly every hour. The catch is that the time needs to be relative to the remote server, which is not synchronised with a time server and is, in fact, about 6 seconds ahead (!). There is no way for me to change that server.
All I have, is access to the HEAD headers of the web server, which have a handy field date (that's how I found out about the discrepancy).
Question: regardless of the language (I use nodeJS, but that's not the point), what would you do to calculate a precise offset between my server and the remote server?
I am especially worried about network latency: I have the following variables:
Local server time
Time when request was sent
Time when the response with the Date header arrived
Remote server time
However, the remote server time was generated when the server received the request -- something that might have taken up to 1 second. And, the time when the response arrived needs to take into account the time it took to receive it...
Right now I am offsetting with (time response arrived - time request was sent) / 2, i.e. half the round-trip time. However, it feels lame.
Is there a better, established way to deal with this?
Hmm, I know this kind of problem, though I never had the limitation of not being able to change one of the two 'actors'. I would say this approximation, (time response arrived - time request was sent) / 2, feels OK. If you care more about it, you could experiment with the approximation in a 'benchmark' kind of way:
don't make one synchronization request; make 10 in sequence, then discard the three lowest and the three highest offsets and average the remaining four
or:
don't make one synchronization request; make a burst of 10 in 10 different threads. This should theoretically eliminate the client-side (local) time it takes to create the requests and should block (if it blocks) on the server (remote) side. But this would involve some math, and I think it's too much trouble for the value.
P.S. The number 10 is arbitrary (and hopefully the remote server doesn't ban/block you for making too many requests :)
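A minimal Node.js sketch of the first variant, assuming Node 18+ for the global fetch; the function names are illustrative, the trimming is read as a simple trimmed mean, and keep in mind the HTTP Date header only has one-second resolution, which caps the achievable precision:

```javascript
// Estimate the remote clock offset from the HTTP Date header, compensating for
// network latency by assuming the server stamped the header halfway through the
// round trip. A positive result means the server clock is ahead of ours.
async function estimateOffset(url) {
  const t0 = Date.now();                               // local time: request sent
  const res = await fetch(url, { method: 'HEAD' });
  const t1 = Date.now();                               // local time: response arrived
  const serverTime = new Date(res.headers.get('date')).getTime();

  const localMidpoint = t0 + (t1 - t0) / 2;            // half the round-trip time
  return serverTime - localMidpoint;
}

// Take several samples, drop the three lowest and three highest offsets,
// and average the rest to smooth out latency jitter.
async function averagedOffset(url, samples = 10) {
  const offsets = [];
  for (let i = 0; i < samples; i++) {
    offsets.push(await estimateOffset(url));
  }
  offsets.sort((a, b) => a - b);
  const trimmed = offsets.slice(3, samples - 3);
  return trimmed.reduce((sum, o) => sum + o, 0) / trimmed.length;
}

averagedOffset('https://example.com/').then((offset) =>
  console.log(`remote clock is ~${Math.round(offset)} ms ahead`)
);
```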

How can I find the average number of concurrent users for IIS to simulate during a load/performance test?

I'm using JMeter for load testing. I'm going through an exercise of finding the max number of concurrent threads (users) that our web server can handle by simply increasing the number of threads in my distributed JMeter test case and firing off the test.
Then it struck me that while the MAX number may be useful, the REAL number of users that my website handles on average is the number I need to make the test fruitful.
Here are a few pieces of information about our setup:
This is a mixed .NET/Classic ASP site. Upon login, a session (with a timeout) is created in both parts of the site for the user.
Each session times out after 60 minutes.
Is there a way using this information, IIS logs, performance counters, and/or some calculation that will help me determine the average # of concurrent users we handle on our production site?
You might use logparser with the QUANTIZE function to determine the peak number of requests over a suitable interval.
For a 10 second window, it would be something like:
logparser "select quantize(to_localtime(to_timestamp(date,time)), 10) as Qnt,
count(*) as Hits from yourLogFile.log group by Qnt order by Hits desc"
The reported counts won't be exactly the same as threads or users, but they should help get you pointed in the right direction.
The best way to do exact counts is probably with performance counters, but I'm not sure any of the standard ones works like you would want -- you'd probably need to create a custom counter.
I can see a couple options here.
Use Performance Monitor to get the current numbers or have it log all day and get an average. ASP.NET has a Requests Current counter. According to this page Classic ASP also has a Requests current, but I've never used it myself.
Run the IIS logs through Log Parser to get the total number of requests and how long each took. I'm thinking that if you know how many requests come in each hour and how long each took, you can work out an average of how many were running concurrently (see the sketch after this answer).
Also, keep in mind that concurrent users isn't quite the same as concurrent threads on the server. For one, multiple threads will be active per user while content like images is being downloaded. And after that the user will be on the page for a few minutes while the server is idle.
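A rough sketch of that second option; this is essentially Little's Law (average concurrency equals arrival rate times average duration), and the numbers below are placeholders you would pull from the IIS logs via Log Parser:

```javascript
// Placeholder figures taken from an hour of IIS logs (request count and time-taken).
const requestsPerHour = 36000;    // total requests logged in the hour
const avgDurationSeconds = 0.25;  // average time-taken per request, in seconds

const arrivalRatePerSecond = requestsPerHour / 3600;                     // 10 req/s
const avgConcurrentRequests = arrivalRatePerSecond * avgDurationSeconds; // 2.5

// Note: this estimates concurrent *requests*, not users; as mentioned above,
// one user can drive several concurrent requests and then sit idle on the page.
console.log(`~${avgConcurrentRequests} requests in flight on average`);
```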
My suggestion is that you define the stop conditions first, such as
Maximum CPU utilization
Maximum memory usage
Maximum response time for requests
Other key parameters you like
Choosing the parameters is quite subjective, and I personally cannot offer much guidance on that.
Secondly, check whether performance counters or IIS logs can be mapped to those parameters, and set up the proper mappings.
Thirdly, start testing by simulating N users (threads) and see whether the stop conditions are hit. If not, go to a higher number; if they are, use a smaller number. Iterating like this, you will converge on a rough number.
However, that never means your web site can handle that many users in the real world; no simulation covers all the edge cases.
