How to use the VSTS Load Test goal-based load pattern to achieve a constant tests/sec rate - asp.net

I am using Visual Studio TS Load Test to run a WebTest (one client/controller hitting one server). How can I configure a goal-based load pattern to achieve a constant tests/sec rate?
I tried using the counter 'Tests/Sec' under 'LoadTest:Test', but it does not seem to do anything.

I've recently tested against the Tests/Sec counter, and I've confirmed it works.
For the settings on the Goal Based Load Pattern, I used:
Category: LoadTest:Test
Counter: Tests/Sec
Instance: _Total
When the load test starts, verify it doesn't show an error about not being able to access that performance counter.
Tests I ran for my own needs:
Set the Initial User Load quite low (10) and gave it 5 minutes to see if it would reach the Tests/Sec target and stabilise. In my case, it stabilised after about 1 minute 40.
Set the Maximum User Count Increment/Decrement to 50. It turns out the user load would yo-yo up and down quite heavily, as it kept trying to play catch-up (the tests took 10-20 seconds each).
Set the Initial User Load quite close to the 'answer' from test 1, and watched it make small but constant adjustments to the user volume.
Note: When watching the stats, watch the value under "Last". I believe the "Average" is averaged over a relatively long period, and may appear out of step with the target.

Related

How do I remedy the Pagespeed Insights message "pages served from this origin does not pass the Core Web Vitals assessment"?

In Pagespeed insights, I get the following message in Origin Summary: "Over the previous 28-day collection period, the aggregate experience of all pages served from this origin does not pass the Core Web Vitals assessment."
[screenshot of the message in PageSpeed Insights]
Does anyone know what percentage of URLs have to pass the test in order to change this? Or what the criteria are?
Explanation
Let's use Largest Contentful Paint (LCP) as an example.
Firstly, the pass/fail is not based on the percentage of URLs; it is based on the average time/score.
This is an important distinction: you could have 50% of the data fail, but if it fails by only 0.1s (2.6s) while the other 50% passes by a full second (1.5s), the average is a pass (2.05s, below the threshold).
Obviously this is an over-simplified example, but hopefully you get the idea: you could in theory have 50% of your site in the red and still pass, which is why the percentages in each category are more for diagnostics.
If the average time for LCP across all pages in the CrUX dataset is less than 2.5 seconds ("Good") then you will get a green score and that is a pass.
If the time is less than 4 seconds the score will be orange ("Needs improvement"), but this still counts as a fail.
Over 4 seconds it will be red ("Poor"), also a fail.
Passing criteria
So you need all of the following to be true to pass the Web Vitals (at the time of writing):
Largest Contentful Paint (LCP) average is less than 2.5 seconds
First Input Delay (FID) is less than 100ms
Cumulative Layout Shift (CLS) is less than 0.1
If any one of those is over the threshold you will fail, even if the other two are in the green.
FID: when running Lighthouse (or PageSpeed Insights) on a page, you do not get FID as part of the synthetic test (Lab Data).
Instead you get Total Blocking Time (TBT), which is a close enough approximation of FID in most circumstances, so use that (or run a performance trace).
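To make the pass/fail logic concrete, here is a minimal sketch in Python (the threshold values come from the criteria above; the metric values are made up):

LCP_GOOD_S = 2.5   # Largest Contentful Paint threshold (seconds)
FID_GOOD_MS = 100  # First Input Delay threshold (milliseconds)
CLS_GOOD = 0.1     # Cumulative Layout Shift threshold (unitless)

def passes_core_web_vitals(lcp_s, fid_ms, cls):
    # All three metrics must be within the "Good" threshold to pass.
    return lcp_s < LCP_GOOD_S and fid_ms < FID_GOOD_MS and cls < CLS_GOOD

# LCP and CLS are green here, but FID is over 100ms, so the assessment fails.
print(passes_core_web_vitals(lcp_s=2.1, fid_ms=130, cls=0.05))  # False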

Is there a way to measure or monitor a scheduler job's resources used?

Recently I got the task of optimising a quite huge PL/SQL script which, prior to my changes, took about 1 hour +/- 10 minutes.
So I did some reorganisation of a few methods and generally replaced big views with simpler subqueries or WITH statements. I noticed that if I ran the scheduled job by right-clicking it and executing it, I would in most cases see the run duration improve. But if I enabled the job and let it run on its schedule, it takes the original hour no matter what changes I make.
Now my question is: is there any way to monitor the RAM or CPU usage of the session/job, and is there a difference in general in how many resources are allocated to background processes? My suspicion is that the "manual" job run somehow gets some priority the scheduler doesn't get or doesn't take.
Either way, for troubleshooting purposes you can't take a few hours out of a work day just to wait for results.
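One starting point (a sketch, not a definitive answer): Oracle records per-run statistics, including CPU time, in the DBA_SCHEDULER_JOB_RUN_DETAILS view, so you can compare manual runs against scheduled runs. The connection details and job name below are placeholders, and this assumes the python-oracledb driver:

import oracledb  # python-oracledb driver (assumed available)

# Placeholder credentials/DSN.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# One row per job run: wall-clock duration plus CPU time consumed.
cur.execute("""
    SELECT log_date, status, run_duration, cpu_used
      FROM dba_scheduler_job_run_details
     WHERE job_name = 'MY_BIG_PLSQL_JOB'
     ORDER BY log_date DESC
""")
for log_date, status, run_duration, cpu_used in cur:
    print(log_date, status, run_duration, cpu_used)

If the scheduled runs show far more CPU_USED or a much longer RUN_DURATION for the same work, a Resource Manager plan throttling background/scheduler sessions would be a reasonable suspect.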

How to run Airflow DAG for specific number of times?

How do I run an Airflow DAG a specified number of times?
I tried using TriggerDagRunOperator, and this operator works for me.
In the callable function we can check state and decide whether to continue or not.
However, the current count and state need to be maintained somewhere.
Using the above approach I am able to repeat a DAG run.
I'd like an expert opinion: is there any other, more principled way to run an Airflow DAG X number of times?
Thanks.
I'm afraid Airflow is ENTIRELY about time-based scheduling.
You can set a schedule to None and then use the API to trigger runs, but you'd be doing that externally, and thus maintaining the counts and states that determine when and why to trigger externally.
When you say that your DAG may have 5 tasks which you want to run 10 times, and that a run takes 2 hours and you cannot schedule it based on time, this is confusing. We have no idea what the significance of 2 hours is to you, why it must be 10 runs, or why you cannot schedule those 5 tasks to run once a day. With a simple daily schedule it would run once a day at approximately the same time, and it wouldn't matter that it takes a little longer than 2 hours on any given day. Right?
You could set the start_date to 11 days ago (a fixed date though, don't set it dynamically) and the end_date to today (also fixed), then add a daily schedule_interval and a max_active_runs of 1, and you'll get exactly 10 runs; it'll run them back to back without overlapping, while changing the execution_date accordingly, then stop (see the sketch below). Or you could just use airflow backfill with a None-scheduled DAG and a range of execution datetimes.
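A minimal sketch of that setup, assuming Airflow 1.x-style imports; the dates and the dummy task are placeholders:

from datetime import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

# Fixed start/end dates 11 days apart yield exactly 10 daily execution_dates
# (the run for day N is triggered once day N has passed). catchup replays
# them back to back, and max_active_runs=1 prevents overlap.
dag = DAG(
    dag_id="run_exactly_ten_times",
    start_date=datetime(2019, 1, 1),
    end_date=datetime(2019, 1, 11),
    schedule_interval="@daily",
    max_active_runs=1,
    catchup=True,
)

do_work = DummyOperator(task_id="do_work", dag=dag)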
Do you mean that you want it to run every 2 hours continuously, but sometimes it will be running longer and you don't want it to overlap runs? Well, you definitely can schedule it to run every 2 hours (0 0/2 * * *) and set the max_active_runs to 1, so that if the prior run hasn't finished the next run will wait then kick off when the prior one has completed. See the last bullet in https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled.
If you want your DAG to run exactly every 2 hours on the dot [give or take some scheduler lag, yes that's a thing] and to leave the prior run going, that's mostly the default behavior, but you could add depends_on_past to some of the important tasks that themselves shouldn't be run concurrently (like creating, inserting to, or dropping a temp table), or use a pool with a single slot.
There isn't any feature to kill the prior run if your next schedule is ready to start. It might be possible to skip the current run if the prior one hasn't completed yet, but I forget how that's done exactly.
That's basically most of your options there. You could also create manual dag_runs for an unscheduled DAG, creating 10 at a time whenever you feel like it (using the UI or CLI instead of the API, though the API might be easier).
Do any of these suggestions address your concerns? Because it's not clear why you want a fixed number of runs, how frequently, or with what schedule and conditions, it's difficult to provide specific recommendations.
This functionality isn't natively supported by Airflow.
But by exploiting the meta-db, we can cook up this functionality ourselves:
we can write a custom operator / PythonOperator
that, before running the actual computation, checks whether 'n' runs of the task already exist in the meta-db (TaskInstance table; refer to task_command.py for help),
and if they do, just skips the task (raise AirflowSkipException; see the sketch after the note below).
This excellent article can be used for inspiration: Use apache airflow to run task exactly once
Note
The downside of this approach is that it assumes historical runs of the task (TaskInstances) are preserved forever (and correctly);
in practice though, I've often found task_instances to be missing (we have catchup set to False),
and furthermore, on large Airflow deployments one might need to set up routine cleanup of the meta-db, which would make this approach impossible.
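A rough sketch of that check (Airflow 1.x-style imports; the run limit and the callable name are placeholders, and this assumes the callable is wired to a PythonOperator with provide_context=True):

from airflow.exceptions import AirflowSkipException
from airflow.models import TaskInstance
from airflow.utils.db import provide_session
from airflow.utils.state import State

MAX_RUNS = 10  # the 'n' after which the task should stop doing work

@provide_session
def run_at_most_n_times(session=None, **context):
    # Count successful historical runs of this task in the meta-db.
    done = (
        session.query(TaskInstance)
        .filter(
            TaskInstance.dag_id == context["dag"].dag_id,
            TaskInstance.task_id == context["task"].task_id,
            TaskInstance.state == State.SUCCESS,
        )
        .count()
    )
    if done >= MAX_RUNS:
        raise AirflowSkipException("Run limit reached; skipping this run.")
    # ... the actual computation goes here ...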

How to read "Graph Result" in JMeter

I am trying to understand the "Graph Result" I got in JMeter when I ran the following scenario:
I am hitting www.google.com with:
No. of users: 10,
Ramp up Period is 5 Seconds,
Loop count is 10
I am finding it difficult to read the "Graph Result". I have used other listeners too (View Results in Tree, View Results in Table, Summary Report) which are easy to understand, but I would like to learn this one too.
Please refer to the result image link:
https://www.cubbyusercontent.com/pli/Image.png/_2855385a0bbb40a0b7cd2d31224b521c
Help appreciated.
According to the JMeter Help:
The Graph Results listener generates a simple graph that plots all sample times. Along the bottom of the graph, the current sample (black), the current average of all samples (blue), the current standard deviation (red), and the current throughput rate (green) are displayed in milliseconds.
The throughput number represents the actual number of requests/minute the server handled. This calculation includes any delays you added to your test and JMeter's own internal processing time.
Basically it shows data, average, median, deviation, and throughput, i.e. system statistics during the test, in a graphical format.
These values are plotted at runtime, so the values at the bottom update as the test runs: the total number of samples is the number of samples that have occurred up to that point in time, the deviation is the deviation at that point in time, and similarly for the other counters.
Due to this runtime behaviour, the listener consumes a lot of memory and CPU, and it is advised not to use it during an actual load test (I think you have used it just to learn its use and workings).
While running an actual load test you can learn/understand these statistics from the Aggregate Report and other reports, which can also be generated in non-UI mode (for example, jmeter -n -t plan.jmx -l results.jtl).
I hope this has cleared up what Graph Results shows, how to read it, and when to use it.
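For intuition, here is a toy Python illustration of the statistics that listener plots, computed from a handful of made-up sample times:

import statistics

sample_times_ms = [310, 290, 450, 330, 300, 510, 280]  # made-up sample times
test_duration_s = 60                                   # made-up test duration

average = statistics.mean(sample_times_ms)       # "average" (the blue line)
median = statistics.median(sample_times_ms)      # "median"
deviation = statistics.pstdev(sample_times_ms)   # "deviation" (the red line)
throughput_per_min = len(sample_times_ms) / test_duration_s * 60  # "throughput" (green)

print(average, median, deviation, throughput_per_min)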

How can I find the average number of concurrent users for IIS to simulate during a load/performance test?

I'm using JMeter for load testing. I'm going through an exercise of finding the max number of concurrent threads (users) that our web server can handle by simply increasing the number of threads in my distributed JMeter test case and firing off the test.
Then it struck me that while the MAX number may be useful, the REAL number of users that my website actually handles on average is the number I need to make the test fruitful.
Here are a few pieces of information about our setup:
This is a mixed .NET/Classic ASP site. Upon login, a session (with timeout) is created in both for the user.
Each session times out after 60 minutes.
Is there a way using this information, IIS logs, performance counters, and/or some calculation that will help me determine the average # of concurrent users we handle on our production site?
You might use logparser with the QUANTIZE function to determine the peak number of requests over a suitable interval.
For a 10 second window, it would be something like:
logparser "select quantize(to_localtime(to_timestamp(date,time)), 10) as Qnt,
count(*) as Hits from yourLogFile.log group by Qnt order by Hits desc"
The reported counts won't be exactly the same as threads or users, but they should help get you pointed in the right direction.
The best way to do exact counts is probably with performance counters, but I'm not sure any of the standard ones works like you would want -- you'd probably need to create a custom counter.
I can see a couple options here.
Use Performance Monitor to get the current numbers, or have it log all day and get an average. ASP.NET has a Requests Current counter. According to this page, Classic ASP also has a Requests Current counter, but I've never used it myself.
Run the IIS logs through Log Parser to get the total number of requests and how long each took. If you know how many requests come in each hour and how long each took, you can get an average of how many were running concurrently.
Also, keep in mind that concurrent users isn't quite the same as concurrent threads on the server. For one, multiple threads will be active per user while content like images is being downloaded. And after that, the user will be on the page for a few minutes while the server is idle.
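A back-of-the-envelope way to do the calculation in option 2 is Little's Law (average concurrency = arrival rate x average time in system). A sketch with made-up numbers:

# Little's Law: average concurrency = arrival rate * average time in system.

requests_per_hour = 36000       # from the IIS logs (made-up figure)
avg_request_seconds = 0.25      # from the time-taken log field (made-up)

arrival_rate_per_sec = requests_per_hour / 3600.0
avg_concurrent_requests = arrival_rate_per_sec * avg_request_seconds
print(avg_concurrent_requests)  # 2.5 requests in flight on average

# The same idea gives concurrent *users*, using the 60-minute session
# timeout from the question as the time each user stays "in the system".
logins_per_hour = 120           # made-up figure
session_minutes = 60            # session timeout from the question
avg_concurrent_users = (logins_per_hour / 60.0) * session_minutes
print(avg_concurrent_users)     # 120 concurrent sessions on average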
My suggestion is that you define the stop conditions first, such as
Maximum CPU utilization
Maximum memory usage
Maximum response time for requests
Other key parameters you like
Choosing the parameters is quite subjective, and I personally cannot provide much guidance on that.
Secondly, you can see whether performance counters or IIS logs map to those parameters, and then set up the proper mappings.
Thirdly, you can start testing by simulating N users (threads) and see whether the stop conditions are hit. If not, try a higher number; if they are, try a smaller one. By repeating this you will converge on a rough number (see the sketch below).
However, that never means your web site in the real world can take that many users. No simulation so far can cover all the edge cases.
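A sketch of that search as code; run_test is a hypothetical stand-in for an actual JMeter run (here a toy model so the loop terminates), and the stop-condition thresholds are placeholders:

def run_test(n_users):
    # Hypothetical stand-in for firing a JMeter run with n_users threads.
    # Toy model: load grows with the number of users.
    return {"cpu_pct": n_users * 0.9, "p95_response_ms": 200 + n_users * 8}

STOP = {"cpu_pct": 85, "p95_response_ms": 2000}  # placeholder thresholds

def hits_stop_condition(metrics):
    return any(metrics[key] >= STOP[key] for key in STOP)

# Double the user count until a stop condition is hit, then bisect to
# find roughly where the boundary lies.
low, high = 0, 10
while not hits_stop_condition(run_test(high)):
    low, high = high, high * 2
while high - low > 1:
    mid = (low + high) // 2
    if hits_stop_condition(run_test(mid)):
        high = mid
    else:
        low = mid

print("Rough max users before stop conditions:", low)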
