How to read "Graph Result" in JMeter

I am trying to understand the "Graph Result" I got in JMeter when I ran the following scenario:
I am hitting www.google.com with:
Number of users: 10
Ramp-up period: 5 seconds
Loop count: 10
I find it difficult to read "Graph Result". I have used other listeners too (View Results Tree, View Results in Table, Summary Report), which are easy to understand, but I would like to learn this one as well.
Please refer to the result image link:
https://www.cubbyusercontent.com/pli/Image.png/_2855385a0bbb40a0b7cd2d31224b521c
Help appreciated.

According to the JMeter Help, Graph Results contains:
The Graph Results listener generates a simple graph that plots all sample times. Along the bottom of the graph, the current sample (black), the current average of all samples (blue), the current standard deviation (red), and the current throughput rate (green) are displayed in milliseconds.
The throughput number represents the actual number of requests/minute the server handled. This calculation includes any delays you added to your test and JMeter's own internal processing time.
Basically, it shows data, average, median, deviation, and throughput, i.e. system statistics during the test, in a graphical format.
These values are plotted at runtime, so the numbers at the bottom update while the test runs: the total number of samples is the number of samples recorded up to that point in time, the deviation is the deviation at that point in time, and the other counters behave the same way.
Because of this runtime behaviour, the listener consumes a lot of memory and CPU, and it is advised not to use it during an actual load test (I think you have used it just to learn how it works).
When running an actual load test you can learn/understand these statistics from the Aggregate Report and other reports, which can also be generated in non-GUI mode.
I hope this clears up what Graph Results shows, how to read it, and when to use it.
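If you want to double-check these numbers outside the GUI, one rough approach is to recompute them from the results file. The sketch below is a minimal example, assuming you saved results in CSV format with the default timeStamp (epoch milliseconds) and elapsed (response time in milliseconds) columns; the file name results.jtl is just a placeholder.

    # Recompute the statistics shown by Graph Results from a CSV .jtl file.
    # Assumes the default CSV output with "timeStamp" (epoch ms) and
    # "elapsed" (response time in ms) columns; "results.jtl" is a placeholder.
    import csv
    import statistics

    timestamps, elapsed = [], []
    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):
            timestamps.append(int(row["timeStamp"]))
            elapsed.append(int(row["elapsed"]))

    duration_ms = max(timestamps) - min(timestamps)
    print("samples:   ", len(elapsed))
    print("average:   ", statistics.mean(elapsed), "ms")
    print("median:    ", statistics.median(elapsed), "ms")
    print("deviation: ", statistics.pstdev(elapsed), "ms")
    # throughput = requests per minute over the span of the test
    print("throughput:", len(elapsed) / (duration_ms / 1000 / 60), "req/min")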

Related

How to analyze a JMeter TPS graph

I made a TPS graph using JMeter, but I can't interpret what it means. Please tell me how to interpret the graph.
Request of type GET named "A"
threads: 1000
ramp-up period: 100
loop count: 1
Active Threads Over Time (image 1)
Response Times Over Time (image 2)
Transactions per Second (image 3)
Your graphs mean that:
You run 1 thread (virtual user)
You can reach 10 transactions per second with this 1 thread
Average response time is around 100 milliseconds
You might also be interested in:
Apache JMeter HTML Reporting Dashboard - an alternative and automated way of generating pretty zoomable charts
Apache JMeter Glossary - so you understand what your metrics mean in general
Performance Metrics for Websites - so you understand how to correlate the metrics and what the possible relationships between them are
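As a rough sanity check of how those three numbers fit together, assuming a single thread that fires the next request as soon as the previous response arrives (no think time), the figures from the graphs are consistent with each other:

    # Rough relationship between active threads, response time and throughput,
    # assuming no think time: each thread completes ~1/response_time requests/s.
    active_threads = 1
    avg_response_time_s = 0.100   # ~100 ms, as read from the response-time graph

    tps = active_threads / avg_response_time_s
    print(tps)   # -> 10.0 transactions per second, matching the TPS graph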

How to generate live plot of the residuals vs. number of iteration while the driver is still running?

My objective is, while the driver/non-linear solver is still running, not only to print the values of the inputs/outputs/gradients but also to generate a live plot of the residual of the objective function and of other design variables/inputs/gradients of interest. The residual of the objective function (or of any other parameter) can be defined as the log of the difference between two consecutive values.
I came across two particular features in OpenMDAO, the driver/solver debug_print and the driver/solver recorder, where debug_print prints the input/output values live and the information stored by the recorder is accessed to assess the convergence of the model. I have two specific questions:
How do I save the values that debug_print writes to the screen to a text file (or another format) while the driver is still running? (The text file should be updated dynamically while the driver runs.)
Can I use a recorder to generate live plots of the residuals of the quantities of interest? I have seen the Advanced Recording tutorial where a recorder is used to plot the design variables and objective functions, but can the recorder information only be retrieved once the driver/solver completes its job, or can it be retrieved while the driver/solver is still running? If it can, I believe I can achieve my goal.
Thanks!
You can write a separate script that reads the recorder values while the main script is still running and writing them. There is nothing stopping the reading script from being a different file/process from the main execution.
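As a minimal sketch of that approach, assuming the main script attaches an om.SqliteRecorder to the driver and writes to a file called cases.sql (the filename, the polling interval, and the use of the difference between consecutive objective values as the "residual" are all assumptions, not part of the original answer), a separate monitoring script might look like this:

    # live_plot.py -- run this in a separate process while the main OpenMDAO
    # script is optimizing and recording to 'cases.sql' (assumed filename:
    # use whatever you passed to om.SqliteRecorder in the main script).
    import time
    import numpy as np
    import matplotlib.pyplot as plt
    import openmdao.api as om

    plt.ion()                     # interactive mode so the figure updates live
    fig, ax = plt.subplots()

    while True:
        try:
            cr = om.CaseReader('cases.sql')     # re-open to pick up new cases
            obj_hist = []
            for name in cr.list_cases('driver'):
                case = cr.get_case(name)
                # get_objectives() returns a dict {name: value}; take the first
                obj_hist.append(float(list(case.get_objectives().values())[0]))
        except Exception:
            # the driver may still be writing to the SQLite file; just retry
            time.sleep(2.0)
            continue

        if len(obj_hist) > 1:
            obj = np.asarray(obj_hist)
            # "residual" as defined in the question: log of the difference
            # between two consecutive objective values
            resid = np.log10(np.abs(np.diff(obj)) + 1e-16)
            ax.clear()
            ax.plot(resid)
            ax.set_xlabel('driver iteration')
            ax.set_ylabel('log10 |f(k+1) - f(k)|')

        plt.pause(2.0)            # redraw and wait before polling the file again

For the first question, debug_print writes to standard output, so the simplest option is to redirect the main script's stdout to a file when you launch it (for example, python main.py > log.txt) and watch that file while the run progresses.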

Build a cumulative variable/aggregate from raw data

I have a power consumption sensor (kWh) sending data to my TSI Gen2 environment, and it is malfunctioning in a way that it loses its accumulated measurement value when it is shut down. I need to create a new aggregate/variable that would "stack" the measurements, never letting the value drop to zero but always adding to the last greatest value.
I thought about creating a dataset of the right-to-left differences over a fixed timespan, keeping only the positive ones, and then creating a SUM aggregation over the bucket period on top of it. I am clueless about how to do such a thing based on the poor official documentation provided by Microsoft. Any ideas?
Here are a couple of pictures illustrating my problem and what I am trying to accomplish:
You probably need to add something in the middle (before the IoT Hub/Event Hub) to save the last state of the sensor, and do the appropriate sum if it detects the device was rebooted.
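As an illustration of that idea, here is a minimal sketch of the reset-compensation logic, for example something you could run in an Azure Function or another pre-processing step in front of the IoT Hub/Event Hub. The function and variable names are hypothetical, and real code would persist the state somewhere durable (blob, table, cache) rather than keeping it in memory.

    # Sketch of the reset-compensation step suggested above: keep the last raw
    # reading and an offset, and fold the old total into the offset whenever
    # the raw value drops (i.e. the sensor rebooted and lost its counter).
    def make_accumulator():
        state = {"last_raw": None, "offset": 0.0}

        def accumulate(raw_kwh: float) -> float:
            """Return a monotonically increasing kWh value even if the sensor
            resets its internal counter to zero after a shutdown."""
            if state["last_raw"] is not None and raw_kwh < state["last_raw"]:
                # the counter dropped: add everything counted so far to the offset
                state["offset"] += state["last_raw"]
            state["last_raw"] = raw_kwh
            return state["offset"] + raw_kwh

        return accumulate

    # Example: readings 10, 12, 15, then the sensor reboots and restarts at 1
    acc = make_accumulator()
    print([acc(v) for v in [10, 12, 15, 1, 3]])   # -> [10, 12, 15, 16, 18]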

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally seek commands are executed on a filter graph, get called on the renderers in the graph and calls are passed upstream by filters until a filter that can handle the seek does the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect any graph state changes to be caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately, and to distinguish samples that arrived before and after the seek operation. It would also need to set new sample times on its output samples so that they have consistent presentation times. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is given in BBC White Papers 129 and 138, available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (a section of a graph, but running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph that houses both the video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers, which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to queue to the exact media frame or audio time. This is not as easy to do as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely to get a NewSegment call with the correct media times.
Your sub-graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful, and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need two instances of the sub-graph for that file, and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: making lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are implemented by the filter, so you need to get them correct without any missing time stamps, and you can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

How to use the VSTS Load Test goal-based load pattern to achieve a constant tests per second

I am using Visual Studio TS Load Test for running a WebTest (one client/controller hitting one server). How can I configure a goal-based load pattern to achieve a constant tests/second rate?
I tried to use the counter 'Tests/Sec' under 'LoadTest:Test' but it does not seem to do anything.
I've recently tested against Tests/Sec and confirmed it working.
For the settings on the Goal Based Load Pattern, I used:
Category: LoadTest:Test
Counter: Tests/Sec
Instance: _Total
When the load test starts, verify it doesn't show an error re: not being able to access that Performance Counter.
Tests I ran for my own needs:
Set the Initial User Load quite low (10) and gave it 5 minutes to see if it would reach the Tests/Sec target and stabilise. In my case, it stabilised after about 1 minute 40.
Set the Maximum User Count [Increment|Decrement] to 50. It turns out the user load would yo-yo up and down quite heavily, as it would keep trying to play catch-up (the tests took 10-20 seconds each).
Set the Initial User Load quite close to the 'answer' from test 1, and watched it make small but constant adjustments to the user volume.
Note: When watching the stats, watch the value under "Last". I believe the "Average" is averaged over a relatively long period, and may appear out of step with the target.
