Group by HTTP request parameters in a graph

I'm using JMeter to load test a specific URL that can take a number of parameters (like ?param=value1, then value2, then ...). For the purposes of the test, I have a CSV file with multiple sets of these parameters.
I have already created my request and CSV Data Set Config, and when I run the test I can see it iterates over the different datasets. The problem is that my graph only displays the response time for every request of the sampler, without grouping the results by dataset.
I want to get a graph with multiple lines, one per dataset, e.g. the first dataset is param1=value1, the second one is param1=value2, etc.
Am I forced to create a sampler per dataset, or is it possible to have a listener do that?
Thank you

How about adding the parameter value to your Sampler's name?
JMeter will generate a separate Sample Result for each unique Sampler label, so Listeners and the HTML Reporting Dashboard will display/plot the same Sampler with different "params" as different Sample Results.
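For example, if the CSV Data Set Config exposes the column as a variable called param1 (an assumed name), the sampler could be named along these lines:

HTTP Request param1=${param1}

Each distinct value of param1 then produces its own label, and therefore its own line, in listeners and in the dashboard graphs.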

Graphs in JMeter

I need to perform a load test via JMeter, and for this I have created a test plan that uploads files for some time while a JDBC request also selects the number of not-yet-processed files.
Could anyone please give me some advice on how to put the resulting values from the JDBC request into some meaningful graph?
In the graph I would like to have the number of unprocessed files on one axis and the timestamp of each result on the other.
Many thanks.
You can use the Sample Variables property: add the following line to the user.properties file (which lives in the "bin" folder of your JMeter installation):
sample_variables=your_variable_from_JDBC_request_with_number_of_files
This way, the next time you launch JMeter in command-line non-GUI mode, you will see an extra column in the .jtl results file holding the number of unprocessed files for each and every Sample Result.
Going forward you can create a custom chart of the number of unprocessed files over time; see the Generating customs graphs over time chapter for more details. You will need to change at least the jmeter.reportgenerator.graph.custom_testGraph.property.set_Sample_Variable_Name property value to match the variable you set in sample_variables, and amend the chart and axis titles according to your needs.
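As a rough sketch (assuming the Sample Variable is called your_variable_from_JDBC_request_with_number_of_files as above; the titles and granularity here are only illustrative), the custom graph block in user.properties could look something like this:

jmeter.reportgenerator.graph.custom_testGraph.classname=org.apache.jmeter.report.processor.graph.impl.CustomGraphConsumer
jmeter.reportgenerator.graph.custom_testGraph.title=Unprocessed files over time
jmeter.reportgenerator.graph.custom_testGraph.property.set_Y_Axis=Number of unprocessed files
jmeter.reportgenerator.graph.custom_testGraph.property.set_X_Axis=Over Time
jmeter.reportgenerator.graph.custom_testGraph.property.set_granularity=60000
jmeter.reportgenerator.graph.custom_testGraph.property.set_Sample_Variable_Name=your_variable_from_JDBC_request_with_number_of_files
jmeter.reportgenerator.graph.custom_testGraph.property.set_Content_Message=Unprocessed files:

Check the chapter linked above for the exact property names supported by your JMeter version.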

Filter Calculated Model

I am trying to use a calculated model in Google App Maker to store data from an external API. I am able to load the data into a model and render it in a table. But now I want to filter the data in the table without calling the external API again.
For example, if I use the Weather (Call REST services) sample code, after the weather is rendered on screen I want to click a button to show only the days where the temperature is below 32F. How would I do that without calling the external API again to reload the model?
After thinking out loud, I believe I can answer my own question. I hope this helps others; please correct me if I am wrong.
What I am asking isn't possible with Calculated Models. The way a Calculated Model works is that the server script (query script) calls the external database, receives the data, and formats (cleans up) the data according to a Datasource. The Datasource cleans up the data to fit within the Model before returning the data as Records to the Client. Once the data is on the Client, everything on the server is forgotten.
So to filter the data, there are two options I can think of:
Create different Datasources, Models, or Parameters that call the external database every time and return the filtered data to the Client.
Or use the JavaScript filter() method on the records already loaded to the Client, plus some extra code and UI to show the filtered results. The filter() method doesn't modify the records on the Client, but the results can be shown in another table (see the sketch below).
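A minimal client-script sketch of the second option, assuming a client-side datasource called Weather with a temperature field (both names are assumptions):

// Client script: filter the records already loaded on the client.
// 'Weather' (datasource) and 'temperature' (field) are assumed names.
var allDays = app.datasources.Weather.items;
var freezingDays = allDays.filter(function(day) {
  return day.temperature < 32;   // keep only days below 32F
});
// freezingDays is a new array; the datasource items are untouched.
// Bind it to a second table or list widget to display the filtered results.

Since filter() returns a new array, the original records stay available for the other views.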

Using R with IOT data logged using "Hysteresis Logging" (only log differences)

We want to perform data analysis on IoT data stored in our SQL Server database. The data itself is generated by IoT devices, and some of them use hysteresis-based logging for data compression, which means a value is only logged when the data for that particular property has changed.
As an example, here's how it looks inside the database:
The Float and Timestamp columns are the values we're actually interested in; the rest is metadata. AssetTypePropertyId is linked to the name of a certain property, which describes what the value is actually about.
We can reshape this data into a 2D matrix, which already makes it more usable. However, since the data is compressed with hysteresis logging, we need to 'recreate' the missing values.
To give an example, we want to go from this 2D dataset:
to a set that has all the gaps filled in:
This is generated under the assumption that the previous value remains valid as long as no new value has been logged for it.
My question: How can this transformation be done in R?
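A sketch of one way to do this with dplyr/tidyr, assuming the raw table has Timestamp, AssetTypePropertyId and Float columns (names taken from the description above) and has already been read into a data frame called readings:

library(dplyr)
library(tidyr)

filled <- readings %>%
  # one row per timestamp, one column per property
  pivot_wider(id_cols = Timestamp,
              names_from = AssetTypePropertyId,
              values_from = Float) %>%
  arrange(Timestamp) %>%
  # carry the last logged value forward until a new one appears
  fill(everything(), .direction = "down")

zoo::na.locf() is an alternative to tidyr::fill() if you prefer working with the wide matrix directly.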

Custom SSIS data flow component with multiple outputs in asp.net

I am building a custom SSIS data flow component which will provide three output flows based on some rule validation. Please help me understand what exactly I need to implement in the ProcessInput method to redirect data to these three outputs based on some logic applied to the input.
If I understand correctly, I think you want to route each of the 3 outputs from the custom component in your data flow to different destinations based on conditions. For that, you could use a Conditional Split component.
You can split the input into two or more row streams by giving the Conditional Split various conditions that look at column values, etc., as in the example below.
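For illustration, with three hypothetical rules on an input column (the column name and thresholds are made up), the Conditional Split outputs could be configured like this:

Output name     Condition
HighValue       [Amount] > 1000
MediumValue     [Amount] > 100 && [Amount] <= 1000
LowValue        (default output for everything else)

Each named output can then be connected to its own destination in the data flow.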
If my understanding is incorrect, then please update your question with more details on your situation.

Dynamically assign Google Analytics data to dataframes and name variables accordingly

I'm using rga to make API requests from Google Analytics. At the moment, I'm manually assigning the returned data to variables (e.g. website1.topPages <- ga$getData(id, start.date=...)).
While this works, I'm generating quite a few tables and would love to cut down on potential user error in naming variables. As such, I want to pass website1 and the desired name of the dataset, make the API request, and end up with the data frame assigned to website1.<desiredName>. Any ideas or recommendations?
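One possible sketch (not tested against rga; the helper name is made up): wrap the call in a small function that builds the variable name and uses assign():

# Hypothetical helper: fetch a GA table and assign it to <site>.<name>
# in the global environment. '...' is passed straight to ga$getData().
get_ga_table <- function(site, name, id, ...) {
  df <- ga$getData(id, ...)
  assign(paste(site, name, sep = "."), df, envir = .GlobalEnv)
  invisible(df)
}

# Creates a data frame called website1.topPages
get_ga_table("website1", "topPages", id, start.date = "2015-01-01")

Depending on how the tables are used later, collecting them in a named list (e.g. tables[[paste(site, name, sep = ".")]] <- df) may be easier to manage than many free-standing variables.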
