JMeter async process with While Controller - asynchronous

In order to verify whether my concurrent request is persisted asynchronously in the datastore, I have the test plan below, which makes requests to the database from a PreProcessor inside a While Controller. Is this the right test plan? I am not able to see the PreProcessor running in the While Controller, and the logs are not displayed.
A sample PreProcessor is below:
MFG mfgRepo = new MFG();
long crashTime = Long.parseLong(vars.get('p1'));
String deviceId1 = vars.get('deviceID');
// use vars.get() instead of ${deviceID} inside JSR223 scripts
log.info('device {}', deviceId1);
log.info('the crash time is {}', crashTime);
List<Map<String, Object>> dbItem = mfgRepo.getItemFromDB('TEST', deviceId1, crashTime);
String info = dbItem.get(0).get('info');
How do I validate in the While Controller that the data is persisted in the database?

A PreProcessor by itself will not be executed; you need to have a Sampler inside the While Controller in order to enter the loop and execute the pre/post processors, assertions and timers. So:
Either switch to a JSR223 Sampler instead of the JSR223 PreProcessor (a sketch follows below),
Or add a Flow Control Action sampler as a child of the While Controller. It doesn't generate a sample result, so you won't see it in the Listeners or in the .jtl results file, but it will trigger the execution of the PreProcessor.
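For the first option, the polling logic could look roughly like this. This is a minimal sketch assuming the MFG repository class from the question is on JMeter's classpath; the persisted flag name and the one-second back-off are illustrative choices:

// JSR223 Sampler placed inside the While Controller
// While Controller condition: ${__jexl3("${persisted}" != "true",)}
long crashTime = Long.parseLong(vars.get('p1'))
String deviceId = vars.get('deviceID')
def mfgRepo = new MFG() // repository class from the question
def dbItem = mfgRepo.getItemFromDB('TEST', deviceId, crashTime)
// flip the flag once the row shows up in the datastore, ending the loop
vars.put('persisted', dbItem ? 'true' : 'false')
sleep(1000) // back off between polls so the database is not hammered

On the first iteration persisted is undefined, so the condition evaluates to true and the loop is entered; once the row is found the flag flips and the While Controller exits.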

Related

if-else conditions based on expect statement results

I'm working on a TestRail integration; I want to update TestRail after each test passes or fails. Let's say I have a test like the one below:
it('rightpanel should exist', () => {
  // some logic or preparatory work
  expect(rightpanel.isLoaded()).to.be.true;
  // here I want to know whether the above expect statement failed or passed;
  // based on that, I want to update TestRail by making a web service call
});
We are using WDIO. Is there a better way to integrate with TestRail? No one replies on their community forum, so I'm asking here.
With Mocha you can use a reporter to manage the results of your tests.
The default reporter is Spec, but if you use json-stream you can attach another process to this stream to send test reports to TestRail during execution.
Otherwise, if you don't need to send them in real time, you can use the json reporter and parse the results in a single call.
You can also check GitHub for existing reporters that connect directly to TestRail:
https://github.com/CommodoreBeard/mocha-testrail-advanced-reporter
https://github.com/awaragi/mocha-testrail-reporter
complete list
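If you would rather hook in at the WDIO level than at Mocha's reporter layer, WDIO's afterTest hook in wdio.conf.js fires after every test with its pass/fail status. A rough sketch (the hook signature differs between WebdriverIO versions, so check your version's docs; updateTestRail is a hypothetical helper wrapping TestRail's add_result_for_case endpoint):

// wdio.conf.js
exports.config = {
  // ...the rest of your configuration...
  afterTest: async function (test, context, { error, passed }) {
    // TestRail's API uses status_id 1 for "passed" and 5 for "failed"
    const statusId = passed ? 1 : 5;
    // updateTestRail is a hypothetical helper that POSTs to
    // index.php?/api/v2/add_result_for_case/{run_id}/{case_id}
    await updateTestRail(test.title, statusId, error && error.message);
  },
};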

How to make a command wait until all events triggered by it have completed successfully

I have come across a requirement where I want Axon to wait until all events fired on the event bus for a particular command have finished executing. Let me briefly describe the scenario:
I have a RestController which fires the command below to create an application entity:
@RestController
class MyController {
    @PostMapping("/create")
    @ResponseBody
    public String create() {
        // org.axonframework.commandhandling.gateway.CommandGateway
        commandGateway.sendAndWait(new CreateApplicationCommand());
        System.out.println("in myController:: after sending CreateApplicationCommand");
        return "created";
    }
}
This command is handled in the aggregate. The aggregate class is annotated with org.axonframework.spring.stereotype.Aggregate:
@Aggregate
class MyAggregate {

    @CommandHandler // org.axonframework.commandhandling.CommandHandler
    private MyAggregate(CreateApplicationCommand command) {
        // org.axonframework.modelling.command.AggregateLifecycle
        AggregateLifecycle.apply(new AppCreatedEvent());
        System.out.println("in MyAggregate:: after firing AppCreatedEvent");
    }

    @EventSourcingHandler // org.axonframework.eventsourcing.EventSourcingHandler
    private void on(AppCreatedEvent appCreatedEvent) {
        // Updates the state of the aggregate
        this.id = appCreatedEvent.getId();
        this.name = appCreatedEvent.getName();
        System.out.println("in MyAggregate:: after updating state");
    }
}
The AppCreatedEvent is handled in two places:
In the Aggregate itself, as we can see above.
In the projection class as below:
@EventHandler // org.axonframework.eventhandling.EventHandler
void on(AppCreatedEvent appCreatedEvent) {
    // persists into database
    System.out.println("in Projection:: after saving into database");
}
The problem here is that after the event is handled in the first place (i.e., inside the aggregate), the call returns to myController.
That is, the output here is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in myController:: after sending CreateApplicationCommand
in Projection:: after saving into database
The output which I want is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in Projection:: after saving into database
in myController:: after sending CreateApplicationCommand
In simple words, I want Axon to wait until all events triggered by a particular command have been handled completely, and only then return to the class which dispatched the command.
After searching on the forum, I learned that all sendAndWait does is wait until the handling of the command and the publication of the events is finalized. I then tried the Reactor Extension as well, using the call below, but got the same results: org.axonframework.extensions.reactor.commandhandling.gateway.ReactorCommandGateway.send(new CreateApplicationCommand()).block();
Can someone please help me out?
Thanks in advance.
What would be best in your situation, @rohit, is to embrace the fact that you are using an eventually consistent solution here. Thus, Command Handling is entirely separate from Event Handling, making the Query Models you create eventually consistent with the Command Model (your aggregates). Therefore, you wouldn't necessarily wait for the events exactly but react when the Query Model is present.
Embracing this comes down to building your application such that "yeah, I know my response might not be up to date now, but it might be somewhere in the near future." It is thus recommended to subscribe to the result you are interested in after or before the fact you have dispatched a command.
For example, you could see this as using WebSockets with the STOMP protocol, or you could tap into Project Reactor and use the Flux result type to receive the results as they go.
From your description, I assume you or your business have decided that the UI component should react in the (old-fashioned) synchronous way. There's nothing wrong with that, but it will bite your *ss when it comes to using something inherently eventually consistent like CQRS. You can, however, spoof the fact you are synchronous in your front-end, if you will.
To achieve this, I would recommend using Axon's Subscription Query to subscribe to the query model you know will be updated by the command you will send.
In pseudo-code, that would look a little bit like this:
public Result mySynchronousCall(String identifier) {
    // Subscribe to the updates to come
    SubscriptionQueryResult<Result> result = queryGateway.subscriptionQuery(...);
    // Issue the command to update
    commandGateway.send(...);
    // Wait on the Flux for the first update, then close the subscription
    return result.updates()
                 .next()
                 .map(...)
                 .timeout(...)
                 .doFinally(it -> result.close())
                 .block();
}
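Filling in the ellipses, a concrete version against Axon 4's API might look roughly like this (FindApplicationQuery and ApplicationView are hypothetical query and projection types; queryGateway and commandGateway are the injected gateways):

public ApplicationView createAndWaitForProjection(String id) {
    // subscribe BEFORE dispatching the command so no update can slip past
    SubscriptionQueryResult<ApplicationView, ApplicationView> result =
            queryGateway.subscriptionQuery(new FindApplicationQuery(id),
                    ApplicationView.class,   // initial result type
                    ApplicationView.class);  // update type
    try {
        commandGateway.send(new CreateApplicationCommand(id));
        // block until the projection publishes its update, with a safety timeout
        return result.updates()
                .next()
                .timeout(Duration.ofSeconds(5))
                .block();
    } finally {
        result.close();
    }
}

For updates() to ever fire, the projection's @EventHandler must also publish the new state through Axon's QueryUpdateEmitter after saving it.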
You could see this being done in this sample WebFluxRest class, by the way.
Note that you are essentially closing the door to the front-end to tap into the asynchronous goodness by doing this. It'll work and allow you to wait for the result to be there as soon as it is there, but you'll lose some flexibility.

Chaining Handlers with MediatR

We are using MediatR to implement a "pipeline" for our .NET Core Web API backend, trying to follow the CQRS principle.
I can't decide if I should try to implement an IPipelineBehavior chain, or if it is better to construct a new request and call MediatR.Send from within my handler method for the original request.
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is a code smell to call MediatR.Send(TRequest2) within a handler for a TRequest1.
If those are the options you're set on going with, I say Option 2. Sending requests from inside existing MediatR handlers can be seen as a code smell: you're hiding side effects, breaking the Single Responsibility Principle, and coupling your requests together, and you should avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand, you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you can have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing MediatR patterns, as sketched below.
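A rough sketch of that preprocessor idea using MediatR's IRequestPreProcessor<TRequest> (IUsageRepository and SomethingInUseException are hypothetical stand-ins for your own types):

using System.Threading;
using System.Threading.Tasks;
using MediatR.Pipeline;

public class DeleteRequestPreProcessor : IRequestPreProcessor<DeleteRequest>
{
    private readonly IUsageRepository _repo;

    public DeleteRequestPreProcessor(IUsageRepository repo) => _repo = repo;

    public async Task Process(DeleteRequest request, CancellationToken cancellationToken)
    {
        // runs before the DeleteRequest handler ever executes
        if (await _repo.IsBeingUsedAsync(request.Id, cancellationToken))
            throw new SomethingInUseException(request.Id);

        await _repo.MarkDeletedAsync(request.Id, cancellationToken);
    }
}

MediatR picks preprocessors up through its RequestPreProcessorBehavior when the pipeline is registered, so the handler itself stays focused on the delete.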
Is there any need to break the tasks into multiple handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request, CancellationToken cancellationToken)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }
    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}

Customizing failure reporting in TestNG

Background:
I have created a basic playground project that contains:
A testLogin.java file that contains:
a. TestNG package imports (org.testng.*)
b. Selenium WebDriver imports (org.openqa.selenium.*)
c. 5 test methods with TestNG annotations:
#Test(groups={"init"})
public void openURL()
Contains webdriver code to initiate the webdriver and open a chrome >instance with a given url.
@Test(dependsOnGroups={"init"})
public void testLogin()
Contains WebDriver code to:
1. Locate the username and password text-input elements and enter the username and password from a properties file.
2. Locate the "log in" button and click it to log in.
3. Handle a forced-login scenario if someone else has already logged in with the same credentials.
@Test(dependsOnMethods={"testLogin"})
public void testPatientsScheduleList()
Contains WebDriver code to check if any patients have been scheduled. If yes, it fetches their names and displays them in the console.
@Test()
public void testLogout()
Contains WebDriver code to locate the logout button and click it to log out of the app.
@AfterTest()
public void closeConnection()
Contains WebDriver code to dispose of the WebDriver object and close the Chrome instance.
Currently I simply run the test script, wrapped as TestNG methods, from Ant, and a testng-xslt report gets generated.
Issues:
1. Performing validations against every line of WebDriver script code in a test method:
I know:
1. Selenium WebDriver's API methods (findElement() and others) throw exceptions as a result of the default validation they perform. These exceptions show up in the generated report when a test method fails.
2. TestNG provides an Assert class with many assertion methods, but I have not yet figured out how I can use them to perform validations/assertions against every line of WebDriver script code. I tried adding assertion methods after every line of WebDriver code; what appeared in the output was just an AssertionError exception for a test method.
2. Failing a certain test method which gets passed due to a try..catch block:
If I use a try..catch block around a set of two or more WebDriver script steps, and the test fails in any of those steps, the try..catch block handles the exception, so the execution report shows the test method as "passed" when it actually failed.
3. Creating a custom report which shows the desired test execution results and not stack traces!
When I execute the above script, a testng-xslt report gets generated that contains the pass/fail status of each test method in the test suite (configured in testng.xml).
The test results only tell me whether a test method passed or failed and provide an exception's stack trace, which really doesn't give any helpful information.
I don't want such an abstract level of test execution results, but something like:
Name | Started | Duration | What-really-went-wrong (Failure)
Can anyone please suggest or give some pointers regarding the following:
1. How can I perform validation/assertion against every line of WebDriver script code in a test method without writing asserts after every script line?
2. How can I fail a certain test method which gets passed due to a try..catch block?
3. How can I customize the failure reporting so that I can send a failure result like "Expected element 'button' with id 'bnt12' but did not find the element at step 3 of the test method" to TestNG's reporting utility?
4. In the testng-xslt report I want to display where exactly in the test method a failure occurred. For example, if my test method fails because of a webelement = driver.findElement() at line 3 of the test method, I want to display this issue in the "What-really-went-wrong" column of the report. I have read about the TestNG listeners TestListenerAdapter / ITestListener / IReporter, but after checking TestNG's Javadocs I still don't understand how to use them.
5. Also, I have to implement the PageObject pattern once I am done customizing the test report. Where would be the right place to perform assertions in a page-object pattern? Should assertions be written in the page object methods or in the higher-level test methods that use the PageObject classes?
P.S.: I am completely new to the TestNG framework and WebDriver scripting. Please bear with any technical mistakes or observational errors in this post.
How can I perform validation/assertion against every line of code of webdriver script in a test-method without writing asserts after
every script line?
I don't think so. It is the assertions that do the comparison, so you need them.
How can I fail a certain test method which gets passed due to try catch block?
A try..catch block will mask the failure: an assertion failure throws an AssertionError, and WebDriver steps throw runtime exceptions, so if your catch block swallows them (for example catch(Exception e) around WebDriver calls), the failure won't escape the catch block and TestNG marks the method as passed. Rethrow from the catch block, or fail the test explicitly, as in the sketch below.
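For example (driver is assumed to be the test class's WebDriver field; the element id comes from your own example):

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.testng.Assert;
import org.testng.annotations.Test;

@Test
public void testLogout() {
    try {
        driver.findElement(By.id("bnt12")).click();
    } catch (NoSuchElementException e) {
        // without this, the swallowed exception leaves the method "passed";
        // Assert.fail rethrows it as an AssertionError carrying our message
        Assert.fail("Expected element 'button' with id 'bnt12' but did not find the element", e);
    }
}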
How can I customize the failure reporting so that I can send a failure result like "Expected element "button" with id "bnt12" but did
not find the element at step 3 of test-method" to testng's reporting
utility?
You need to use test listeners. TestNG's TestListenerAdapter is a good start; a sketch is below.
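A minimal sketch of such a listener (register it via <listeners> in testng.xml or with @Listeners on the test class; the attribute name is an arbitrary choice):

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class FailureDetailListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        Throwable cause = result.getThrowable();
        String detail = String.format("%s failed after %d ms: %s",
                result.getName(),
                result.getEndMillis() - result.getStartMillis(),
                cause != null ? cause.getMessage() : "no cause recorded");
        // stash a readable one-liner on the result; a custom IReporter or
        // the xslt template can read it back via result.getAttribute(...)
        result.setAttribute("whatWentWrong", detail);
    }
}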
Also, I have to implement PageObject pattern once I am done with
customizing the test report. Where would be the right place to perform
assertions in a page-object pattern? Should assertions be written in
the page object test methods or in the higher level test methods that
will use the PageObject classes?
My personal choice is to put assertions in the test methods, since that is where we are doing the actual testing. Page objects contain the scripts for navigating inside the web page; a sketch is below.
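For illustration, with hypothetical element ids and a hypothetical SchedulePage class, the page object only navigates and exposes state, and the assertion stays in the test method:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) { this.driver = driver; }

    public boolean isLoaded() {
        // expose state; do not assert inside the page object
        return !driver.findElements(By.id("login")).isEmpty();
    }

    public SchedulePage logIn(String user, String pass) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(pass);
        driver.findElement(By.id("login")).click();
        return new SchedulePage(driver);
    }
}

And in the test class:

@Test(dependsOnGroups={"init"})
public void testLogin() {
    LoginPage loginPage = new LoginPage(driver);
    Assert.assertTrue(loginPage.isLoaded(), "Login page did not load");
    loginPage.logIn(username, password);
}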
How can I customize the failure reporting so that I can send a failure
result like "Expected element "button" with id "bnt12" but did not
find the element at step 3 of test-method" to testng's reporting
utility?
You can use ExtentReports together with a TestNG listener class (in that class, use the onTestFailure method to customize your failure report).

Cancel a long task that's managed by a web service

I have a web service with three methods: StartReport(...), IsReportFinished(...) and GetReport(...), each with various parameters. I also have a client application (Silverlight) which will first call StartReport to trigger the generation of the report, then it will poll the server with IsReportFinished to see if it's done and once done, it calls GetReport to get the report. Very simple...
StartReport is simple. It first generates a unique ID, then it uses System.Threading.Tasks.Task.Factory.StartNew() to create a new task that generates the report, and finally returns the unique ID while the task continues to run in the background. IsReportFinished just checks the system for the unique ID to see if the report is done. Once done, the unique ID can be used to retrieve the report.
But I need a way to cancel the task, which is implemented by adding a new parameter to IsReportFinished. When called with cancel==true, it will again check if the report is done. If the report is finished, there's nothing to cancel; otherwise, it needs to cancel the task.
How do I cancel this task?
You could use a cancellation token to cancel TPL tasks. And here's another example.
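A minimal sketch of that approach, keyed by the unique report ID (ReportChunks and Process are hypothetical stand-ins for the real report-generation steps):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// one CancellationTokenSource per running report, keyed by the unique ID
static readonly ConcurrentDictionary<Guid, CancellationTokenSource> Running =
    new ConcurrentDictionary<Guid, CancellationTokenSource>();

public Guid StartReport()
{
    var id = Guid.NewGuid();
    var cts = new CancellationTokenSource();
    Running[id] = cts;
    Task.Factory.StartNew(() => GenerateReport(id, cts.Token), cts.Token);
    return id;
}

void GenerateReport(Guid id, CancellationToken token)
{
    foreach (var chunk in ReportChunks()) // hypothetical report steps
    {
        // cooperative cancellation: the task has to check the token itself
        token.ThrowIfCancellationRequested();
        Process(chunk);
    }
}

public void CancelReport(Guid id)
{
    if (Running.TryRemove(id, out var cts))
        cts.Cancel(); // the next ThrowIfCancellationRequested() ends the task
}

IsReportFinished(id, cancel: true) would then call CancelReport(id) when the report isn't done yet.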
