@Override
public void onTestSuccess(ITestResult tr) {
}
I added the onTestSuccess method, but it takes a screenshot only when the whole test passes, not when an individual assertion passes.
What can I do?
Related
I'm using Java + TestNG + Allure. I need to get all test failures into the Allure report, not only the first failure of a test but all of them, and the test should run from the beginning to the end despite failed steps.
To report the failures this way we have to build a small layer on top of the Allure API. The goal is to report any sub-step as a failure, keep executing the remaining steps, and then mark the main test as failed. For this we can use soft assertions. I created a class called AllureLogger with five methods:
1) startTest(), 2) endTest(), 3) markStepAsPassed(driver, message), 4) markStepAsFailed(driver, errorMessage), 5) logStep(description).
import java.util.UUID;

import org.apache.log4j.Logger;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.asserts.SoftAssert;

import io.qameta.allure.Allure;
import io.qameta.allure.model.Status;
import io.qameta.allure.model.StepResult;

public class AllureLogger {

    public static Logger log = Logger.getLogger("devpinoylog");

    private static StepResult resultFail;
    private static StepResult resultPass;
    private static String uuid;
    private static SoftAssert softAssertion;

    // Call at the beginning of every test method.
    public static void startTest() {
        softAssertion = new SoftAssert();
    }

    // Prepares a new Allure step; call at the beginning of every sub-step.
    public static void logStep(String description) {
        log.info(description);
        uuid = UUID.randomUUID().toString();
        resultFail = new StepResult().withName(description).withStatus(Status.FAILED);
        resultPass = new StepResult().withName(description).withStatus(Status.PASSED);
    }

    // Reports the current step as failed, attaches a screenshot and records a soft-assertion failure.
    public static void markStepAsFailed(WebDriver driver, String errorMessage) {
        log.fatal(errorMessage);
        Allure.getLifecycle().startStep(uuid, resultFail);
        Allure.getLifecycle().addAttachment(errorMessage, "image", "JPEG",
                ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES));
        Allure.getLifecycle().stopStep(uuid);
        softAssertion.fail(errorMessage);
    }

    // Reports the current step as passed.
    public static void markStepAsPassed(WebDriver driver, String message) {
        log.info(message);
        Allure.getLifecycle().startStep(uuid, resultPass);
        Allure.getLifecycle().stopStep(uuid);
    }

    // Call at the end of every test method; assertAll() fails the test if any step failed,
    // then a fresh SoftAssert is prepared for the next test.
    public static void endTest() {
        softAssertion.assertAll();
        softAssertion = new SoftAssert();
    }
}
In the class above we use the Allure lifecycle methods and add soft assertions on top of them.
Every time we start a test method in the test class we call startTest(), and at the end of the test method we call endTest(). Inside the test method, if we have sub-steps, we use a try/catch block to mark each sub-step as passed or failed. Please check the test method below as an example:
@Test(description = "Login to application and navigate to Applications tab")
public void testLogin() {
    AllureLogger.startTest();
    userLogin();
    navigatetoapplicationsTab();
    AllureLogger.endTest();
}
The test method above logs in to an application and then navigates to the Applications tab. Inside it there are two methods that will be reported as sub-steps: 1) userLogin(), which logs in to the application, and 2) navigatetoapplicationsTab(), which navigates to the Applications tab. If any sub-step fails, that sub-step and the main test are marked as failed and the remaining steps are still executed.
The bodies of the two methods used in the test method are defined as follows:
public void userLogin() {
    AllureLogger.logStep("Login to the application");
    try {
        /*
         * Write the logic here
         */
        AllureLogger.markStepAsPassed(driver, "Login successful");
    } catch (Exception e) {
        AllureLogger.markStepAsFailed(driver, "Login not successful");
    }
}
public void navigatetoapplicationsTab() {
    AllureLogger.logStep("Navigate to application Tab");
    try {
        /*
         * Write the logic here
         */
        AllureLogger.markStepAsPassed(driver, "Navigate to application Tab successful");
    } catch (Exception e) {
        e.printStackTrace();
        AllureLogger.markStepAsFailed(driver, "Navigate to application Tab failed");
    }
}
Every exception thrown inside a sub-step is caught in the catch block and reported in the Allure report, and the soft assertion lets all the remaining steps execute while the test is still marked as failed at the end.
Attached is a screenshot of an Allure report generated with this technique: the main step is marked as failed and the remaining test cases were still executed. The report attached here is not from the example above; it is just a sample of how the report would look.
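One caveat worth adding, since the original question is about assertions: a failed TestNG assertion throws an AssertionError, which a catch (Exception e) block will not catch. If a sub-step wraps a hard assertion, catch AssertionError as well so the failure (and its screenshot) is reported and the remaining steps still run. A minimal sketch; the page-title check and the driver field are hypothetical:

public void verifyPageTitle() {
    AllureLogger.logStep("Verify the page title");
    try {
        // hypothetical hard assertion inside the sub-step
        Assert.assertEquals(driver.getTitle(), "Dashboard");
        AllureLogger.markStepAsPassed(driver, "Page title is correct");
    } catch (AssertionError | Exception e) {
        // AssertionError is caught explicitly so a failed assertion is reported
        // with a screenshot and the test continues with the next step
        AllureLogger.markStepAsFailed(driver, "Page title check failed: " + e.getMessage());
    }
}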
Given the following unit test, which uses the Vertx Unit testing framework:
@RunWith(VertxUnitRunner.class)
public class VertxUnitTest {

    private Vertx vertx;

    @Rule
    public RunTestOnContext rule = new RunTestOnContext(new VertxOptions().setClustered(false)
            .setClusterManager(new HazelcastClusterManager()).setMaxEventLoopExecuteTime(2000000000000L)
            .setMaxWorkerExecuteTime(60000000000000L).setBlockedThreadCheckInterval(1000000)
            .setEventBusOptions(new EventBusOptions().setClustered(false).setIdleTimeout(0)));

    @Before
    public void setup() throws Exception {
        io.vertx.core.Vertx v = rule.vertx();
        vertx = Vertx.newInstance(v);
    }

    private class MyVerticle extends AbstractVerticle {}

    @Test
    public void runFlow_correctMessage_stepsCalledInCorrectOrder(TestContext context) {
        Async async = context.async();
        vertx.getDelegate().deployVerticle(new MyVerticle(), new DeploymentOptions().setWorker(true), c -> {
            c.cause();
            vertx.eventBus().<Object>send("", new JsonObject(), new DeliveryOptions(), rpl -> {
                async.complete();
                fail();
            });
        });
    }
}
The call to fail() throws an exception to the console, but it does not actually fail the test itself, which finishes successfully and is green.
The same is true when working with Mockito: I can verify the behavior of the verticle and its dependencies using mocks, but even when the Mockito assertions fail, the test itself still passes. Calling fail on the Vert.x TestContext object, context.fail(), will not fail the test either.
The core issue is this: any call to fail() after async.complete() will not fail the test; only the console shows the error. But without the call to async.complete(), the code in the verticle (invoked when consuming from the event bus) will not have run before the test assertions are made.
And without the call to async.complete(), it appears the test never completes.
What is the correct approach to this?
Thanks
The correct approach is to call the TestContext.fail() method on the failure branches, before the async is completed, like so:
@Test
public void runFlow_correctMessage_stepsCalledInCorrectOrder(TestContext context) {
    Async async = context.async();
    vertx.getDelegate().deployVerticle(new MyVerticle(), new DeploymentOptions().setWorker(true), c -> {
        if (c.succeeded()) {
            vertx.eventBus().<Object>send("", new JsonObject(), new DeliveryOptions(), rpl -> {
                if (rpl.succeeded()) {
                    // make assertions based on reply contents, and then...
                    async.complete();
                } else {
                    context.fail(rpl.cause());
                }
            });
        } else {
            context.fail(c.cause());
        }
    });
}
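As a side note, if you prefer not to write the failure branch for the deployment by hand, TestContext also provides asyncAssertSuccess(), which returns a handler that fails the test automatically when the async result fails. A sketch of the same test using it for the deployment step (the empty event-bus address and MyVerticle come from the question):

@Test
public void runFlow_correctMessage_stepsCalledInCorrectOrder(TestContext context) {
    Async async = context.async();
    vertx.getDelegate().deployVerticle(new MyVerticle(), new DeploymentOptions().setWorker(true),
            context.asyncAssertSuccess(deploymentId -> {
                vertx.eventBus().<Object>send("", new JsonObject(), new DeliveryOptions(), rpl -> {
                    if (rpl.succeeded()) {
                        // assertions on rpl.result().body() go here
                        async.complete();
                    } else {
                        context.fail(rpl.cause());
                    }
                });
            }));
}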
I create an Observable from a long running operation + callback like this:
public Observable<API> login() {
    return Observable.create(new Observable.OnSubscribe<API>() {
        @Override
        public void call(final Subscriber<? super API> subscriber) {
            API.login(new SimpleLoginListener() {
                @Override
                public void onLoginSuccess(String token) {
                    subscriber.onNext(API.from(token));
                    subscriber.onCompleted();
                }

                @Override
                public void onLoginFailed(String reason) {
                    subscriber.onNext(API.error());
                    subscriber.onCompleted();
                }
            });
        }
    });
}
A successfully logged-in API is the precondition for multiple other operations like api.getX() and api.getY(), so I thought I could chain these operations with RxJava and flatMap, roughly (simplified) login().getX() or login().getY().
My biggest problem now is that I don't have control over when login(callback) is executed, yet I want to reuse the login result for all calls.
This means the wrapped login(callback) call should be executed only once, and its result should then be used for all following calls.
It seems the result would be similar to a queue that aggregates subscribers and then shares the result of the first execution.
What is the best way to achieve this? Am I missing a simpler alternative?
I tried the code from this question and experimented with cache(), share(), publish(), refCount(), etc., but the wrapped function is still called three times for all of the mentioned operators when I do this:
apiWrapper.getX();
apiWrapper.getX();
apiWrapper.getY();
Is there something like autoConnect(time window) that aggregates multiple successive subscribers?
Applying cache() should make sure login is only called once.
public Observable<API> login() {
    return Observable.create(s -> {
        API.login(new SimpleLoginListener() {
            @Override
            public void onLoginSuccess(String token) {
                s.setProducer(new SingleProducer<>(s, API.from(token)));
            }

            @Override
            public void onLoginFailed(String reason) {
                s.setProducer(new SingleProducer<>(s, API.error()));
            }
        });
    }).cache();
}
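With the cached observable, the getX()/getY() calls from the question can be chained with flatMap and will share a single login, as long as every chain subscribes to the same returned instance. A minimal usage sketch, assuming getX() and getY() return Observables (handleX/handleY are made-up handlers):

Observable<API> api = login();                          // created once, cached
api.flatMap(a -> a.getX()).subscribe(x -> handleX(x));
api.flatMap(a -> a.getX()).subscribe(x -> handleX(x));
api.flatMap(a -> a.getY()).subscribe(y -> handleY(y));  // still only one API.login() call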
If, for some reason you want to "clear" the cache, you can do the following trick:
AtomicReference<Observable<API>> loginCache = new AtomicReference<>(login());

public Observable<API> cachedLogin() {
    return Observable.defer(() -> loginCache.get());
}

public void clearLoginCache() {
    loginCache.set(login());
}
OK, I think I found one major problem in my approach:
Observable.create() is a factory method, so even if every single observable worked as intended, I was creating many of them. One way to avoid this mistake is to create a single instance:

if (instance == null) {
    instance = Observable.create(...);
}
return instance;
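A minimal sketch of that single-instance idea combined with the cache() operator from the answer above; createLoginObservable() is a made-up name for the Observable.create(...) factory shown earlier:

private Observable<API> loginInstance;

public synchronized Observable<API> login() {
    if (loginInstance == null) {
        // created lazily, exactly once; cache() replays the result to every subscriber
        loginInstance = createLoginObservable().cache();
    }
    return loginInstance;
}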
I would like to know if there is a function that fires when the user sets a value in a field, but not when the program sets the value.
So:
the user clicks on the field 'myField' and changes the value -> the method fires
in the program: myField.setValue(someValue); -> the method does not fire
The problem is loop detection. My logic is that I have 4 fields, I detect whether any of those fields changed, and then I fire a method that changes some values inside those fields:
@Override
protected void execChangedValue() throws ProcessingException {
    super.execChangedValue();
    valueInPriceBoxFieldsChange(this);
}

protected void valueInPriceBoxFieldsChange(AbstractValueField field) {
    // ... calculate some values in those fields ...
}
and I get:
!MESSAGE org.eclipse.scout.rt.client.ui.form.fields.AbstractValueField.setValue(AbstractValueField.java:338) Loop detection in...
So I know that execChangedValue() is not what I am looking for. Is there a similar method with the behavior described above?
Marko
Let me start by saying that loop detection is useful in 90% of cases.
The warning is displayed when you are inside execChangedValue() and you try to update the value of the same field.
If I understand your example correctly, you have this inside your field:
public class FirstField extends AbstractStringField {

    @Override
    protected void execChangedValue() throws ProcessingException {
        calculateNewValues();
    }
}
And the method:
public void calculateNewValues() {
    // some logic to compute the new values: value1, value2, value3
    getFirstField().setValue(value1);
    getSecondField().setValue(value2);
    getThirdField().setValue(value3);
}
At this point, be really sure this is what you need: when the user sets the value of FirstField, you change it programmatically to something else, which might be really confusing for the user.
If you are sure that you need to update the value, there is a way to set a value without triggering execChangedValue() on the field. I have proposed a new method, setValueWithoutValueChangeTrigger(T value), on the Eclipse Scout forum thread "Controlling if execChangedValue() is called or not".
On the forum you will find a snippet that you can add to your field (or to a common template such as AbstractMyAppStringField).
public class FirstField extends AbstractStringField {

    @Override
    protected void execChangedValue() throws ProcessingException {
        calculateNewValues();
    }

    public void setValueWithoutValueChangeTrigger(String value) {
        try {
            setValueChangeTriggerEnabled(false);
            setValue(value);
        } finally {
            setValueChangeTriggerEnabled(true);
        }
    }
}
And you will be able to use it:
public void calculateNewValues() {
    // some logic to compute the new values: value1, value2, value3
    getFirstField().setValueWithoutValueChangeTrigger(value1);
    getSecondField().setValueWithoutValueChangeTrigger(value2);
    getThirdField().setValueWithoutValueChangeTrigger(value3);
}
I hope this helps.
This is how my TestNG test looks:
public class orderTest {

    @Test
    public void meth1() throws InterruptedException {
        System.out.println("1");
        Reporter.log("1");
    }

    @Test
    public void meth2() throws InterruptedException {
        System.out.println("2");
        Reporter.log("2");
    }

    @Test
    public void meth3() throws InterruptedException {
        System.out.println("3");
        Reporter.log("3");
    }

    @Test
    public void meth4() throws InterruptedException {
        System.out.println("4");
        Reporter.log("4");
    }
}
When I run it in Eclipse, the console shows:
1
2
3
4
PASSED: meth1
PASSED: meth2
PASSED: meth3
PASSED: meth4
But when I open the TestNG report and click the Reporter output link, it shows:
Reporter output -
meth1
1
meth4
4
meth3
3
meth2
2
Why is the order not correct in the TestNG report? The order of execution is 1, 2, 3, 4, but the order of reporting is 1, 4, 3, 2.
The output report can also display results in execution order. To do so, your TestNG reporter should implement IReporter.
A good plugin that uses this interface is ReportNG.
You can override its generateReport method to display the suites in the HTML report in the order of their parent XML suites, like so:
public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectoryName) {
    ...
    Comparator<ISuite> suiteComparator = new TestSuiteComparator(xmlSuites);
    suites.sort(suiteComparator);
    ...
}
Where the TestSuiteComparator is as follows:
public class TestSuiteComparator implements Comparator<ISuite> {

    public List<String> xmlNames;

    public TestSuiteComparator(List<XmlSuite> parentXmlSuites) {
        // Collect the suite file names once, in the order they appear in the parent XML suites.
        xmlNames = new ArrayList<String>();
        for (XmlSuite parentXmlSuite : parentXmlSuites) {
            xmlNames.add(parentXmlSuite.getFileName());
            for (XmlSuite childXmlSuite : parentXmlSuite.getChildSuites()) {
                xmlNames.add(childXmlSuite.getFileName());
            }
        }
    }

    @Override
    public int compare(ISuite suite1, ISuite suite2) {
        String suite1Name = suite1.getXmlSuite().getFileName();
        String suite2Name = suite2.getXmlSuite().getFileName();
        return xmlNames.indexOf(suite1Name) - xmlNames.indexOf(suite2Name);
    }
}
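For completeness, the custom reporter still has to be registered with TestNG, for example programmatically as below (OrderedHtmlReporter is a made-up name for the IReporter implementation above; it can equally be declared as a listener in testng.xml):

TestNG testng = new TestNG();
testng.setTestSuites(Collections.singletonList("testng.xml")); // path to the suite file
testng.addListener(new OrderedHtmlReporter());                 // the reporter with the custom generateReport
testng.run();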
"If you want the classes and methods listed in this file to be run in an unpredictible order, set the preserve-order attribute to false."
That is what the TestNG documentation says, so apart from the misspelling (not saying that mine is perfect :), that's a feature.
But it seems to me that only the reporting order is unpredictable; the execution seems quite predictable.
The DTD says:
@attr preserve-order  If true, the classes in this tag will be run in the same order as found in the XML file.
so that matches what it actually does.
It's not wrong but maybe expected the other way round.
Seems to be a feature to make the reports more attractive :)