Karate - how to stop following test(s) when current test fails

I am using Karate and I have written a test scenario with multiple runs (using Scenario Outline).
Suppose I have defined variables in a table (the Examples section) containing five rows -> the scenario will run 5 times.
Suppose the first two runs passed and the third failed.
Is there a way to configure the behavior so that after a scenario failure, all following runs are not processed (i.e. skipped)?
I am not sure if something like that is supported.
Thank you!

No. This request is a surprise, because everybody seems to want the opposite behavior: https://stackoverflow.com/a/54108755/143475
You could write a custom loop and handle this the way you want, but I wouldn't recommend it.
If you are OK with doing some Java coding, this behavior can be implemented as an ExecutionHook: https://stackoverflow.com/a/59080128/143475
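To make the hook idea concrete, here is a minimal sketch, assuming Karate 1.0+, where ExecutionHook was replaced by RuntimeHook and returning false from beforeScenario() skips that scenario. The exact field and method names are from memory and should be verified against your version.

```java
import com.intuit.karate.RuntimeHook;
import com.intuit.karate.core.ScenarioRuntime;

// Sketch of a "fail fast" hook: once any scenario fails,
// every subsequent scenario is skipped.
public class FailFastHook implements RuntimeHook {

    private volatile boolean failed = false;

    @Override
    public boolean beforeScenario(ScenarioRuntime sr) {
        return !failed; // false tells Karate to skip this scenario
    }

    @Override
    public void afterScenario(ScenarioRuntime sr) {
        if (sr.result != null && sr.result.isFailed()) {
            failed = true;
        }
    }
}
```

You would register it on the Runner, e.g. Runner.path("classpath:features").hook(new FailFastHook()).parallel(1), keeping the thread count at 1 so that "following" scenarios have a well-defined order.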

Related

Customized JSON report for Karate framework [duplicate]

I want to have an option on the Cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right, which currently includes "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. When clicked, it would provide the same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for 6 months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great; that build is good. If not, I need to look into it further, as we appear to have some regression. With these scenarios that are expected to fail, and will continue to fail until they're resolved, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios, and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
- after the Runner completes, you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) An example of how to "post process" result data before rendering a report: RetryTest.java, and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports: in theory you can implement a new SuiteReports, and the Runner has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
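To illustrate the "massage the results" idea for this question, here is a rough sketch of counting failures after the Runner completes while ignoring scenarios tagged as known bugs. It assumes Karate 1.0+; getScenarioResults(), getTagsEffective() and contains() are from memory, and how valued tags like @bug=abc-12345 are matched should be verified against your version.

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.ScenarioResult;

public class BugAwareRunner {

    public static void main(String[] args) {
        Results results = Runner.path("classpath:features")
                .outputHtmlReport(true)
                .parallel(5);

        // count only failures NOT carrying the known-bug tag
        long unexpectedFailures = results.getScenarioResults()
                .filter(ScenarioResult::isFailed)
                .filter(sr -> !sr.getScenario().getTagsEffective().contains("bug"))
                .count();

        System.out.println("failures excluding known-bug scenarios: " + unexpectedFailures);
        if (unexpectedFailures > 0) {
            System.exit(1); // fail the build only for unexpected failures
        }
    }
}
```

This does not change the rendered HTML (that would need the SuiteReports route above), but it gives you the "is everything except the known bugs passing" signal directly in the build.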

Which way is better for test data preparation in Robot Framework?

I’m using Robot Framework and SeleniumLibrary to test a web application. Which way is better for test data preparation?
1. Writing test data directly into the test cases, where the test data act as user keyword arguments. This way is simple and needs no test data file, but some user keywords end up with quite a few arguments, and the test cases look strange to people not familiar with them.
2. Preparing a test data file per test case, then loading the test data file into variables at execution time. This way removes the user keyword arguments and makes it easier to build higher-level user keywords, but you can't directly tell where the variables used in the keywords come from, and you have to open and check the test data file when editing test data.
There is no best way in general; it will depend on the context (how many tests, how many keywords, how many arguments, etc.). Writing Robot tests is like writing code in any other language: you have to refactor it again and again as it grows.
Though in the specific case of Robot, I agree there is a tension between having short/readable keywords with few or no arguments (solution 1) and more detailed keywords with more arguments (solution 2). My strategy is usually to keep the most important/relevant arguments (say, 1 or 2) clearly provided in the test itself and take the other ones from data/lib files. This way you can see what the test is specifically doing without having to check other files.
The best approach depends on the volume and variety of data. If the expectation is huge amounts of data to be crunched, a document DB such as MongoDB is extremely good; in all other cases, Excel should be good enough as well.

Spring Batch to generate steps dynamically corresponding to a job through the program

I have a requirement to create multiple steps for a single job, based on some condition parameters, through a standalone Java program. Can anyone share a code snippet corresponding to that?
Thanks for the response
Create the whole job with all the steps you may need.
Add a JobExecutionDecider to the job's flow to skip or repeat steps.
This should resolve your problem.
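For reference, a minimal sketch of the decider approach, assuming Spring Batch 4.x, hypothetical step names, and a hypothetical "runExtraStep" job parameter as the condition:

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;

public class ConditionalJobConfig {

    // decides at runtime whether the optional step should run
    static class OptionalStepDecider implements JobExecutionDecider {
        @Override
        public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
            String flag = jobExecution.getJobParameters().getString("runExtraStep");
            return new FlowExecutionStatus("true".equals(flag) ? "DO_EXTRA" : "SKIP_EXTRA");
        }
    }

    public Job buildJob(JobBuilderFactory jobs, Step step1, Step extraStep, Step lastStep) {
        OptionalStepDecider decider = new OptionalStepDecider();
        return jobs.get("conditionalJob")
                .start(step1)
                .next(decider)
                .on("DO_EXTRA").to(extraStep).next(lastStep)
                .from(decider).on("SKIP_EXTRA").to(lastStep)
                .end()
                .build();
    }
}
```

The job always runs step1 and lastStep, and the status returned by the decider selects whether extraStep executes in between.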

How to start with unit testing?

First, I don't want to do TDD. I'd like to write some code and then just write unit tests for the important stuff.
I know the tools. I know how to write a unit test.
I wrote some code (a couple of classes) and I don't know where to start. I'm missing the semantics.
How do I pick what to unit test?
Should I unit test every class separately?
Should I try to simulate every possible variation of a method's parameters?
The idea with unit testing is to test pieces of code. We use TDD at my company, which works great. We write tests for every possible option of a function. So, to answer your three questions:
How do I pick what to unit test?
It's useless to write unit tests for code or functions that contain no intelligence. (Strictly following TDD, you would actually have to write them anyway.) But if it's obvious what the function returns and you're sure nothing can go wrong, then you probably don't have to write a test for it, though it's better to do so.
Should I unit test every class separately?
What exactly do you mean by this? If the question is whether you need to test classes, the answer is no. Unit testing is for testing the smallest pieces of code, mostly functions and constructors. What you want to know is whether your function gives the result you want, and that it returns the desired result or throws a nicely handled exception no matter what parameter values you send to it.
Should I try to simulate every possible variation of a method's parameters?
You should. That's the whole idea of writing a unit test: you want to test a piece of code and rule out every single thing that can go wrong. Parameters are the most important here. What happens if a string parameter contains HTML, for example? And what if a required parameter is NULL? Or an empty string? Every option should be tested to rule out the things that can go wrong.
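As a rough illustration in Java with JUnit 5 (and a hypothetical sanitize() method as the unit under test), exercising exactly these kinds of parameter variations might look like this:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class SanitizerTest {

    // hypothetical unit under test: strips HTML tags from input
    static String sanitize(String input) {
        if (input == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        return input.replaceAll("<[^>]*>", "");
    }

    @Test
    void removesHtmlFromInput() {
        assertEquals("hello", sanitize("<b>hello</b>"));
    }

    @ParameterizedTest
    @ValueSource(strings = {"", "   ", "plain text"})
    void passesThroughNonHtmlInput(String input) {
        assertEquals(input, sanitize(input));
    }

    @Test
    void rejectsNullParameter() {
        assertThrows(IllegalArgumentException.class, () -> sanitize(null));
    }
}
```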
If you're using the .NET framework, it is very interesting to look at the Moq framework. Shortly put, it is a framework that allows you to create a fake object of some type, and you can validate your test against it to check that the result is as expected across different parameter and return values. You can read about it in Scott Hanselman's blog post.
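Moq is .NET-specific; for comparison, the same idea in Java looks roughly like this with Mockito, using a hypothetical PriceService dependency:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class CheckoutTest {

    // hypothetical collaborator we don't want to hit for real in a unit test
    interface PriceService {
        double priceOf(String sku);
    }

    @Test
    void usesStubbedPrice() {
        PriceService prices = mock(PriceService.class);
        when(prices.priceOf("sku-1")).thenReturn(9.99);

        // the code under test would receive this mock instead of a real service
        double total = prices.priceOf("sku-1");

        assertEquals(9.99, total, 0.0001);
        verify(prices).priceOf("sku-1"); // the interaction happened exactly once
    }
}
```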
Should I try to simulate every possible variation of a method's parameters?
You should have a look at Pex, which can generate input values for your parameterised tests that provide the greatest code coverage.
