Skip next tests when one fails (Serenity + Cucumber) - automated-tests

I am using Serenity with Cucumber to write automated web tests, and I could not find a way in the documentation to skip the remaining tests when one fails.
Currently, if a step fails to run, the next steps in the same scenario are skipped, but the remaining scenarios in the feature are still executed.
I want that, when a test fails, all subsequent steps and scenarios are skipped.

That isn't supported in Serenity or in BDD tools in general. Scenarios are intended to be independent examples of acceptance criteria or business rules, not steps in a larger test.

To elaborate on what John Smart has said:
Each scenario should be able to pass without having to rely on the scenarios that have been run before it.
What's more, the internet connection is known to be temperamental on occasion. If one of your scenarios fails because the connection dropped out while waiting for a page to load, you don't want all the scenarios after it (which could be unaffected by that first failure) to be skipped.
In short:
Making your scenarios independent reduces the brittleness of your automation suite (see the sketch below).
Skipping the remaining scenarios when one fails is bad practice (especially for web applications), because the internet connection is not something you can rely on to be constant.
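To make that concrete, here is a minimal sketch of what scenario-level independence can look like with Cucumber-JVM hooks, where each scenario seeds and cleans up its own data. The TestDataService helper is hypothetical (not part of Serenity or Cucumber), and the hook package name depends on your Cucumber version:

// Package is io.cucumber.java in Cucumber-JVM 5+; older versions use cucumber.api.java.
import io.cucumber.java.After;
import io.cucumber.java.Before;

public class ScenarioIsolationHooks {

    // Hypothetical helper that seeds and cleans up test data; replace with your own setup code.
    private final TestDataService testData = new TestDataService();

    @Before
    public void setUpScenario() {
        // Each scenario creates the data it needs instead of reusing
        // whatever a previous scenario happened to leave behind.
        testData.createCleanUserAndAccount();
    }

    @After
    public void tearDownScenario() {
        // Remove anything this scenario created so the next one starts clean.
        testData.deleteScenarioData();
    }
}

With this kind of setup, any scenario can fail (or be skipped) without affecting whether the scenarios after it can run.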

Related

Run Pact Provider Test Class in Sequence

I am using pact-jvm-provider-spring. I have two different pact (.json) files, let's say order.json and irs.json, and I need to run them sequentially (order followed by irs). However, the test classes are picked up in alphabetical order, so irs runs first and order runs second. Is there a way to execute a particular test class's provider states, or to define the test class execution order?
Pact is not a tool for end-to-end testing; in fact, one of Pact's stated objectives is to reduce or, in some cases, completely remove the need for E2E testing.
We use contract tests precisely to avoid the need for end-to-end testing. Doing this has a lot of benefits, including the ability to test and release things separately, avoiding the need to manage test environments and data, and reducing coupling/ordering in the tests themselves. Furthermore, contract tests should be able to run on your laptop or on a CI build - you don't need to test against a running provider deployed to a real environment.
If you have to run a set of these tests in a particular sequence, you're doing it wrong.
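To illustrate how contract tests avoid ordering altogether, here is a minimal sketch of a JUnit 5 provider verification, assuming pact-jvm's JUnit 5 support; the provider name, state description, pact folder and port are made up, and the exact package names differ between pact-jvm versions. Each interaction declares the provider state it needs, so nothing depends on which test class ran first:

// Provider name and pact folder are placeholders; these package names are
// from pact-jvm 4.x and differ in older versions.
import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.State;
import au.com.dius.pact.provider.junitsupport.loader.PactFolder;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;

@Provider("OrderProvider")
@PactFolder("pacts")
public class OrderProviderPactTest {

    @BeforeEach
    void setTarget(PactVerificationContext context) {
        // Point the verifier at the locally running provider under test.
        context.setTarget(new HttpTestTarget("localhost", 8080, "/"));
    }

    // One test template verifies every interaction found in the pact files.
    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyInteraction(PactVerificationContext context) {
        context.verifyInteraction();
    }

    // Each interaction names the provider state it needs, and pact-jvm calls
    // the matching method before verifying it; no cross-test ordering is required.
    @State("an order with id 42 exists")
    void orderExists() {
        // Seed or stub whatever data the interaction expects here.
    }
}

With this approach the interactions from order.json and irs.json can be verified in any order, because every interaction sets up its own provider state.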
Here are some links to help you understand what I mean a bit better:
https://docs.pact.io/consumer/contract_tests_not_functional_tests
https://docs.pact.io/faq/#do-i-still-need-end-to-end-tests
https://docs.pact.io/getting_started/what_is_pact_good_for
I would also recommend completing one of our workshops, probably https://github.com/DiUS/pact-workshop-jvm.
It takes about 1 hour, but is well worth your time as all of the core concepts are clearly explained.

How to retry R testthat test on API error?

Some tests rely on some external services (e.g. APIs). Sometimes, these external services will go down. This can cause tests to fail and (worse) continuous integration tools to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt coming 5 minutes after the first?
Ideally you would write your tests in a way that they don't call the API or the database.
Instead, you mock the API endpoints according to the specification and also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
If you are worried that your vendor might change the API, talk to them and get details on their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw a line between API validation and testing your own code.
You will need two separate processes:
Unit / acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is largely redundant. In exceptional cases it can be useful, but only with a very unreliable vendor.

Debugging flows seems really painful

I'm running into serious productivity issues when debugging flows. I can only assume at this point that this is due to a lack of knowledge on my part, particularly of effective techniques for debugging flows.
The problems arise when I have one flow that needs to "wait" for the consumption of a specific state. What seems to happen is that the waiting flow starts and waits for the consumption of the specified state, but despite being implemented as a listening future with an associated callback (at this point I'm simply using getOrThrow on the future returned from 'WhenConsumed'), the flows just hang and I see hundreds of Artemis send/write messages in the console window. If I stop the debug session, delete the node build directory, redeploy the nodes and start again, the flows restart and I can return to the point of failure. However, if I simply stop and detach the debugger from the node and attempt to run the calling test (calling the flow via RPC), nothing seems to happen. It's almost as if the flow code (probably incorrect at this point) leaves the state machine/messaging layer stuck in some kind of stateful loop that is only resolved by wiping the node build directories and redeploying. Simply restarting the node results in the flow no longer executing at all. This is a real productivity killer, so I'm writing this question in the hope (and assumption) that I've missed an obvious trick for effectively testing and debugging flows in a way that avoids repeatedly redeploying the nodes.
It would be great if someone could explain how to effectively debug flows, especially flows that depend on vault updates and therefore wait on a vault update event. I have considered using a subflow, but I believe this would not quite give the functionality required, namely to have a flow triggered when an identified state is consumed by a node. Or maybe it would? Perhaps this issue is due to not using a subFlow? I look forward to your thoughts!
I'm not sure about your specific use case, but in general I would do as much unit testing as possible before physically running the nodes to check that the flow works.
Corda provides three levels of unit testing: the transaction/ledger DSL, the mock network and the driver DSL. If these are used properly, most if not all bugs in the flows should be resolved before you ever run runnodes; running the real nodes then mostly just reveals configuration issues.
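For example, a flow that waits on a vault update can usually be exercised entirely on the mock network, where you can set breakpoints and re-run freely without wiping node directories. A minimal sketch, assuming the Corda 4 test API, a hypothetical MyFlow class and hypothetical com.example packages:

import com.google.common.collect.ImmutableList;
import net.corda.core.concurrent.CordaFuture;
import net.corda.core.transactions.SignedTransaction;
import net.corda.testing.node.MockNetwork;
import net.corda.testing.node.MockNetworkParameters;
import net.corda.testing.node.StartedMockNode;
import net.corda.testing.node.TestCordapp;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class MyFlowTests {
    private MockNetwork network;
    private StartedMockNode nodeA;
    private StartedMockNode nodeB;

    @Before
    public void setup() {
        // The package names passed to findCordapp are placeholders for your own CorDapp packages.
        network = new MockNetwork(new MockNetworkParameters(ImmutableList.of(
                TestCordapp.findCordapp("com.example.contracts"),
                TestCordapp.findCordapp("com.example.flows"))));
        nodeA = network.createPartyNode(null);
        nodeB = network.createPartyNode(null);
        network.runNetwork();
    }

    @After
    public void tearDown() {
        network.stopNodes();
    }

    @Test
    public void flowCompletesAgainstTheMockNetwork() throws Exception {
        // MyFlow is a placeholder for the flow under test.
        CordaFuture<SignedTransaction> future =
                nodeA.startFlow(new MyFlow(nodeB.getInfo().getLegalIdentities().get(0)));
        // Pump the in-memory message queues until all flows have finished,
        // so the future below completes deterministically.
        network.runNetwork();
        SignedTransaction stx = future.get();
        // Assert on stx or query the vaults of nodeA/nodeB here. Set breakpoints
        // inside the flow and re-run this test as often as needed: there are no
        // real nodes, no Artemis broker and no build directories to wipe.
    }
}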

Cucumber.js: Give Scenarios a second try

Even though I am aware of the fact that tests should run reliably, my experience tells me that this cannot always be accomplished with reasonable effort (and need not be; see my calculation below).
In particular, if tests are introduced for a pre-existing web application that is being continuously improved, it can be difficult to build reliable E2E tests. In contrast, it is pretty easy, and in practice sufficient, to build tests that crash occasionally (as long as they fail reliably when it comes to expects/asserts).
If you are using Protractor for E2E testing, you may have experienced that as well.
Statistics tell me that a test that is known to have a 25% chance to crash will have a 6.25% chance to crash twice when run twice, a 1.56% chance to crash three times when run three times, a 0.39% chance to crash four times when run four times and a 0.10% chance to crash five times when run five times (and so on).
Hence, running a set of such tests until each of them manages to terminate without error is no big deal.
What I need is to run a Cucumber.js scenario again and again until it succeeds for the first time during a single feature run, and then take the pass/fail result of that succeeding run.
I tried to build an After hook that re-runs the scenario but I did not find a way of invoking a scenario from within the hook. Apart from that, I did not find a way of discarding a crashing scenario run.
Please tell me which options I have. I am using Grunt, Protractor and Cucumber.
Since our test suite is huge, still growing rapidly and run by an automated build process, I need a way of running unreliable tests as reliably as possible.
A common reason for failures in Angular applications is not waiting for all pending requests to complete. In my page constructors I insert a wait for this JavaScript to return true:
"return angular.element(document.body).injector().get('$http').pendingRequests.length == 0;"
before proceeding. By being careful to explicitly wait for JavaScript to complete before continuing with my tests, their reliability is quite high.
That being said, there are rerun frameworks out there. Here is one by Nickolay Kolesnik.

Confusion over Symfony testing practices: functional vs integration testing

The Symfony testing documentation doesn't really mention a distinction between functional tests and integration tests, but my understanding is that they are different.
The Symfony docs describe functional testing like this:
Make a request;
Test the response;
Click on a link or submit a form;
Test the response;
Rinse and repeat.
While the Ruby on Rails docs describe it like this:
was the web request successful?
was the user redirected to the right page?
was the user successfully authenticated?
was the correct object stored in the response template?
was the appropriate message displayed to the user in the view?
The Symfony docs seem to be describing something more akin to integration testing. Clicking links, filling out forms, submitting them, etc. You're testing that all these different components are interacting properly. In their example test case, they basically test all actions of a controller by traversing the web pages.
I am confused why Symfony does not make a distinction between functional and integration testing. Does anyone in the Symfony community isolate tests to specific controller actions? Am I overthinking things?
Unit testing refers to testing the methods of a class, one by one, and checking that they make the right calls in the right context. If those methods use dependencies (injected services or even other methods of the same class), we mock them to isolate the test to the current method only.
Integration testing refers to automatically testing a feature of your application: checking that all possible usage scenarios for the given feature work as expected. For such tests, you basically use the crawler to simulate a website user working through the feature, and check that the resulting page, or even the resulting database data, is consistent.
Functional testing refers to manually challenging the application's usability in a preproduction environment. You have a Quality Assurance team that rolls out some scenarios to check whether your website works as expected. Manual testing gives you feedback you can't get automatically, such as "this button is ugly", "this feature is too complex to use", or whatever other subjective feedback a human (who will generally think like a customer) can give.
The way I see it, the two lists don't contradict each other. The first list (Symfony) can be seen as a method for providing answers to the second list (Rails).
Both lists sound like functional testing to me. They use the application as a whole to determine whether the application satisfies the requirements. The second list (Rails) describes typical questions to determine whether requirements are met, while the first list (Symfony) offers a method for answering those questions.
Integration tests are more focused on how units work together. Say you have a repository unit that depends on a database abstraction layer. A unit test can make sure the repository itself functions correctly by stubbing/mocking out the database abstraction layer, while an integration test will use both units to see if they actually work together as they should (sketched below).
Another example of integration testing is using a real database to check that the correct tables/columns exist and that queries deliver the results you expect, or that when a logger is called to store a message, the message really ends up in a file.
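The repository example is language-agnostic, so here is a rough sketch of the unit-test half of it in Java with JUnit 5 and Mockito (the DatabaseLayer and UserRepository types are invented for the illustration); the same idea applies to, say, a Symfony service and its Doctrine dependency:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

public class UserRepositoryTest {

    // A minimal repository that depends on a database abstraction layer
    // (both types are invented for this illustration).
    interface DatabaseLayer {
        String fetchColumn(String table, int id, String column);
    }

    static class UserRepository {
        private final DatabaseLayer db;

        UserRepository(DatabaseLayer db) {
            this.db = db;
        }

        String findUserName(int id) {
            return db.fetchColumn("users", id, "name");
        }
    }

    // Unit test: the database layer is mocked, so only the repository's own
    // logic runs and no real database is involved.
    @Test
    void findUserNameReadsTheNameColumn() {
        DatabaseLayer db = mock(DatabaseLayer.class);
        when(db.fetchColumn("users", 42, "name")).thenReturn("Alice");

        UserRepository repository = new UserRepository(db);

        assertEquals("Alice", repository.findUserName(42));
    }

    // An integration test of the same pair would drop the mock, construct a
    // real DatabaseLayer implementation pointed at a test database (e.g. an
    // in-memory one), and assert that a row written through the real layer
    // can be read back through the repository.
}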
PS: Functional testing that actually uses a (headless) browser is often called acceptance testing.
Both referenced docs describe functional tests. Functional tests are performed from the user's perspective (typically on the GUI layer). They test what the user will see and what will happen if the user submits a form or clicks a button. It does not matter whether it is an automated or a manual process.
Then there are integration and unit tests. These tests are on a lower level. The basic precondition for unit tests is that they are isolated: you test a particular object and its methods, but without external or real dependencies. This is what mocks are for (basically, a mock simulates a real object according to the unit test's needs). Without an understanding of IoC it is really hard to write isolated tests.
If you are writing tests that use real/external dependencies (no mocks), you are writing integration tests. Integration tests can test the cooperation of two objects or a whole package/module, including querying the database, sending mail, etc.
Yes, you are overthinking things ;)
I don't know why Symfony or Ruby on Rails state things like that. Sometimes how a test is classified depends on the eye that's looking at it. Bottom line: the name doesn't matter. The only important thing is the confidence that the test gives you in what you are doing.
Apart from that, tests are alive and should evolve with your code. I sometimes test only for a specific HTTP status code, other times I isolate a module and unit test it... it depends on the time I have to spend, the benefits, etc.
If I have a piece of code that is only used in a controller, I usually go for a functional test. If I'm writing a utility, I usually go for unit testing.
