I know that tests should run reliably, but my experience tells me that this cannot always be accomplished with reasonable effort (and, as my calculation below shows, it need not be).
In particular, when tests are introduced for a pre-existing web application that is being continuously improved, it can be difficult to build reliable E2E tests. In contrast, it is fairly easy, and in my view sufficient, to build tests that crash occasionally (as long as they fail reliably when it comes to expects/asserts).
If you are using Protractor for E2E testing, you may have experienced that as well.
Statistics tell me that a test with a known 25% chance of crashing has a 6.25% chance of crashing twice when run twice, a 1.56% chance of crashing three times when run three times, a 0.39% chance of crashing four times when run four times, and a 0.10% chance of crashing five times when run five times (and so on).
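Put as a formula (assuming the runs are independent and the crash probability stays at 25%), the chance of a test crashing in n consecutive runs is

    P(n \text{ consecutive crashes}) = 0.25^n, \qquad \text{e.g. } 0.25^5 \approx 0.001 = 0.10\%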
Hence, running a set of such tests until each of them manages to terminate without error is no big deal.
What I need is to run a Cucumber.js scenario again and again, within a single feature run, until it completes without crashing for the first time, and then take the pass/fail result of that completed run.
I tried to build an After hook that re-runs the scenario, but I did not find a way to invoke a scenario from within the hook. Nor did I find a way to discard a crashed scenario run.
Please tell me which options I have. I am using Grunt, Protractor and Cucumber.
Since our test suite is huge, still growing rapidly, and run by an automated build process, I need a way of running unreliable tests as reliably as possible.
A common reason for Angular failures is not waiting for all pending requests to complete. In my page constructors I insert a wait for this JavaScript to return true:
"return angular.element(document.body).injector().get(\'$http\').pendingRequests.length == 0;"
before proceeding. By being careful to explicitly wait for JavaScript work to complete before continuing with my tests, their reliability is quite high.
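As a rough sketch of how such a wait can be expressed with Protractor's browser.wait and browser.executeScript (the function name and timeout below are only illustrative):

    // Illustrative sketch: wait until Angular's $http service reports no pending requests.
    // Assumes Protractor's global `browser`; the name and timeout are just examples.
    function waitForPendingRequests(timeoutMs) {
      return browser.wait(function () {
        return browser.executeScript(
          "return angular.element(document.body).injector().get('$http').pendingRequests.length == 0;"
        );
      }, timeoutMs || 10000, 'Timed out waiting for pending $http requests');
    }

    // Usage, e.g. in a page object constructor or a Cucumber step definition:
    // waitForPendingRequests(15000).then(function () { /* continue */ });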
That being said, there are rerun frameworks out there. Here is one by Nickolay Kolesnik.
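As a minimal sketch of the rerun idea, independent of any particular framework (the config file name and retry limit are placeholders), you could wrap the Protractor invocation in a small Node script and re-run it until it exits cleanly or the retry limit is reached:

    // rerun.js - illustrative sketch: re-run a Protractor/Cucumber suite until it exits
    // with code 0, up to a fixed number of attempts. Names and limits are placeholders.
    var spawnSync = require('child_process').spawnSync;

    var MAX_ATTEMPTS = 5;   // with a 25% crash chance, ~0.1% chance of 5 crashes in a row
    var exitCode = 1;

    for (var attempt = 1; attempt <= MAX_ATTEMPTS && exitCode !== 0; attempt++) {
      console.log('Attempt ' + attempt + ' of ' + MAX_ATTEMPTS);
      var result = spawnSync('protractor', ['protractor.conf.js'], { stdio: 'inherit' });
      exitCode = result.status;
    }

    process.exit(exitCode === 0 ? 0 : 1);

Note that this naive approach re-runs the whole suite; dedicated rerun frameworks typically re-run only the scenarios that failed.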
I am using pact-jvm provider spring. I have two different pact (.json) files, say order.json and irs.json, and I need to run them sequentially (order followed by irs). However, the test classes are picked up in alphabetical order, so irs runs first and order runs second. Is there a way to execute a particular test class's provider states, or to define the test class execution order?
Pact is not a tool for end-to-end testing; in fact, one of Pact's stated objectives is to reduce or, in some cases, completely remove the need for E2E testing.
Instead of doing end-to-end testing, we use contract tests to avoid the need for it. Doing this has a lot of benefits, including the ability to test and release things separately, avoiding the need to manage test environments and data, and reducing coupling/ordering in the tests themselves. Furthermore, contract tests can run on your laptop or on a CI build - you don't need to test against a running provider deployed to a real environment.
If you have to run a set of these tests in a particular sequence, you're doing it wrong.
Here are some links to help you understand what I mean a bit better:
https://docs.pact.io/consumer/contract_tests_not_functional_tests
https://docs.pact.io/faq/#do-i-still-need-end-to-end-tests
https://docs.pact.io/getting_started/what_is_pact_good_for
I would also recommend completing one of our workshops, probably https://github.com/DiUS/pact-workshop-jvm.
It takes about 1 hour, but is well worth your time as all of the core concepts are clearly explained.
As it stands, we have a single template with a dozen tests under it. We have two actors, but the second actor never picks up any tests in the session launched from that template.
How should I structure my distributed testing to allow for the tests to be executed in parallel against the two actors?
As of version 1.1.4, tests execute sequentially within one test session. The reason for that is to be deterministic about what happens when, so testers can make reliable assumptions about the execution flow. This is important because tests can have dependencies between them and must execute in a specific order to succeed. To be sure, this is bad practice, but it's sometimes necessary for practical reasons.
To execute tests in parallel, you must create two (or more) separate test sessions, so you will have to split your current session template in two. In the future, OpenTest will introduce an option that allows a single test session to execute against multiple actors, but the default will still be to execute the tests sequentially.
I'm running into serious productivity issues when debugging flows. I can only assume at this point that this is due to a lack of knowledge on my part, particularly of effective debugging techniques for flows.
The problems arise when I have one flow which needs to "wait" for the consumption of a specific state. What seems to happen is that the waiting flow starts and waits for the consumption of the specified state, but despite being implemented as a listening future with an associated callback (at this point I'm simply using getOrThrow on the future returned from 'WhenConsumed'), the flows just hang and I see hundreds of Artemis send/write messages in the console window.
If I stop the debug session, delete the node build directory, redeploy the nodes and start again, the flows restart and I can return to the point of failure. However, if I simply stop and detach the debugger from the node and attempt to run the calling test (calling the flow via RPC), nothing seems to happen. It's almost as if the flow code (probably incorrect at this point) causes the StateMachine/messaging layer to become stuck in some kind of stateful loop which is only resolved by wiping the node build directories and redeploying. Simply restarting the node results in the flow no longer executing at all.
This is a real productivity killer, so I'm writing this question in the hope (and assumption) that I've missed an obvious trick in how to effectively test/debug flows in a way that avoids repeatedly redeploying the nodes.
It would be great if someone could explain how to effectively debug flows, especially flows which depend on vault updates and thus wait on a vault update event. I have considered using a subflow, but I believe this would not provide quite the functionality required, namely to have a flow triggered when an identified state is consumed by a node. Or maybe it would? Perhaps this issue is due to not using a subFlow? I look forward to your thoughts.
I'm not sure about your specific use case, but in general I would do as much unit testing as possible before physically running the nodes to see whether the flow works.
Corda provides three levels of unit testing: the transaction/ledger DSL, the mock network, and the driver DSL. So, if done right, most if not all bugs in the flows should be resolved by the time you get to runnodes. Actually running the nodes mostly just reveals configuration issues.
I am using Serenity with Cucumber to write automated web tests, and I could not find a way in the documentation to ignore the remaining tests when one fails.
Currently, if a step fails, the remaining steps in the same scenario are ignored, but the remaining scenarios in the feature are still executed.
I want all subsequent steps and scenarios to be skipped when a test fails.
That isn't supported in Serenity or in BDD tools in general. Scenarios are intended to be independent examples of acceptance criteria or business rules, not steps in a larger test.
To elaborate on what John Smart has said:
Each scenario should be able to pass without having to rely on the scenarios that have been run before it.
What's more: internet connections are known to be temperamental on occasion. If one of your scenarios fails because the internet dropped out while waiting for a page to load, you don't want all the scenarios after it (which could be unaffected by that first failure) to be skipped.
In short:
Making your scenarios independent reduces brittleness of your automation suite.
Skipping scenarios when one fails is bad practice (especially for web applications), because the internet connection is not a constant you can rely on.
Is anyone aware of an application I might be able to use to record or data-log network delays? Or failing that, would it be feasible to write such a program?
I work for a big corporation which has recently deployed a remote file management platform that is causing severe productivity issues for staff in our branch. We work directly against a server, and every time a file is saved now there is a significant delay (generally between 5 and 15 seconds, but sometimes timing out altogether). Everything is extremely unresponsive and slow, which makes people avoid saving so often; quite frequently, when crashes occur, quite a bit of work is lost.
These delays don't only occur on save operations. They also occur when navigating the network file structure. A 2-3 second pause/outage between each folder hop is incredibly frustrating and adds up to a lot of lost time.
And when these delays occur, they freeze out the rest of the system. Clicking anywhere on screen, or on another application, does nothing until the delay has run its course.
What I am trying to do is to have some sort of data-logger running which records the cumulative duration of these outages. The idea is to use it for a bit, and then take the issue higher with evidence of the percentage of lost time due to this problem.
I suspect this percentage will surprise managers. They appear to be burying their heads in the sand, pretending it only takes away a couple of minutes a day. By my rough estimates, we should be talking about hours lost per day (per employee), not minutes. :/
Any advice would be greatly appreciated.
You could check if the process is responding using C#. Just modify the answer from the other question to check for the name of the application (or the process id, if possible with that C# API) and sum up the waiting times.
This might be a bit imprecise, because Windows gives a process a grace period before declaring it "not responding", but depending on the waiting times it might be enough to make your point.
I'm not sure how your remote file management platform works. In the easiest scenario, where you can access files directly on the platform, you could just create a simple script that opens files, navigates the file system, and lists the directories/files it contains, as in the sketch below.
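A minimal sketch of that idea in Node.js (the share path, file names and log file are placeholders, and it assumes the platform is reachable as an ordinary file path):

    // measure-share.js - illustrative sketch: time a directory listing and a small file
    // write on a network share and append the durations to a CSV log.
    // The UNC path and file names are placeholders.
    var fs = require('fs');
    var path = require('path');

    var SHARE = '\\\\fileserver\\projects';   // placeholder share path
    var LOG = 'delay-log.csv';

    function timed(label, fn) {
      var start = Date.now();
      fn();
      var ms = Date.now() - start;
      fs.appendFileSync(LOG, new Date().toISOString() + ',' + label + ',' + ms + '\n');
      return ms;
    }

    // List a directory (simulates a "folder hop").
    timed('readdir', function () { fs.readdirSync(SHARE); });

    // Write and remove a small probe file (simulates a save).
    var probe = path.join(SHARE, 'delay-probe.tmp');
    timed('save', function () { fs.writeFileSync(probe, 'x'); });
    fs.unlinkSync(probe);

Run periodically (e.g. via Task Scheduler), the durations column can then be summed to estimate the lost time.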
If it's something more intricate, I would suggest using a tool like Wireshark to capture the network traffic, filter out the relevant packets, and do some analysis of their delays. I'm not sure whether this can be done directly in Wireshark; otherwise I'd suggest exporting the capture as a CSV spreadsheet and doing your analysis on that.