I use stopOnFailure in PHPUnit; just my preference. I'm wondering if there is a way to isolate one test and run it, like --filter, but then keep going. So that if my test suite finds a failure halfway through all the tests, I can just pick up where I left off once I'm done isolating and fixing.
(Not looking for solutions like paratest)
No. But you can configure PHPUnit to start the execution with tests that failed the last time: --order-by=defects.
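For example, a hedged sketch of combining the two on the command line (exact flags depend on your PHPUnit version; defect-first ordering relies on the result cache, .phpunit.result.cache, left behind by a previous run):
# Run previously failing tests first, and stop at the first failure.
phpunit --cache-result --order-by=defects --stop-on-failure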
I am using Karate and I have written a test scenario with multiple runs (using Scenario Outline).
Suppose I have defined variables in a table (the Examples section). It contains five rows, so the scenario will run five times.
Suppose the first two runs passed and the third failed.
Is there a way to configure the behavior so that after one run fails, all following runs are skipped (not processed)?
I am not sure if something like that is supported.
Thank you!
No. This request is a surprise, because everybody seems to want the opposite behavior: https://stackoverflow.com/a/54108755/143475
You can write a custom loop and handle this the way you want, but I wouldn't recommend it.
If you are OK with doing some Java coding, this behavior can be implemented as an ExecutionHook: https://stackoverflow.com/a/59080128/143475
I am new to Robot Framework and I want to know how to capture screenshots on failure.
Doesn't Robot Framework automatically take screenshots if a script fails?
An example would be of great help!
This is actually a feature of Selenium2Library, which is the library you would be using with Robot Framework for Selenium-based tests.
More information can be found here: http://robotframework.org/Selenium2Library/doc/Selenium2Library.html
As it says in the documentation, setting up screenshots on failure is very easy. For example, here is mine from a test suite I'm working with:
Library    Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot
You can use the keyword below to capture a screenshot after any step you want:
Capture Page Screenshot
Hope this was helpful!
In this case the teardown is executed whether the test case passes or fails, and if it fails, a screenshot is taken:
[Teardown]    Run Keyword If Test Failed    Capture Page Screenshot
Or, even better, you can do it at the suite level, in the Settings section, if you don't need different teardowns for particular tests:
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot
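For instance, a minimal sketch of the suite-level variant (the URL, browser, and test name are illustrative, not from the question):
*** Settings ***
Library          Selenium2Library
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot

*** Test Cases ***
Example Domain Title
    Open Browser    https://example.com    firefox
    Title Should Be    Example Domain
    Close Browser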
So far, all of the other answers assume that you are using Selenium.
If you are not, there is a "Screenshot" library that provides the keyword "Take Screenshot". To include this library, all you need to do is put "Library    Screenshot" in your Settings table.
In my Robot Framework code, my teardowns all just reference a keyword I made called "Default Teardown", which is defined as:
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    Close All
(I think that the Close All might be one of my custom keywords).
I have noticed a few issues with the Take Screenshot keyword. Some of this may be configurable, but I don't know. First off, it will take a screenshot of your screen, not necessarily just the application that you are interested in. So if you're using this and are allowing other people to view the resulting screenshots, make sure that you don't have anything else on your screen that you wouldn't want to share.
Also, if you kick off your tests and then lock your screen so you can take a quick break while it runs, all of your screenshots will just be pictures of your lock screen.
I'm using this on my Jenkins server as well, which uses the xvfb-run command to create a sort of fake GUI for running the Robot Framework tests. If you're doing this too, make sure that your xvfb-run command includes something along the lines of
xvfb-run --server-args="-screen 0 1024x768x24" <rest of your command>
You'll have to decide what screen resolution works the best for you, but I found that with the default screen resolution, only a small portion of my app was captured.
Long story short, I think you're better off using Capture Page Screenshot if you're using Selenium. However, if you're not, this may be your best (or only) solution.
Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and a test may require the execution of another test placed in another folder (see the image below).
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for a good reason: it is not sustainable to create such dependencies in the long term (or even the short term).
Tests shouldn't depend on other tests, especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called __init__.robot in a directory. The suite setup and suite teardown in that file will run before anything in the underlying folders.
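A minimal sketch of such an __init__.robot (the logged messages are just placeholders):
*** Settings ***
Suite Setup       Log    Runs before all suites in this directory tree
Suite Teardown    Log    Runs after all suites in this directory tree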
You can also turn the other test into a keyword, so that:
Test C simply calls a keyword that makes Test C run and also updates a global variable (Test_C_already_runs)
Test B would then issue
Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
You would have to set a value for Test_C_already_runs before that anyway (as part of a variable import, or as part of some suite setup) to prevent a variable-not-found error.
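A hedged sketch of that second workaround, using the names from the description above (the actual test steps are placeholders):
*** Settings ***
Suite Setup    Set Global Variable    ${Test_C_already_runs}    false

*** Keywords ***
Test_C_Keyword
    # ... the steps that used to make up Test C go here ...
    Set Global Variable    ${Test_C_already_runs}    true

*** Test Cases ***
Test B
    Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
    # ... the actual steps of Test B follow ...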
I have two files, test_utils.r and test_core.r, which contain tests for various utilities and some core functions separated into different 'contexts'. I can control the flow of tests within each file by moving my test_that() statements around.
But I am looking for a way to create different workflows, say ensuring that at run time the tests from context A_utils run first, followed by the tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function in version 0.9 or later? See the testthat documentation on page 7:
Description
This function allows you to skip a test if it's not currently available. This will produce an informative message, but will not cause the test suite to fail.
It was introduced to skip tests if an internet connection or API is not available. Depending on your workflow, you could then jump over tests.
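For example, a minimal sketch of gating a test with skip() (the utils_done flag is a made-up convention for illustration, not part of testthat):
library(testthat)

test_that("core behaviour that needs the utils context to have run", {
  # utils_done would be set somewhere in the A_utils tests; purely illustrative.
  if (!exists("utils_done") || !isTRUE(utils_done)) {
    skip("A_utils tests have not run yet")
  }
  expect_equal(1 + 1, 2)
})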
To see example code using skip_on_cran, look at wibeasley's answer where he provides test code in Rappster's reply - https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.
We have an autotools project that has a mixture of unit and integration tests, all of which run via 'make check'. This isn't ideal, as some of the integration tests take a while, and have all sorts of dependencies (database, etc.)
I'd like to separate the integration tests and assign them their own make target. That way, unit tests can still be run often (via make check), and the integration tests can be run as needed in a similar fashion.
Is there a straightforward (or otherwise) way to add an additional make target?
NOTE: I should probably also add that this is a large project, so editing/maintaining every makefile by hand is not desirable. I'd like to do it the 'autotools way' if possible.
-- UPDATE 1 --
I've attempted Jon's solution, and it's a step closer, but not quite there. I still have a couple of issues:
1) Recursion - I'm OK with modifying the Makefile.am in the root of the build tree, as well as in any directory that contains tests, but it seems like there should be a way to do this where I don't have to change every Makefile.am in the hierarchy (the check target works this way, after all).
2) .PHONY - I keep getting messages about .PHONY being redefined, which is understandable, because it's also being set by another package (specifically, doxygen). How do I make the two play nicely together?
In your .am files, all plain make syntax is passed through into the generated Makefile. So if you want a new target, just create it like you would in a Makefile and it will appear in the auto-generated Makefile. Put the following at the bottom of your .am files:
integration-tests: prerequisites...
	commands to run test

.PHONY: integration-tests
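For instance, a sketch of what that might look like (the test program names and the INTEGRATION_TEST_PROGS variable are made up for illustration; the programs are assumed to be built elsewhere, e.g. via check_PROGRAMS):
# Plain make syntax like this is copied into the generated Makefile verbatim.
INTEGRATION_TEST_PROGS = db_integration api_integration

integration-tests: $(INTEGRATION_TEST_PROGS)
	@for t in $(INTEGRATION_TEST_PROGS); do \
	  echo "== $$t =="; \
	  ./$$t || exit 1; \
	done

.PHONY: integration-tests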
Since there haven't been any more responses, I'm going to answer with my solution.
I solved the recursion issue by eliminating the recursion altogether. Using this page
as a guide, I switched the entire project from recursive make to non-recursive make. I then cloned the non-recursive check-related targets (check, check-am, check-TESTS, etc.) into a new set of targets for the integration tests. So far, this works extremely well.
Note: you may be wondering why I didn't just clone the recursive targets instead. Quite frankly, I couldn't find them. Either I didn't know where to look (the rules weren't in the generated Makefile) or something is happening implicitly, and I don't understand autotools well enough to follow it.
As for the issue with .PHONY being redefined, I still haven't found a solution, other than to conditionally exclude the other definition when I'm doing the integration tests.