Typically we can define callbacks that run before and/or after a test case. Is there any callback for the set of tests generated by a data provider (i.e., one that runs before and/or after that whole group of test cases)?
No, there is no such callback. Also note that all data providers are executed before the first test runs.
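For illustration, a minimal sketch (assuming PHPUnit, whose data providers run while the suite is being enumerated, and PHPUnit 8+ hook signatures): the closest available hooks are setUpBeforeClass()/tearDownAfterClass(), which wrap all tests in the class rather than just the provider-driven group.

<?php
use PHPUnit\Framework\TestCase;

class ProviderHooksTest extends TestCase
{
    public static function setUpBeforeClass(): void
    {
        // Runs once before all tests in this class -- the nearest thing to a
        // "before the data-provider group" callback, but it is class-wide,
        // not provider-specific.
    }

    public function valueProvider(): array
    {
        // Called during test enumeration, before the first test runs.
        return [[1], [2], [3]];
    }

    /**
     * @dataProvider valueProvider
     */
    public function testValue(int $value): void
    {
        // setUp()/tearDown() run around each individual data set.
        $this->assertGreaterThan(0, $value);
    }
}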
I'm using Robot Framework to test a REST API, and as such I have to create many temporary resources (a test user or any other resource). Those are abstracted in keywords, but I need to clean them up after each test. I would prefer not to do that cleanup explicitly in each test, since it would force our test cases to operate at different levels of abstraction.
What would be great would be to have a keyword teardown that runs after the test case completes, instead of directly after the keyword itself completes.
I did some research but haven't found a good way to handle this.
Is there any solution to clean up resources created in a keyword at the end of a test case, without doing it explicitly in the test case?
Here is a code example to illustrate the situation:
helper.robot
*** Keywords ***
a user exists
    create a user

Delete user
    actually remove the user
test.robot
*** Settings ***
Resource    helper.robot

*** Test Cases ***
test user can login
    Given a user exists
    When user login
    Then it succeeds
    [Teardown]    Delete user
What I want is to move the teardown out of the test case, so that something deletes the user after each test case without my specifying it in each one. I don't want to configure that exact teardown at the settings level, since we don't always use the same resources in all test cases.
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#test-setup-and-teardown
2.2.6 Test setup and teardown
Robot Framework has similar test setup and teardown functionality as
many other test automation frameworks. In short, a test setup is
something that is executed before a test case, and a test teardown is
executed after a test case. In Robot Framework setups and teardowns
are just normal keywords with possible arguments.
Setup and teardown are always a single keyword. If they need to take
care of multiple separate tasks, it is possible to create higher-level
user keywords for that purpose. An alternative solution is executing
multiple keywords using the BuiltIn keyword Run Keywords.
The test teardown is special in two ways. First of all, it is executed
also when a test case fails, so it can be used for clean-up activities
that must be done regardless of the test case status. In addition, all
the keywords in the teardown are also executed even if one of them
fails. This continue on failure functionality can be used also with
normal keywords, but inside teardowns it is on by default.
The easiest way to specify a setup or a teardown for test cases in a
test case file is using the Test Setup and Test Teardown settings in
the Setting table. Individual test cases can also have their own setup
or teardown. They are defined with the [Setup] or [Teardown] settings
in the test case table and they override possible Test Setup and Test
Teardown settings. Having no keyword after a [Setup] or [Teardown]
setting means having no setup or teardown. It is also possible to use
value NONE to indicate that a test has no setup/teardown.
*** Settings ***
Test Setup       Open Application    App A
Test Teardown    Close Application

*** Test Cases ***
Default values
    [Documentation]    Setup and teardown from setting table
    Do Something

Overridden setup
    [Documentation]    Own setup, teardown from setting table
    [Setup]    Open Application    App B
    Do Something

No teardown
    [Documentation]    Default setup, no teardown at all
    Do Something
    [Teardown]

No teardown 2
    [Documentation]    Setup and teardown can be disabled also with special value NONE
    Do Something
    [Teardown]    NONE

Using variables
    [Documentation]    Setup and teardown specified using variables
    [Setup]    ${SETUP}
    Do Something
    [Teardown]    ${TEARDOWN}
If you provide the teardown in the Setting table, it works the way you want: you don't have to specify it for each test case.
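If the cleanup cannot be one fixed keyword, a possible pattern (sketched below with hypothetical keyword names such as Register Created Resource and Delete Resource; the FOR/END syntax needs Robot Framework 3.1 or newer) is to have the creation keywords register each resource in a test-scoped list and let a single generic Test Teardown delete whatever was registered:

*** Settings ***
Library          Collections
Test Setup       Initialize Resource List
Test Teardown    Clean Up Created Resources

*** Keywords ***
Initialize Resource List
    ${CREATED RESOURCES}=    Create List
    Set Test Variable    ${CREATED RESOURCES}

Register Created Resource
    [Arguments]    ${resource}
    Append To List    ${CREATED RESOURCES}    ${resource}

Clean Up Created Resources
    FOR    ${resource}    IN    @{CREATED RESOURCES}
        # Delete Resource stands in for whatever keyword actually removes it
        Delete Resource    ${resource}
    END

A keyword like "a user exists" would then call Register Created Resource with the user it created, so the teardown stays identical for every test regardless of which resources were created. Tests that need their own setup can combine it with the registration setup using Run Keywords.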
I am testing my scheduled function via this approach:
firebase functions:shell
firebase> RUN_NAME_OF_THE_FUNCTION()
In this function, I verify whether an action should run, and if it should, I send emails. The problem: I can't differentiate between the test and prod environments, as I do not know how to:
Pass an argument to the scheduled function
Understand the context of me running a local function.
Is there a way for me to somehow identify that the scheduled function was run manually?
Pass an argument to the scheduled function
Scheduled functions don't accept custom arguments, so it doesn't really make sense to pass one. They receive a context, and that's all they should expect.
Understand the context of me running a local function.
You can simply set an environment variable on your development machine prior to executing the function. Check this variable at the time of execution to determine that it's being tested, as opposed to invoked on a schedule.
You can also use environment variables to effectively "pass" data to the function for the purpose of development, if that helps you.
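As a minimal sketch of that approach (using the firebase-functions v1 scheduled-function API; sendScheduledEmails and LOCAL_TEST are example names, not anything Firebase defines):

import * as functions from 'firebase-functions';

// Scheduled function that skips the real work when LOCAL_TEST is set.
export const sendScheduledEmails = functions.pubsub
    .schedule('every 24 hours')
    .onRun(async (context) => {
        // The functions shell inherits the parent shell's environment, so a
        // variable exported before `firebase functions:shell` is visible here.
        if (process.env.LOCAL_TEST === 'true') {
            console.log('Manual/local invocation detected; skipping real emails.');
            return null;
        }
        // ...verify the action should run and send the real emails here...
        return null;
    });

You would then start the shell with the variable set, e.g. LOCAL_TEST=true firebase functions:shell, and call the function as before; in production the variable is absent and the real branch runs.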
I've got Robot Framework tests with the below schema:
Suite Setup
Test Case 1
Test Case 2
Test Case 3
...
Suite Teardown
In the teardown step I have a loop that goes through all test cases and does some additional checks for each one (I can't do this while the test cases are executing, because I need to wait some time for certain operations in an external system). If any of these checks fails, the teardown step fails, and that in turn fails every test case. I can make the teardown keyword never fail, but then everything in the test suite will show as passed.
Is there any option/feature (or workaround) that would let me set the status and error message of a selected test case in the teardown step (something like tc[23].status=fail, tc[23].message='something')?
This is not possible, at least not out of the box. In any event, I also think this is not a desirable test approach: every test should be self-contained, and all the logic to assess PASS or FAIL should be in that test. Revisiting the result is, in my view, an anti-pattern.
It is understandable that when there is a long pause of inactivity you would like to progress with your tests. However, I think that parallelising your tests is a better and more stable approach. For Robot Framework there is Pabot to help you with that, though creating your own test runner is also possible.
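For reference, a minimal Pabot invocation might look like this (the process count and test path are examples; by default Pabot splits execution at the suite level):

pip install robotframework-pabot
pabot --processes 4 tests/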
I want to test a flow that expects several Cash States to exist.
At the moment I have to run two other flows before I can run my flow under test.
Obviously this isn't a great test, as one of the other flows could fail before my actual test code starts.
How can I initialise the vault directly before running my test?
Take a look at the VaultFiller class, which is used internally by some Corda tests to pre-fill the vault without using flows.
It is defined here: https://github.com/corda/corda/blob/release-V3/testing/test-utils/src/main/kotlin/net/corda/testing/internal/vault/VaultFiller.kt
You can see a sample usage here: https://github.com/corda/corda/blob/release-V3/finance/src/test/kotlin/net/corda/finance/contracts/asset/CashTests.kt#L98
I know there is an option in PHPUnit to stop on failure, but I don't want it to stop when any test fails, just when this specific test fails.
For example, in my setUp I connect to a DB, and in the first test I check if it connected to the correct DB. If that fails, I can't run the rest of the tests.
Use the @depends feature of PHPUnit.
If a test depends on another, it will only be executed if that other test succeeded. Otherwise it will be skipped. This allows you to pinpoint problems better.
Usage: add a PHPDoc block at the top of the test function that should only be executed when the other test is successful, containing the line @depends testConnectToDb.
See http://phpunit.de/manual/current/en/appendixes.annotations.html#appendixes.annotations.depends and http://phpunit.de/manual/current/en/writing-tests-for-phpunit.html#writing-tests-for-phpunit.test-dependencies for some more details.
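A hedged sketch of how this could look (testConnectToDb comes from the question; the in-memory PDO connection is a stand-in for whatever DB check the suite actually does). A test can also return a value, which PHPUnit passes into the dependent test:

<?php
use PHPUnit\Framework\TestCase;

class DatabaseTest extends TestCase
{
    public function testConnectToDb(): PDO
    {
        // Stand-in for the real "connected to the correct DB" check.
        $connection = new PDO('sqlite::memory:');
        $this->assertInstanceOf(PDO::class, $connection);
        return $connection;  // made available to dependent tests
    }

    /**
     * @depends testConnectToDb
     */
    public function testQuery(PDO $connection): void
    {
        // Skipped automatically if testConnectToDb fails.
        $stmt = $connection->query('SELECT 1');
        $this->assertNotFalse($stmt);
    }
}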