Robot Framework - way to set statuses of previous test cases

I have Robot Framework tests with the following structure:
Suite Setup
Test Case 1
Test Case 2
Test Case 3
...
Suite Teardown
In the teardown step I have a loop that goes through all the test cases and performs some additional checks for each of them (I cannot do this while the test cases execute, because the checks need to wait some time for certain operations in an external system). If any of these checks fails, the teardown step fails, and that in turn fails every test case. I can configure the teardown keyword so that it does not fail the teardown step, but then everything in the test suite passes.
Is there any option/feature (or workaround) that would give me the possibility to set the status and error message of a selected test case in the teardown step (something like tc[23].status='FAIL', tc[23].message='something')?

This is not possible, at least not out of the box. In any event, I also think this is not a desirable test approach. Every test should be self-contained, and all the logic to assess PASS or FAIL should be in that test. Revisiting the result is, in my view, an anti-pattern.
It is understandable that when there is a long pause of inactivity you would like to progress with your tests. However, I think that parallelising your tests is a better and more stable approach. For Robot Framework there is Pabot to help you with that, but creating your own test runner is also possible.
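For example, a typical Pabot invocation might look like this (assuming Pabot has been installed with pip install robotframework-pabot; the directory name is just a placeholder):
pabot --processes 4 tests/
Pabot splits execution at suite level by default, so the waiting periods of one suite no longer block the others.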

Related

Resource cleanup after each test

I'm using Robot Framework to test a REST API, and as such I have to create many temporary resources (a test user or any other resource). Those are abstracted in keywords, but I need to clean them up after each test. I would like to not have to worry about cleaning them up explicitly in each test, since that would require our test cases to "play" at different levels of abstraction.
What would be great would be to have the keyword's teardown run after the test case is completed instead of directly after the keyword is completed.
I did some research but haven't found a good way to handle that.
Is there any solution to clean up resources created in a keyword at the end of a test case without doing it explicitly in the test case?
here is a code example to illustrate the situation:
helper.robot
*** Keywords ***
a user exists
    create a user

Delete user
    actually remove the user
test.robot
*** Settings ***
Resource    helper.robot

*** Test Cases ***
test user can login
    Given a user exists
    When user login
    Then it succeeds
    [Teardown]    Delete user
What I want is to move the teardown out of the test case to some other mechanism that deletes the user after each test case, but without specifying it in each test case. I don't want to configure that exact teardown at the settings level, since we don't always use the same resource for all test cases.
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#test-setup-and-teardown
2.2.6 Test setup and teardown
Robot Framework has similar test setup and teardown functionality as
many other test automation frameworks. In short, a test setup is
something that is executed before a test case, and a test teardown is
executed after a test case. In Robot Framework setups and teardowns
are just normal keywords with possible arguments.
Setup and teardown are always a single keyword. If they need to take
care of multiple separate tasks, it is possible to create higher-level
user keywords for that purpose. An alternative solution is executing
multiple keywords using the BuiltIn keyword Run Keywords.
The test teardown is special in two ways. First of all, it is executed
also when a test case fails, so it can be used for clean-up activities
that must be done regardless of the test case status. In addition, all
the keywords in the teardown are also executed even if one of them
fails. This continue on failure functionality can be used also with
normal keywords, but inside teardowns it is on by default.
The easiest way to specify a setup or a teardown for test cases in a
test case file is using the Test Setup and Test Teardown settings in
the Setting table. Individual test cases can also have their own setup
or teardown. They are defined with the [Setup] or [Teardown] settings
in the test case table and they override possible Test Setup and Test
Teardown settings. Having no keyword after a [Setup] or [Teardown]
setting means having no setup or teardown. It is also possible to use
value NONE to indicate that a test has no setup/teardown.
*** Settings ***
Test Setup       Open Application    App A
Test Teardown    Close Application

*** Test Cases ***
Default values
    [Documentation]    Setup and teardown from setting table
    Do Something

Overridden setup
    [Documentation]    Own setup, teardown from setting table
    [Setup]    Open Application    App B
    Do Something

No teardown
    [Documentation]    Default setup, no teardown at all
    Do Something
    [Teardown]

No teardown 2
    [Documentation]    Setup and teardown can be disabled also with special value NONE
    Do Something
    [Teardown]    NONE

Using variables
    [Documentation]    Setup and teardown specified using variables
    [Setup]    ${SETUP}
    Do Something
    [Teardown]    ${TEARDOWN}
If you provide the teardown in the *** Settings *** section, it works the way you want; you don't have to specify it for each test case.
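Applied to the example from the question, that would look something like this minimal sketch (assuming Delete user can safely run even for tests that created no user):
*** Settings ***
Resource         helper.robot
Test Teardown    Delete user

*** Test Cases ***
test user can login
    Given a user exists
    When user login
    Then it succeeds
Individual test cases can still override this with their own [Teardown] when they use a different resource.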

Reference the default suite teardown in a case's custom [Teardown]

Is it possible - and if yes, how - to call the default teardown defined in the suite when you're overriding it in a specific test case?
Example:
*** Settings ***
Suite Setup      Setup The Environment
Test Teardown    Clean The System

*** Test Cases ***
Test the thing
    Do something
    Create an object
    [Teardown]    Delete the object    # at this point the suite's test teardown is overridden; "Clean The System" will not be called
The question: is there an internal reference to the suite-level Test Teardown, or a setting to force its execution after any custom teardown in a test case, apart from the obvious
Run Keywords    Delete the object    AND    Clean The System
The latter shifts the responsibility to the person creating the test cases and can easily be overlooked, especially in a large suite/long list of keywords.
I don't think there's any way to do what you want. Your obvious solution is the only solution.
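For reference, the workaround mentioned in the question, spelled out as a full teardown:
*** Test Cases ***
Test the thing
    Do something
    Create an object
    [Teardown]    Run Keywords    Delete the object    AND    Clean The System
Run Keywords is a BuiltIn keyword, and because teardowns continue on failure, Clean The System still runs even if Delete the object fails.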

What's wrong with my robot framework teardown?

I'm new to using robot framework and I'm struggling to get my teardown to work.
It currently looks like:
[Teardown]    run keyword if any tests failed    KeyFail
When I run the program with code like this, I get the error: Keyword 'Run Keyword If Any Tests Failed' can only be used in suite teardown.
I can change it so that I put it inside its own test case; however, I then get the error: Test case contains no keywords.
Please advise me as to what I'm doing wrong. It would be appreciated. Thanks.
Edit:
*** Keywords ***
Generation
    (Some stuff)

KeyFail
    log to console    Error report being sent.

*** Test Cases ***
Requires successful generation of file
    Generation

Teardown Case
    [Teardown]    run keyword if any tests failed    KeyFail
Edit: And how do I fix this problem? Thanks.
It looks like you have defined it in a test case teardown instead of the test suite teardown. You can change it to use the suite teardown instead.
Edit: Here are two solutions:
1. Change your keyword to the TEST-specific one, Run Keyword If Test Failed, which applies to the last executed test and can only be used in a test teardown (see the sketch after the code below).
2. Use suite setups/teardowns. These apply to ALL test cases that you run. Like this:
*** Settings ***
Suite Setup       Your Test Setup Keyword
Suite Teardown    run keyword if any tests failed    KeyFail

*** Keywords ***
Generation
    (Some stuff)

KeyFail
    log to console    Error report being sent.

*** Test Cases ***
Requires successful generation of file
    Generation

Teardown Case
    Stuff to do
    # teardown is automatic, and does not need to be called.
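For completeness, the TEST-specific solution from point 1 might look like this minimal sketch (teardown attached directly to the test case):
*** Test Cases ***
Requires successful generation of file
    Generation
    [Teardown]    Run Keyword If Test Failed    KeyFail
Run Keyword If Test Failed runs KeyFail only when this particular test has failed, which is usually closer to what a per-test error report needs.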

RobotFramework : Getting testcase results while execution is in progress

We are using Robot Framework and the RIDE tool for test case execution. We have 100+ test cases, and test execution takes more than 6 hours to complete.
The RF result and log HTML files are great for viewing results, but these two files are only viewable after test execution has completed.
Is there any plugin/tool or mechanism to view test case result statuses during execution? In the RIDE tool, the "Run" tab only shows pass:<> fail:<> counts, which is not very useful.
I need a real-time test case status report instead of waiting for completion.
You can use the listener interface. With it, you can have Robot Framework call a Python function each time a keyword, test case, or suite starts and finishes. For the cases where they finish, the data that is passed in will include the pass or fail status.
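A minimal sketch of such a listener (listener API version 2; the file name and output format are just examples):
# status_listener.py
# Attach with: robot --listener status_listener.py tests/
ROBOT_LISTENER_API_VERSION = 2

def end_test(name, attributes):
    # Called whenever a test finishes; 'attributes' is a dict that
    # includes the test's status ('PASS'/'FAIL') and failure message.
    print(f"{name}: {attributes['status']}  {attributes['message']}")
Because the function prints as each test finishes, you can follow progress on the console (or redirect it to a file) while the long run is still going.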
Using the listener interface (as Bryan Oakley suggested) is surely the most flexible way to intercept test progression status. If you are looking for tools, Jenkins (with the Robot Framework plugin) gives you the opportunity to follow a test run in real time at test case granularity. Just start a job and switch to the (Jenkins) console to see the output dropping in.

How to set delay between converge and verify on kitchen test?

I'm running Serverspec integration smoke tests with Test Kitchen on a system built with Vagrant + Chef Solo. When I run kitchen test, the tests are started right after a successful converge, and some of my tests fail because it takes time for the system to fully start up for the first time.
So I'm wondering what would be a good way to insert a delay between converge and verify while otherwise preserving the default behaviour of kitchen test. Currently I have the following ideas:
write a shell script that runs kitchen converge, aborts if the converge was unsuccessful, sleeps xx seconds, runs kitchen verify, and on success runs kitchen destroy. But this would not allow running multiple suites in parallel (I'm testing multiple versions of the system).
create a recipe that just executes sleep xx and append it to the end of the Chef run list. This seems to work, but looks a bit too "hacky" to me.
Does anyone know a better way?
For now I went with idea 2. I also created a feature request: https://github.com/test-kitchen/test-kitchen/issues/598
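For reference, the sleep recipe from idea 2 might look something like this minimal Chef sketch (the cookbook name and timing are hypothetical):
# cookbooks/smoke_delay/recipes/default.rb
# Appended as the last item of the run list, so converge only
# finishes after the system has had time to start up.
execute 'wait for system startup' do
  command 'sleep 60'
end
Appending it as the last entry of the run list keeps kitchen test itself unchanged; the delay simply becomes part of the converge phase.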
