Robot Framework: How to build teardown dynamically

I am trying to build teardown actions dynamically in my test case. For example, every step in the test case has a corresponding teardown step. Depending on where the test case fails, I want to run only the cleanup actions for the steps that actually executed.
I was expecting something like the example below to work (unfortunately Run Keywords needs the AND separator spelled out explicitly in its syntax):
*** Settings ***
Library    Collections

*** Test Cases ***
Sample Test1
    ${Cleanup KWS}=    Create List    Log    Cleanup Step1
    Log    Test Step1
    ${Cleanup KW}=    Create List    Log    Cleanup Step2    AND
    ${Cleanup KWS}=    Combine Lists    ${Cleanup KW}    ${Cleanup KWS}
    Log    Test Step2
    ${Cleanup KW}=    Create List    Log    Cleanup Step3    AND
    ${Cleanup KWS}=    Combine Lists    ${Cleanup KW}    ${Cleanup KWS}
    Log    Test Step3
    [Teardown]    Run Keywords    @{Cleanup KWS}
If the above were possible, the test execution could be more efficient (when a test fails midway) and I could avoid unnecessary failures during the teardown stage.
Is there any other elegant way to support this kind of desired behavior?

You can do this by defining each teardown step as a keyword in the Keywords section:
*** Keywords ***
Teardown_Step_1
    Log To Console    Teardown for step 1

Teardown_Step_2
    Log To Console    Teardown for step 2

Teardown_Step_3
    Log To Console    Teardown for step 3

Execute teardown steps
    [Documentation]    Execute a list of keywords
    [Arguments]    @{keywords}
    ${result}=    Create List
    FOR    ${keyword}    IN    @{keywords}
        ${keyword result}=    Run Keyword    ${keyword}
        Append To List    ${result}    ${keyword result}
    END
    [Return]    @{result}
And then in the test you can use the above keywords like this:
*** Test Cases ***
Sample Test1
    @{teardown_list}=    Create List
    Log    Test Step1
    Append To List    ${teardown_list}    Teardown_Step_1
    Log    Test Step2
    Append To List    ${teardown_list}    Teardown_Step_2
    Log    Test Step3
    Append To List    ${teardown_list}    Teardown_Step_3
    [Teardown]    Execute teardown steps    @{teardown_list}
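If a cleanup step fails, the teardown (and therefore the test) is marked failed, even though Robot Framework still runs the remaining teardown keywords in continue-on-failure mode. If you would rather have cleanup problems reported as warnings instead of failing the test, a variant of the keyword above could wrap each call in Run Keyword And Ignore Error; this is only a sketch, not part of the original answer:
*** Keywords ***
Execute teardown steps ignoring errors
    [Documentation]    Run every cleanup keyword, reporting failures as warnings
    [Arguments]    @{keywords}
    FOR    ${keyword}    IN    @{keywords}
        # Run Keyword And Ignore Error returns a (status, message) pair instead of failing
        ${status}    ${message}=    Run Keyword And Ignore Error    ${keyword}
        Run Keyword If    '${status}' == 'FAIL'    Log    Cleanup '${keyword}' failed: ${message}    WARN
    END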


Robot: Set and use a local list in Test Cases

I need to set a list inside a test case and use it in [Setup] to pass it to a Python script. How can I achieve this?
TEST-List
    @{lst}    Create List    a    b
    @{tmp}    Set Test Variable    @{lst}
    [Setup]    Receive List    ${tmp}    ${another_var}
When I try the code above, I get this error:
Variable '${tmp}' not found.
The [Setup] setting is used for performing actions before a test case. Its purpose is to set up the state for your test, which implies that it happens (executes) before the test steps, regardless of where you type it.
In your case, [Setup] Receive List ${tmp} ${another_var} is executed first, and the ${tmp} variable has not been declared yet.
The solution might be to move the declaration of ${tmp} to the suite level.
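For example, a minimal sketch of that idea declares the list in the Variables table so it already exists before [Setup] runs; the Receive List keyword and ${another_var} below are only placeholders standing in for the question's real keyword and variable:
*** Variables ***
# Declared at suite level, so they exist before [Setup] executes
@{lst}             a    b
${another_var}     placeholder value

*** Test Cases ***
TEST-List
    [Setup]    Receive List    ${lst}    ${another_var}
    Log    The list was already available when the setup ran

*** Keywords ***
Receive List
    [Arguments]    ${the_list}    ${extra}
    # Stand-in for the real keyword that forwards the list to the Python script
    Log Many    @{the_list}
    Log    ${extra}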
In your example, the code in [Setup] is run before any other code in the test. Therefore, @{lst} and @{tmp} are undefined at the time that it runs.
The simplest solution is to create a local keyword that performs everything you need in the setup, and then call that keyword from [Setup].
Example
*** Keywords ***
Initialize test
    @{lst}=    Create List    a    b
    Set Test Variable    ${tmp}    ${lst}
    Receive List    ${tmp}    ${another_var}

*** Test Cases ***
TEST-List
    [Setup]    Initialize test
    # ... rest of your test goes here ...

Unable to see Set Test Message log value after test case marked as fail

I have the below code:
*** Settings ***
Library    OperatingSystem
Library    Process
Library    String

*** Variables ***
@{MyList}=    item    items    items2
${LogStr1}    *HTML*

*** Test Cases ***
#Start Test#
[xxxxx] My tests
    FOR    ${item}    IN    @{MyList}
        General    test.out    testProfile    ${item}
        [Template]    Run Test
        [Tags]    TestTags
    END

*** Keywords ***
Run Test
    [Documentation]    Run the test
    [Arguments]    ${type}    ${profile}    ${file}    ${test}
    When suite config is updated
    And updated the config in directory ${test}
    Then publish test status

suite config is updated
    [Documentation]    Get the variables list
    Log To Console    "Updating get suite config file"

updated the config in directory ${test}
    [Documentation]    Get the variables list
    Run Keyword If    "${test}" == "items"    Stop Test    "This is stop called"

publish test status
    [Documentation]    Create and check if any issue found
    ${LogStr}=    Catenate    Test Passed :    Log created: Hello
    Log To Console    ${LogStr}
    ${LogStr1}=    Catenate    ${LogStr1}    ${LogStr}\n
    Set Test Variable    ${LogStr1}
    Set Test Message    ${LogStr1}

Stop Test
    [Documentation]    Stop Execution
    [Arguments]    ${FIALUREMSG}
    Log To Console    ${FIALUREMSG}
    ${LogStr1}=    Catenate    ${LogStr1}    ${FIALUREMSG}
    Fail    ${LogStr1}
As per the code, the test can programmatically be made to fail on the first, second, or third run. So when I have code like
Run Keyword If    "${test}" == "item"    Stop Test    "This is stop called"
in the mentioned keyword, two of the runs pass for the suite, but the report shows only the failure message.
Now if I make the second run fail instead, with
Run Keyword If    "${test}" == "items"    Stop Test    "This is stop called"
in the mentioned keyword, again two runs pass, but the report behaves the same way.
Similarly with
Run Keyword If    "${test}" == "items2"    Stop Test    "This is stop called"
and so on. Hence it seems that the Set Test Message log values are ignored in the report message when a test case is marked as FAIL. Note that the log.html content described above is from the run where the very first iteration was made to fail:
Run Keyword If    "${test}" == "item"    Stop Test    "This is stop called"
In all, my question is: if I want the report.html file to show the logs for both failed and passed test cases, how can I achieve it?
If you check the documentation of the Set Test Message keyword, it says that any failure will override the messages, but you have the option to override the failure message from the teardown:
In test teardown this keyword can alter the possible failure message,
but otherwise failures override messages set by this keyword. Notice
that in teardown the message is available as a built-in variable
${TEST MESSAGE}.
So what you can do, instead of calling Set Test Message directly, is save the messages into a test variable. Then add a teardown in which you call Set Test Message and concatenate your test variable with ${TEST MESSAGE}. For example:
*** Test Cases ***
Test
    [Template]    Template
    [Setup]    Set Test Variable    ${MSG}    ${EMPTY}    # create empty test variable to store messages
    1
    3
    2
    5
    6
    4
    6
    [Teardown]    Set Test Message    ${MSG}\n${TEST MESSAGE}    # concatenate failure messages to normal test messages

*** Keywords ***
Template
    [Arguments]    ${number}
    No Operation
    Run Keyword If    ${number} == 2    Fail    fail message ${number}
    Run Keyword If    ${number} == 4    Fail    fail message ${number}
    Set Test Variable    ${MSG}    ${MSG}\nMy test message ${number}    # concatenate next test message
This example produces the following report:
With this approach you could have only the template tests in the particular robot suite, as all tests listed in the Test Cases table will invoke the template.
Another, completely different solution is to get rid of the FOR loop, as the elements in @{MyList} are static. If you move the template into the Settings table and manually list all iterations, you can separate each one into an independent test case. This way a failure in one iteration won't affect the test message set in another iteration. For example:
*** Settings ***
Test Template    Template

*** Test Cases ***
Test1    1
Test2    2
Test3    3
Test4    4
Test5    5
Test6    6

*** Keywords ***
Template
    [Arguments]    ${number}
    No Operation
    Run Keyword If    ${number} == 2    Fail    fail message ${number}
    Run Keyword If    ${number} == 4    Fail    fail message ${number}
    Set Test Message    My test message ${number}
This would produce the following report:
You have a third option in addition to my other answer, but it is a bit more advanced, so I have decided to post it as a separate answer.
You can write a small test library in Python that also acts as a listener. In this listener library you need three functions, out of which two are keywords and one is a listener function.
The first function is _log_message, which is described in listener API version 2. It is called by the framework whenever any logging happens, so when a test fails it receives the log entry of the failure, which can be saved for later use. Because its name starts with an underscore, it is not available in a robot suite as a keyword.
def _log_message(self, message):
    if message['level'] == 'FAIL':
        self.test_message = f"{self.test_message}\n{message['message']}"  # concatenate failure message
The second function, add_test_message, will replace the Set Test Message keyword in your code. Its purpose is similar: it appends the messages you want to set as the test message. It can be called from a robot suite as a keyword.
def add_test_message(self, message):
    self.test_message = f"{self.test_message}\n{message}"  # concatenate normal test message
The last function set_final_test_message will set the actual test message for you. This keyword has to be called at the end of the test teardown to ensure no other failure will override your test message. It simply calls the Set Test Message keyword internally and sets the string created by the previous two functions. This can be called from a robot suite as a keyword.
def set_final_test_message(self):
    """
    Call this keyword at the end of the test case teardown.
    This keyword can only be used in a test teardown.
    """
    BuiltIn()._get_test_in_teardown('Set Final Test Message')  # Check if we are in the test teardown, fail if not.
    BuiltIn().set_test_message(self.test_message)  # Call Set Test Message internally
As the purpose of the library is to set test messages, the library scope should be TEST CASE. This means that a new library object will be created before every test case, effectively resetting any messages set by previous tests.
Here is the whole code of the library (TestMessageLibrary.py):
from robot.libraries.BuiltIn import BuiltIn


class TestMessageLibrary(object):
    ROBOT_LIBRARY_SCOPE = 'TEST CASE'    # define library scope
    ROBOT_LISTENER_API_VERSION = 2       # select listener API
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self    # tell the framework that it will be a listener library
        self.test_message = ''                # internal variable to build the final test message

    def _log_message(self, message):
        if message['level'] == 'FAIL':
            self.test_message = f"{self.test_message}\n{message['message']}"  # concatenate failure message

    def add_test_message(self, message):
        self.test_message = f"{self.test_message}\n{message}"  # concatenate normal test message

    def set_final_test_message(self):
        """
        Call this keyword at the end of the test case teardown.
        This keyword can only be used in a test teardown.
        """
        BuiltIn()._get_test_in_teardown('Set Final Test Message')  # Check if we are in the test teardown, fail if not.
        BuiltIn().set_test_message(self.test_message)


globals()[__name__] = TestMessageLibrary
This is an example suite with the library:
*** Settings ***
Library    TestMessageLibrary

*** Test Cases ***
Test
    [Template]    Template
    1
    3
    2
    5
    6
    4
    6
    [Teardown]    Set Final Test Message

Other Test
    Add Test Message    This should not conflict
    [Teardown]    Set Final Test Message

*** Keywords ***
Template
    [Arguments]    ${number}
    No Operation
    Run Keyword If    ${number} == 2    Fail    fail message ${number}
    Run Keyword If    ${number} == 4    Fail    fail message ${number}
    Add Test Message    My test message ${number}
The example run is started with robot --pythonpath ./ SO.robot. As the library is in the same directory as the suite file, --pythonpath ./ is needed to be able to import the library.
Report file:

RobotFramework/RIDE can't pass sample test - "No keyword with name 'Run ${sakura}' found."

I'm trying the sample test of Robot Framework/RIDE according to this article:
Desktop Application Automation With Robot Framework
https://medium.com/@joonasvenlinen/desktop-application-automation-with-robot-framework-6dc39193a0c7
Now I have set up RIDE and run it, and made my first test code below:
*** Settings ***
Documentation    sample
Library    OperatingSystem
Library    C:/Python27/Lib/site-packages/AutoItLibrary/

*** Variables ***
${sakura}    C:\Sakura\sakura.exe

*** Test Cases ***
first_test
    first_test_run

*** Keywords ***
first_test_run
    log to console    Hello, world!
    Run ${sakura}
But when I run this test in RIDE, I get the result report below:
command: pybot.bat --argumentfile c:\users\tie292~1\appdata\local\temp\RIDEujrsg3.d\argfile.txt --listener C:\Python27\Lib\site-packages\robotide\contrib\testrunner\TestRunnerAgent.py:57677:False C:\Users\tie292025\Desktop\first_test.robot
TestRunnerAgent: Running under CPython 2.7.13
First Test :: sample
first_test Hello, world!| FAIL |
No keyword with name 'Run ${sakura}' found.
First Test :: sample | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
My application environment is below:
numpy==1.16.5
Pillow==6.2.1
Pygments==2.4.2
PyPubSub==3.3.0
pywin32==227
robotframework==3.1.2
robotframework-autoitlibrary==1.2.2
robotframework-ride==1.7.3.1
robotframeworklexer==1.1
six==1.13.0
wxPython==4.0.7.post2
Can anyone help?
Put two or more spaces between Run and ${sakura}.
Right now Robot tries to find a keyword called Run ${sakura} rather than the keyword Run with the ${sakura} value as an argument.
You can test
${sakura}    C:\\Sakura\\sakura.exe
or
Run    C:\\Sakura\\sakura.exe
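Putting it together, a corrected version of the keyword from the question might look like the sketch below (assuming the Run keyword of the OperatingSystem library is the one you want; the doubled backslashes follow the suggestion above):
*** Settings ***
Library    OperatingSystem

*** Variables ***
${sakura}    C:\\Sakura\\sakura.exe

*** Keywords ***
first_test_run
    Log To Console    Hello, world!
    # Two or more spaces separate the keyword name from its argument
    Run    ${sakura}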

robotframework: How can I run test cases until one of them fails?

Here is my situation: I need to run a number of test cases repeatedly over a long period of time (stress test + longevity test). In the test case there are a number of events that should pass all the time; however, I would like to catch any unexpected failure.
Is there a way to set up a robot test suite to keep executing over a period of time, or until it encounters a failure?
While you certainly can do an endurance test in Robot Framework, you may find that too much memory is consumed, which may cause the interpreter to exit prematurely with a MemoryError. If you implement your test as a keyword, you can run your "test" many times with a FOR loop. In the example below, Scenario is where your test code would go. This is just a simulation that will fail after 200 runs.
*** Variables ***
${ITERATION}    ${1}

*** Test Cases ***
Endurance Test
    [Timeout]    4 hours
    :FOR    ${i}    IN RANGE    1000
    \    Scenario

*** Keywords ***
Scenario
    Set Suite Variable    ${ITERATION}    ${ITERATION+1}
    Run Keyword If    ${ITERATION} > 200    Fail    You wore me out
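If you also want the loop itself to stop at the first failing iteration and report which iteration failed, one possible variation (a sketch using the newer FOR/END syntax, not part of the answer above) checks each iteration with Run Keyword And Return Status and exits the loop on the first failure:
*** Test Cases ***
Run Scenario Until It Fails
    [Timeout]    4 hours
    FOR    ${i}    IN RANGE    1000
        ${passed}=    Run Keyword And Return Status    Scenario
        # Leave the loop as soon as one iteration fails
        Exit For Loop If    not ${passed}
    END
    Run Keyword If    not ${passed}    Fail    Scenario failed on iteration ${i}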

Run a test case Multiple times and display the pass and fail count under test statistics

How to run a particular test case multiple times and display the pass and fail count under Test Statistics?
Below is the current code I have to run a test case multiple times. (The test case is implemented in a keyword and called)
*** Test Cases ***
Testcase
    Repeat Keyword    5    Run Keyword And Continue On Failure    Execute

*** Keywords ***
Execute
    Log    Hello world!
The code is run from cmd using "pybot testcase.robot"
This code runs the test multiple times, but I don't get a final pass/fail count in the logs; I have to count the passed and failed repetitions manually.
What modifications should I make so that the counts are produced automatically and also appear in the Test Statistics section of the log?
Instead of using Repeat Keyword, use a FOR loop, and use Run Keyword And Return Status instead of Run Keyword And Continue On Failure:
*** Test Cases ***
Test Me
    ${fail}=    Set Variable    0
    :FOR    ${index}    IN RANGE    5
    \    ${passed}=    Run Keyword And Return Status    Execute
    \    Continue For Loop If    ${passed}
    \    ${fail}=    Evaluate    ${fail} + 1
    ${success}=    Evaluate    5 - ${fail}
    Log Many    Success: ${success}
    Log Many    fail: ${fail}
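If the goal is for every repetition to appear in the report's Test Statistics as its own pass/fail entry, another option (a sketch, not part of the answer above, with the Execute keyword adapted to take an argument) is to turn each repetition into a separate test case with Test Template, so the framework counts passes and failures for you:
*** Settings ***
Test Template    Execute

*** Test Cases ***
Run 1    1
Run 2    2
Run 3    3
Run 4    4
Run 5    5

*** Keywords ***
Execute
    [Arguments]    ${repetition}
    Log    Hello world! (repetition ${repetition})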
