How to run JUnit test last - like @Order(999) - automated-tests

I have 20 tests in one class, but one test needs to run last. Is there any option to achieve that, like an annotation @Order(Last) or @Order(999)?
The best I can do is mark all the other test methods with @Order(1) to @Order(19) and give the test that needs to run last @Order(100).
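A minimal sketch of that approach in JUnit 5 (assuming JUnit Jupiter; class and method names here are illustrative). Un-annotated methods get Order.DEFAULT (Integer.MAX_VALUE / 2 in recent JUnit 5 releases), so a single larger value pushes one test to the end without numbering the other nineteen:

import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

// Enable ordering by the @Order annotation for this class.
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class TwentyTests {

    @Test
    void someTest() {
        // No @Order needed; un-annotated tests keep Order.DEFAULT.
    }

    @Test
    @Order(Integer.MAX_VALUE)  // greater than Order.DEFAULT, so this runs last
    void runsLast() {
    }
}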

Related

Defining setup, teardown and variable in argumentfile in robotframework

Basically, two issues:
1. I plan to execute multiple test cases from an argument file. The structure would look like this:
SOME_PATH/
-test_cases/
-some_keywords/
-argumentfile.txt
How should I define a suite setup and teardown for all those test cases executed from the file (-A file)?
From what I know:
a) I could put them in the files with the first and last test cases, but the order of the test cases may change, so that is not desired.
b) Provide them in __init__.robot and put it somewhere without test cases, only to get the setup and teardown (a minimal sketch of such a file is shown below). This is because if I execute:
robot -i SOME_TAG -A argumentfile /path/to/init
and the init file is in the test_cases folder, it will execute the test cases with the specific tag plus those in the folder twice.
Is there any better way? Providing it, for example, in the argument file?
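For reference, the settings-only __init__.robot that option b describes is just a file with suite-level settings; a minimal sketch (the Log keyword calls are placeholders, not from the question):

*** Settings ***
Suite Setup       Log    runs once before all tests under this directory
Suite Teardown    Log    runs once after all tests under this directory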
2. How do I provide a path variable in argument files in Robot Framework?
I know there is the possibility to do:
--variable PATH:some/path/to/files
but isn't that for the test suite environment? How do I get that variable to be visible in the argument file itself: ${PATH}/test_case_1.robot
For your 2nd question, you could create a temporary environment variable that you'd then use. Depending on the OS you're using, the way you'll do this will be different:
Windows:
set TESTS_PATH=some/path/here
robot %TESTS_PATH%/test_case_1.robot
Unix:
export TESTS_PATH="some/path/here"
robot $TESTS_PATH/test_case_1.robot
(The path is passed as a positional argument; robot's -t option selects tests by name, not by file.)
PS: you might want to avoid asking multiple different questions in the same thread.

How to run tests in random order with robotframework maven plugin?

I am trying to run Robot Framework test cases from Eclipse with the Robot Framework Maven plugin. Can anyone tell me the configuration of pom.xml to run the test cases in my given order instead of alphabetical order? For example, I have the following tags in the corresponding test suites:
TestSuite1 ->
Testcase1.robot -> MyTestcase1    [Tags]    a
Testcase2.robot -> MyTestcase2    [Tags]    b
Testcase3.robot -> MyTestcase3    [Tags]    c
I want to execute the above test cases in a random order. If I write in pom.xml
<includes_cli>b,a,c</includes_cli>
it executes the tests in alphabetical order instead of my given order. Does anyone have a solution for that?
Br,
Dew
You can use the --randomize option to execute the test cases in random order, as below:
Case 1:
robot --randomize tests <Testcase1.robot>
tests: Test cases inside each test suite will be executed in random order
Case 2:
robot --randomize suites <path/to/Testsuite>
suites: All test suites will be executed in a random order, but test cases inside suites will run in the order they are defined
It looks like the latest version of the maven plugin has a randomize option:
http://robotframework.org/MavenPlugin/run-mojo.html#randomize
options are:
<randomize>all</randomize>
<randomize>suite</randomize>
<randomize>test</randomize>
with the default being no randomization.
Looks like the same options as for the --randomize command line argument for the robot command:
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#randomizing-execution-order
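For context, a sketch of where that option sits in the plugin configuration (the coordinates are the usual org.robotframework ones; the version number is illustrative, not from the thread):

<plugin>
    <groupId>org.robotframework</groupId>
    <artifactId>robotframework-maven-plugin</artifactId>
    <version>1.4.0</version> <!-- illustrative version -->
    <configuration>
        <!-- one of: all, suite, test (as listed above) -->
        <randomize>test</randomize>
    </configuration>
</plugin>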

Need to run marked tests dynamically using py.test

I was looking for a way to run selected tests at run time. I found that each test can be marked using @pytest.mark.
import pytest

@pytest.mark.feature1
@pytest.mark.priority1
def test_m1():
    ...

def test_m2():
    ...

or something like: @pytest.mark(Feature.feature1, Priority.priority2)
Now I need to run the test scripts which are marked as feature1, without giving the script names.
Something like (not sure about the command):
py.test -m "feature1"
It should pick all scripts which are marked as feature1 in the test suite, as other scripts might be marked as feature2..n as well.
Please suggest:
Is the above way to mark tests and pick them at run time recommended or not?
Do I need to have classes to mark tests? I believe the above way will mark the complete test.
What is the command to pick marked tests from the complete test suite?
Thanks!!
I think what you are looking for is the -m option of py.test.
After marking tests with different labels (using @pytest.mark.label, where label is feature1 or whatever you want), you can run all the feature1-marked tests using:
py.test -m feature1
This will run only the tests marked with the 'feature1' label.
Marked tests can be combined:
py.test -m "feature1 or feature2"
or skipped using not, such as
py.test -m "not (feature1 or feature2)"
I think you are looking for @pytest.mark.skipif. This will skip tests according to the condition you put in the marker. You can use a string condition to skip the desired tests.
OK, in that case this might be the answer: pytest -k string selects all tests that contain the given string in their name and runs them.

Is it possible to write Robot Framework tests (not keywords) in Python?

Is it possible to write Robot Framework tests in Python instead of the .txt format?
Behind the scenes it looks like the .txt tests get converted into Python by pybot, so I'm hoping this is simply a matter of importing the right library and inheriting from the right class, but I haven't been able to figure out how to do that.
(We already have a bunch of suites and have keywords written in both formats, but sometimes the RF syntax makes it very difficult to do things that are simple in Python. I understand it would be possible to just write a Python keyword for each test plus 'wrap' setup and teardown functions the same way, but that seems cumbersome.)
Robot does not convert your test cases to Python behind the scenes before running them. Instead, it parses the test cases, then iterates over each keyword, calling the code that implements the keyword. There is never a stage where there's a completely pure Python representation of a test case.
It is not possible to write tests in Python and have those tests run alongside traditional robot tests by the provided test runner. Like you said in your question, your only option is to put all of the logic for a single test case in a single keyword, and call that keyword from a test case.
It is possible to create and execute tests in Python solely via the published API. This might not be what you're really asking for, because ultimately you're still creating keywords; you're just creating them via Python.
from robot.api import TestSuite
suite = TestSuite('Activate Skynet')
suite.imports.library('OperatingSystem')
test = suite.tests.create('Should Activate Skynet', tags=['smoke'])
test.keywords.create('Set Environment Variable', args=['SKYNET', 'activated'], type='setup')
test.keywords.create('Environment Variable Should Be Set', args=['SKYNET'])
The above example was taken from here:
http://robot-framework.readthedocs.org/en/2.8.1/autodoc/robot.running.html
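As a small addition (not in the docs excerpt above): a suite built this way can also be executed straight from Python. A minimal sketch, assuming the suite object from the example above:

# Run the programmatically built suite; writes output.xml to the cwd.
result = suite.run(output='output.xml')
print(result.return_code)  # 0 when all tests pass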
Well, you should not care whether your Python code represents tests or keywords, as long as you code the logic of the tests in Python.
The best you can do is to keep some HTML tables in robot format. Each line would be a call to a keyword. The keyword could be implemented in Python and, logically, represent a whole test (although in robot terminology it is still a "keyword").
This post shows how you can get access to the robot context from your Python code:
Robot variables:
from robot.libraries.BuiltIn import BuiltIn
BuiltIn().get_variable_value("${USERNAME}")
Java keywords:
from com.mycompany.myproject.testtools import LoginRobotKeyword
LoginRobotKeyword().login(user, pwd)
Robot keywords:
BuiltIn().run_keyword("check user connected", user)
Robot Framework does not support writing test cases in Python directly. I've submitted an enhancement request; check it here:
https://github.com/robotframework/robotframework/issues/3128
But I've tried to do that by moving all the test case logic to Python code, making the RF test cases just entry points to it.
Here is an example.
We could create a Python file containing all the testing logic and the setup/teardown logic, like this:
# *** case0001.py *****
from SchoolClass import SchoolClass

schCla = SchoolClass()

class case0001:
    def steps(self):
        print('''\n\n***** step 1 **** add school class\n''')
        self.ret1 = schCla.add_school_class('grade#1', 'class#1', 60)
        assert self.ret1['retcode'] == 0

        print('''\n\n***** step 2 **** list school class to check\n''')
        ret = schCla.list_school_class(1)
        schCla.classlist_should_contain(ret['retlist'],
                                        'grade#1',
                                        'class#1',
                                        60,
                                        self.ret1['id'])

    def setup(self):
        pass

    def teardown(self):
        schCla.delete_school_class(self.ret1['id'])
Then we create a Robot file, in which all RF test cases have the same form and just work as entry points to the Python test cases above, like this:
*** Settings ***
Library    cases/case0001.py    WITH NAME    C000001
Library    cases/case0002.py    WITH NAME    C000002

*** Test Cases ***
add class - tc000001
    [Setup]       C000001.setup
    C000001.steps
    [Teardown]    C000001.teardown

add class - tc000002
    [Setup]       C000002.setup
    C000002.steps
    [Teardown]    C000002.teardown
You can see that, written this way, the RF test cases are all similar; we could even create a tool to auto-generate them by scanning the Python test cases.
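Such a generator only needs to scan the cases directory and emit the same fixed block per file. A toy sketch, assuming the caseNNNN.py layout from the example above (file and name conventions are illustrative):

# gen_robot_entries.py -- emit entry-point test cases in the fixed form above
import glob
import os

def generate(cases_dir='cases', out_file='generated.robot'):
    lines = ['*** Settings ***']
    cases = []
    for path in sorted(glob.glob(os.path.join(cases_dir, 'case*.py'))):
        name = os.path.splitext(os.path.basename(path))[0]  # e.g. case0001
        alias = 'C' + name[len('case'):]                    # e.g. C0001
        lines.append('Library    %s    WITH NAME    %s' % (path, alias))
        cases.append((name, alias))
    lines += ['', '*** Test Cases ***']
    for name, alias in cases:
        lines += ['add class - tc%s' % name[len('case'):],
                  '    [Setup]       %s.setup' % alias,
                  '    %s.steps' % alias,
                  '    [Teardown]    %s.teardown' % alias,
                  '']
    with open(out_file, 'w') as f:
        f.write('\n'.join(lines))

if __name__ == '__main__':
    generate()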

What do the numbers before a test case in Test Lab indicate?

What do the numbers before a test case in Test Lab indicate? For example:
[1]Login with ur……
What does the [1] mean?
The [1] before the test case in Test Lab indicates the number of occurrences of that test case. If you add the same test case twice, it will look something like this:
[1] Test Case Name
[2] Test Case Name
It's the number of instances of that test case in the Execution Grid.
Each time you add this test case to the grid, you get a new instance added with a new number beside it.
It took me a while to understand instances of test 'cases'. I was thinking of it all wrong (because the Test Lab is where you run the tests, right?).
When you want a specific set of parameters for a test, use test configurations in the Test Plan.
Once the values are set, you create your instance in the Test Lab from the configuration so that you can execute it.
If, on the other hand, you have an instance in the Test Lab with some specific values that you like, you can use right-click -> generate configuration.
This creates a configuration for you, and you can give it a name of your choosing (something that I really wanted to do in the Test Lab, until I discovered how configurations work).
There is also a button in the Test Plan configurations tab to 'push' updated values to the Test Lab. So while there is a disconnect between configurations and the Test Lab (by design), it is not such a disconnect that you can't connect them when you want.
