I was looking for a way to run selected tests at run time and found that each test can be marked using @pytest.mark:
import pytest

@pytest.mark.feature1
@pytest.mark.priority1
or something like: @pytest.mark(Feature.feature1, Priority.priority2)
def test_m1(): ...
def test_m2(): ...
Now I need to run the test scripts that are marked as feature1, without giving the script names.
Something like this (not sure about the command):
py.test -m "feature1"
It should pick up all scripts marked as feature1 in the test suite, since other scripts might be marked as feature2..n as well.
Please suggest:
Is the above way of marking tests and picking them at run time recommended or not?
Do I need classes to mark tests? I believe the above way marks the complete test.
What is the command to pick the marked tests from the complete test suite?
Thanks!!
I think what you are looking for is the -m option of py.test.
After marking tests with different labels (using @pytest.mark.label, where label is feature1 or whatever you want), you can run all the feature1-marked tests using:
py.test -m feature1
This will run only the tests marked with the 'feature1' label.
Marked tests can be combined:
py.test -m "feature1 or feature2"
or skipped using not, such as
py.test -m "not (feature1 or feature2)"
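For completeness, a minimal self-contained sketch of the whole setup (file and marker names are just illustrations); registering the markers in pytest.ini avoids "unknown marker" warnings in recent pytest versions:
# test_features.py
import pytest

@pytest.mark.feature1
def test_login():
    assert True

@pytest.mark.feature2
def test_reports():
    assert True

# pytest.ini
[pytest]
markers =
    feature1: tests for feature 1
    feature2: tests for feature 2

py.test -m feature1 then collects only test_login, from every file in the suite, without naming any script.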
I think you are looking for @pytest.mark.skipif.
This will skip tests according to the condition you put in the marker. You can use a condition string in the marker to skip the desired tests.
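A minimal sketch (the condition and reason here are just examples):
import sys
import pytest

# Skipped entirely on Windows; runs everywhere else
@pytest.mark.skipif(sys.platform == 'win32', reason='does not run on Windows')
def test_feature1():
    assert True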
Ok, in that case this might be the answer: pytest -k string selects all tests that contain the string in their name and runs them.
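For example, assuming the feature name appears in the test names (names hypothetical):
pytest -k feature1
would run test_feature1_login and test_feature1_report, but not test_feature2_report.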
I implemented test cases for my application and decided to run them every day. The problem is that the result of the previous run gets overwritten by the latest one. I need to keep them both, so I came up with a solution: include the test date and time in the report name, for example report-202111181704.html (using 24-hour time).
I searched the internet and have not found a solution yet. Does anybody here know one? Any alternative solution would be fine as well.
It depends on where you execute your tests. From the command line you can save the date to a variable, then use this variable to change the names of the generated outputs. For example:
date=$(date '+%Y-%m-%d_%H:%M:%S')
robot --output ${date}output.xml --log ${date}log.html --report ${date}report.html test.robot
I found the solution. Instead of setting the .html file name, I create a folder and put the results there.
To do this, add --outputdir to the pabot command, so it's going to look like this:
pabot --pabotlibport $PABOT_PORT --pabotlib --resourcefile ./DeviceSet.dat --processes $thread --verbose --outputdir ./result/$OUTPUT_DIR $ENV
where
OUTPUT_DIR=`date +"%Y%m%d-%H%M"`
The output folder is then going to be something like ./result/20220301-2052
Basically 2 issues:
1. I plan to execute multiple test cases from an argument file. The structure would look like this:
SOME_PATH/
-test_cases/
-some_keywords/
-argumentfile.txt
How should I define a suite setup and teardown for all those test cases executed from a file (-A file)?
From what I know:
a) I could set it in the files with the 1st and last test case, but the order of the test cases may change, so that is not desired.
b) Provide it in __init__.robot and put it somewhere without test cases, only to get the setup and teardown (a minimal sketch of such a file is below). This is because if I execute:
robot -i SOME_TAG -A argumentfile /path/to/init
and the init file is in the test_case folder, it will execute the test cases with the specific tag plus those in the folder twice.
Is there any better way? Provide it, for example, in the argument file?
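For reference, the __init__.robot from option b) would contain just a settings table; a minimal sketch with hypothetical keyword names:
*** Settings ***
Suite Setup       Connect To System
Suite Teardown    Disconnect From System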
2. How do I provide a path variable in argument files in Robot Framework?
I know there is the possibility to do:
--variable PATH:some/path/to/files
but isn't that only for the test suite environment?
How do I get that variable to be visible in the file itself: ${PATH}/test_case_1.robot
For your 2nd question, you could create a temporary environment variable that you'd then use. Depending on the OS you're using, the way you'll do this will be different:
Windows:
set TESTS_PATH=some/path/here
robot %TESTS_PATH%/test_case_1.robot
Unix:
export TESTS_PATH="some/path/here"
robot $TESTS_PATH/test_case_1.robot
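As a side note, an environment variable set this way is also visible inside the robot files themselves through Robot Framework's %{...} syntax, e.g. (resource path hypothetical):
*** Settings ***
Resource    %{TESTS_PATH}/resources/common.robot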
PS: you might want to avoid asking multiple, different questions in the same thread
I have robot files in a folder (tests) as shown below:
tests
1_robotfile1.robot
2_robotfile2.robot
3_robotfile3.robot
4_robotfile4.robot
5_robotfile5.robot
6_robotfile6.robot
7_robotfile7.robot
8_robotfile8.robot
9_robotfile9.robot
10_robotfile10.robot
11_robotfile11.robot
Now if I execute pybot root/user1/tests (run from /root/users1/power), the robot files run in the following order:
tests
1_robotfile1.robot
10_robotfile10.robot
11_robotfile11.robot
2_robotfile2.robot
3_robotfile3.robot
4_robotfile4.robot
5_robotfile5.robot
6_robotfile6.robot
7_robotfile7.robot
8_robotfile8.robot
9_robotfile9.robot
I want to force Robot Framework to pick up the robot files in numeric order, like 1, 2, 3, 4, 5, ...
Do we have any option for this?
If you have the option of renaming your files, you just need to make sure that the prefix is sortable. For numbers, that means they should all have the same number of digits.
I recommend renaming your test cases to have three or four digits for the prefix:
001_robotfile1.robot
002_robotfile2.robot
003_robotfile3.robot
004_robotfile4.robot
005_robotfile5.robot
006_robotfile6.robot
007_robotfile7.robot
008_robotfile8.robot
009_robotfile9.robot
010_robotfile10.robot
011_robotfile11.robot
...
With that, they will sort in the order that you expect.
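If there are many files, the renaming can be scripted; a rough Python sketch, assuming the N_name.robot pattern from the question:
import os
import re

# Zero-pad the numeric prefix of every robot file to three digits
for name in os.listdir('tests'):
    m = re.match(r'(\d+)_(.*\.robot)$', name)
    if m:
        new_name = '%03d_%s' % (int(m.group(1)), m.group(2))
        os.rename(os.path.join('tests', name), os.path.join('tests', new_name))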
Following @Emna's answer, the RF docs (http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#execution-order) provide some solutions.
So here is what you could do:
rename all the files to consistent, zero-padded numbering (001-test.robot instead of 1-test.robot); this may break internal references to other files (resources), it is hard to add a test in between, and it is error prone when the execution order needs to change
tag the tests as Emna showed
idea from the RF docs - write a script that creates an argument file keeping the ordering the proper way, and pass it as an argument to the robot execution (see the sketch below); for 1000+ files it should not take longer than a few seconds
try to design tests not to depend on execution order, and use a suite setup instead
good luck ;)
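A rough sketch of such a generator script, assuming the numbered-file layout from the question:
import re
from pathlib import Path

# Sort by numeric prefix instead of lexicographically
robot_files = sorted(
    Path('tests').glob('*.robot'),
    key=lambda p: int(re.match(r'\d+', p.name).group()),
)

# One path per line; run with: robot -A argumentfile.txt
with open('argumentfile.txt', 'w') as f:
    for path in robot_files:
        f.write(str(path) + '\n')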
Tag the tests as foo and bar so you can run each test separately:
pybot -i foo tests
or
pybot -i bar tests
and decide the order you want
pybot -i bar tests && pybot -i foo tests
(use ; instead of && if the second run should happen even when the first one fails)
Is it possible to write Robot Framework tests in Python instead of the .txt format?
Behind the scenes it looks like the .txt tests get converted into Python by pybot, so I'm hoping this is simply a matter of importing the right library and inheriting from the right class, but I haven't been able to figure out how to do that.
(We already have a bunch of suites and keywords written in both formats, but sometimes the RF syntax makes it very difficult to do things that are simple in Python. I understand it would be possible to just write a Python keyword for each test, plus 'wrap' setup and teardown functions the same way, but that seems cumbersome.)
Robot does not convert your test cases to python behind the scenes before running them. Instead, it parses the test cases, then iterates over each keyword, calling the code that implements the keyword. There isn't ever a stage where there's a completely pure python representation of a test case.
It is not possible to write tests in python, and have those tests run alongside traditional robot tests by the provided test runner. Like you said in your question, your only option is to put all of your logic for a single test case in a single keyword, and call that keyword from a test case.
It is possible to create and execute tests in python solely via the published API. This might not be what you're really asking for, because ultimately you're still creating keywords, you're just creating them via python.
from robot.api import TestSuite
suite = TestSuite('Activate Skynet')
suite.imports.library('OperatingSystem')
test = suite.tests.create('Should Activate Skynet', tags=['smoke'])
test.keywords.create('Set Environment Variable', args=['SKYNET', 'activated'], type='setup')
test.keywords.create('Environment Variable Should Be Set', args=['SKYNET'])
The above example was taken from here:
http://robot-framework.readthedocs.org/en/2.8.1/autodoc/robot.running.html
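To actually execute a suite built this way, the same API offers a run method; a minimal sketch (file names arbitrary, and note that log/report generation is a separate step in reasonably recent RF versions):
# Run the suite; returns a result object with the outcome
result = suite.run(output='skynet_output.xml')

# Generate a log from the output XML (ResultWriter also comes from robot.api)
from robot.api import ResultWriter
ResultWriter('skynet_output.xml').write_results(log='skynet_log.html')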
Well, you should not care whether your python code represents tests or keywords, as long as you code the logic of the tests in python.
The best you can do is to keep some html tables in robot format. Each line would be a call to a keyword. The keyword could be implemented in python and, logically, represent a whole test (although in robot terminology it is still a "keyword").
This post shows how you can access the robot context from your python code.
robot variables
from robot.libraries.BuiltIn import BuiltIn
BuiltIn().get_variable_value("${USERNAME}")
java keywords
from com.mycompany.myproject.testtools import LoginRobotKeyword
LoginRobotKeyword().login(user, pwd)
robot keywords
BuiltIn().run_keyword("check user connected", user)
Robot Framework does not support writing test cases in python directly. I've submitted an enhancement request; check it here:
https://github.com/robotframework/robotframework/issues/3128
But I've tried to achieve that by moving all the test case logic into python code and making the RF test cases just entry points to it.
Here is an example.
We could create a python file containing all the testing logic plus the setup/teardown logic, like this:
# *** case0001.py *****
from SchoolClass import SchoolClass

schCla = SchoolClass()


class case0001:

    def steps(self):
        print('''\n\n***** step 1 **** add school class\n''')
        self.ret1 = schCla.add_school_class('grade#1', 'class#1', 60)
        assert self.ret1['retcode'] == 0

        print('''\n\n***** step 2 **** list school class to check\n''')
        ret = schCla.list_school_class(1)
        schCla.classlist_should_contain(ret['retlist'],
                                        'grade#1',
                                        'class#1',
                                        60,
                                        self.ret1['id'])

    def setup(self):
        pass

    def teardown(self):
        schCla.delete_school_class(self.ret1['id'])
And then we create a Robot file, in which all RF test cases have the same form and just work as entry points to the python test cases above, like this:
*** Settings ***
Library    cases/case0001.py    WITH NAME    C000001
Library    cases/case0002.py    WITH NAME    C000002

*** Test Cases ***
add class - tc000001
    [Setup]    C000001.setup
    C000001.steps
    [Teardown]    C000001.teardown

add class - tc000002
    [Setup]    C000002.setup
    C000002.steps
    [Teardown]    C000002.teardown
You can see that, written this way, the RF test cases are all alike. We could even create a tool to auto-generate them by scanning the Python test cases.
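Such a generator could be a rough sketch like this (directory layout assumed to match the example above):
from pathlib import Path

# Collect case modules such as cases/case0001.py
cases = sorted(p.stem for p in Path('cases').glob('case*.py'))

lines = ['*** Settings ***']
for i, case in enumerate(cases, start=1):
    lines.append('Library    cases/%s.py    WITH NAME    C%06d' % (case, i))

lines += ['', '*** Test Cases ***']
for i, case in enumerate(cases, start=1):
    lines += [
        'add class - tc%06d' % i,
        '    [Setup]    C%06d.setup' % i,
        '    C%06d.steps' % i,
        '    [Teardown]    C%06d.teardown' % i,
        '',
    ]

Path('generated_cases.robot').write_text('\n'.join(lines))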
In a larger project, I have set up ./tests/Makefile.am to run a number of tests when I call make check. The file global_wrapper.c contains the setup / breakdown code, and it calls test functions implemented in several subdirectories.
TESTS = global_test
check_PROGRAMS = global_test
global_test_SOURCES = global_wrapper.c foo/foo_test.c bar/bar_test.c
Works great. But the tests take a long time, so I would like to be able to optionally execute only tests from a single subdir. This is how I did it at first.
I added the subdirectories:
SUBDIRS = foo bar
In the subdirectories, I added local wrappers and Makefile.am's:
TESTS = foo_test
check_PROGRAMS = foo_test
# the foo_test.c here is of course the same as in the global Makefile.am
foo_test_SOURCES = foo_wrapper.c foo_test.c
This, too, works great - when I call make check in the subdirectory foo, only the foo tests are executed.
However, when I now call make check in ./tests, all tests are executed twice. Once through global_test, and once through the local test programs.
If I omit the SUBDIRS statement in the global Makefile.am, the subdirectory makefiles don't get built. If I omit TESTS from the local Makefile.am's, make check doesn't do anything in the local directories.
I'm not that familiar with automake, but I am pretty sure there is some way to solve this dilemma. Can anybody here give me a hint?
Break your tests up. In your tests/Makefile.am do:
TESTS = foo_test bar_test
and build foo_test bar_test appropriately with something like
foo_test_SOURCES = foo/foo_wrapper.c foo/foo_test.c
bar_test_SOURCES = bar/bar_wrapper.c bar/bar_test.c
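For make check to build these, the binaries presumably also need to be listed as check programs, mirroring the question's setup:
check_PROGRAMS = foo_test bar_test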
Now, if you do a raw 'make check', both tests will be run. If you only want to run one test, you can do that with 'make check TESTS=foo_test' or 'make check TESTS=bar_test' and only the appropriate test will run. Typically, the Makefile.am lists all the tests that will be run by default in TESTS and the user selects alternate tests at make-time. Naturally, if you are running the tests a lot, you can 'export TESTS=foo_test' in your shell session and then only type 'make check'.
Can't you remove from "global_test" any test that is already executed in a subdirectory? (Just so they simply don't get executed twice.)
I think you could maybe override the check rule at the top level to define an environment variable:
check:
	DISABLE_SUBTESTS=1 $(MAKE) check-recursive
and then test DISABLE_SUBTESTS in your subdirectories to decide whether to actually run the tests or not.
(Personally, I'd rather arrange to work within the existing make check framework by concealing the output of my tests, rather than overriding the produced rules like this.)