Mark test cases with a version for backward-compatibility execution - Robot Framework

I have a set of test cases, some of which have evolved as software releases came out. I need to maintain backward compatibility, so I need to be able to execute all tests compatible with a specific version of the software. I thought of using tags to check whether a test case is compatible or not, but it doesn't work so well... Imagine the following test cases:
TC1 - v1.0
    [Tags]    version=1.0
    ...
TC1 - v2.0
    [Tags]    version=3.0
    ...
TC2 - v1.0
    [Tags]    version=2.0
    ...
Right now, let's say I'm testing version 2.2 of my software.
I would like to run Robot Framework like this:
robot -i version<=2.2 myTestSuite.robot
I don't think that's possible... so what's the proper way of handling this?
Looking forward to your suggestions!
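For what it's worth, one possible approach (a sketch, not something from the question itself) is a pre-run modifier: a small Python visitor that drops every test whose version tag is newer than the target version before execution starts. The file and class names below are made up, and it assumes tags of the exact form version=X.Y as in the example above.

# version_filter.py -- hypothetical pre-run modifier
from robot.api import SuiteVisitor

class VersionFilter(SuiteVisitor):
    """Remove tests tagged with a version newer than the target version."""

    def __init__(self, max_version='2.2'):
        self.max_version = self._parse(max_version)

    def _parse(self, version):
        return tuple(int(part) for part in str(version).split('.'))

    def start_suite(self, suite):
        # Called for every suite; keep only compatible (or untagged) tests.
        suite.tests = [test for test in suite.tests if self._is_compatible(test)]

    def _is_compatible(self, test):
        for tag in test.tags:
            if tag.startswith('version='):
                return self._parse(tag.split('=', 1)[1]) <= self.max_version
        return True  # tests without a version tag are kept

It could then be run with something like robot --prerunmodifier version_filter.VersionFilter:2.2 myTestSuite.robot, which keeps the version comparison out of the test data entirely.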

Related

How to verify the syntax of Robot Framework code

I have Robot Framework tests written on Linux. I often get syntax issues in my Robot Framework code.
Is there an online checker, or something in Python, to trace where the syntax error occurs (which line)?
You can use the --dryrun command line argument to check test data validity and syntax.
From the user guide, which I strongly suggest browsing in general:
Robot Framework supports so called dry run mode where the tests are run normally otherwise, but the keywords coming from the test libraries are not executed at all. The dry run mode can be used to validate the test data; if the dry run passes, the data should be syntactically correct. This mode is triggered using option --dryrun.
The dry run execution may fail for the following reasons:
Using keywords that are not found.
Using keywords with wrong number of arguments.
Using user keywords that have invalid syntax.
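The same check can also be triggered from Python, since robot.run accepts the same options as the command line. A minimal sketch (the suite path is just a placeholder):

from robot import run

# Dry run: keywords are resolved but not executed, so only the test data
# and keyword usage are validated.
rc = run('myTestSuite.robot', dryrun=True)
print('Dry run passed' if rc == 0 else 'Dry run reported %d failing tests' % rc)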

Robot Framework: exit on success of a specific test

I am looking for a way to make Robot Framework exit the execution of a test suite if a specific test passes. It is the exact opposite of what --exitonfailure does, so I want to know if there is a way to do this with Robot Framework.
Up to and including Robot Framework 3.1 there is no good way to skip tests once a test run has started, except to call Fatal Error. Being able to skip tests is a feature people have wanted for many years now.
At the time I write this, it does not appear that this feature will be added in version 3.2.
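Purely as an illustration of the mechanism behind Fatal Error (not a feature described in the answer above): any Python exception carrying the ROBOT_EXIT_ON_FAILURE attribute aborts the rest of the run, so a tiny custom library keyword can do the same thing. The names below are hypothetical, and note that the test whose teardown raises it is itself marked failed, which is part of why this is not considered a good way to skip tests.

# stop_run.py -- hypothetical library keyword mirroring BuiltIn's Fatal Error

class StopRunError(RuntimeError):
    # Special attribute recognised by Robot Framework: abort the whole run.
    ROBOT_EXIT_ON_FAILURE = True

def stop_run(message='Target test passed, aborting the remaining tests'):
    """Abort execution; remaining tests are marked failed, not skipped."""
    raise StopRunError(message)

Imported with Library    stop_run.py, it could be called from the target test's teardown via Run Keyword If Test Passed    Stop Run.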

How to run tests in the same browser in parallel using Selenium WebDriver?

We have a set of around 2000 automated test cases that we need to run daily on every new build that comes up. It currently takes 4 hours to finish the tests on one machine. To reduce this, we are planning to run the tests in batches (500 per batch) on the same machine by launching multiple browsers of the same type, say 4 Firefox sessions per test suite, so the run can finish in about an hour. Is it possible to achieve this using Selenium WebDriver and TestNG? Please suggest.
It is possible using Selenium Grid and TestNG. Grid can help distribute your tests across various machines or browser instances as you require. To get started, refer to Grid2.
I think you might need to change your driver instantiation code a bit to use RemoteWebDriver instead of concrete drivers, but that should be fine if the driver instantiation code in your framework is isolated. TestNG and Grid can help, provided your tests are well written to support parallel execution.
For TestNG, you can refer to parallel running in TestNG.
If you are using Python, the best way to go is to use py.test to drive the tests. To distribute them, the pytest-xdist plugin can be used.
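A minimal sketch of that Python route, assuming pytest, pytest-xdist and the Selenium bindings are installed and geckodriver is on the PATH (the URL is a placeholder):

# test_parallel_example.py
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    driver = webdriver.Firefox()        # one Firefox instance per test
    yield driver
    driver.quit()

def test_homepage_has_title(browser):
    browser.get('http://www.example.com')
    assert browser.title != ''

def test_homepage_has_content(browser):
    browser.get('http://www.example.com')
    assert 'html' in browser.page_source.lower()

Running pytest -n 4 test_parallel_example.py starts four workers on the same machine, each driving its own browser session.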
In any case, for both Java and Python you can use Jenkins to run and distribute your tests (using the Selenium plugin).

JMeter console manipulation for automation purposes

I am a complete newbie on this topic.
I want to know how to control JMeter through the console (bash or cmd).
For a start, my goal is to understand how to run my testplan.jmx for several URLs. For this I added "server" and "port" parameters to my test plan.
How can I change these parameters through the console and then run JMeter?
Moreover, I would like to ask you guys to suggest any free online tutorials where I can learn more about JMeter in non-GUI mode, and about integrating JMeter with different frameworks for automated testing.
Thank you very much indeed.
See:
http://jmeter.512774.n5.nabble.com/How-to-Run-Jmeter-in-command-line-td2640725.html
You can launch your test plan from the command line, specifying parameters, like:
jmeter -n -t plan.jmx -Jmy_url=http://www.firsturl.com
Inside your test plan you'd reference that command-line parameter as ${__P(my_url)}.
In terms of capturing results when running in non-gui mode, you may want to see:
http://blogs.amd.com/developer/2009/03/31/using-apache-jmeter-in-non-gui-mode/
Personally, my experience is with using the GUI and writing and running test plans that way, but this seems workable.
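For running the same plan against several URLs, one rough sketch (it could equally be a shell loop) is to call non-GUI mode from Python; it assumes jmeter is on the PATH, and that testplan.jmx reads the properties as ${__P(server)} and ${__P(port)} as described above. The target list is a placeholder.

import subprocess

# Placeholder targets; each pair becomes one non-GUI JMeter run.
targets = [('www.firsturl.com', '80'), ('www.secondurl.com', '8080')]

for server, port in targets:
    subprocess.run([
        'jmeter', '-n',                   # non-GUI mode
        '-t', 'testplan.jmx',             # the test plan
        '-Jserver=%s' % server,           # read in the plan as ${__P(server)}
        '-Jport=%s' % port,               # read in the plan as ${__P(port)}
        '-l', 'results_%s.jtl' % server,  # separate result log per run
    ], check=True)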

What alternatives exist for running QTP tests in batch?

We are in the process of implementing automated regression testing for our applications and are looking for a solid batch-testing utility. We have QuickTest Professional 10.0, and it comes bundled with 'Test Batch Runner', which appears to be deprecated. Previous versions apparently had 'Multi-Test Manager', which has been discontinued as well.
What alternatives exist, if any?
The canonical way to do this is via Quality Center; if you don't have QC, you can use QTP's automation object model from a .vbs file. The documentation for this is available under Start -> Programs -> QuickTest Professional -> Documentation -> Automation Object Model Reference.
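As a rough sketch of that automation object model, here is the same idea driven from Python through COM instead of a .vbs file; it assumes pywin32 and a local QTP installation exposing the QuickTest.Application ProgID, and the test paths are made up.

import win32com.client

# Attach to (or start) the QuickTest application object.
qtp = win32com.client.Dispatch('QuickTest.Application')
qtp.Launch()
qtp.Visible = True

for test_path in [r'C:\Tests\Login', r'C:\Tests\Checkout']:   # hypothetical tests
    qtp.Open(test_path, True)      # open the test read-only
    qtp.Test.Run()                 # blocks until the run finishes
    print(test_path, qtp.Test.LastRunResults.Status)

qtp.Quit()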
QTP 10 works excellently with Multi Test Manager V8.2.4.
We use it for our project (we previously used it with QTP 9.2).
Try googling for an installer (if you don't have one); it should be free, just no longer supported by HP.
Since WinRunner times I have used test driver scripts very extensively, with great success, due to the following benefits:
non-programming testers can easily create/maintain batches, as they're stored in XML format
test input files are externally configurable through mapping
a variety of customization parameters is supported, from login credentials to prefixes and switches
test dependencies can be established, so that if a critical test case fails the whole branch of dependent test cases is skipped
I continue using test drivers and introducing them to clients.
The test driver approach has been adopted not only by client companies that do not use Quality Center; others followed it because it gives much more flexibility and robustness in automated test plan execution.
Thank you,
Albert Gareev
http://automationbeyond.wordpress.com
I echo Motti... if I get your question right, you can see the link below as well:
Work with Test Batch Runner
