I have a suite with more than 500 test scenarios written in WebDriver, and I am using the TestNG framework to execute the test cases.
Could you please suggest how to execute specific test cases from the suite, such as numbers 31, 45, and 68?
Thank you
If you have annotated all of your tests with @Test, then in Eclipse with the TestNG plugin you can right-click on a method name and run it as a TestNG test. You can also click on a class file and run only that class. However, if you want to run test X and test Y that live in different classes, you can open Run Configurations... and add the methods you want run.
If you don't have the TestNG plugin, you can write an XML file that lists the classes, methods, groups, or suites you want to run. It is described here.
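For example, a testng.xml along these lines runs just the selected methods (the class and method names are made up; substitute your own):

<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="PartialSuite">
  <test name="SelectedScenarios">
    <classes>
      <class name="com.example.tests.LoginTests">
        <methods>
          <include name="testScenario31"/>
          <include name="testScenario45"/>
        </methods>
      </class>
      <class name="com.example.tests.CheckoutTests">
        <methods>
          <include name="testScenario68"/>
        </methods>
      </class>
    </classes>
  </test>
</suite>

You can then run it with java org.testng.TestNG testng.xml, or point the plugin's run configuration at the file.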
Is it possible to call a Robot Framework file from another Robot Framework file? Can someone give an example of this?
Requirement
We have some tests that are repetitive in nature. The idea is to keep these tests in a Robot file that can be called from the main Robot test file. This will allow us to keep adding to the list of repetitive tests, and all the new and old tests will be available to the main tests.
Any examples will help. Thanks.
-KK
Tests (or test cases) are not reusable components in Robot Framework. Tests exist to perform verifications; once those verifications have been made, there's no point in running the test again in the same test run.
Even though tests can't call other tests, they can call user keywords, which are the foundation of Robot Framework. If you have bits of functionality that you want to be reusable, you put that functionality in keywords, and then you can use those keywords in as many tests as you want.
For example, let's say that you need to send a signal to a device and check that a light comes on. Instead of writing a test that does this and then repeating the test over and over, you create a keyword that sends the signal and a keyword that verifies the light is on, and then call these keywords from multiple tests (or from one data-driven test).
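A minimal sketch of that structure (all keyword names, and the lower-level Send Command and Read Light State keywords, are hypothetical):

*** Keywords ***
Send Activation Signal
    # Wraps whatever low-level step actually talks to the device.
    Send Command    ACTIVATE

Light Should Be On
    ${state} =    Read Light State
    Should Be Equal    ${state}    ON

*** Test Cases ***
Light Turns On After Activation
    Send Activation Signal
    Light Should Be On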
Yes. Just declare the file that you wish to call in the Settings section of your suite, as specified below:
*** Settings ***
Resource    ../common/Resource_Apps.robot
Now you can call all of the keywords written in that resource file.
Just import the other robot file as a Resource in the Settings section:

*** Settings ***
Library     PythonLibrary.py
Resource    <Folder_Name>/Example.robot
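To make both answers concrete, a resource file and a suite that uses it might look like this (file and keyword names are illustrative):

Example.robot:

*** Keywords ***
Shared Login Steps
    Log    Performing the shared login steps

main_suite.robot:

*** Settings ***
Resource    Example.robot

*** Test Cases ***
User Can Log In
    Shared Login Steps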
I recently created a Meteor package and want to write some tests. What my package basically does is let users insert {{> abc}} into a template, and they'll get an HTML element printed on the page.
With TinyTest, all you can do is test the package's API using something like test.equal(actual, expected, message, not). However, I need it to test whether the element was successfully printed on the page. Furthermore, I will pass the template some parameters and test them too.
It seems like I'd have to create a dummy app, run a bash script to start the app, and test whether the elements can be found on the page. So should I use TinyTest only to test the API, and write my own tests (somehow!) for the templating? If not, how should I go about it?
I read something about Blaze.toHTML, but I cannot find anything about it in the documentation, nor in its source.
I think TinyTest is great for getting started with unit testing, but what you need sounds more like an integration test.
I would recommend you look into the following links for more info on testing with Meteor, especially with Velocity - Meteor's official testing framework:
Announcing Velocity: the official testing framework for Meteor applications
Velocity
The Meteor Testing Manual
You can create a demo application, and run integration tests using Mocha or Jasmine.
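For instance, an in-browser integration test with Jasmine could look roughly like this (a client-side Jasmine runner from Velocity is assumed; the abc template name comes from the question):

describe('abc template', function () {
  it('renders an element onto the page', function () {
    // Render the template into a detached container node.
    var container = document.createElement('div');
    Blaze.render(Template.abc, container);
    // Blaze.renderWithData(Template.abc, { some: 'params' }, container) lets you pass data in.
    expect(container.children.length).toBeGreaterThan(0);
  });
});

Blaze.toHTML(Template.abc), which the question mentions, is another option: it returns the rendered markup as a string that you can assert on.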
I am running tests using a phpunit.xml.dist file. This file defines several test suites and specifies a bootstrap.php. In this bootstrap.php I am currently loading all dependencies for all tests.
A small subset of the tests is dependent on some third party library, which is optional. These tests are all part of a particular test suite. So I only want to load this library in the bootstrapping file when that particular test suite is specified.
How can I determine whether this test suite was specified? That would ensure that most tests can be run when the library is not loaded, and that one can easily verify that the code and tests which should not depend on the library indeed do not need it.
I currently have the following. Is there something better?
// Load the MediaWiki bootstrap only when the standalone suite was not requested.
if ( !in_array( '--testsuite=WikibaseDatabaseStandalone', $GLOBALS['argv'] ) ) {
    require_once( __DIR__ . '/evilMediaWikiBootstrap.php' );
}
The feature request on the PHPUnit bugtracker for a test suite specific bootstrap is here: https://github.com/sebastianbergmann/phpunit/issues/733
For now there are two options. One is yours, which is fine but feels really hackish, and it doesn't work out well when you run "all the tests" if you have a specific bootstrap for each of them.
My suggestion would be to write a test listener and hook into "startTestSuite" and "endTestSuite". This is a nicely maintained, backwards-compatible way to execute code only when the test suite is actually started, and you can also clean up afterwards.
See http://phpunit.de/manual/3.7/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener and http://phpunit.de/manual/3.7/en/appendixes.configuration.html#appendixes.configuration.test-listeners for how to include the test listener.
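A sketch of such a listener against the PHPUnit 3.7-era API (the listener name and the file it loads are made up; the empty methods are required by the PHPUnit_Framework_TestListener interface):

class SuiteBootstrapListener implements PHPUnit_Framework_TestListener
{
    public function startTestSuite(PHPUnit_Framework_TestSuite $suite)
    {
        // Load the suite-specific bootstrap only when that suite actually starts.
        if ($suite->getName() === 'WikibaseDatabaseStandalone') {
            require_once __DIR__ . '/standaloneBootstrap.php';
        }
    }

    public function endTestSuite(PHPUnit_Framework_TestSuite $suite)
    {
        // Clean up any suite-specific state here.
    }

    // The remaining interface methods can stay empty.
    public function addError(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function addFailure(PHPUnit_Framework_Test $test, PHPUnit_Framework_AssertionFailedError $e, $time) {}
    public function addIncompleteTest(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function startTest(PHPUnit_Framework_Test $test) {}
    public function endTest(PHPUnit_Framework_Test $test, $time) {}
}

It is then registered in phpunit.xml.dist like so:

<listeners>
  <listener class="SuiteBootstrapListener" file="tests/SuiteBootstrapListener.php"/>
</listeners>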
One of the usual ways to handle this is to check whether a required dependency is installed, and if not, run
$this->markTestSkipped( 'lib not installed' );
That skipping can also happen in the setUp() phase of a test.
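For example (the extension name is just a placeholder):

protected function setUp()
{
    // Skip every test in this class when the optional dependency is missing.
    if ( !extension_loaded( 'some_lib' ) ) {
        $this->markTestSkipped( 'The some_lib extension is not installed.' );
    }
}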
Finally, you can add @group annotations to the test class and/or test functions to give some choice as to whether the test is run from the command line (with the --group [names...] parameter).
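For instance (the group name is made up):

/**
 * @group ThirdPartyLib
 */
public function testUsesTheOptionalLibrary()
{
    // ...
}

Then phpunit --group ThirdPartyLib runs only these tests, and phpunit --exclude-group ThirdPartyLib leaves them out.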
As another option, one that has also been used in the Zend Framework, you can add, in code, a TestSuite that runs only a subset within a larger test suite. There is an example of being able to:
a) turn the tests off at will,
b) turn them off if the extension is not loaded, or
c) run the tests, for the use of (for example) caching with APC.
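A sketch of that in-code approach, using the old static suite() convention (class names are hypothetical):

class CacheAllTests
{
    public static function suite()
    {
        $suite = new PHPUnit_Framework_TestSuite( 'Cache' );

        // Only add the APC-backed tests when the extension is actually available.
        if ( extension_loaded( 'apc' ) ) {
            $suite->addTestSuite( 'CacheApcTest' );
        }

        return $suite;
    }
}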
I use SimpleTest test suites to run all tests with different configuration methods. I've created one suite per config method, and it sets up the environment and then runs all the tests.
Take a look at Adapter.php and Constants.php if that's unclear.
Now, there seems to be support for test suites in PHPUnit, but as I understand it, they're merely for grouping tests, with no support for setting up the environment or running PHP scripts.
How can I convert my test suites to PHPUnit? I'm open to rethink how the tests are structured :)
You can use test fixtures to set up your environment either before each test or before each test class:
http://www.phpunit.de/manual/3.6/en/fixtures.html#fixtures.more-setup-than-teardown
The methods you are looking for are:
setUp() and tearDown(), called before/after each single test, and
setUpBeforeClass() and tearDownAfterClass(), called once before/after all the tests in a class (suite).
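A minimal sketch of those hooks (the class name and method bodies are placeholders):

class EnvironmentTest extends PHPUnit_Framework_TestCase
{
    public static function setUpBeforeClass()
    {
        // Runs once, before the first test in this class: set up the environment here.
    }

    protected function setUp()
    {
        // Runs before every single test method.
    }

    protected function tearDown()
    {
        // Runs after every single test method.
    }

    public static function tearDownAfterClass()
    {
        // Runs once, after the last test in this class.
    }

    public function testEnvironmentIsUsable()
    {
        $this->assertTrue( true );
    }
}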
Apart from restructuring your test suites, you should also be aware that the behaviour of assertions in PHPUnit is different from SimpleTest.
While in SimpleTest you can test an assertion's result and execute further code after a failed assertion, you cannot in PHPUnit (see: test the return value of a method that triggers an error with PHPUnit).
This will likely also force you to refactor some of the actual tests.
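For example, PHPUnit stops executing a test method at the first failed assertion (connect() and checkFurtherDetails() are hypothetical helpers):

public function testConnection()
{
    $this->assertTrue( $this->connect() );  // on failure, PHPUnit throws here...
    $this->checkFurtherDetails();           // ...so this line would never run
}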
I have created a new build definition for TFS 2010. After building my C# solution, I would like it to execute a couple of unit tests. These unit tests require an XML input file, so I have added a [DeploymentItem] attribute to the test methods which provides the relative path to the XML files. If I run the unit tests from within Visual Studio, they pass OK.
When the unit tests get run following a build (via my build definition), they fail with: "Microsoft.BizTalk.TestTools.BizTalkTestAssertFailException: Input file does not exist..."
It would be great if I could get a trace of what the build agent was trying to do, to help with troubleshooting.
Does anyone know how to get such a trace output? I guess I could increase the verbosity of the trace output from the main solution under test, but I don't think that would tell me where the build agent was looking for the test input XML, or why it failed.
Thanks
Rob
I found it! I needed to click the "View Log" link on the screen that's displayed following the build. I had been looking at the default "View Summary" view.