I have multiple spec files so that related tests are grouped together and can be run separately more easily. When I run my Cypress tests with a JUnit reporter, only the test suites from the last spec file are present. We use the JUnit reporter for Jenkins.
Is there some config I need to add to make sure all test suites are present in the JUnit report file?
This is a known issue in Cypress: https://github.com/cypress-io/cypress/issues/1824
You can use [hash] in the mochaFile name as a workaround to generate a separate output file per spec; Jenkins will automatically compile the results together.
Add this to your cypress.json file:
"reporterOptions": {
"mochaFile": "./cypress/results/cypress-output.[hash].xml"
},
I was hunting for an answer to this question and just learned that if you use Mochawesome as your test reporter, you can use Mochawesome Merge to combine the output of multiple specs and then use Mochawesome Report Generator to create a single HTML report from your specs.
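For reference, a rough sketch of that setup (the reporterOptions keys and CLI flags below are assumptions that depend on your mochawesome, mochawesome-merge, and mochawesome-report-generator versions):
"reporter": "mochawesome",
"reporterOptions": {
    "reportDir": "cypress/results",
    "overwrite": false,
    "html": false,
    "json": true
},
Then, after the run, merge the per-spec JSON files and generate a single HTML report:
npx mochawesome-merge cypress/results/*.json > cypress/results/merged.json
npx marge cypress/results/merged.json --reportDir cypress/results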
Is it possible to call a Robot Framework file from another Robot Framework file? Can someone give examples of it?
Requirement
We have some tests that are repetitive in nature. The idea is to keep these tests in a Robot file that can be called from the main Robot test file. This will allow us to keep adding to the list of repetitive tests, and all the new and old tests will be available to the main tests.
Any examples will help. Thanks.
-KK
Tests (or test cases) are not reusable components in Robot Framework. Tests exist to perform verifications. Once those verifications have been made, there's no point in running the test again in the same test run.
Even though tests can't call other tests, they can call user keywords, which are the foundation of Robot Framework. If you have bits of functionality that you want to be reusable, you put that functionality in keywords, and then you can use those keywords in as many tests as you want.
For example, let's say that you need to send a signal to a device and check for a light to come on. Instead of writing a test that does this and then repeating the test over and over, you create a keyword that sends the signal and a keyword that verifies the light is on, and then call these keywords from multiple tests (or one data-driven test).
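A minimal sketch of that structure (the keyword names and the Log placeholders are made up for illustration):
*** Keywords ***
Send Signal To Device
    [Arguments]    ${signal}
    # A real implementation would talk to the device here
    Log    Sending ${signal} to the device

Light Should Be On
    # A real implementation would query the device state and fail if the light is off
    Log    Verifying the light is on

*** Test Cases ***
Power On Turns The Light On
    Send Signal To Device    POWER_ON
    Light Should Be On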
Yeah.
Just declare the file that you wish to call in the Settings section of your code, as shown below.
Resource    ../common/Resource_Apps.robot
Now you can use or call all the keywords written in this resource file.
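For example, if Resource_Apps.robot defines a keyword (the keyword below is made up for illustration), any suite importing it can call that keyword directly:
# ../common/Resource_Apps.robot
*** Keywords ***
Open Application
    Log    Opening the application

# a test suite that uses it
*** Settings ***
Resource    ../common/Resource_Apps.robot

*** Test Cases ***
Example Test
    Open Application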
Just import the other robot file as a Resource:
*** Settings ***
Library     PythonLibrary.py
Resource    <Folder_Name>/Example.robot
I was wondering if it was possible to access the settings defined within my karma.config.js file, and if so how?
I'm currently using a Grunt task runner to perform various tasks like building, linting, packaging, etc. I'm also using Grunt to kick off the Karma test runner to run my Jasmine unit tests. Furthermore, I'm pulling in the Jasmine-jQuery library so I can define and read in JSON and HTML fixtures from separate files that I setup earlier.
While I was writing some new tests, I noticed that I was redefining my fixtures base path in every test file. So I decided to pull it out and put it into a global constant.
The problem I'm having is where I should define it. Currently, I have a file named "testSettings.js" that I'm including in my karma configuration, where I define a configuration object and set it to window.testSettings. That's all well and good, but I think it would be better if I defined it within my karma configuration and then just referenced that from my tests. But there doesn't seem to be a way to do this... or is there?
My library versions are:
"karma": "~0.12.32"
"karma-jasmine": "~0.3.5"
"karma-jasmine-jquery": "~0.1.1"
I feel like I've tried everything. We currently have a setup where, upon check-in to TFS, we force a build on CruiseControl.NET. In our solution we use the Chutzpah JS Test Adapter. We were able to successfully use Chutzpah.console.exe to fail the build if any of the JS tests fail; now we would like to fail the build based on coverage. I cannot find any way to have Chutzpah.console.exe output coverage to the XML file it generates.
I thought I could solve the problem by writing my own .xsl that would parse _Chutzpah.coverage.html. I was going to convert that to XML using the JUnit format that CruiseControl can already interpret. Since I just care about failing the build, I was going to make the output of my transform look like more failing unit tests; in the XSL I would set the attribute failures > 0.
<?xml version="1.0" encoding="UTF-8" ?>
<testsuites>
  <testsuite name="c:\build\jstest\jstest\tests\TestSpec2.js" tests="1" failures="1">
    <testcase name="Coverage" time="46" />
  </testsuite>
</testsuites>
But I really can't, since the incoming HTML has self-closing tags.
So now I want to just run Chutzpah.console.exe and pipe the output to a file (the console output does display the total average coverage), read that value, and fail the build if it drops below a threshold.
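A rough sketch of that idea, assuming Node is available on the build agent, the console output is redirected to chutzpah-output.txt, and the summary contains a percentage the regex below can find (the exact wording of Chutzpah's summary is an assumption, so adjust the pattern to the real output):
// checkCoverage.js - run with: node checkCoverage.js chutzpah-output.txt 80
var fs = require('fs');

var file = process.argv[2];
var threshold = parseFloat(process.argv[3] || '80');
var output = fs.readFileSync(file, 'utf8');

// Find a percentage on a line that mentions coverage.
var match = /coverage[^\d]*([\d.]+)\s*%/i.exec(output);
if (!match) {
    console.error('Could not find a coverage value in ' + file);
    process.exit(1);
}

var coverage = parseFloat(match[1]);
console.log('Total coverage: ' + coverage + '%');

// A non-zero exit code makes the CruiseControl.NET exec task fail the build.
process.exit(coverage < threshold ? 1 : 0);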
Is there a better option? Am I missing something? I don't know that much about CruiseControl.NET.
I think writing the output to a file and parsing that is the only option left.
It's a pity that the coverage info is not in the XML file :-(
This is in fact more a problem with Chutzpah than with CCNet.
Think of CCNet as an upgraded task scheduler: it has a lot of options, but it relies on the input it receives from the called program. If that program cannot provide the data, you're stuck with these kinds of workarounds :-(
There is another option, but it may be more of a trip than you're interested in embarking upon. SonarQube is a server-based tool for managing and analyzing code quality. It has a plugin called "Build Breaker", which allows you to fail the build if any number of code quality metrics are not met, including unit test coverage. Like I said, it's kind of involved because you have to set up a server and learn a new tool, but it's good to have anyway.
I use Chutzpah with the /lcov command line option so that it outputs the coverage to a file, then tell Sonar to find the coverage in that file in the Sonar configuration for that project. Then you add a step to your CruiseControl.NET build process to run the Sonar analysis, and if you have configured the Build Breaker plugin correctly, it will fail the build in CruiseControl if the coverage is not at the level you specified.
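Roughly, the pieces involved look like this (the Sonar property name varies between JavaScript plugin versions, so treat it as an assumption):
chutzpah.console.exe tests /lcov coverage.lcov

# sonar-project.properties
sonar.javascript.lcov.reportPath=coverage.lcov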
I have a robot test suite that contains child test suites, and those have their own child test suites.
All tests use a certain set of variables and libraries.
As far as I can tell, I have to define the variables and import the libraries in every single test suite. I hope I'm just missing a trick -- is there a better way to make these things available to all tests at all levels of the hierarchy?
Bonus points if I can do it in a way that supports keyword completion in RIDE. I'm using RIDE 1.2.3 and Robot Framework 2.8.3.
Create one main resource where you import everything and then import only that main resource in every test suite.
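A minimal sketch (the library, variable, and file names below are placeholders):
# common.robot - import libraries and variables here once
*** Settings ***
Library      OperatingSystem
Variables    common_variables.py

*** Variables ***
${BASE_URL}    http://example.com

# each test suite then only needs
*** Settings ***
Resource    common.robot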
You can write a resource file and populate it with keywords that can be imported into your test suites and test cases.
Read the detailed explanation.
I am running tests using a phpunit.xml.dist file. This file defines several test suites and specifies a bootstrap.php. In this bootstrap.php I am currently loading all dependencies for all tests.
A small subset of the tests is dependent on some third party library, which is optional. These tests are all part of a particular test suite. So I only want to load this library in the bootstrapping file when that particular test suite is specified.
How can I determine if this test suite was specified? This then ensures that most tests can be run when the library is not loaded, and that one can easily verify the code and tests that should not depend on the library indeed do not need it.
I currently have the following. Is there something better?
if ( !in_array( '--testsuite=WikibaseDatabaseStandalone', $GLOBALS['argv'] ) ) {
require_once( __DIR__ . '/evilMediaWikiBootstrap.php' );
}
The feature request on the PHPUnit bugtracker for a test suite specific bootstrap is here: https://github.com/sebastianbergmann/phpunit/issues/733
For now there are two options. One is yours, which is fine but feels really hackish and doesn't work out well when you run "all the tests" while having a specific bootstrap for each of them.
My suggestion would be to write a test listener and hook into "startTestSuite" and "endTestSuite". This is a nicely maintained, BC-compatible way to execute code only when the test suite is actually started, and you can also clean up afterwards.
See http://phpunit.de/manual/3.7/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener and http://phpunit.de/manual/3.7/en/appendixes.configuration.html#appendixes.configuration.test-listeners for how to include the test listener.
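A sketch of such a listener for PHPUnit 3.7, assuming the suite name matches the name attribute in phpunit.xml.dist (the class and file names here are made up):
// tests/SuiteBootstrapListener.php
class SuiteBootstrapListener implements PHPUnit_Framework_TestListener {

    public function startTestSuite( PHPUnit_Framework_TestSuite $suite ) {
        // Only load the optional dependency when the suite that needs it starts.
        if ( $suite->getName() === 'WikibaseDatabaseStandalone' ) {
            require_once __DIR__ . '/evilMediaWikiBootstrap.php';
        }
    }

    public function endTestSuite( PHPUnit_Framework_TestSuite $suite ) {}

    // The remaining interface methods are not needed here.
    public function addError( PHPUnit_Framework_Test $test, Exception $e, $time ) {}
    public function addFailure( PHPUnit_Framework_Test $test, PHPUnit_Framework_AssertionFailedError $e, $time ) {}
    public function addIncompleteTest( PHPUnit_Framework_Test $test, Exception $e, $time ) {}
    public function addSkippedTest( PHPUnit_Framework_Test $test, Exception $e, $time ) {}
    public function startTest( PHPUnit_Framework_Test $test ) {}
    public function endTest( PHPUnit_Framework_Test $test, $time ) {}
}
And register it in phpunit.xml.dist:
<listeners>
    <listener class="SuiteBootstrapListener" file="tests/SuiteBootstrapListener.php" />
</listeners>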
One of the usual ways to handle this is to check whether a required dependency is installed and, if not, run
$this->markTestSkipped( 'lib not installed' );
That skipping can also happen in the setUp() phase of a test.
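For example, in a test class that needs the optional library (the class checked for here is hypothetical):
protected function setUp() {
    parent::setUp();

    if ( !class_exists( 'SomeThirdPartyClass' ) ) {
        $this->markTestSkipped( 'The third party library is not installed' );
    }
}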
You can also add @group annotations to the test class and/or test functions to give some control over whether or not the test is run from the command line (with the --group [names...] parameter).
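For example (the group name is only an illustration):
/**
 * @group WikibaseDatabaseStandalone
 */
class SomeStandaloneTest extends PHPUnit_Framework_TestCase {
    // ...
}
Run only that group, or everything except it:
phpunit --group WikibaseDatabaseStandalone
phpunit --exclude-group WikibaseDatabaseStandalone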
Finally, an option that has also been used in Zend Framework is to only add, in code, the TestSuite that runs a subset within a larger test suite (see the sketch below). There is an example of being able to
a) turn the tests off at will,
b) turn them off if the extension is not loaded, or
c) run them, for the case of (for example) caching with APC.
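That pattern looks roughly like the following old-style AllTests suite (the class names are placeholders):
class Cache_AllTests {

    public static function suite() {
        $suite = new PHPUnit_Framework_TestSuite( 'Cache' );
        $suite->addTestSuite( 'Cache_CoreTest' );

        // Only add the APC-backed tests when the extension is actually loaded.
        if ( extension_loaded( 'apc' ) ) {
            $suite->addTestSuite( 'Cache_ApcTest' );
        }

        return $suite;
    }
}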