Is there a way to override the old karate-reports folder with the new karate-report folder instead of creating a new reports folder for each execution [duplicate]

I have recently upgraded from version 0.9.6 to 1.0.0 and noticed that the generated karate-summary.html file no longer displays all the tested feature files when using the JUnit 5 Runner, unlike in 0.9.6.
What it displays instead is the last tested feature file only.
The screenshots below are from the provided SampleTest.java sample code (other tests excluded for simplicity).
package karate;

import com.intuit.karate.junit5.Karate;

class SampleTest {

    @Karate.Test
    Karate testSample() {
        return Karate.run("sample").relativeTo(getClass());
    }

    @Karate.Test
    Karate testTags() {
        return Karate.run("tags").relativeTo(getClass());
    }
}
This is from Version 0.9.6.
And this one is from Version 1.0.0.
However, when running the test below in 1.0.0, all the features are displayed in the summary correctly.
@Karate.Test
Karate testAll() {
    return Karate.run().relativeTo(getClass());
}
Would anyone be kind enough to confirm whether they are getting a similar result? It would be very much appreciated.

What it displays instead is the last tested feature file only.
This is because each time you run a JUnit method, the reports directory is backed up by default. Look for other directories named target/karate-reports-<timestamp> and you may find your reports there. So what may be happening is that you have multiple JUnit tests all running, and that is why you see this behavior. You may be able to override it by calling the method .backupReportDir(false) on the builder. But even that may not work, because the JUnit runner has changed a little: it is designed to run one method at a time, when you are in local / dev mode.
So the JUnit runner is just a convenience. You should use the Runner class / builder for CI execution, and when you want to run multiple tests and see them in one report: https://stackoverflow.com/a/65578167/143475
Here is an example: ExamplesTest.java
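For reference, a minimal sketch of such a Runner-based test, following the pattern documented in the Karate README (the classpath and class name here are illustrative assumptions):

package karate;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class SampleRunnerTest {

    @Test
    void testAll() {
        // run every feature under the given classpath in one pass,
        // so all results land in a single karate-reports folder
        Results results = Runner.path("classpath:karate")
                .backupReportDir(false) // the builder flag mentioned above
                .parallel(1);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}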
But in case there is a bug in the JUnit runner (which is quite possible), please follow the process and help the project developers replicate the issue, so that it can be fixed and released as soon as possible.

Related

Meteor Testing Using Spies

I'm using Velocity with the mike:mocha framework and the chai assertions. Everything is working great, but when it comes time to do stubbing, mocking and spying, I've hit a bit of a roadblock. These aren't core features of mike:mocha or chai, so I found practicalmeteor:chai, which should/may have added spies.
My first stab at finding out whether this is true was to write the following code:
it 'calls update when both documents are present but different', ->
  spies.create('log', console, 'log')
which gives me:
ReferenceError: spies is not defined
at packages/velocity:test-proxy/tests/mocha/server/charger_server_doc_spec.coffee:88:9
at wrappedFunc (packages/mike:mocha/server.js:200:1)
at runWithEnvironment (packages/mike:mocha/server.js:156:1)
That implies to me that I've misunderstood what practicalmeteor:chai provides; however, the code I showed in the first example is copied verbatim from the README.
Question: Any tips on getting this to work? Is it a load order issue? The code on Github shows spies and so on are implemented in this package. So I'm a bit stuck.
Thanks!
The package practicalmeteor:chai does not include the practicalmeteor:sinon package, which is required to get the spies API included.
They are separate packages because you may not need spies when creating basic tests with chai.
If you look at the package.js file in the practicalmeteor:chai package, it does not include the sinon files.
So, just running meteor add practicalmeteor:sinon should solve your problem.

How do I write tests for Meteor which involves templating?

I recently created a Meteor package and want to write some tests. What my test package basically does is let users insert {{> abc}} into a template and get an HTML element printed on the page.
With TinyTest, all you can do is test the package's API using something like test.equal(actual, expected, message, not). However, I need it to test whether the element was successfully printed on the page. Furthermore, I will pass the template some parameters and test them too.
It seems like I'd have to create a dummy app, run bash to initiate the app, and test whether the elements can be found on page. So should I only use TinyTest to test the API, and write my own tests (somehow!) for templating? If not, how should I go about it?
I read something about Blaze.toHTML, but I cannot find anything on it in the documentation, nor in its source page.
I think TinyTest is great for getting started with unit testing, but what you need sounds more like an integration test.
I would recommend you look into the following links for more info on testing with Meteor, especially with Velocity, Meteor's official testing framework:
Announcing Velocity: the official testing framework for Meteor applications
Velocity
The Meteor Testing Manual
You can create a demo application, and run integration tests using Mocha or Jasmine.

SquashTA - Integrating existing tests

We are planning to use Squash TA to manage our tests. The problem we are facing is that we already have a big collection of automated tests and haven't been able to make them run via Squash TM using Squash TA.
Our tests use JUnit + Selenium WebDriver + the Spring Framework.
Currently, we launch our automated tests via Maven (on the command line), and we have a Jenkins server running them regularly.
We tried to reuse our tests in a Squash TA project, putting them in src/squashta/resources/selenium/java.
But code in this folder doesn't even support Java packages. It's as if the Java in the example isn't real Java but a fake Java parsed by Squash TA.
Is there any way of using such already-existing tests with Squash (TA/TM)?
Or any alternatives you know of that could do the job? (We are currently using TestLink and must change.)
If your selenium test is in:
src/squashTA/resources/selenium-test/src/main/java/org/squashtest/ta/selenium/PetStoreTest.java
then, with such a structure, the test automation script to run the selenium test (which is in the package org.squashtest.ta.selenium) is:
TEST :
LOAD selenium-test/src/test AS seleniumTestSource
CONVERT seleniumTestSource TO script.java(compile) AS seleniumTestCompiled
CONVERT seleniumTestCompiled TO script.java.selenium2(script) USING $(org.squashtest.ta.selenium.PetStoreTest) AS seleniumTest
EXECUTE execute WITH seleniumTest AS seleniumResult
ASSERT seleniumResult IS success
If your selenium test has dependencies on other libraries (like Spring in your case), you have to add those dependencies as dependencies of the squash-ta-maven-plugin in the pom.xml of your Squash TA project.
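For illustration, a sketch of what that plugin configuration might look like in the pom.xml (the plugin groupId and the Spring artifact shown are assumptions; substitute the coordinates your tests actually need):

<plugin>
    <groupId>org.squashtest.ta</groupId>
    <artifactId>squash-ta-maven-plugin</artifactId>
    <version>${squash.ta.version}</version>
    <dependencies>
        <!-- libraries your selenium tests compile against, e.g. Spring -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</plugin>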

Test suites info in PHPUnit bootstrap file

I am running tests using a phpunit.xml.dist file. This file defines several test suites and specifies a bootstrap.php. In this bootstrap.php I am currently loading all dependencies for all tests.
A small subset of the tests is dependent on some third party library, which is optional. These tests are all part of a particular test suite. So I only want to load this library in the bootstrapping file when that particular test suite is specified.
How can I determine whether this test suite was specified? This would ensure that most tests can run when the library is not loaded, and that one can easily verify that the code and tests which should not depend on the library indeed do not need it.
I currently have the following. Is there something better?
if ( !in_array( '--testsuite=WikibaseDatabaseStandalone', $GLOBALS['argv'] ) ) {
    require_once( __DIR__ . '/evilMediaWikiBootstrap.php' );
}
The feature request on the PHPUnit bugtracker for a test suite specific bootstrap is here: https://github.com/sebastianbergmann/phpunit/issues/733
For now there are two options: one is yours, which is fine but feels really hackish and doesn't work out well if you run "all the tests" while having a specific bootstrap for each of them.
My suggestion would be to write a test listener and hook into "startTestSuite" and "endTestSuite". This is a nicely maintained and backwards-compatible way to execute code only when the test suite is actually started, and you can also clean up afterwards.
See http://phpunit.de/manual/3.7/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener and http://phpunit.de/manual/3.7/en/appendixes.configuration.html#appendixes.configuration.test-listeners for how to include the test listener.
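A minimal sketch of such a listener against the PHPUnit 3.7-era interface (the listener class name and the suite name it checks for are illustrative):

class WikibaseDatabaseStandaloneListener implements PHPUnit_Framework_TestListener {

    public function startTestSuite( PHPUnit_Framework_TestSuite $suite ) {
        // load the optional library only when the relevant suite starts
        if ( $suite->getName() === 'WikibaseDatabaseStandalone' ) {
            require_once( __DIR__ . '/evilMediaWikiBootstrap.php' );
        }
    }

    public function endTestSuite( PHPUnit_Framework_TestSuite $suite ) {
        // clean up here if needed
    }

    // the remaining interface methods are no-ops
    public function addError( PHPUnit_Framework_Test $test, Exception $e, $time ) {}
    public function addFailure( PHPUnit_Framework_Test $test, PHPUnit_Framework_AssertionFailedError $e, $time ) {}
    public function addIncompleteTest( PHPUnit_Framework_Test $test, Exception $e, $time ) {}
    public function addSkippedTest( PHPUnit_Framework_Test $test, Exception $e, $time ) {}
    public function startTest( PHPUnit_Framework_Test $test ) {}
    public function endTest( PHPUnit_Framework_Test $test, $time ) {}
}

The listener would then be registered in phpunit.xml.dist along these lines:

<listeners>
    <listener class="WikibaseDatabaseStandaloneListener" file="./WikibaseDatabaseStandaloneListener.php" />
</listeners>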
One of the usual ways to handle this is to check whether a required dependency is installed, and if not, run
$this->markTestSkipped( 'lib not installed' );
That skipping can also happen in the setUp() phase of a test.
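For instance (the class checked for is a hypothetical stand-in for whatever marker class the optional library defines):

protected function setUp() {
    parent::setUp();
    if ( !class_exists( 'SomeOptionalLibraryClass' ) ) { // hypothetical marker class
        $this->markTestSkipped( 'lib not installed' );
    }
    // ... normal fixture setup ...
}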
You can also add @group annotations to the test class and/or test functions to give some choice over whether the test is run from the command line (with the --group [names...] parameter).
Finally, an option that has also been used in Zend Framework is to add, in code, only the TestSuite that runs a subset within a larger test suite. There is an example of being able to
a) turn it off at will,
b) turn it off if the extension is not loaded, or
c) run the tests, for the use of (for example) caching with APC.

Can Opencover be used with TypeMock Isolator?

I'm looking for a .NET coverage tool and had been trying out PartCover, with mixed success.
I see that OpenCover is intended to replace PartCover, but I've so far been unable to link it with TypeMock Isolator so that my mocked-out tests pass while gathering coverage info.
I tried replicating my setup for PartCover, but there's no defined profile name that works with the "link" argument for Isolator. Thinking that OpenCover was based on PartCover, I tried to tell Isolator to link with PartCover, and it didn't complain (I still had PartCover installed), but the linking didn't work: Isolator thought it wasn't present.
Am I missing a step? Is there a workaround? Or must I wait for an Isolator version that is friends with OpenCover?
Note: I work at Typemock
I poked around with the configuration a little and managed to get OpenCover to run nicely with Isolator. Here's what you can do to make them work together until we add official support:
Register the OpenCover profiler by running regsvr32 OpenCover.Profiler.dll (you will need Administrator access for this).
Locate the file typemockconfig.xml; it should be under your installation directory, typically C:\Program Files (x86)\Typemock\Isolator\6.0.
Edit the file, and add the following entry towards the end of the file, above </ProfilerList>:
<Profiler Name="OpenCover" Clsid="{1542C21D-80C3-45E6-A56C-A9C1E4BEB7B8}" DirectLaunch="false">
<EnvironmentList />
</Profiler>
Save the file; you will now have a new entry in the Typemock Configuration utility, called OpenCover. Press the Link button to link them. You will now be able to run your tests using OpenCover.Console.exe and Isolator. For example, here's how to run your tests with MSTest:
OpenCover.Console.exe
  -target:"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe"
  -targetargs:"/testcontainer:d:\code\myproject\mytests.dll"
  -output:opencovertests.xml
There is still a minor issue running this with TMockRunner -link (that is, with late linking). I will need to look into it further at work.
Hope that helps.
