I've been searching for how to set up PHPUnit (or phpunit.xml) to log code coverage reports, but I keep finding docs on things like how to include files or how to ignore code blocks, and nothing on the initial setup. Can I get some instructions on setting up code coverage with PHPUnit?
https://phpunit.de/manual/current/en/code-coverage-analysis.html points you to https://phpunit.de/manual/current/en/textui.html for a list of command-line switches that control code coverage functionality, and to https://phpunit.de/manual/current/en/appendixes.configuration.html#appendixes.configuration.logging for the relevant configuration settings.
https://thephp.cc/dates/2015/05/php-tek/code-coverage-covered-in-depth is a presentation on code coverage that has plenty of examples.
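In short: install a coverage driver (Xdebug) and add a logging section to phpunit.xml. Here is a minimal sketch based on the logging appendix linked above; the target paths, thresholds, and the src directory are placeholders to adapt to your project:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="tests/bootstrap.php">
    <!-- Whitelist the code you want measured, so the report is meaningful -->
    <filter>
        <whitelist>
            <directory suffix=".php">src</directory>
        </whitelist>
    </filter>
    <logging>
        <!-- Human-readable HTML report -->
        <log type="coverage-html" target="build/coverage"
             lowUpperBound="35" highLowerBound="70"/>
        <!-- Clover XML, handy for CI servers -->
        <log type="coverage-clover" target="build/logs/clover.xml"/>
    </logging>
</phpunit>

With that in place a plain phpunit run writes the reports; the equivalent one-off command is phpunit --coverage-html build/coverage.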
I want only the test scenarios in the log file of the report generated by Robot Framework, but when clicking on a test scenario the test script gets expanded and the test steps are clearly visible. How can I stop this?
I have attached screenshots of what I want to achieve, and of the type of log report I am currently getting.
In the Robot Framework user guide there is an entire section dedicated to this functionality: Removing and Flattening Keywords. In your case I think that --removekeywords all or --removekeywords passed should be of particular interest.
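For example, you can strip the keyword details either at run time or afterwards with rebot (the keywords themselves stay in the log; only their contents are removed):

robot --removekeywords all tests/
rebot --removekeywords passed output.xml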
I feel like I've tried everything. We currently have a setup where a check-in to TFS forces a build on CruiseControl.NET. In our solution we use the Chutzpah JS Test Adapter. We were able to successfully use Chutzpah.console.exe to fail the build if any of the JS tests fail; now we would like to fail the build on the coverage. I cannot find any way to have Chutzpah.console.exe output coverage to the XML file it generates.
I thought I could solve the problem by writing my own .xsl that would parse _Chutzpah.coverage.html. I was going to convert that to XML using the JUnit format that CruiseControl already interprets. Since I just care about failing the build, I was going to make the output of my transform look like more unit tests that failed; in the .xsl I would set the attribute failures > 0:
<?xml version="1.0" encoding="UTF-8" ?>
<testsuites>
<testsuite name="c:\build\jstest\jstest\tests\TestSpec2.js" tests="1" failures="1">
<testcase name="Coverage" time="46" />
</testsuite>
</testsuites>
But I really can't, since the incoming HTML has self-closing tags and doesn't parse as XML.
So now I want to just run Chutzpah.console.exe and pipe the output to a file (the console output does display the total average coverage), read that value, and fail the build if it drops below a threshold.
Is there a better option? Am I missing something? I don't know that much about CruiseControl.NET.
I think outputting to a file and parsing that is the only option left.
A pity that the coverage info is not in the XML file :-(
This is in fact more a problem with Chutzpah than with CCNet.
Think of CCNet as an upgraded task scheduler: it has a lot of options, but it relies on the input it receives from the called program. If that program cannot provide the data, you're stuck with this kind of workaround :-(
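For example, something along these lines as a build step (the exact wording of the coverage line depends on your Chutzpah version, so treat the filter string as a placeholder):

Chutzpah.console.exe tests\ /coverage > chutzpah-output.txt
rem pick out the line that reports the total average coverage
findstr /i "coverage" chutzpah-output.txt

Your build script then parses the number out of that line and exits non-zero when it is below your threshold, which CCNet will report as a failed build.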
There is another option, but it may be more of a trip than you're interested in embarking upon. SonarQube is a server-based tool for managing and analyzing code quality. It has a plugin called "Build Breaker", which allows you to fail the build if any number of code quality metrics are not met, including unit test coverage. Like I said, it's kind of involved because you have to set up a server and learn a new tool, but it's good to have anyway.
I use Chutzpah with the /lcov command-line option so that it outputs the coverage to a file, then tell Sonar where to find that coverage file in the Sonar configuration for the project. Then you add a step to your CruiseControl.NET build process to run the Sonar analysis, and if you have configured the Build Breaker plugin correctly, it will fail the build in CruiseControl if the coverage is not at the level you specified.
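As a rough sketch of the wiring (the property name varies between versions of the SonarQube JavaScript plugin, so double-check it against your install):

Chutzpah.console.exe tests\ /coverage /lcov lcov.dat

and then in sonar-project.properties:

sonar.projectKey=myproject
sonar.sources=src
sonar.javascript.lcov.reportPath=lcov.dat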
I recently created a Meteor package and want to write some tests for it. What my package basically does is let users insert {{> abc}} into a template, and they'll get an HTML element printed on the page.
With TinyTest, all you can do is test the package's API using something like test.equal(actual, expected, message, not). However, I need to test whether the element was successfully printed on the page. Furthermore, I will pass the template some parameters and need to test those too.
It seems like I'd have to create a dummy app, run a bash script to start the app, and test whether the elements can be found on the page. So should I only use TinyTest to test the API, and write my own tests (somehow!) for the templating? If not, how should I go about it?
I read something about Blaze.toHTML, but I cannot find anything about it in the documentation, nor in its source.
I think TinyTest is great for starting with unit testing, but what you need sounds more like an integration test.
I would recommend you look into the following links for more info on testing with Meteor, especially with Velocity - Meteor's official testing framework:
Announcing Velocity: the official testing framework for Meteor applications
Velocity
The Meteor Testing Manual
You can create a demo application and run integration tests against it using Mocha or Jasmine.
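That said, for the specific case of checking the rendered markup, you can get quite far inside TinyTest itself by rendering the template to a string with Blaze.toHTMLWithData. A minimal sketch, assuming your template is named abc and takes a hypothetical someParam argument (adjust the assertion to whatever markup your package actually emits):

// client-side test file of your package
Tinytest.add('abc - renders the expected element', function (test) {
  // Blaze.toHTMLWithData renders the template with a data context to a
  // static HTML string, no live DOM required
  var html = Blaze.toHTMLWithData(Template.abc, { someParam: 'value' });
  test.isTrue(html.indexOf('value') !== -1,
    'rendered output should contain the passed parameter');
});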
I have created a new build definition for TFS 2010. After building my C# solution I would like it to execute a couple of unit tests. These unit tests require an XML input file, so I have added a [DeploymentItem] attribute to the test methods, which provides the relative path to the XML files. If I run the unit tests from within Visual Studio they pass OK.
When the unit tests get run following a build (via my build definition), they fail with: "Microsoft.BizTalk.TestTools.BizTalkTestAssertFailException: Input file does not exist..."
It would be great if I could get to a trace of what the build agent was trying to do, to help with troubleshooting.
Does anyone know how to get such a trace output? I guess I could increase the verbosity of the trace output from the main solution under test, but I don't think that would give me any indication of where the build agent was looking for the test input XML, or why.
Thanks
Rob
I found it! I needed to click the "View Log" link on the screen that's displayed following the build. I had been looking at the default "View Summary" view.
I'm looking for a .NET coverage tool, and had been trying out PartCover, with mixed success.
I see that OpenCover is intended to replace PartCover, but I've so far been unable to link it with Typemock Isolator so that my mocked-out tests pass while gathering coverage info.
I tried replicating my setup for PartCover, but there's no defined profile name that works with the "link" argument for Isolator. Thinking that OpenCover was based on PartCover, I tried to tell Isolator to link with PartCover, and it didn't complain (I still had PartCover installed), but the linking didn't work: Isolator thought it wasn't present.
Am I missing a step? Is there a workaround? Or must I wait for an Isolator version that is friends with OpenCover?
Note: I work at Typemock
I poked around with the configuration a little bit and managed to get OpenCover to run nicely with Isolator. Here's what you can do to make them work together, until we add official support:
Register the OpenCover profiler by running regsvr32 OpenCover.Profiler.dll (you will need administrator access for this).
Locate the file typemockconfig.xml; it should be under your installation directory, typically C:\Program Files (x86)\Typemock\Isolator\6.0.
Edit the file and add the following entry towards the end, just above </ProfilerList>:
<Profiler Name="OpenCover" Clsid="{1542C21D-80C3-45E6-A56C-A9C1E4BEB7B8}" DirectLaunch="false">
<EnvironmentList />
</Profiler>
Save the file; you will now have a new entry in the Typemock Configuration utility called OpenCover. Press the Link button to link them. You will now be able to run your tests using OpenCover.Console.exe and Isolator. For example, here's how to run your tests with MSTest:
OpenCover.Console.exe
-target:"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe"
-targetargs:"/testcontainer:d:\code\myproject\mytests.dll"
-output:opencovertests.xml
There is still a minor issue running this with TMockRunner -link (that is, with late linking). I will need to look into it further at work.
Hope that helps.