Why is PHPUnit coverage sometimes not available? - phpunit

I've configured a project to do code coverage. I really do not understand why some classes have n/a in the coverage table.
Can someone explain the reason to me?

The two files you show in the picture do not have any executable lines of code. Alternatively, the methods could be marked with the @codeCoverageIgnore annotation, which will show 'n/a' and '0/0' in the lines-covered column. The 'Functions & Methods' column group may still show coverage for those methods, though.
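For illustration, a minimal PHP sketch of where such an annotation goes (the class and method names are made up, not from your project):

    <?php

    class ReportMailer
    {
        /**
         * @codeCoverageIgnore
         */
        public function dumpDebugInfo(array $rows): void
        {
            // PHPUnit skips this method when collecting coverage, so it
            // contributes nothing to the "Lines" numbers in the report.
            print_r($rows);
        }
    }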

Related

Rename jar and its classes

I want to run multiple JaCoCo Java agents on the same target, using TCP ports. This causes an issue due to a naming conflict.
My thought is to rename jacoco_agent.jar to jacoco2_agent.jar, including all the class names.
I already tried jarjar, but that failed: attaching the renamed jar raises a ClassNotFoundException: org.jacoco.agent.rt.internal_3570298.PreMain, indicating that the renaming went wrong at some point.
3 days ago Stefan wrote in a comment:
@kriegaex your solution worked. Thank you very much for your help.
Now that comment is gone, not sure why. Anyway, I am going to convert my comment into an answer:
Are you aware of the option to merge multiple JaCoCo files into one for a complete coverage report? You can do this with the JaCoCo Maven plugin (e.g. for multi-module projects) as well as manually. Just make sure that your fuzzer creates different files for each run, then merge them in the end.
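If you go the manual route, a hedged sketch with the JaCoCo command-line tool might look like this (file names and paths are placeholders for whatever your agents actually write):

    # merge the exec files written by the individual agents into one file
    java -jar jacococli.jar merge run1.exec run2.exec --destfile merged.exec

    # render a single HTML report from the merged execution data
    java -jar jacococli.jar report merged.exec \
        --classfiles target/classes \
        --sourcefiles src/main/java \
        --html target/coverage-report

On the Maven side, the jacoco-maven-plugin offers the merge and report-aggregate goals for the same purpose.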

Customized JSON report for Karate framework [duplicate]

I want an option on the Cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top-right menu (which currently includes "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would provide exactly the same information that overview-features.html does, except that any scenario tagged with a special tag, for example @bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, "@bug=abc-12345". I want them muted/excluded from the Cucumber report that's produced at the end of the Bamboo build for Karate, so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to fail and will continue to fail until they're resolved, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system with the following key changes.
after the Runner completes you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) An example of how to "post process" result data before rendering a report: RetryTest.java; also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can in theory implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation (a minimal Runner sketch follows after the links below).
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
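To make option (a) a little more concrete, here is a minimal sketch using the standard Karate 1.x Java Runner API; the class name, feature path and thread count are placeholders, and the actual tag-based filtering or custom rendering is the part you would have to write yourself along the lines of RetryTest.java and SuiteReports:

    import com.intuit.karate.Results;
    import com.intuit.karate.Runner;

    class CoverageGateRunner {

        public static void main(String[] args) {
            // run the suite as usual; outputCucumberJson keeps the JSON that
            // the existing cucumber-html reports are generated from
            Results results = Runner.path("classpath:features")
                    .outputCucumberJson(true)
                    .parallel(5);

            // after the Runner completes you can post-process the outcome here,
            // e.g. decide the build is "green" only when the failures are ones
            // you did not expect (scenarios not tagged @bug=...)
            System.out.println("failed scenarios: " + results.getFailCount());
            System.out.println(results.getErrorMessages());
        }
    }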

visual code coverage in travis-ci

I've got some GitHub projects which I want to test with code coverage. The only way I found (see blog post) to achieve this is to write a custom script that counts lines in the coverage XML and outputs "Code coverage is 74.32%, which is below the accepted 80%". Displaying code coverage in HTML is way better, but is it possible in Travis CI?
You can use https://coveralls.io/ together with Travis to display coverage nicely. An example can be found here: https://coveralls.io/r/phpmyadmin/error-reporting-server
PS: I know this is quite an old question, but I found it just now while searching for something else.
Travis CI doesn't support any persistent storage. One suggestion would be to create a custom script that runs phpunit --coverage-html and then sends the contents of the output directory to your own server using something like rsync.
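A hedged sketch of that idea as a couple of build steps (the host, user and paths are placeholders, and you would still need to provision SSH access for the Travis build):

    # generate the HTML coverage report during the build
    phpunit --coverage-html build/coverage

    # push the generated report to a server you control
    rsync -az --delete build/coverage/ deploy@example.com:/var/www/coverage/my-project/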

How to start with unit testing?

First, I don't want to do TDD. I'd like to write some code and then just write the unit tests for the important stuff.
I know the tools. I know how to write a unit test.
I wrote some code (a couple of classes) and I don't know where to start. I'm missing the semantics.
How do I pick what to unit test?
Should I unit test every class extra?
Should I try to simulate every possible variation of a method's parameters?
The idea of unit testing is to test pieces of code. We use TDD at my company, which works great. We write tests for every possible option of a function. So, to answer your three questions:
How do I pick what to unit test?
It's useless to write unit tests for code or functions that contain no intelligence. (Testing everything is actually what needs to happen when you strictly follow TDD.) But if it's obvious what the function returns and you're sure nothing can go wrong, then you probably don't have to write a test for it, though it's better to do so.
Should I unit test every class extra?
What exactly do you mean by this? If the question is whether you need to test classes, the answer is no. Unit testing is for testing the smallest pieces of code, mostly functions and constructors. What you want to know is whether your function gives the result you want: it should return the desired result, or throw a nicely handled exception, no matter what parameter value you send to the function.
Should I try to simulate every possible variation of a method's parameters?
You should. That's the whole idea of writing a unit test. You want to test a piece of code and rule out every single thing that can go wrong. Parameters are the most important here. What happens if a string parameter contains HTML, for example? And what if a required parameter is NULL? Or an empty string? Every option should be tested to rule out the possible things that can go wrong.
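For example, a small JUnit sketch covering the null / empty / unexpected-content cases (the sanitize function and its expected behaviour are invented purely for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class SanitizerTest {

        // hypothetical unit under test: strips HTML tags and trims whitespace
        static String sanitize(String input) {
            if (input == null) {
                throw new IllegalArgumentException("input must not be null");
            }
            return input.replaceAll("<[^>]*>", "").trim();
        }

        @Test
        void rejectsNullInput() {
            assertThrows(IllegalArgumentException.class, () -> sanitize(null));
        }

        @Test
        void acceptsEmptyString() {
            assertEquals("", sanitize(""));
        }

        @Test
        void stripsHtmlFromInput() {
            assertEquals("bob", sanitize("  <b>bob</b> "));
        }
    }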
If you're using the .NET Framework, it is very interesting to look at the Moq framework. Put shortly, it is a framework that allows you to create a fake object of some type; you can then validate your test against it to check whether the result is as expected, using several different parameter and return values. You can read about it in Scott Hanselman's blog post.
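Moq itself is .NET-only; as a rough JVM analogue of the same idea (a fake object whose return values you script and whose usage you verify), a Mockito sketch might look like this (the List dependency is just a stand-in for whatever your code depends on):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import java.util.List;

    class MockingSketch {

        @SuppressWarnings("unchecked")
        public static void main(String[] args) {
            // create a fake object and script what it should return
            List<String> fake = mock(List.class);
            when(fake.get(0)).thenReturn("stubbed value");

            // the code under test would receive the fake instead of a real dependency
            String result = fake.get(0);

            // check the result and that the expected interaction actually happened
            System.out.println("stubbed value".equals(result));
            verify(fake).get(0);
        }
    }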
Should I try to simulate every possible variation of a method's parameters?
You should have a look at Pex, which can generate input values for your parameterised tests that provide the greatest code coverage.
