How to create a GitLab CI coverage badge for R

Not being familiar with R myself, I'd like to create a badge summarizing the test coverage in GitLab CI, using the covr package. The only way I found on the net was to use the covr::gitlab() function; however, this seems to create an HTML page, which is not what I want.
Is there a simple way to retrieve the summary on stdout, and a regexp to parse it for the badge?

usethis::use_gitlab_ci() provides a useful template to set up GitLab CI/CD for R packages which in turn can be used to create coverage badges. It basically executes covr::gitlab() during the pipeline testing stage.
More information on how to create a badge from that can be found in the GitLab docs on pipelines. According to the notes in the usethis template, under Settings > CI/CD > General pipelines, you need to enter \d+\.\d+ as the test coverage parsing pattern to extract the coverage from the covr report. You can then simply copy and paste the coverage badge Markdown code shown under the "Coverage report" section. As soon as the pipeline has run all tests, the badge should show the current test coverage.
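To sanity-check that pattern before wiring it into GitLab, you can run it against a sample log line locally (the exact wording covr prints in the job log is an assumption here; check your own pipeline output):

```python
import re

# A line resembling what covr::gitlab() prints during the CI job
# (the exact wording is an assumption; adjust to your pipeline log).
log_line = "Coverage: 87.25%"

# The same pattern entered in Settings > CI/CD > General pipelines.
match = re.search(r"\d+\.\d+", log_line)
print(match.group(0))  # the number GitLab would display on the badge
```

GitLab applies the pattern to the job log and uses the last match, so the badge tracks whatever number the pattern captures.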

Related

Snakemake reports

I would like to create a report for Snakemake that includes both runtime information and the results of a workflow.
snakemake --report report.html
allows me to create wonderful runtime reports.
This page describes one way to do reports:
https://snakemake.readthedocs.io/en/stable/snakefiles/reporting.html#
However, the tutorial described another way:
https://snakemake.readthedocs.io/en/v3.11.0/snakefiles/utils.html
Is the way described in the tutorial deprecated?
What is the best way to create reports including run time, software versions AND results?
Is there any way to create reports in RTF or other formats that can be edited with text-editing software?

TFS 2017/18 Test Hub running a test run with custom drop folder

I am currently trying to find a way to start a test run from the UI or command line following this reference. With the command-line tool TCM (TFS 2017 and earlier) you could start a test run and provide an alternate build drop location through the switch "/BuildDir"; if not provided, it would look in the build's drop location stored in TFS.
I am looking for a way to do this in the new way of testing using Test Hub.
I have done a lot of searching but to no avail.
Any help would be very much welcome.
You could use the REST API to achieve this:
POST https://{accountName}.visualstudio.com/{project}/_apis/test/runs?api-version=5.0-preview.2
The request body includes:
buildDropLocation: Drop location of the build used for the test run.
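As a sketch, the request body could be assembled like this (the name and plan values are placeholders invented for illustration; buildDropLocation is the field described above):

```python
import json

# Minimal request body for POST .../_apis/test/runs (api-version=5.0-preview.2).
# "name" and "plan" values are placeholders; buildDropLocation points the run
# at a custom drop folder instead of the build's stored drop location.
body = {
    "name": "Nightly run with custom drop",
    "plan": {"id": "1"},
    "buildDropLocation": r"\\server\builds\MyApp\1.0.42",
}

# This JSON string is what you would send as the POST body.
print(json.dumps(body, indent=2))
```

Send it with an Authorization header (e.g. a personal access token) and Content-Type: application/json.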

How can I use the SonarQube web service API for reporting purposes

I want to create a custom report. The response format of the SonarQube web service API /api/issues/search is JSON or XML. How can I use that response to create an HTML or CSV file using a Unix shell, without command-line tools, so that I can use it as a report? Or is there any other better way to achieve this?
You can generate an HTML file if you run an analysis in preview mode: http://docs.sonarqube.org/pages/viewpage.action?pageId=6947686
It looks as if the SonarQube team has been working hard to not allow people to do this. They appear to want people to purchase an Enterprise Subscription in order to export reports.
An old version of sonar-runner (now called sonar-scanner) had an option to allow local report output. But that feature is "no more supported".
ERROR: The preview mode, along with the 'sonar.analysis.mode' parameter, is no more supported. You should stop using this parameter.
Looks like version 2.4 of Sonar Runner does what you want, if you can find it. Of course, they only have 2.5RC1 available on the site now.
Using the following command should work on version 2.4:
sonar-runner -Dsonar.analysis.mode=preview -Dsonar.issuesReport.html.enable=true
There are at least two open-source projects that query the SonarQube API to generate reports in various formats:
https://github.com/cnescatlab/sonar-cnes-report/tree/dev (Java)
https://github.com/soprasteria/sonar-report (JavaScript/Node)
At the time of writing both are active.
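For a hand-rolled alternative, here is a minimal sketch of turning an /api/issues/search response into CSV. The sample response below is trimmed to just the fields extracted; a real response contains many more fields and is paginated:

```python
import csv
import io
import json

# A trimmed sample shaped like an /api/issues/search response
# (in practice you would fetch this from your SonarQube server).
response = json.loads("""
{"issues": [
  {"key": "AX1", "severity": "MAJOR", "component": "proj:src/a.c", "message": "Fix this"},
  {"key": "AX2", "severity": "MINOR", "component": "proj:src/b.c", "message": "Fix that"}
]}
""")

# Write one CSV row per issue, keeping only the columns we care about.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["key", "severity", "component", "message"])
for issue in response["issues"]:
    writer.writerow([issue["key"], issue["severity"], issue["component"], issue["message"]])

print(out.getvalue())
```

Replacing the inline sample with an authenticated GET against your server's /api/issues/search (and looping over its paging parameters) turns this into a complete exporter.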

Documentation - automatic tests and integration with GitHub

I'm working on a docs-only project (all files are HTML or MD) hosted on GitHub. I'd like each pull request to be automatically tested with a spellchecker and write-good. I was thinking about using Travis CI for that; however, I can't use the default approach where everything gets rebuilt. For docs projects that's not desirable because:
each file in the docs is independent (no need to build the whole project each time something changes),
some spellchecker/write-good suggestions are debatable (or simply wrong) and should be ignored (e.g. because they miss context).
I don't want ALL pull requests to fail and show a long list of ignored suggestions from across the whole repo.
Is there any way for my Travis CI test to know which files/paragraphs actually changed and should be validated?
I managed to restrict Travis CI tests to files that are actually modified in the pull requests.
In .travis.yml you have a possibility to run a script before running tests. I used it to create a list of modified files:
before_script:
- git diff --name-only master > modified_files
Then, in package.json I'm passing that list to script that does actual validation.
"scripts": {
  "test": "proofreader --file-list modified_files"
}
That reduced the noise, but I'd like to go deeper and validate only paragraphs that were changed.

PHPUnit Code coverage analysis for code called over HTTP

I am trying to find a reasonable approach to getting a code coverage report for code that is called from within a test via HTTP. Basically I am testing my own API the way it is supposed to be called but because of that PHPUnit/Xdebug are unaware of the execution of the code within the same codebase.
Basically what I want to achieve is already done using the PHPUnit Selenium extension but I don't run Selenium, I call the code through an OAuth2 Client which in turn uses curl.
Is it possible to call my API with a GET parameter that triggers a code coverage report, and to have PHPUnit read that report and merge it with the other code coverage? Is there a project that already does that, or do I have to resort to writing my own PHPUnit extension?
OP says the problem is that Xdebug-based code coverage collection won't/can't collect coverage data because Xdebug isn't enabled in all (PHP) processes that execute the code.
There would seem to only be two ways out of this.
1) Find a way to enable Xdebug in all processes invoked. I don't know how to do this, but I would expect there to be some configuration parameter for the PHP interpreter to cause this. I also can't speak to whether separate Xdebug-based coverage reports can be merged into one. On the face of it, the raw coverage data is abstractly just a set of "this location got executed" signals, so merging should just be a set union. How these sets are collected and encoded may make this more problematic.
2) Find a coverage solution that doesn't involve Xdebug, so whether Xdebug is enabled or not is irrelevant. My company's (see bio) PHP Test Coverage Tool does not use Xdebug, so it can collect the test coverage data you want without an issue. You can download it and try it; there's a built-in example of test coverage collection triggered exactly by HTTP requests. The tool has a built-in ability to merge separate test-coverage runs into an integrated result. (I'd provide a direct link, but some SO people are virulently against this.)
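The set-union merge mentioned in option 1 can be sketched as follows. The per-file line sets here are made up for illustration; real Xdebug output distinguishes more states (executed, executable-but-not-executed, dead code) than a plain set captures:

```python
# Two per-file "lines executed" sets from separate coverage runs.
run_a = {"api.php": {3, 4, 7}, "auth.php": {10, 11}}
run_b = {"api.php": {4, 8}, "util.php": {1}}

# Merging is a per-file set union: a line counts as covered
# if any run executed it.
merged = {}
for run in (run_a, run_b):
    for path, lines in run.items():
        merged.setdefault(path, set()).update(lines)

print({path: sorted(lines) for path, lines in merged.items()})
```

Tools that merge real coverage runs do essentially this, plus bookkeeping for the extra line states.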