I would like to create a report for Snakemake which includes both runtime reports and results from a workflow.
snakemake --report report.html
allows me to create runtime reports.
This page describes one way to do reports:
https://snakemake.readthedocs.io/en/stable/snakefiles/reporting.html#
However, the tutorial described another way:
https://snakemake.readthedocs.io/en/v3.11.0/snakefiles/utils.html
Is the way described in the tutorial deprecated?
What is the best way to create reports including run time, software versions AND results?
Is there any way to create reports in RTF or other formats that can be edited with a text editor?
Not being familiar with R myself, I'd like to create a badge to summarize the test coverage in GitLab CI, using the covr package. The only way I found on the net was to use the gitlab function; however, this seems to create an HTML page, which is not what I want.
Is there a simple way to retrieve the summary on stdout, and a regexp to parse it for the badge?
usethis::use_gitlab_ci() provides a useful template to set up GitLab CI/CD for R packages which in turn can be used to create coverage badges. It basically executes covr::gitlab() during the pipeline testing stage.
More information on how to create a badge from that can be found in the GitLab docs on pipelines. According to the notes in the usethis template, inside Settings > CI/CD > General Pipelines, you need to insert \d+\.\d+ to extract the coverage from the covr report. You can then simply copy and paste the coverage badge Markdown code shown under the "Coverage report" section. As soon as the pipeline has run all tests, the badge should show the current test coverage.
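To see how GitLab's `\d+\.\d+` pattern would pull the coverage figure out of the job log, here is a minimal sketch; the sample log line is an assumption about what covr prints, chosen only to illustrate the regex:

```python
import re

# Hypothetical line as covr::gitlab() might emit during the CI job;
# the exact wording is an assumption for illustration.
log_line = "Coverage: 87.31%"

# GitLab applies the configured pattern to the job log to extract
# the number shown on the coverage badge.
match = re.search(r"\d+\.\d+", log_line)
coverage = match.group(0) if match else None
print(coverage)  # -> 87.31
```

Any log line containing a decimal number would match, so the pattern only works cleanly if the coverage figure is the first such number in the log.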
My management would like a simple switch for the generated ROBOT test reports, so that one test execution yields two reports: a comprehensive, detailed report (XML, HTML) and a management-level report with general information and without many technical details.
Is there a standard ROBOT mechanism to generate two different reports at once?
How would you do it?
Thank you for your suggestions!
Robot has the built-in ability to generate four types of outputs:
log.html is a detailed low level log of test execution, showing the details of every suite, test case, and keyword (parameters, results, duration)
report.html is a more high level overview of test execution
output.xml is a detailed log of all of the data used to generate the other reports
the xUnit output is an xUnit-compatible file that can be processed by many xUnit-compatible tools
The log.html, report.html, and output.xml files are all generated by default. Generating the xUnit output requires the use of a command line option.
If none of those meet your need, there is an API for reading and processing the output.xml file which you can use to generate a custom report. The format of the output.xml file is very simple and easy to parse, so you can also use just about any xml parsing tool you want to parse the results and generate your own report.
All of this information is available in the Robot Framework User Guide, in a section titled "Created outputs".
Before I invest too much time in it I need to know... Do R Markdown Parameterized Reports require an R Studio Connect server?
If they do that's a little outside of my budget and what I want to get into.
I'd like to be able to send out static .html files into which people can upload their data.csv, and have another .html file generated from my scripts and R Markdown.
Parameterized R Markdown reports are a feature of the rmarkdown package (>= version 0.8). They do not require an RStudio Connect server. However, you will still need a running R process somewhere to render the report from the provided parameters.
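For reference, parameters are declared in the document's YAML front matter; the title, output format, and parameter name below are illustrative, not prescribed:

```yaml
---
title: "Data Report"
output: html_document
params:
  data: "data.csv"
---
```

The report is then rendered by an R process, e.g. `rmarkdown::render("report.Rmd", params = list(data = "upload.csv"))`, with `params$data` available inside the document's code chunks. This is why a purely static .html file cannot re-render itself: some server or local R session has to run that call.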
I want to create a custom report. The response format for the SonarQube web service API /api/issues/search is JSON or XML. How can I use that response to create an HTML or CSV file using "unix shell without using command line tools" so that I can use it as a report? Or is there any other, better way to achieve this?
You can generate an HTML file if you run an analysis in preview mode: http://docs.sonarqube.org/pages/viewpage.action?pageId=6947686
It looks as if the SonarQube team has been working hard to not allow people to do this. They appear to want people to purchase an Enterprise Subscription in order to export reports.
An old version of sonar-runner (now called sonar-scanner) had an option to allow local report output. But that feature is "no more supported".
ERROR: The preview mode, along with the 'sonar.analysis.mode' parameter, is no more supported. You should stop using this parameter.
Looks like version 2.4 of Sonar Runner does what you want. If you can find it. Of course they only have 2.5RC1 available on the site now.
Using the following command should work on version 2.4:
sonar-runner -Dsonar.analysis.mode=preview -Dsonar.issuesReport.html.enable=true
There are at least two open-source projects that query the SonarQube API to generate reports in various formats.
https://github.com/cnescatlab/sonar-cnes-report/tree/dev (Java)
https://github.com/soprasteria/sonar-report (JavaScript/Node)
At the time of writing both are active.
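If all you need is a flat CSV, the /api/issues/search response can also be converted directly; the sketch below uses a hand-written sample response whose field names mirror the real API but are trimmed for illustration (the real endpoint paginates, so a full export would loop over pages):

```python
import csv
import io
import json

# Shape modeled on an /api/issues/search response; fields trimmed and
# values invented for illustration.
RESPONSE = json.loads("""
{"issues": [
  {"key": "AX1", "severity": "MAJOR", "component": "app:src/main.c",
   "line": 42, "message": "Remove this unused variable."},
  {"key": "AX2", "severity": "MINOR", "component": "app:src/util.c",
   "line": 7, "message": "Rename this constant."}
]}
""")

# Write the issues out as CSV, one row per issue.
buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["key", "severity", "component", "line", "message"]
)
writer.writeheader()
for issue in RESPONSE["issues"]:
    writer.writerow(issue)

print(buf.getvalue())
```

In a real pipeline the JSON would come from an authenticated HTTP request rather than an embedded string, and `buf` would be a file opened on disk.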
I am trying to find a reasonable approach to getting a code coverage report for code that is called from within a test via HTTP. Basically I am testing my own API the way it is supposed to be called but because of that PHPUnit/Xdebug are unaware of the execution of the code within the same codebase.
Basically what I want to achieve is already done using the PHPUnit Selenium extension but I don't run Selenium, I call the code through an OAuth2 Client which in turn uses curl.
Is it possible to call my API with a GET parameter that triggers a code coverage report, and to have PHPUnit read that report and merge it with the other code coverage? Is there a project that already does that, or do I have to resort to writing my own PHPUnit extension?
OP says the problem is that Xdebug-based code coverage collection won't/can't collect coverage data because Xdebug isn't enabled in all (PHP) processes that execute the code.
There would seem to only be two ways out of this.
1) Find a way to enable Xdebug in all processes invoked. I don't know how to do this, but I would expect there to be some configuration parameter for the PHP interpreter to cause this. I also can't speak to whether separate Xdebug-based coverage reports can be merged into one. On the face of it, the raw coverage data is abstractly just a set of "this location got executed" signals, so merging should just be a set union. How these sets are collected and encoded may make this more problematic.
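The set-union idea can be sketched abstractly; the example below models each run's coverage as a mapping from file to a set of executed line numbers (file names and lines are made up, and real Xdebug/php-code-coverage data would first need decoding into this shape):

```python
# Coverage from two separate runs, as {file: set(executed line numbers)};
# paths and line numbers are invented for illustration.
run_a = {"api.php": {10, 11, 12}, "auth.php": {5}}
run_b = {"api.php": {12, 13}, "db.php": {1, 2}}

# Merging is a per-file set union: a line counts as covered if any
# run executed it.
merged = {}
for run in (run_a, run_b):
    for path, lines in run.items():
        merged.setdefault(path, set()).update(lines)

print(sorted(merged["api.php"]))  # -> [10, 11, 12, 13]
```

This is the easy half of the problem; the hard half, as noted above, is getting every PHP process to emit such data in the first place.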
2) Find a coverage solution that doesn't involve Xdebug, so that whether Xdebug is enabled or not is irrelevant. My company's (see bio) PHP Test Coverage Tool does not use Xdebug, so it can collect the test coverage data you want without an issue. You can download it and try it; there's a built-in example of test coverage collection triggered exactly by HTTP requests. The tool has a built-in ability to merge separate test-coverage runs into an integrated result. (I'd provide a direct link, but some SO people are virulently against this.)