I am using PHPUnit for functional tests, with the --log-junit option to generate results in JUnit XML format. I then use Phing to read this XML and generate an HTML report. The report is fine and neat. However, I have two questions:
Can I also show the results graphically in the same JUnit HTML report file (generated by Phing)? A pie chart, or any other chart, of passed versus failed tests?
The JUnit summary generated with --log-junit shows the test times in seconds, which is not easily readable when the numbers are large. Can I convert this to minutes with a command-line option? Is there any other way to do it?
I am trying to do this without using Jenkins.
Please share if you know something about this.
Thanks.
What you can do is use PHPUnit to generate an HTML report for test coverage as well. It's similar to the JUnit log:
phpunit --log-junit results/phpunit/junit.xml --coverage-html=results/phpunit/coverage -c tests/phpunit.xml
Then you can use Jenkins Post-build Actions:
Publish JUnit test result report
Publish HTML reports
This way you will have the test results and also a very useful code coverage report.
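On the two original questions (chart data and minutes instead of seconds): the JUnit XML that --log-junit emits is easy to post-process yourself. A minimal Python sketch, under the assumption of a standard junit.xml where the top-level testsuite elements carry aggregated tests/failures/errors/time attributes (the sample data below is illustrative):

```python
import xml.etree.ElementTree as ET

def summarize(junit_xml: str):
    """Return (passed, failed, minutes) from a JUnit XML string."""
    root = ET.fromstring(junit_xml)
    # PHPUnit wraps everything in <testsuites>; the top-level <testsuite>
    # elements already aggregate the counts of any nested suites.
    suites = [root] if root.tag == "testsuite" else list(root.findall("testsuite"))
    tests = sum(int(s.get("tests", 0)) for s in suites)
    bad = sum(int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites)
    seconds = sum(float(s.get("time", 0.0)) for s in suites)
    return tests - bad, bad, seconds / 60.0

sample = """<testsuites>
  <testsuite name="Functional" tests="5" failures="1" errors="0" time="150.0"/>
</testsuites>"""

passed, failed, minutes = summarize(sample)
print(passed, failed, round(minutes, 2))  # 4 1 2.5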
How can I run testthat in 'auto' mode such that when I'm editing files in my R folder, only specific tests are re-run?
I have a lot of tests and some are slower than others. I need to be able to run specific tests or else I'll be waiting for up to 15 minutes for my test suite to complete. (I'd like to make the test suite faster, but that's not a realistic option right now.)
Ideally, I'd like to specify a grep expression to select the tests I want. In the JavaScript world, MochaJs and Jest both support grepping to select tests by name or by file.
Alternatively, I'd be OK with being able to specify a file directly - as long as I can do it with "auto test" support.
Here's what I've found so far with testthat:
testthat::auto_test_package runs everything at first, but only re-runs a specific test file if you edit that test file. However, if you edit any code in the R folder, it re-runs all tests.
testthat::auto_test accepts a path to a directory of test-files to test. However, testthat doesn't seem to support putting tests into different subdirectories if you want to use devtools::test or testthat::auto_test_package. Am I missing something?
testthat::test_file can run the tests from one file, but it doesn't support "auto" re-running the tests with changes.
testthat::test_dir has a filter argument, but it only filters files, not tests; it also doesn't support "auto" re-running tests.
Versions:
R: 3.6.2 (2019-12-12)
testthat: 2.3.1
Addendum
I created a simple repo to demo the problem: https://github.com/generalui/select_testthat_tests
If you open that, run:
renv::restore()
testthat::auto_test_package()
It takes forever because one of the tests is slow. If I'm working on other tests, I want to skip the slow ones and run only the tests I've selected. Grepping for tests is a standard feature of test tools, so I'm sure R must have a way. testthat::test_dir has a filter option, but it filters files; how do you filter on test names, and how do you filter with auto_test_package? I just can't find it.
How do you do something like this in R:
testthat::auto_test_package(filter = 'double_it')
And have it run:
"double_it(2) == 4"
"double_it(3) == 6"
BUT NOT
"work_hard returns 'wow'"
Thanks!
I ran code coverage in QuestaSim and got a UCDB file as output. I need to exclude the coverage of some modules that connect to the top module; I don't need those modules to be covered, and excluding them will improve the coverage report.
How can I do this without running the simulation again?
Thanks.
I used the commands shown below:
coverage exclude -du <design_unit>
or
coverage exclude -srcfile <design_file>
Reference: QuestaSim User Manual
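To apply exclusions like these to an already-saved UCDB without re-simulating, one common flow is to open the UCDB in coverage view mode, exclude, save, and re-report. A sketch only, with placeholder file and design-unit names, since exact command spellings can vary between Questa versions:

```tcl
# Run inside the Questa shell (all names below are placeholders)
vsim -viewcov results.ucdb            ;# open the saved UCDB in coverage view mode
coverage exclude -du my_sub_module    ;# exclude a whole design unit
coverage exclude -srcfile my_file.sv  ;# or exclude by source file
coverage save results_excluded.ucdb   ;# persist the exclusions in a new UCDB
coverage report -summary              ;# regenerate the report without re-running
```

Check the QuestaSim User Manual for your version before relying on these exact options.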
I've got a Bookdown book for which I'd like to build a GitBook site as well as PDF and EPUB downloads. I will use Travis to build all 3 outputs, and the PDF and EPUB will be available for download from the GitBook site.
The bookdown-demo calls bookdown::render_book once for each output in _build.sh.
However, according to the logs in RStudio, the Build Book button, when building All Formats, uses rmarkdown::render_site(encoding = 'UTF-8') to build all outputs in a single command.
I'd like to ensure what happens on my CI server is exactly what happens in my IDE, so it seems like I should have Travis call rmarkdown::render_site rather than several invocations of bookdown::render_book as is done by the bookdown-demo. However, Yihui is the expert, and he has chosen to use the latter approach.
So, my question: what is the best script to invoke on a continuous integration server like Travis when multiple outputs will be built?
In bookdown projects, the two usually don't make a difference, because rmarkdown::render_site() eventually calls bookdown::render_book() to render your book. Feel free to use either one.
The only exception is when your index.Rmd does not contain the field site: bookdown::bookdown_site. In that case, rmarkdown::render_site() won't work, because it doesn't know this is supposed to be a bookdown project.
BTW, to render all output formats with bookdown::render_book(), you can use the argument output_format = 'all'.
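So on CI a single call is enough. A minimal Travis sketch under that assumption (the index.Rmd file name and the overall .travis.yml layout are illustrative, not prescriptive):

```yaml
language: r
cache: packages
script:
  # One invocation builds GitBook, PDF, and EPUB together
  - Rscript -e "bookdown::render_book('index.Rmd', output_format = 'all')"
```

If your index.Rmd contains site: bookdown::bookdown_site, calling rmarkdown::render_site(encoding = 'UTF-8') in the script instead would do the same thing, matching the RStudio Build Book button exactly.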
I need to find code coverage metric values for my Qt-based GUI code. Please suggest a tool that would allow me to create test cases and generate the coverage values.
Thanks,
Nayan
gcov can be used to get statement and branch coverage.
Reference: Qt and gcov, coverage files are not generated
Another tool is TUG.
Reference: http://pedromateo.github.io/tug_qt_unit_testing_fw/
I have written a series of testthat tests. One test has the side effect of creating a sqlite3 table, and the rest of the tests rely on that table. Is there a way to force this one test to run before any of the others?
If you are using test_dir or test_package (otherwise you can just put the tests in the same file, after the sqlite test), you can put the test that generates the table in its own file and use naming conventions to control execution order. For example, inside tests/run.R you could have:
library(testthat)
test_file("tests/testthat/myspecialfile.R") # runs first; its name doesn't start with "test"
test_dir("tests/testthat/") # will run every file with a name starting with `test`