I am testing an R package called eutradeflows on Travis. The package contains tests written with testthat, and I would like to see the output of devtools::test() in the Travis log.
There was a line in the main Travis log saying:
Status: 4 NOTEs
See ‘/home/travis/build/stix-global/eutradeflows/eutradeflows.Rcheck/00check.log’
for details
From this answer, I learned that it's possible to display a file in the Travis log. In .travis.yml I have asked Travis to print that file after the tests:
- cat /home/travis/build/stix-global/eutradeflows/eutradeflows.Rcheck/00check.log
But it doesn't contain the result of testthat tests.
How can I display the output of testthat tests in Travis?
This is particularly important since I have skip instructions in the tests and I would like to know which tests have been skipped.
To display the result of testthat tests, add this to .travis.yml:
r_binary_packages:
  - devtools
  - roxygen2
after_success:
  - Rscript -e 'devtools::install(); devtools::test()'
I used a slightly heavier-weight fix to ensure that build and test only happen once. In the build matrix I put the following in for the script:
script: |
  R CMD build .
  R CMD INSTALL RMINC*.tar.gz
  R CMD check --as-cran --no-install RMINC*.tar.gz
  cat RMINC.Rcheck/00check.log
  Rscript -e "\
    testr <- devtools::test(); \
    testr <- as.data.frame(testr); \
    if(any(testr\$error) || any(testr\$warning > 0)) \
      stop('Found failing tests') \
  "
  pass=$?
  if [[ $pass -ne 0 || -n $(grep -i "WARNING\|ERROR" RMINC.Rcheck/00check.log) ]]; then
    (exit 1)
  else
    (exit 0)
  fi
This runs R CMD check, properly displays the build output, and fails on warnings or errors in either the check or test phase.
I have a '.js' script that I usually run from the terminal with the command node script.js. Since this is part of a process where I first do some data analysis in R, I want to avoid the manual step of opening the terminal and typing the command by simply having R do it for me. My goal is something like this:
...R analysis
write.csv(df, "data.csv")
system('node script.js')
However, when I use that specific code, I get the error:
sh: 1: node: not found
Warning message:
In system("node script.js") : error in running command
Of course, the same command runs without problem if I type it directly in the terminal.
About my Software
I am using:
A Linux computer running Pop!_OS
RStudio 2021.09.1+372 "Ghost Orchid"
R version 4.0.4.
The error message node: not found indicates that it couldn't find the program node. It's likely in PATH in your terminal's shell, but not in system()'s shell (sh).
In your terminal, locate node by executing which node. This will show the full path to the executable. Use that full path in system() instead.
Alternatively, run echo $PATH in your terminal, and run system('echo $PATH') or Sys.getenv('PATH') in R. Add any missing directories to R's path with Sys.setenv(PATH = <your new path string>).
Note that system2() is recommended over system() these days, though for reasons unimportant to your case. See ?system and ?system2 for a comparison.
Examples
Terminal
$ which node
/usr/bin/node
R
system('/usr/bin/node script.js')
# alternatively:
system2('/usr/bin/node', c('script.js'))
or adapt your PATH permanently:
Terminal
% echo $PATH
/usr/local/bin:/home/caspar/programs:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
R
> Sys.getenv('PATH')
[1] "/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Applications/RStudio.app/Contents/MacOS/postback"
> Sys.setenv(PATH = "/home/caspar/programs:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Applications/RStudio.app/Contents/MacOS/postback")
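To diagnose the mismatch described above, it can help to check from the terminal where node lives and which directories a given shell actually searches; a minimal sketch (node may legitimately be absent, hence the fallback message):

```shell
# Full path of the node executable, if this shell can find it:
command -v node || echo "node not on PATH"
# The directories this shell searches, in order (one per line):
echo "$PATH" | tr ':' '\n'
```

Comparing this list with the output of Sys.getenv('PATH') in R shows exactly which directory is missing on the R side.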
I am trying to add Codecov support via library(covr) to my personal R package sesh.
When I check locally, the coverage tests run and report without incident:
covr::package_coverage()
sesh Coverage: 68.75%
R/executeDevtoolDocument.R: 0.00%
R/sesh.R: 69.23%
But when it runs on Travis, it encounters an error for a missing token:
$ Rscript -e 'covr::codecov()'
Error in if (nzchar(token)) { : argument is of length zero
Calls: <Anonymous>
Execution halted
The R CMD check runs successfully on Travis.
The contents of my .travis.yml:
language: R
matrix:
  include:
    - r: release
      after_success: Rscript -e 'covr::codecov()'
r_github_packages:
  - r-lib/covr
And a link to the most recent Travis report.
I have tried to follow the covr README for getting set up faithfully. The README says Travis is supported without needing CODECOV_TOKEN, so I have not tried to pass one yet.
What am I missing here?
Following is my .travis.yml:
language: r
cache: packages
script:
  - R CMD build .
  - R CMD check *tar.gz
r_github_packages:
  - r-lib/covr
after_success:
  - Rscript -e 'covr::codecov()'
Adding the repository upload token to codecov.yml avoids the error and successfully runs the coverage report.
codecov:
  token: a1c53d1f-266f-47bc-bb23-3b3d67c57b2d
The token can be found under Settings (tab) > General (sidebar) on the Codecov page for the repo (visible only once you are logged in).
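As an alternative to committing the token into codecov.yml, covr can typically pick the token up from the CODECOV_TOKEN environment variable, so it can be stored as a secret CI variable instead of in the repository; a sketch of the upload step (the token value is a placeholder):

```shell
# Export the token from a secret CI variable instead of committing it:
export CODECOV_TOKEN="<your-upload-token>"   # placeholder value
# covr reads CODECOV_TOKEN from the environment when no token argument is given:
Rscript -e 'covr::codecov()'
```

This keeps the token out of version control, which matters because anyone with the token can upload coverage reports for the repo.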
I am writing an R package with the following lint test:
context("Require no code style errors")
library(lintr)
test_that("Package has no lintr errors", {
  lintr::expect_lint_free()
})
The tests pass with devtools::check():
$ Rscript -e "devtools::check()"
...
─ checking tests ...
✔ Running ‘testthat.R’ [61s/65s]
...
0 errors ✔ | 0 warnings ✔ | 0 notes ✔
and the lint-free test fails with devtools::test():
$ Rscript -e "devtools::test()"
...
Testing PosteriorBootstrap
...
✖ | 0 1 | Require no code style errors [6.2 s]
────────────────────────────────────────────────────────────────────────────────
test_lintr.R:5: failure: Package has no lintr errors
Not lint free
tests/testthat/test_anpl.R:112:1: style: Trailing whitespace is superfluous.
^~
...
OK: 20
Failed: 1
Warnings: 0
Skipped: 0
The problem is that GitHub and Travis are set to reject pull requests that fail the tests, and if I run devtools::test() after devtools::check(), all the other tests run twice.
How can I get devtools::check() to run the lintr test?
This problem is a known issue:
This is not a bug in devtools (but maybe in lintr). devtools::check() runs the check in a temporary directory, but lint_package() assumes it is being run in a package directory, therefore there are no source files to be linted. ... you can confirm this with devtools::check(check_dir = "."), which should produce linting failures if devtools::test() does.
The solution proposed, written in May 2015, no longer works. The issue is now locked and closed, so it's unlikely to be addressed.
I suggest running the check and then narrowing the test run to lintr only:
Rscript -e "devtools::check(); devtools::test_file(file = 'testthat/test_lintr.R')"
I am running two pipeline stages on my GitLab CI (homemade Docker container with r-base on Ubuntu 16.04). The only trace of the error is in the codecov step (while the R check is successful). This is the error message and command (on GitLab):
$ Rscript -e 'covr::package_coverage(type="tests", quiet = FALSE)'
(...)
* DONE (mypkg)
Running specific tests for package ‘mypkg’
Running ‘testthat.R’
Error: Failure in `/tmp/RtmpGgElCC/R_LIBS94b18abb4/mypkg/mypkg-tests/testthat.Rout.fail`
As usual, I cannot replicate this error locally. No other message related to the error is shown. Moreover, I cannot find a way to retrieve that log file. Is it possible?
Use a Codecov token:
# your .gitlab-ci.yml, ending with:
- apt-get install --yes git
- R -e 'covr::codecov(token = "yourtoken")'
Get your token from https://codecov.io/your_name/your_project/settings.
See my own implementation at https://gitlab.com/msberends/AMR :)
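As for retrieving the testthat.Rout.fail log that the error message points at: since the file lives under a temporary directory whose name changes on every run, a CI step can search for it and print whatever it finds before the job ends. A sketch (the /tmp prefix matches the path in the error above, but is an assumption about your runner):

```shell
# Print every testthat failure log left behind under /tmp, if any exist;
# '|| true' keeps the step from failing the job when no such file is found:
find /tmp -name '*.Rout.fail' -exec cat '{}' + 2>/dev/null || true
```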
I am building a Qt GUI application via Jenkins. I added 3 build steps:
Building the test executable
Running the test executable
compiling a coverage report with gcovr
For some reason, the shell task for running the test executable stops after execution. Even a simple echo does not run afterwards. The tests are written with Google Test and output xUnit XML files, which are analyzed after the build.
Some tests start the application's user interface, so I installed the Jenkins xvnc plugin to get them to run.
The build tasks are as follows:
Build
cd $WORKSPACE/projectfiles/QMake
sh createbin.sh
Test
cd $WORKSPACE/bin
./Application --gtest_output=xml
Coverage Report
cd $WORKSPACE/projectfiles/QMake/out
gcovr -x -o coverage.xml
Now, an echo at the end of the first build task is correctly printed, but an echo at the end of the second is not. The third build task is therefore not even run, although the Google Test output is visible. I thought that maybe the problem is that some of the Google Tests fail, but why would the script stop executing just because the tests fail?
Maybe someone can give me a hint on why the second task stops.
Edit
The console output looks like this:
Updating svn://repo/ to revision '2012-11-15T06:43:15.228 -0800'
At revision 2053
no change for svn://repo/ since the previous build
Starting xvnc
[VG5] $ vncserver :10
New 'ubuntu:10 (jenkins)' desktop is ubuntu:10
Starting applications specified in /var/lib/jenkins/.vnc/xstartup
Log file is /var/lib/jenkins/.vnc/ubuntu:10.log
[VG5] $ /bin/sh -xe /tmp/hudson7777833632767565513.sh
+ cd /var/lib/jenkins/workspace/projectfiles/QMake
+ sh createbin.sh
... Compiler output ...
+ echo Build Done
Build Done
[VG5] $ /bin/sh -xe /tmp/hudson4729703161621217344.sh
+ cd /var/lib/jenkins/workspace/VG5/bin
+ ./Application --gtest_output=xml
Xlib: extension "XInputExtension" missing on display ":10".
[==========] Running 29 tests from 8 test cases.
... Test output ...
3 FAILED TESTS
Build step 'Execute shell' marked build as failure
Terminating xvnc.
$ vncserver -kill :10
Killing Xvnc4 process ID 1953
Recording test results
Skipping Cobertura coverage report as build was not UNSTABLE or better ...
Finished: FAILURE
Generally, if one Build Step fails, the rest will not be executed.
Pay attention to this line from your log:
[VG5] $ /bin/sh -xe
The -x makes the shell print each command in console before execution.
The -e makes the shell exit with error if any of the commands failed.
A "fail" in this case would be a return code other than 0 from any of the individual commands.
You can verify this by running this directly on the machine:
./Application --gtest_output=xml
echo $?
If echo $? displays 0, the previous command completed successfully. Anything else is an error code from the previous command (from ./Application), and Jenkins treats it as such.
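The effect of -e can be reproduced in isolation; a minimal sketch showing that a failing command aborts the rest of the script and makes its overall exit code non-zero:

```shell
# Under -e, the shell exits at the first failing command:
sh -ec 'echo before; false; echo after'   # prints "before"; "after" never runs
echo $?                                    # 1 (the exit code of the failed command)
```

This is exactly what happens to the temporary script Jenkins generates for each Build Step.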
Now, there are several things at play here. First is that your second Build Step (essentially a temporary shell script /tmp/hudson4729703161621217344.sh) is set to fail if one command fails (the default behaviour). When the Build Step fails, Jenkins will stop and fail the whole job.
You can fix this particular behaviour by adding set +e to the top of your second Build Step. This will not cause the script (Build Step) to fail due to individual command failure (it will display an error for the command, and continue).
However, the overall result of the script (Build Step) is the exit code of the last command. Since in your OP you only have 2 commands in the script, and the last one is failing, the whole script (Build Step) is still considered a failure, despite the set +e that you've added. Note that if you add an echo as the 3rd command, this would actually work, since the last script command (echo) would be successful; however, this "workaround" is not what you need.
What you need is proper error handling added to your script. Consider this:
set +e
cd $WORKSPACE/bin && ./Application --gtest_output=xml
if ! [ $? -eq 0 ]; then
  echo "Tests failed, however we are continuing"
else
  echo "All tests passed"
fi
Three things are happening in the script:
First, we are telling shell not to exit on failure of individual commands
Then I've added basic error handling in the second line. The && means "execute ./Application if-and-only-if the previous cd was successful". You never know: maybe the bin folder is missing, or whatever else can happen. BTW, the && internally works on the same "error code equals 0" principle.
Lastly, there is now proper error handling for the result of ./Application. If the result is not 0, we report that it failed; otherwise we report that it passed. Note that since the last command is not a (potentially) failing ./Application but an echo from one of the if-else branches, the overall result of the script (Build Step) will be a success (i.e. 0), and the next Build Step will be executed.
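The && short-circuit is easy to verify on its own; a minimal sketch:

```shell
# The right-hand side of && runs only if the left-hand side exited with 0:
true && echo "this runs"
rc=0
false && echo "this never runs" || rc=$?
echo "exit code of the failed && list: $rc"   # 1: && propagates the failure
```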
BTW, you can as well put all 3 of your build steps into a single build step with proper error handling.
Yes... this answer may be a little longer than what's required, but I wanted you to understand how Jenkins and the shell treat exit codes.