R-CMD-check GitHub Actions workflow failing on warnings/notes

In my R package's repository, I set up a GitHub Actions workflow for the R CMD check command, following the examples in the usethis package documentation (via the usethis::use_github_actions() command).
I noticed that my workflow is marked as failing even when only warnings and notes are found (i.e. no errors).
Is there a way to mark runs without errors as a pass, like a flag in the .github/workflows/R-CMD-check.yaml file?
The following is part of my current .yaml file. I tried adding the R_REMOTES_NO_ERRORS_FROM_WARNINGS: true line, but the change was ineffective.
name: R-CMD-check
jobs:
  R-CMD-check:
    runs-on: ubuntu-latest
    env:
      R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
      R_KEEP_PKG_SOURCE: yes
    steps:
      ...

I realized the problem was in the part of the file that actually calls the rcmdcheck() function, which is created automatically and uses the pre-built check-r-package action.
Therefore, the problem is solved by modifying the .github/workflows/R-CMD-check.yaml file as follows:
- uses: r-lib/actions/check-r-package@v1
  with:
    error-on: '"error"'
In this way, we can set the arguments of the rcmdcheck::rcmdcheck(...) call that r-lib/actions/check-r-package@v1 runs internally. Under with, you can set the arguments of rcmdcheck(...) as you wish and thereby modify the internal call to the function.
Anyway, at https://github.com/r-lib/actions you can find the arguments/flags you can use with the pre-built workflows, including the workflows that install the dependencies, etc.
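For example, a minimal sketch (the args value shown is illustrative, not taken from my original workflow) forwarding both the check flags and the failure threshold:

- uses: r-lib/actions/check-r-package@v1
  with:
    # forwarded to rcmdcheck::rcmdcheck(args = ...)
    args: 'c("--no-manual", "--as-cran")'
    # fail the job only on errors, not on warnings or notes
    error-on: '"error"'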

Related

Using Github cache action to store packages to speed up action run times

I have set up a GitHub action to run rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "error") on my R package every time there is a pull request. It's taking over 15 minutes each time to download and install all of the packages, and then it needs to run through the unit tests, which also take a while to finish.
So, I want to try to cache the required packages and dependencies using the GitHub cache action. However, I can't figure out what needs to change from the R example provided.
Here is the GitHub Action YAML in which I want to cache my dependencies:
name: R
on:
  push:
    branches: [ "main", "dev" ]
  pull_request:
    branches: [ "main", "dev" ]
permissions:
  contents: read
jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        r-version: ['4.1.1']
    env:
      GITHUB_PAT: ${{secrets.REPO_KEY}}
    steps:
      - uses: actions/checkout@v3
      - name: Set up R ${{ matrix.r-version }}
        uses: r-lib/actions/setup-r@f57f1301a053485946083d7a45022b278929a78a
        with:
          r-version: ${{ matrix.r-version }}
      - name: Install dependencies
        run: |
          install.packages(c("remotes", "rcmdcheck"), type = "binary", dependencies = TRUE)
          remotes::install_deps(dependencies = TRUE, type = "binary")
        shell: Rscript {0}
      - name: Check
        run: rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "error")
        shell: Rscript {0}
The idea is that once the workflow has run for the first time, as long as none of the required packages, dependencies, or the OS change, it can reuse the dependencies from the previous run and skip a costly part of the run time.
**Further information:** The repository this runs on is private and requires some packages which aren't on CRAN, which adds further complexity. Some come from other private repos, and another is stored here: https://inla.r-inla-download.org/R/stable/src/contrib/INLA_21.02.23.tar.gz
I have tried to use the setup-r-dependencies functionality for this; while it works just fine on another package, it doesn't seem to work for this one. (We were unable to resolve this despite posting a bounty; see my previous post.)
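For what it's worth, a minimal sketch of the cache step, placed between the setup-r and install steps (assuming setup-r sets R_LIBS_USER, which recent versions of the action do; hashing DESCRIPTION as the cache key is an illustrative choice):

      - name: Cache R packages
        uses: actions/cache@v3
        with:
          # the package library that the install step populates
          path: ${{ env.R_LIBS_USER }}
          # invalidate when the OS, R version, or DESCRIPTION changes
          key: ${{ runner.os }}-r-${{ matrix.r-version }}-${{ hashFiles('DESCRIPTION') }}
          restore-keys: ${{ runner.os }}-r-${{ matrix.r-version }}-

Once populated, later runs restore the library and the install step becomes close to a no-op; the non-CRAN packages (INLA, the private repos) are cached the same way once they have been installed into that library.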

How to detect whether there's an OpenGL device in R?

I'm running R CMD check via a GitHub action for the package I'm currently writing. It is run on a Mac platform which does not have an OpenGL device, so R CMD check fails because I use the rgl package in the examples. I think this will not be a problem for CRAN when I submit the package, since I believe the CRAN platforms all have an OpenGL device, but I would like R CMD check to work with the GitHub action. How can one detect whether there's an OpenGL device? If this is possible, I would change my examples to
if (there_is_openGL) {
  library(rgl)
  ......
}
EDIT
Thanks to user2554330's answer, I found the solution: one has to set the environment variable RGL_USE_NULL=TRUE in the YAML file. Environment variables are defined in the env section. My YAML file is as follows (in fact this is an Ubuntu platform, not a Mac platform):
on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]
name: R-CMD-check
jobs:
  R-CMD-check:
    runs-on: ubuntu-latest
    env:
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
      R_KEEP_PKG_SOURCE: yes
      RGL_USE_NULL: TRUE
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@v1
        with:
          use-public-rspm: true
      - uses: r-lib/actions/setup-r-dependencies@v1
        with:
          extra-packages: rcmdcheck
      - uses: r-lib/actions/check-r-package@v1
I think it's hard to do what you want, because there are several ways rgl initialization can fail: you may not have X11 available, X11 may not support OpenGL on the display you have configured, etc.
If you are always running tests on the same machine, you can probably figure out where it fails and detect that in some other way, but it's easier to tell rgl not to attempt to use OpenGL before loading it.
For testing, the easiest way to do this is to set an environment variable RGL_USE_NULL=TRUE before running R, or from within R before attempting to load rgl. Within an R session you can use options(rgl.useNULL = TRUE) before loading rgl for the same result.
When rgl is not using OpenGL you can still produce displays in a browser using rglwidget(), and there are ways for displays to be updated automatically, which might be useful in RStudio or similar GUIs: use options(rgl.printRglwidget = TRUE, rgl.useNULL = TRUE).
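If you only want the null device during the check itself, GitHub Actions also accepts step-level env mappings; a minimal sketch using the same action as above:

      - uses: r-lib/actions/check-r-package@v1
        env:
          # scoped to this step only, instead of the whole job
          RGL_USE_NULL: TRUE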

Azure Pipeline Error: vstest.console process failed to connect to testhost process

I am converting my Azure pipeline to a YAML pipeline. When I trigger the build, it fails on the unit test step with the error below:
[error]vstest.console process failed to connect to testhost process after 90 seconds. This may occur due to machine slowness, please set environment variable VSTEST_CONNECTION_TIMEOUT to increase timeout.
I could not find a way to add the VSTEST_CONNECTION_TIMEOUT value anywhere. Could you please help me with this?
Here is the sample .yml I am using:
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    testFiltercriteria: 'TestCategory=Unit'
    runSettingsFile: XYZ.Tests/codecoverage.runsettings
    codeCoverageEnabled: true
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true
I would recommend using the DotNetCoreCLI task instead. It is shorter, clearer, and straightforward (it has the "same" effect as executing dotnet test in your console):
- task: DotNetCoreCLI@2
  displayName: 'Run tests'
  inputs:
    command: 'test'
Even the Microsoft documentation page uses the DotNetCoreCLI task.
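If you also want to keep the category filter and runsettings from the original VSTest step, a sketch might look like this (the projects glob is an assumed pattern; adjust it to your layout):

- task: DotNetCoreCLI@2
  displayName: 'Run tests'
  inputs:
    command: 'test'
    # assumed glob for the test projects
    projects: '**/*Tests.csproj'
    # mirrors the original testFiltercriteria and runSettingsFile
    arguments: '--filter TestCategory=Unit --settings XYZ.Tests/codecoverage.runsettings'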
If the VSTest task runs successfully in your classic pipeline, it should work in the YAML pipeline too. Check the agent pool selection and the task's settings to make sure they are the same in both pipelines.
1. Your unit tests seem to be running on VS2017 in the YAML pipeline. You can try running the pipeline on a windows-latest agent to run the tests on VS2019.
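A minimal pool-selection sketch (assuming a Microsoft-hosted agent):

pool:
  vmImage: 'windows-latest'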
If your pipeline has to run on a specific agent, you can use the VisualStudioTestPlatformInstaller task to download the latest version, then set vsTestVersion: toolsInstaller for the VSTest task. See below:
- task: VisualStudioTestPlatformInstaller@1
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    ...
    ...
    vsTestVersion: toolsInstaller
2. You can also check out the solution in this thread, which mentions deleting the entire solution folder and re-cloning the project. If you are running your pipeline on a self-hosted agent, you can try using checkout in the YAML pipeline to clean the source folder before cloning your repo. See below:
steps:
- checkout: self
  clean: true
You can also try adding the lines below to your codecoverage.runsettings file under the <CodeCoverage> element to exclude the Microsoft assemblies, as mentioned in the thread.
<ModulePath>.*microsoft\.codeanalysis\.csharp\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.csharp\.workspaces\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.workspaces\.dll$</ModulePath>
3. You can also try updating 'Microsoft.NET.Test.Sdk' to the latest version in your test projects.
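As for the original VSTEST_CONNECTION_TIMEOUT question: Azure Pipelines exposes step-level env mappings to the task process, so a hedged sketch (the 900-second value is an arbitrary example) would be:

- task: VSTest@2
  displayName: 'Test'
  env:
    # arbitrary example value, in seconds
    VSTEST_CONNECTION_TIMEOUT: '900'
  inputs:
    ...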

Concourse unauthorized error pushing to Artifactory using docker-image-resource

I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the Docker CLI and am able to push using the same user/pass as specified within destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones, and I am.
Question #1: Is there any other known cause for this error? I've scoured the resource's GitHub page without finding anything. Any ideas why I may be getting the permissions error?
Without an answer to the above question, I'd like to dig deeper into troubleshooting the problem. To do so, I use fly hijack to get a shell in the corresponding container. I notice that Docker is installed in the container, so the next step, I think, would be to do a docker import on the tarball for the image I'm trying to push and then a docker push to push it to the repo. When attempting to run the import I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Question #2: Why can't I use Docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo from the pipeline (I don't think so)? Is it because the container isn't running with privilege? I thought the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))
jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
Arghhhh. I found after much troubleshooting that destination_password wasn't being picked up properly due to special characters and a lack of quotes. I fixed the issue by properly quoting the password within the YAML file included with the --load-vars flag.
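In other words, in the vars file passed via --load-vars, quote any value containing special characters; a sketch with placeholder credentials:

# vars.yml (placeholders, not real credentials)
destination_username: ci-user
destination_password: "p@ss:w0rd!"  # quoting keeps YAML from mis-parsing special characters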

upgrade all packages in a minion using state

How can I write a Salt state that will upgrade all packages installed on a system (for a lab), on both CentOS and Ubuntu?
I have an upgrades.sls that has the following:
upgrades:
  pkg.upgrade:
    - name: '*'
But it returns:
State 'pkg.upgrade' was not found in SLS 'dfars.patching'
Reason: 'pkg.upgrade' is not available.
Do I have to handle each OS separately, using yum for CentOS and apt for Ubuntu?
You can use pkg.uptodate for this:
update_pkg:
  pkg.uptodate:
    - refresh: True
You are getting the error because pkg.upgrade is an execution module and you are trying to run it from a state file. Execution modules are the functions called by the salt command and cannot be executed from states directly.
You can, however, use module.run, which allows execution-module calls to be made via states:
upgrades:
  module.run:
    - name: pkg.upgrade
....
Another way is to make use of something from salt.states.pkg, such as pkg.uptodate:
salt.states.pkg.uptodate(name, refresh=False, pkgs=None, **kwargs)
Verify that the system is completely up to date.
name: The name has no functional value and is only used as a tracking reference.
refresh: Refresh the package database before checking for new upgrades.
pkgs: A list of packages to upgrade.
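To the original CentOS-vs-Ubuntu question: pkg is a virtual module that resolves to the right provider on each minion (yum/dnf on CentOS, apt on Ubuntu), so a single state should cover both; a minimal sketch:

# one state for both distros; pkg maps to the platform's package manager automatically
upgrade_all_packages:
  pkg.uptodate:
    - refresh: True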
