How to detect whether there's an OpenGL device in R?

I'm running R CMD CHECK via a GitHub action for the package I'm currently writing. It is run on a Mac platform which does not have an OpenGL device, so R CMD CHECK fails because I use the rgl package in the examples. I think this will not be a problem for CRAN when I submit the package, as I believe the CRAN platforms all have an OpenGL device, but I would like R CMD CHECK to work with the GitHub action. How can one detect whether there's an OpenGL device? If this is possible, I would change my examples to
if (there_is_openGL) {
  library(rgl)
  ......
}
EDIT
Thanks to user2554330's answer, I found the solution: one has to set the environment variable RGL_USE_NULL=TRUE in the YAML file. Environment variables are defined in the env section. My YAML file is as follows (in fact this is an Ubuntu platform, not a Mac platform):
on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ubuntu-latest
    env:
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
      R_KEEP_PKG_SOURCE: yes
      RGL_USE_NULL: TRUE
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@v1
        with:
          use-public-rspm: true
      - uses: r-lib/actions/setup-r-dependencies@v1
        with:
          extra-packages: rcmdcheck
      - uses: r-lib/actions/check-r-package@v1

I think it's hard to do what you want, because there are several ways rgl initialization can fail: you may not have X11 available, X11 may not support OpenGL on the display you have configured, etc.
If you are always running tests on the same machine, you can probably figure out where it fails and detect that in some other way, but it's easier to tell rgl not to attempt to use OpenGL before loading it.
For testing, the easiest way is to set the environment variable RGL_USE_NULL=TRUE before running R. Within an R session, calling options(rgl.useNULL = TRUE) before loading rgl has the same effect.
When rgl is not using OpenGL you can still produce displays in a browser using rglwidget(), and there are ways for displays to be updated automatically, which might be useful in RStudio or similar GUIs: use options(rgl.printRglwidget = TRUE, rgl.useNULL = TRUE).
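A minimal sketch of that workflow (the plotting calls here are just illustrative examples; any rgl scene works the same way):

```r
# Use the null device so no X11/OpenGL context is required
options(rgl.useNULL = TRUE)
library(rgl)

# Drawing commands go to the null device instead of an on-screen window
open3d()
spheres3d(rnorm(10), rnorm(10), rnorm(10), radius = 0.3)

# Render the current scene as an HTML widget (viewable in a browser,
# RStudio viewer, or embedded in knitr/rmarkdown output)
rglwidget()
```

With `options(rgl.printRglwidget = TRUE)` set as well, printing a scene object triggers the widget display automatically, so interactive sessions behave much like the OpenGL case.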

Related

Using Github cache action to store packages to speed up action run times

I have set up a GitHub action to run the rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "error") function on my R package every time there is a pull request. It takes over 15 minutes each time to download and install all of the packages, and then the unit tests also take a while to finish.
So, I want to try and cache the required packages and dependencies using the GitHub cache action. However, I can't figure out what needs to change from the R example provided.
Here is the GitHub Actions YAML on which I want to cache my dependencies:
name: R

on:
  push:
    branches: [ "main", "dev" ]
  pull_request:
    branches: [ "main", "dev" ]

permissions:
  contents: read

jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        r-version: ['4.1.1']
    env:
      GITHUB_PAT: ${{ secrets.REPO_KEY }}
    steps:
      - uses: actions/checkout@v3
      - name: Set up R ${{ matrix.r-version }}
        uses: r-lib/actions/setup-r@f57f1301a053485946083d7a45022b278929a78a
        with:
          r-version: ${{ matrix.r-version }}
      - name: Install dependencies
        run: |
          install.packages(c("remotes", "rcmdcheck"), type = "binary", dependencies = TRUE)
          remotes::install_deps(dependencies = TRUE, type = "binary")
        shell: Rscript {0}
      - name: Check
        run: rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "error")
        shell: Rscript {0}
The idea is that once the workflow has run for the first time, so long as none of the required packages, dependencies, or the OS change, it can re-use the dependencies from last time and skip a costly part of the run time.
**Further information:** The repository this runs on is private and requires some packages which aren't on CRAN, which adds further complexity. Some are in other private repos, and another is stored here: https://inla.r-inla-download.org/R/stable/src/contrib/INLA_21.02.23.tar.gz
I have tried to use the setup-r-dependencies functionality for this, and while it works just fine on another package, it doesn't seem to work for this package. (We were unable to resolve this despite posting a bounty, see my previous post)
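One way to adapt the generic cache action to this workflow is to cache the user R library, keyed on the DESCRIPTION file. This is a sketch, not a tested recipe: the path assumes R_LIBS_USER is set (r-lib/actions/setup-r does this) and the key scheme is one reasonable choice among many. The step would go after setup-r but before the "Install dependencies" step:

```yaml
- name: Cache R packages
  uses: actions/cache@v3
  with:
    # R_LIBS_USER is where install.packages()/install_deps() put packages
    path: ${{ env.R_LIBS_USER }}
    # Bust the cache whenever the declared dependencies change
    key: ${{ runner.os }}-r-${{ matrix.r-version }}-${{ hashFiles('DESCRIPTION') }}
    restore-keys: |
      ${{ runner.os }}-r-${{ matrix.r-version }}-
```

On a cache hit the install step then only needs to fill in whatever is missing, which is where most of the 15 minutes should be saved. Non-CRAN sources such as the INLA tarball install into the same library, so they are cached the same way.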

R-CMD-check GitHub Actions workflow failing on warnings/notes

In the repository of my R package, I set a GitHub Actions workflow for the R CMD check command, following the examples shown in the usethis package documentation (with the usethis::use_github_actions() command).
I noticed that my workflow is marked as Fail even if only warnings and notes are found (i.e. no errors).
Is there a way to mark runs without errors as a Pass, like a flag in the .github/workflows/R-CMD-check.yaml file?
The following is part of my current .yaml file. I tried adding the line R_REMOTES_NO_ERRORS_FROM_WARNINGS: true, but the change was ineffective.
name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ubuntu-latest
    env:
      R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
      R_KEEP_PKG_SOURCE: yes
    steps:
      ...
I realized the problem was in the actual part of the file calling the rcmdcheck() function, which is automatically created and uses an already implemented workflow, check-r-package.
Therefore, the problem is solved by modifying the .github/workflows/R-CMD-check.yaml file as follows:
- uses: r-lib/actions/check-r-package@v1
  with:
    error-on: '"error"'
In this way, we can set the arguments to the rcmdcheck::rcmdcheck(...) command which is internally run by r-lib/actions/check-r-package@v1. Under with, you can set the arguments of rcmdcheck(...) as you wish and thus modify the internal call to the function. At https://github.com/r-lib/actions you can find the arguments/flags available for the already implemented workflows, including the workflows that install the dependencies, etc.
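For instance, a step passing both check arguments and the error threshold might look like this (the exact set of supported inputs depends on your version of the action, so treat this as a sketch and check the r-lib/actions README):

```yaml
- uses: r-lib/actions/check-r-package@v1
  with:
    # Extra arguments forwarded to rcmdcheck::rcmdcheck()
    args: 'c("--no-manual")'
    # Fail the workflow only on errors, not on warnings or notes
    error-on: '"error"'
```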

Next.js - ERROR Build directory is not writeable on Google Cloud Build

I was trying to automate the deployment of my Next.js application to App Engine using Cloud Build, but it keeps failing at the build phase with:
Error: > Build directory is not writeable. https://err.sh/vercel/next.js/build-dir-not-writeable
I can't seem to figure out what to fix for this.
Here is my current build file; it keeps failing on step 2:
steps:
  # install dependencies
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # build the container image
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # deploy to app engine
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
    env:
      - 'PORT=8080'
      - 'NODE_ENV=production'
timeout: '1600s'
app.yaml:
runtime: nodejs12

handlers:
  - url: /.*
    secure: always
    script: auto

env_variables:
  PORT: 8080
  NODE_ENV: 'production'
Any help would be appreciated.
I can reproduce the same behavior after upgrading to Next version 9.3.3.
Cause
The issue is related to the npm builder image managed by Google: if you use gcr.io/cloud-builders/npm, Google Cloud Build runs your build on an old Node version.
You can find the currently supported versions here:
https://console.cloud.google.com/gcr/images/cloud-builders/GLOBAL/npm?gcrImageListsize=30
As you can see, Google's latest Node version there is 10.10, while the newest Next.js version requires at least Node 10.13.
Solution
Change gcr.io/cloud-builders/npm to

- name: node
  entrypoint: npm

in order to use the official Docker npm image, which runs on Node 12.
After those changes your build will be successful again.
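Putting it together, the corrected cloudbuild.yaml would look roughly like this (the same steps as in the question, with only the npm builder swapped out; a sketch rather than a verified file):

```yaml
steps:
  # install dependencies using the official node image with npm as entrypoint
  - name: node
    entrypoint: npm
    args: ['install']
  # build the Next.js app on a recent Node version
  - name: node
    entrypoint: npm
    args: ['run', 'build']
  # deploy to App Engine
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
timeout: '1600s'
```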
Sidenote
Switching to the official npm image will increase the build duration (at least in my case); it takes around 2 minutes longer than the gcr npm.

Conda build of R package on Windows installing package locally

I am attempting to build a custom R package in Conda on Windows. The source is a local GitHub repo, since the remote repo is private. Everything seems to go fine, but the package ends up being 9 kB in size and installs on the local machine during the build. That is to say, the installable version that gets uploaded to Anaconda.org doesn't contain anything but the activate and deactivate scripts. So I'd like to be able to build the package for others to use, but it appears to only be building on my local machine (to the local machine's R library folder, where it already exists!).
From lots of research, I think I need to set the prefix in either the yaml or bld.bat file, but I haven't a clue how to do this. Any help would be greatly appreciated. I am learning a lot about Conda through this process so I hope my question is sufficiently well-defined.
My meta.yaml looks like this:
{% set version = '0.0.0.9000' %}
{% set posix = 'm2-' if win else '' %}
{% set native = 'm2w64-' if win else '' %}

package:
  name: my_package
  version: {{ version|replace("-", "_") }}

source:
  fn: my_package_{{ version }}
  url: C:/_github/subdirectory/my_package

build:
  # If this is a new build for the same version, increment the build number.
  number: 0
  # This is required to make R link correctly on Linux.
  rpaths:
    - lib/R/lib/
    - lib/

requirements:
  build:
    - r-base
    - r-roxygen2
    - r-scales
    - r-jsonlite
    - r-foreign
    - r-ggplot2 >=2.1.0
    - r-ca
    - r-openxlsx
    - r-plotly
  run:
    - r-base
    - r-roxygen2
    - r-scales
    - r-jsonlite
    - r-foreign
    - r-ggplot2 >=2.1.0
    - r-ca
    - r-openxlsx
    - r-plotly

test:
  commands:
    # You can put additional test commands to be run here.
    - $R -e "library('package')"            # [not win]
    - "\"%R%\" -e \"library('package')\""   # [win]
And the bld.bat looks like this:
"%R%" CMD INSTALL --build .
if errorlevel 1 exit 1
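One thing worth checking, offered as an assumption rather than a confirmed fix: conda-build expects to copy the source into its own work directory before running bld.bat, so that "%R%" CMD INSTALL --build . installs into the conda build prefix rather than your machine's R library. If the local repo is a git checkout, declaring it with git_url instead of a plain url pointing at a directory is the documented way to get that copy step:

```yaml
source:
  # git_url accepts a local path to a git checkout; conda-build clones it
  # into its work directory before running bld.bat
  git_url: C:/_github/subdirectory/my_package
  git_tag: master  # hypothetical; use whatever branch/tag you build from
```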

Building R packages with Packrat and AppVeyor

Can someone point me towards a working example where packrat is used with AppVeyor to build an R package? Searching through Google and GitHub, I can't find any packrat-enabled package that uses AppVeyor.
Does the appveyor.yml file need to change? Are there some settings I need to add through the AppVeyor website?
I have a very minimal package (testthat is the only dependency) that broke AppVeyor builds. Here is the code frozen for that commit. Here is the AppVeyor log.
(If this SO question sounds familiar, I'm about to ask a similar question for Travis-CI.)
Yes, the solution here is similar to the same question for Travis-CI.
Here's an example of an appveyor.yml file that will enable you to use packrat packages in your package:
# DO NOT CHANGE the "init" and "install" sections below

# Download script file from GitHub
init:
  ps: |
    $ErrorActionPreference = "Stop"
    Invoke-WebRequest http://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "..\appveyor-tool.ps1"
    Import-Module '..\appveyor-tool.ps1'

install:
  ps: Bootstrap

# Adapt as necessary starting from here

environment:
  global:
    WARNINGS_ARE_ERRORS: 0
    USE_RTOOLS: true

build_script:
  - R -e "0" --args --bootstrap-packrat

test_script:
  - travis-tool.sh run_tests

on_failure:
  - 7z a failure.zip *.Rcheck\*
  - appveyor PushArtifact failure.zip

artifacts:
  - path: '*.Rcheck\**\*.log'
    name: Logs
  - path: '*.Rcheck\**\*.out'
    name: Logs
  - path: '*.Rcheck\**\*.fail'
    name: Logs
  - path: '*.Rcheck\**\*.Rout'
    name: Logs
  - path: '\*_*.tar.gz'
    name: Bits
  - path: '\*_*.zip'
    name: Bits
The important parts that differ from the template are:
environment:
  global:
    WARNINGS_ARE_ERRORS: 0
    USE_RTOOLS: true

build_script:
  - R -e "0" --args --bootstrap-packrat
This enables Rtools for the build, and loads the packrat packages.
It's also important to note that we are excluding travis-tool.sh install_deps, because that would cause the packages you depend on to be downloaded from CRAN rather than built from your packrat directory.
Here's an example of an appveyor build for a simple R package where this is working: https://ci.appveyor.com/project/benmarwick/javaonappveyortest/build/1.0.21
