Conda build of R package on Windows installing package locally

I am attempting to build a custom R package in Conda on Windows. The source is a local GitHub repo, since the remote repo is private. Everything seems to go fine, but the package ends up being 9 KB in size, and installs on the local machine during build time. That is to say, the installable version that gets uploaded to Anaconda.org doesn't contain anything but the activate and deactivate scripts. So, I'd like to be able to build the package for others to use, but it appears to only be building on my local machine (to the local machine's R library folder, where it already exists!).
From lots of research, I think I need to set the prefix in either the yaml or bld.bat file, but I haven't a clue how to do this. Any help would be greatly appreciated. I am learning a lot about Conda through this process so I hope my question is sufficiently well-defined.
My meta.yaml looks like this:
{% set version = '0.0.0.9000' %}
{% set posix = 'm2-' if win else '' %}
{% set native = 'm2w64-' if win else '' %}

package:
  name: my_package
  version: {{ version|replace("-", "_") }}

source:
  fn: my_package_{{ version }}
  url: C:/_github/subdirectory/my_package

build:
  # If this is a new build for the same version, increment the build number.
  number: 0
  # This is required to make R link correctly on Linux.
  rpaths:
    - lib/R/lib/
    - lib/

requirements:
  build:
    - r-base
    - r-roxygen2
    - r-scales
    - r-jsonlite
    - r-foreign
    - r-ggplot2 >=2.1.0
    - r-ca
    - r-openxlsx
    - r-plotly
  run:
    - r-base
    - r-roxygen2
    - r-scales
    - r-jsonlite
    - r-foreign
    - r-ggplot2 >=2.1.0
    - r-ca
    - r-openxlsx
    - r-plotly

test:
  commands:
    # You can put additional test commands to be run here.
    - $R -e "library('package')"           # [not win]
    - "\"%R%\" -e \"library('package')\""  # [win]
And the bld.bat look like this:
"%R%" CMD INSTALL --build .
if errorlevel 1 exit 1

Related

R-CMD-check GitHub Actions workflow failing on warnings/notes

In the repository of my R package, I set up a GitHub Actions workflow for the R CMD check command, following the examples shown in the usethis package documentation (with the usethis::use_github_actions() command).
I noticed that my workflow is marked as Fail even if only warnings and notes are found (i.e. no errors).
Is there a way to mark runs without errors as a Pass? Like a flag in the .github/workflows/R-CMD-check.yaml file?
The following is a part of my current .yaml file. I tried adding the R_REMOTES_NO_ERRORS_FROM_WARNINGS: true line but the change was ineffective.
name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ubuntu-latest
    env:
      R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
      R_KEEP_PKG_SOURCE: yes
    steps:
      ...
I realized the problem was in the actual part of the file calling the rcmdcheck() function, which is automatically created and uses an already implemented workflow, check-r-package.
Therefore, the problem is solved by modifying the .github/workflows/R-CMD-check.yaml file as follows:
- uses: r-lib/actions/check-r-package@v1
  with:
    error-on: '"error"'
In this way, we can set the arguments to the rcmdcheck::rcmdcheck(...) command which is internally run by r-lib/actions/check-r-package@v1. Under with, you can set the arguments of rcmdcheck(...) as you wish, and thereby modify the internal call to the function.
Anyway, at this link https://github.com/r-lib/actions you can find the arguments/flags you can use in the already-implemented workflows, including the workflows that install dependencies, etc.
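For reference, here is a slightly fuller with: block (a sketch: error-on is the input discussed above, and args is another documented input of the same action, shown here with its usual default value):

```yaml
- uses: r-lib/actions/check-r-package@v1
  with:
    # flags passed through to rcmdcheck::rcmdcheck()
    args: 'c("--no-manual", "--as-cran")'
    # only fail the run on actual errors, not warnings or notes
    error-on: '"error"'
```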

How to detect whether there's an OpenGL device in R?

I'm running R CMD check via a GitHub action for the package I'm currently writing. It runs on a Mac platform which does not have an OpenGL device, so R CMD check fails because I use the rgl package in the examples. I think this will not be a problem for CRAN when I submit the package, since I believe the CRAN platforms all have an OpenGL device, but I would like R CMD check to work with the GitHub action. How could one detect whether there's an OpenGL device? If this is possible, I would change my examples to
if (there_is_openGL) {
  library(rgl)
  ......
}
EDIT
Thanks to user2554330's answer, I found the solution. One has to set the environment variable RGL_USE_NULL=TRUE in the yaml file. Environment variables are defined in the env section. My yaml file is as follows (in fact this is an Ubuntu platform, not a Mac platform):
on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ubuntu-latest
    env:
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
      R_KEEP_PKG_SOURCE: yes
      RGL_USE_NULL: TRUE
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@v1
        with:
          use-public-rspm: true
      - uses: r-lib/actions/setup-r-dependencies@v1
        with:
          extra-packages: rcmdcheck
      - uses: r-lib/actions/check-r-package@v1
I think it's hard to do what you want, because there are several ways rgl initialization can fail: you may not have X11 available, X11 may not support OpenGL on the display you have configured, etc.
If you are always running tests on the same machine, you can probably figure out where it fails and detect that in some other way, but it's easier to tell rgl not to attempt to use OpenGL before loading it.
For testing, the easiest way to do this is to set an environment variable RGL_USE_NULL=TRUE before running R, or from within R before attempting to load rgl. Within an R session you can use options(rgl.useNULL = TRUE) before loading rgl for the same result.
When rgl is not using OpenGL you can still produce displays in a browser using rglwidget(), and there are ways for displays to be updated automatically, which might be useful in RStudio or similar GUIs: use options(rgl.printRglwidget = TRUE, rgl.useNULL = TRUE).
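As a concrete sketch of the advice above (assuming rgl is installed; the key point is that the option must be set before the package is loaded):

```r
# Ask rgl for its null device instead of OpenGL;
# this must happen before library(rgl).
options(rgl.useNULL = TRUE)
library(rgl)

# rgl.useNULL() reports whether the null device is in use.
if (rgl.useNULL()) {
  plot3d(rnorm(100), rnorm(100), rnorm(100))  # draws to the null device
  rglwidget()  # render the scene in a browser/viewer instead of a window
}
```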

'sbt' is not recognized as an internal or external command

sbt-native-packager uses AppVeyor to test the WindowsPlugin for generating msi packages. For the past few days all our builds have started to fail because sbt is no longer found.
This is the appveyor.yml
version: '{build}'
os: Windows Server 2012
install:
  - ps: |
      Add-Type -AssemblyName System.IO.Compression.FileSystem
      if (!(Test-Path -Path "C:\sbt" )) {
        (new-object System.Net.WebClient).DownloadFile(
          'https://dl.bintray.com/sbt/native-packages/sbt/0.13.7/sbt-0.13.7.zip',
          'C:\sbt-bin.zip'
        )
        [System.IO.Compression.ZipFile]::ExtractToDirectory("C:\sbt-bin.zip", "C:\sbt")
      }
  - cmd: SET PATH=C:\sbt\sbt\bin;%JAVA_HOME%\bin;%PATH%
  - cmd: SET SBT_OPTS=-XX:MaxPermSize=2g -Xmx4g
build_script:
  - sbt clean compile
test_script:
  - sbt "test-only * -- -n windows"
  - sbt "scripted universal/dist universal/stage windows/*"
cache:
  - C:\sbt\
  - C:\Users\appveyor\.m2
  - C:\Users\appveyor\.ivy2
An example build can be found here. Has anything changed on AppVeyor's side? We haven't changed anything on ours.
cheers,
Muki
I believe this was another manifestation of this bug, which was fixed over weekend. Could you please try now?

Building R packages with Packrat and AppVeyor

Can someone point me towards a working example where packrat is used with AppVeyor to build an R package? Searching through Google and GitHub, I can't find any packrat-enabled package that uses AppVeyor.
Does the appveyor.yml file need to change? Are there some settings I need to add through the AppVeyor website?
I have a very minimal package (testthat is the only dependency) that broke AppVeyor builds. Here is the code frozen for that commit. Here is the AppVeyor log.
(If this SO question sounds familiar, I'm about to ask a similar question for Travis-CI.)
Yes, the solution here is similar to the same question for Travis-CI.
Here's an example of an appveyor.yml file that will enable you to use packrat packages in your package:
# DO NOT CHANGE the "init" and "install" sections below

# Download script file from GitHub
init:
  ps: |
    $ErrorActionPreference = "Stop"
    Invoke-WebRequest http://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "..\appveyor-tool.ps1"
    Import-Module '..\appveyor-tool.ps1'

install:
  ps: Bootstrap

# Adapt as necessary starting from here

environment:
  global:
    WARNINGS_ARE_ERRORS: 0
    USE_RTOOLS: true

build_script:
  - R -e "0" --args --bootstrap-packrat

test_script:
  - travis-tool.sh run_tests

on_failure:
  - 7z a failure.zip *.Rcheck\*
  - appveyor PushArtifact failure.zip

artifacts:
  - path: '*.Rcheck\**\*.log'
    name: Logs
  - path: '*.Rcheck\**\*.out'
    name: Logs
  - path: '*.Rcheck\**\*.fail'
    name: Logs
  - path: '*.Rcheck\**\*.Rout'
    name: Logs
  - path: '\*_*.tar.gz'
    name: Bits
  - path: '\*_*.zip'
    name: Bits
The important parts that differ from the template are:
environment:
  global:
    WARNINGS_ARE_ERRORS: 0
    USE_RTOOLS: true

build_script:
  - R -e "0" --args --bootstrap-packrat
This enables Rtools for the build, and loads the packrat packages.
It's also important to note that we are excluding - travis-tool.sh install_deps because that would cause the packages you depend on to be downloaded from CRAN, rather than built from your packrat directory.
Here's an example of an appveyor build for a simple R package where this is working: https://ci.appveyor.com/project/benmarwick/javaonappveyortest/build/1.0.21

How to force pkgrepo refresh only one time per highstate?

I have a bunch of packages in a private Debian repository. Following the Salt documentation (http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html), in a Salt state I defined a pkgrepo entry like this:
my-private-repo:
  pkgrepo.managed:
    - humanname: My Deb
    - name: deb <url>....
    - dist: my-repo
    - require_in:
      - pkg: pkg1
      - pkg: pkg2
      - pkg: ...
and in each pkg definition I added the refresh: True stanza:
pkg1:
  pkg:
    - latest
    - fromrepo: my-repo
    - refresh: True
Now, it works in the sense that I get an "apt-get update" before installing (upgrading) each package, but there are quite a few of them (around 20), so I get an update for each one. Is there a way to have apt update just once after the repo state has run?
Helices' and Antstud's answers put me in the right direction. Anyway, in the end I found out some interesting things that might be helpful for others:
"refresh: True" is useless with pkg.latest; it seems 'latest' implies "refresh: True".
What's stated in the SaltStack doc seems not to apply (at least with version 2014.7.1):
require_in:
    Set this to a list of pkg.installed or pkg.latest to trigger the running of apt-get update prior to attempting to install these packages. Setting a require in the pkg will not work for this.
I just added
    - require:
      - pkgrepo: my_repo
to my pkg definition and it's working (making includes less of a mess).
I believe you can just install multiple packages with a single state by using pkgs:. It works for me, even with a custom repository:
install packages:
  pkg:
    - latest
    - fromrepo: my-repo
    - refresh: True
    - pkgs:
      - pkg1
      - pkg2
      ...
You can try to define the pkg list in the pillars for every minion and then get the list in the state.
install packages:
  pkg:
    - latest
    - fromrepo: my-repo
    - refresh: True
    - pkgs:
      {% for pkg in pillar.get('packages', []) %}
      - {{ pkg }}
      {% endfor %}
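For completeness, here is a minimal pillar sketch providing the packages list consumed by the state above (the file path and package names are hypothetical):

```yaml
# e.g. in a pillar .sls file assigned to the minion
packages:
  - pkg1
  - pkg2
```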
