How to enable linting in an R package in a GitLab CI/CD pipeline - r

I have created a CI pipeline in GitLab for an R package. I need to capture the lint output and fail the job if there is any lint error. I'm unable to read the output of the lintr command.
image: r-base:4.1.2

stages:
  - LintR

LintR:
  stage: LintR
  script:
    - cd ..
    - R -e "capture.output(lintr::lint_package(\"./test-r/\"), file=\"./lint_output.txt\")"
    - cd isp-r && mv ../lint_output.txt .
  artifacts:
    paths:
      - ./lint_output.txt
    when: always
How do I capture this output in GitLab CI/CD?

Try the following:
image: r-base:4.1.2

stages:
  - LintR

LintR:
  stage: LintR
  script:
    # redirect lintr stdout and stderr to a file
    - CI="" R -e "lintr::lint_package()" &> lint_output.txt
    # fail the job if the file is not empty, i.e. if any lints were reported
    - '[ ! -s lint_output.txt ]'
  artifacts:
    paths:
      - lint_output.txt
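As an alternative (a sketch, not part of the original answer), you can skip the output-file check and let R set the exit status itself, so the job fails whenever lintr reports anything:

LintR:
  stage: LintR
  script:
    # print any lints, then exit non-zero if lintr found something
    - R -e "lints <- lintr::lint_package(); print(lints); quit(status = if (length(lints) > 0) 1 else 0)"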

Related

GitLab CI fails for R package

I am building an R package on GitLab and I am trying to get the GitLab CI to work. The issues are:
- devtools::check fails if there is an error, warning or note; I only want it to fail on errors.
- Deploying pkgdown to GitLab Pages doesn't seem to work.
Below is the .gitlab-ci.yml I am using. I used the R package template from RStudio to test it.
# .gitlab-ci.yml
image: methodsconsultants/r-packaging

variables:
  DOCKER_DRIVER: overlay2
  PKGNAME: "test"
  R_LIBS_USER: "$CI_PROJECT_DIR/ci/lib"
  CHECK_DIR: "$CI_PROJECT_DIR/ci/logs"
  BUILD_LOGS_DIR: "$CI_PROJECT_DIR/ci/logs/$PKGNAME.Rcheck"

cache:
  paths:
    - $R_LIBS_USER
    - vendor/apt

stages:
  - build
  - check
  - test
  - pages

before_script:
  - mkdir -p vendor/apt
  - apt-get --allow-releaseinfo-change update -qq
  - apt-get remove -y libgcc-8-dev
  - apt-get -o dir::cache::archives="vendor/apt" install -y libcairo2-dev -qq

buildbinary:
  stage: build
  script:
    - r -e 'devtools::build(binary = TRUE)'

checkErrors:
  stage: check
  script:
    - r -e 'if (!identical(devtools::check(document = FALSE, args = "--no-tests")[["errors"]], character(0))) stop("Check with Errors")'

unittests:
  stage: test
  script:
    - r -e 'if (any(as.data.frame(devtools::test())[["failed"]] > 0)) stop("Some tests failed.")'

pages:
  script:
    - R -e "pkgdown::build_site()"
  artifacts:
    paths:
      - public
  only:
    - master
Note
Everything works locally: devtools::check() produces warnings and notes but no errors, pkgdown builds fine, and the tests pass.
GitLab CI runs; the build and test stages pass fine, but the pipeline then fails on the devtools::check() error message.
I tried GitLab Pages by removing the check() step; the pipeline finishes fine, but under Settings > Pages I can't see anything.
Worked it out:
devtools::check seems to raise an error on warnings on GitLab CI but not locally. I don't fully understand it (likely because error_on defaults to a stricter setting in non-interactive sessions), but you can set it explicitly, which works: devtools::check(error_on = "error").
GitLab Pages, I think, needs to be in a deploy stage. Anyway, it works now :) I also changed the pkgdown output folder to public to match GitLab Pages (shown below).
Hope this helps anyone who stumbles upon this!
# .gitlab-ci.yml
image: rocker/tidyverse

variables:
  DOCKER_DRIVER: overlay2
  PKGNAME: "test"
  R_LIBS_USER: "$CI_PROJECT_DIR/ci/lib"
  CHECK_DIR: "$CI_PROJECT_DIR/ci/logs"
  BUILD_LOGS_DIR: "$CI_PROJECT_DIR/ci/logs/$PKGNAME.Rcheck"

cache:
  paths:
    - $R_LIBS_USER
    - vendor/apt

stages:
  - build
  - test
  - deploy

before_script:
  - R -e 'devtools::install_deps(dep = T)'

buildbinary:
  stage: build
  script:
    - R -e 'devtools::check(vignettes = FALSE, error_on = "error")'

unittests:
  stage: test
  script:
    - r -e 'if (any(as.data.frame(devtools::test())[["failed"]] > 0)) stop("Some tests failed.")'

pages:
  stage: deploy
  script:
    - R -e "install.packages('pkgdown')"
    - R -e "pkgdown::build_site()"
  artifacts:
    paths:
      - public
    expire_in: 1 day
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH

# _pkgdown.yml
url: ~
template:
  bootstrap: 5
destination: public/

Deploy Gatsby to Firebase using CircleCI

I followed this blog to deploy my Gatsby site to Firebase using CircleCI:
https://circleci.com/blog/automatically-deploy-a-gatsby-site-to-firebase-hosting/
The config.yml file is as follows:
# CircleCI Firebase Deployment Config
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10
    working_directory: ~/gatsby-site
    steps:
      - checkout
      - restore_cache:
          keys:
            # Find a cache corresponding to this specific package-lock.json
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fallback cache to be used
            - v1-npm-deps-
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: Gatsby Build
          command: npm run build
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
This caused an error
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code exit status 127
CircleCI received exit code 127
I haven't used YAML files or focused on DevOps before, so I did some digging around. I found a few other people with this issue, and there was a suggestion to use workspaces and workflows, so I amended my YAML file to support this:
# CircleCI Firebase Deployment Config
version: 2
jobs:
  # build jobs
  build:
    docker:
      - image: circleci/node:10
    working_directory: ~/gatsby-site
    steps:
      - checkout
      - restore_cache:
          keys:
            # Find a cache corresponding to this specific package-lock.json
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fallback cache to be used
            - v1-npm-deps-
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - persist_to_workspace:
          root: ./
          paths:
            - ./
      - run:
          name: Gatsby Build
          command: npm run build
      - persist_to_workspace:
          root: ./
          paths:
            - ./

  # deploy jobs
  deploy-production:
    docker:
      - image: circleci/node:10
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"

workflows:
  version: 2
  build:
    jobs:
      # build
      - build
      # deploy
      - deploy-production:
          requires:
            - build
Same issue
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code exit status 127
CircleCI received exit code 127
I assume it must be something to do with the paths and that it's looking in the wrong directory. Any idea how I can get it to find the required module?
Apparently I can't read. The fix was in the instructions:
We’ll also need to install the firebase-tools package locally to our project as a devDependency. This will come in handy later on when integrating with CircleCI, which does not allow installing packages globally by default. So let’s install it right now:
npm install -D firebase-tools
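For completeness, here is a sketch of the deploy job once firebase-tools is a local devDependency and node_modules has been persisted to the workspace; npx resolves the locally installed binary, so the direct ./node_modules/.bin/firebase path works as well:

deploy-production:
  docker:
    - image: circleci/node:10
  steps:
    - attach_workspace:
        at: ./
    - run:
        name: Firebase Deploy
        # firebase-tools now lives in ./node_modules/.bin, restored from the workspace
        command: npx firebase deploy --token "$FIREBASE_TOKEN"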

How to organize your Postman collections and execute them with Newman in GitLab CI?

I am thinking of organizing my Postman API tests by creating a collection per feature. So, for example, if we have 2 features, the collections will be saved in /test/FEAT-01/FEAT-01-test.postman_collection.json and /test/FEAT-02/FEAT-02-test.postman_collection.json. This way I am able to compare the collections separately in git and selectively execute tests as I want.
However, I want my GitLab CI to execute all the tests under my root test folder using Newman. How can I achieve that?
One thing that worked for me was creating one job per collection in GitLab; it looks something like this:
variables:
  POSTMAN_COLLECTION1: <Your collection here>.json
  POSTMAN_COLLECTION2: <Your collection here>.json
  POSTMAN_ENVIRONMENT: <Your environment here>.json
  DATA: <Your iterator file>

stages:
  - test

postman_tests:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION1} -e ${POSTMAN_ENVIRONMENT} -d ${DATA} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html

postman_test2:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION2} -e ${POSTMAN_ENVIRONMENT} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html
You can also create a script that reads from your root folder and runs Newman once per collection, something like this:
$ for collection in ./PostmanCollections/*; do newman run "$collection" --environment ./PostmanEnvironments/Test.postman_environment.json -r cli; done
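Adapted to the folder layout from the question, the same idea can run inside a single GitLab job. This is only a sketch: the test/*/*.postman_collection.json glob is an assumption about your layout, and it reuses the POSTMAN_ENVIRONMENT variable from above:

postman_all_tests:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    # run every collection found under the root test folder, failing on the first error
    - |
      for collection in test/*/*.postman_collection.json; do
        newman run "$collection" -e ${POSTMAN_ENVIRONMENT} --reporters cli || exit 1
      done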

SonarQube with Travis not showing issues on .NET Core 2 project

I have a simple .NET Core 2.0 project containing a simple issue, an unused variable, which SonarLint flags.
The code is stored in a public GitHub repository (here). A Travis job (here) runs with the SonarQube plugin and should post to SonarCloud (here).
The problem I have is that this issue is not being picked up by the analysis and published as an issue. I obviously have something set up incorrectly, but I don't know what.
My .travis.yml is below:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.0.0
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
before_script:
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - ./build.sh
  - ./run-tests.sh
  - sonar-scanner
My sonar-project.properties file is below:
# Project identification
sonar.projectKey=Core:Dibware.Salon
sonar.projectVersion=1.0.0.0
sonar.projectName=Dibware.Salon
# Info required for SonarQube
sonar.sources=./Domain
sonar.language=cs
sonar.sourceEncoding=UTF-8
# C# Settings
sonar.dotnet.visualstudio.solution=Dibware.Salon.sln
# MSBuild
sonar.dotnet.buildConfiguration=Release
sonar.dotnet.buildPlatform=Any CPU
# StyleCop
sonar.stylecop.mode=
# SCM
sonar.scm.enabled=false
In the Travis log I do have:
INFO: 27 files to be analyzed
WARN: Shallow clone detected, no blame information will be provided. You can convert to non-shallow with 'git fetch --unshallow'.
INFO: 0/27 files analyzed
WARN: Missing blame information for the following files:
WARN: *
.
<lots of files>
.
WARN: This may lead to missing/broken features in SonarQube
INFO: Calculating CPD for 0 files
INFO: CPD calculation finished
INFO: Analysis report generated in 216ms, dir size=381 KB
INFO: Analysis report compressed in 56ms, zip size=89 KB
INFO: Analysis report uploaded in 340ms
INFO: ANALYSIS SUCCESSFUL, you can browse https://sonarcloud.io/dashboard?id=Core%3ADibware.Salon
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at https://sonarcloud.io/api/ce/task?id=AWo0YQeAUanQDuOXxh79
INFO: Analysis total time: 11.484 s
Is this what is affecting the analysis? If so, how do I resolve it? If not, what else is stopping the files from being analysed, please?
EDIT:
I can see the following in the log, but it still does not get picked up by SonarQube:
Chair.cs(17,17): warning CS0219: The variable 'a' is assigned but its value is never used
Edit 2:
I managed to get the analysed-files count to go up, see below...
INFO: Sensor Zero Coverage Sensor
INFO: Sensor Zero Coverage Sensor (done) | time=6ms
INFO: SCM provider for this project is: git
INFO: 27 files to be analyzed
INFO: 27/27 files analyzed
INFO: Calculating CPD for 0 files
... using the following in my .travis.yml:
install:
  - git fetch --unshallow --tags
That came from here:
https://stackoverflow.com/a/47441734/254215
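An alternative with the same effect (a sketch) is to tell Travis not to create a shallow clone in the first place, via the git section of .travis.yml:

git:
  # clone the full history so SonarQube can compute blame information
  depth: false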
OK, I am not out of the woods yet, but I am getting some analysis using the following .travis.yml:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.1.300
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
install:
  - dotnet tool install --global dotnet-sonarscanner
  - git fetch --unshallow --tags
before_script:
  - export PATH="$PATH:$HOME/.dotnet/tools"
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - dotnet sonarscanner begin /k:"Core:Dibware.Salon" /d:sonar.login="$SONAR_TOKEN" /d:sonar.exclusions="**/bin/**/*,**/obj/**/*" /d:sonar.cs.opencover.reportsPaths="lcov.opencover.xml" || true
  - ./build.sh
  - ./run-tests.sh
  - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN" || true
In the end, the .travis.yml file that worked is this:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.1.300
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
install:
  - dotnet tool install --global dotnet-sonarscanner
  - git fetch --unshallow --tags
before_script:
  - export PATH="$PATH:$HOME/.dotnet/tools"
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - dotnet sonarscanner begin /k:"Core:Dibware.Salon" /d:sonar.login="$SONAR_TOKEN" /d:sonar.cs.opencover.reportsPaths="**/coverage.opencover.xml" /d:sonar.exclusions="**/bin/**/*,**/obj/**/*,**/Dibware.Salon.Web/**/*" || true
  - ./build.sh
  - ./run-tests.sh
  - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN" || true
The build file (build.sh) is this:
#!/usr/bin/env bash
dotnet restore
dotnet clean -c Release
dotnet build Dibware.Salon.sln -c Release
The test script (run-tests.sh) is this:
#!/usr/bin/env bash
# Run the tests and collate code coverage results
dotnet test -c Release --no-build --no-restore Domain/SharedKernel/Dibware.Salon.Domain.SharedKernel.UnitTests/Dibware.Salon.Domain.SharedKernel.UnitTests.csproj /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
I did not use a sonar-project.properties file.
HTH someone, one day

Bitbucket Pipeline: Deploy jar artifact to FTP

I'm trying to build and then deploy the artifacts (jar) with a Bitbucket pipeline. The build works, but the deploy of the artifacts doesn't do what I want.
When the pipeline finishes, I have all the code files (src/main/java etc.) on the FTP server instead of the jar.
Do you see where my mistake is? I also looked for another FTP approach but failed.
Pipeline:
# This is a sample build configuration for Java (Maven).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: maven:3.3.9

pipelines:
  default:
    - step:
        name: Build
        caches:
          - maven
        script:
          - apt-get update
          - apt-get install -y openjfx
          - mvn install -DskipTests
        artifacts:
          - /opt/atlassian/pipelines/agent/build/target/**
          - target/**
          # - /**.jar
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp init --user $user --passwd $pw -v sftp://$host:5544/$folder
To solve this problem, I added the SSH key to Bitbucket. Then I could do the deploy over SFTP using lftp and Docker images.
pipelines:
  branches:
    master:
      - step:
          name: Build
          image: tgalopin/maven-javafx
          caches:
            - maven
          script:
            - mvn install
          artifacts:
            - target/**
      - step:
          name: Deploy
          image: alpacadb/docker-lftp
          script:
            - lftp sftp://$user:$pw@$host:$port -e "put /my-file; bye"
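If you only want the built jar on the server, you can upload the artifact explicitly instead of a single hard-coded file. This is a sketch: the target/*.jar glob and the remote directory handling are assumptions about your project:

- step:
    name: Deploy
    image: alpacadb/docker-lftp
    script:
      # upload every jar produced by the Build step (available through the target/** artifact)
      - lftp sftp://$user:$pw@$host:$port -e "mput target/*.jar; bye"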
