SonarQube with Travis not showing issues on a .NET Core 2 project - .net-core

I have a simple .NET Core 2.0 project with a simple issue that fails SonarLint: an unused variable.
The code is stored in a public GitHub repository (here). A Travis job (here) runs with the SonarCloud addon and should post results to SonarCloud (here).
The problem is that this issue is not being picked up by the analysis and published as an issue on SonarCloud. I obviously have something set up incorrectly, but I don't know what.
My .travis.yml is below
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.0.0
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
before_script:
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - ./build.sh
  - ./run-tests.sh
  - sonar-scanner
My sonar-project.properties file is below
# Project identification
sonar.projectKey=Core:Dibware.Salon
sonar.projectVersion=1.0.0.0
sonar.projectName=Dibware.Salon
# Info required for SonarQube
sonar.sources=./Domain
sonar.language=cs
sonar.sourceEncoding=UTF-8
# C# Settings
sonar.dotnet.visualstudio.solution=Dibware.Salon.sln
# MSBuild
sonar.dotnet.buildConfiguration=Release
sonar.dotnet.buildPlatform=Any CPU
# StyleCop
sonar.stylecop.mode=
# SCM
sonar.scm.enabled=false
In the travis log I do have:
INFO: 27 files to be analyzed
WARN: Shallow clone detected, no blame information will be provided. You can convert to non-shallow with 'git fetch --unshallow'.
INFO: 0/27 files analyzed
WARN: Missing blame information for the following files:
WARN: *
.
<lots of files>
.
WARN: This may lead to missing/broken features in SonarQube
INFO: Calculating CPD for 0 files
INFO: CPD calculation finished
INFO: Analysis report generated in 216ms, dir size=381 KB
INFO: Analysis report compressed in 56ms, zip size=89 KB
INFO: Analysis report uploaded in 340ms
INFO: ANALYSIS SUCCESSFUL, you can browse https://sonarcloud.io/dashboard?id=Core%3ADibware.Salon
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at https://sonarcloud.io/api/ce/task?id=AWo0YQeAUanQDuOXxh79
INFO: Analysis total time: 11.484 s
Is this what is affecting the analysis? If so how do I resolve it? If not what else is stopping the analysis of the files, please?
EDIT:
I can see the following in the log, but it still does not get picked up by SonarQube.
Chair.cs(17,17): warning CS0219: The variable 'a' is assigned but its value is never used
Edit 2:
I managed to get the analyzed file count to go up, see below...
INFO: Sensor Zero Coverage Sensor
INFO: Sensor Zero Coverage Sensor (done) | time=6ms
INFO: SCM provider for this project is: git
INFO: 27 files to be analyzed
INFO: 27/27 files analyzed
INFO: Calculating CPD for 0 files
... using the following in my .travis.yml
install:
  - git fetch --unshallow --tags
That came from here:
https://stackoverflow.com/a/47441734/254215

OK, I am not out of the woods yet, but I am getting some analysis using the following .travis.yml:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.1.300
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
install:
  - dotnet tool install --global dotnet-sonarscanner
  - git fetch --unshallow --tags
before_script:
  - export PATH="$PATH:$HOME/.dotnet/tools"
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - dotnet sonarscanner begin /k:"Core:Dibware.Salon" /d:sonar.login="$SONAR_TOKEN" /d:sonar.exclusions="**/bin/**/*,**/obj/**/*" /d:sonar.cs.opencover.reportsPaths="lcov.opencover.xml" || true
  - ./build.sh
  - ./run-tests.sh
  - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN" || true

In the end, the .travis.yml file I used, which worked, is this:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.1.300
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
install:
  - dotnet tool install --global dotnet-sonarscanner
  - git fetch --unshallow --tags
before_script:
  - export PATH="$PATH:$HOME/.dotnet/tools"
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - dotnet sonarscanner begin /k:"Core:Dibware.Salon" /d:sonar.login="$SONAR_TOKEN" /d:sonar.cs.opencover.reportsPaths="**/coverage.opencover.xml" /d:sonar.exclusions="**/bin/**/*,**/obj/**/*,**/Dibware.Salon.Web/**/*" || true
  - ./build.sh
  - ./run-tests.sh
  - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN" || true
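The key difference from the original setup, as far as I understand it, is that the standalone sonar-scanner CLI does not build C# code, so the Roslyn-based C# rules never ran; the SonarScanner for .NET wraps the build between a begin and an end step so the analyzers run during compilation. A minimal sketch of that ordering (solution name and project key here are placeholders, not the real ones):

# the begin step configures MSBuild to attach the Sonar analyzers
dotnet sonarscanner begin /k:"ExampleKey" /d:sonar.login="$SONAR_TOKEN"
# the C# rules run while the solution is compiled
dotnet build Example.sln -c Release
# the end step collects the results and uploads them to SonarCloud
dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN"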
The build script (build.sh) is this:
#!/usr/bin/env bash
dotnet restore
dotnet clean -c Release
dotnet build Dibware.Salon.sln -c Release
The test script (run-tests.sh) is this:
# Run the tests and collate code coverage results
dotnet test -c Release --no-build --no-restore Domain/SharedKernel/Dibware.Salon.Domain.SharedKernel.UnitTests/Dibware.Salon.Domain.SharedKernel.UnitTests.csproj /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
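For reference, with /p:CollectCoverage=true and /p:CoverletOutputFormat=opencover, Coverlet should write the report as coverage.opencover.xml next to the test project, which is what the **/coverage.opencover.xml wildcard in the begin step picks up. This relies on Coverlet's default output path, which I have assumed rather than taken from the post; you can check it like this:

# locate the coverage report produced by the test run above
find . -name "coverage.opencover.xml"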
I did not use a sonar-project.properties file.
HTH someone, one day

Related

GitLab CI fails for R package

I am building an R package on GitLab and I am trying to get GitLab CI to work. The issues are:
devtools::check fails if there is an error, warning, or note; I only want it to fail on errors.
Deploying pkgdown to GitLab Pages doesn't seem to work.
Below is the .gitlab-ci.yml I am using. I used the R package template from RStudio to test it.
# .gitlab-ci.yml
image: methodsconsultants/r-packaging

variables:
  DOCKER_DRIVER: overlay2
  PKGNAME: "test"
  R_LIBS_USER: "$CI_PROJECT_DIR/ci/lib"
  CHECK_DIR: "$CI_PROJECT_DIR/ci/logs"
  BUILD_LOGS_DIR: "$CI_PROJECT_DIR/ci/logs/$PKGNAME.Rcheck"

cache:
  paths:
    - $R_LIBS_USER
    - vendor/apt

stages:
  - build
  - check
  - test
  - pages

before_script:
  - mkdir -p vendor/apt
  - apt-get --allow-releaseinfo-change update -qq
  - apt-get remove -y libgcc-8-dev
  - apt-get -o dir::cache::archives="vendor/apt" install -y libcairo2-dev -qq

buildbinary:
  stage: build
  script:
    - r -e 'devtools::build(binary = TRUE)'

checkErrors:
  stage: check
  script:
    - r -e 'if (!identical(devtools::check(document = FALSE, args = "--no-tests")[["errors"]], character(0))) stop("Check with Errors")'

unittests:
  stage: test
  script:
    - r -e 'if (any(as.data.frame(devtools::test())[["failed"]] > 0)) stop("Some tests failed.")'

pages:
  script:
    - R -e "pkgdown::build_site()"
  artifacts:
    paths:
      - public
  only:
    - master
Note:
Everything works locally: devtools::check() produces warnings and notes but no errors, pkgdown builds fine, and the tests pass.
GitLab CI is running: the build and test stages pass fine, but the pipeline then fails on the devtools::check() error message.
I tried GitLab Pages by removing the check() job; the pipeline finishes fine, but under Settings > Pages I can't see anything.
Worked it out:
devtools::check seems to raise an error on warnings on GitLab CI but not locally. I don't fully understand it, but you can set the threshold explicitly, which works: devtools::check(error_on = "error") (see the sketch below).
GitLab Pages, I think, needs to be in a deploy stage? Anyway, it works now :) I did change the output folder of pkgdown to public to match GitLab Pages (shown below).
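This behaviour is consistent with error_on using a stricter default when R is running non-interactively (as it does on a CI runner) than in an interactive session; I have not verified the exact default for every devtools version, so treat that as an assumption and set it explicitly. For example, to fail only on genuine check errors:

# error_on accepts "never", "error", "warning" or "note"; "error" ignores warnings and notes
R -e 'devtools::check(document = FALSE, error_on = "error")'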
Hope this helps anyone who stumbles upon this!
# .gitlab-ci.yml
image: rocker/tidyverse

variables:
  DOCKER_DRIVER: overlay2
  PKGNAME: "test"
  R_LIBS_USER: "$CI_PROJECT_DIR/ci/lib"
  CHECK_DIR: "$CI_PROJECT_DIR/ci/logs"
  BUILD_LOGS_DIR: "$CI_PROJECT_DIR/ci/logs/$PKGNAME.Rcheck"

cache:
  paths:
    - $R_LIBS_USER
    - vendor/apt

stages:
  - build
  - test
  - deploy

before_script:
  - R -e 'devtools::install_deps(dep = T)'

buildbinary:
  stage: build
  script:
    - R -e 'devtools::check(vignettes = FALSE, error_on = "error")'

unittests:
  stage: test
  script:
    - r -e 'if (any(as.data.frame(devtools::test())[["failed"]] > 0)) stop("Some tests failed.")'

pages:
  stage: deploy
  script:
    - R -e "install.packages('pkgdown')"
    - R -e "pkgdown::build_site()"
  artifacts:
    paths:
      - public
    expire_in: 1 day
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH

# _pkgdown.yml
url: ~
template:
  bootstrap: 5
destination: public/
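The destination: public/ line is what lines up pkgdown's output with the artifacts path of the pages job; by default pkgdown builds the site into docs/, which GitLab Pages would not publish. A quick local sanity check (paths are the defaults, not taken from the post):

# build the site locally and confirm it lands in public/ rather than docs/
R -e 'pkgdown::build_site()'
ls public/index.html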

CircleCI permission denied opening firebase-tools.json for Firebase deployment

I'm using Firebase to host my personal website and wanted to integrate CircleCI for faster integration. However, I receive this error on the deployment step:
Note: Adding sudo before the deploy command also causes the build to fail.
/home/circleci/project/node_modules/configstore/index.js:52
throw error;
^
Error: EACCES: permission denied, open '/home/circleci/.config/configstore/firebase-tools.json'
You don't have access to this file.
Below is my project's yaml configuration:
---
commands:
  restore_cache_cmd:
    description: "Restore cached npm install"
    steps:
      - restore_cache:
          key: 'dependency-cache-{{checksum "package.json"}}'
  save_cache_cmd:
    description: "Saving npm install"
    steps:
      - save_cache:
          key: 'dependency-cache-{{ checksum "package.json"}}'
          paths:
            - "./node_modules"
  update:
    description: "Installing project's dependencies"
    steps:
      - checkout
      - restore_cache_cmd
      - run: sudo npm i -g npm@latest
      - run: sudo npm i
      - save_cache_cmd
  build_deploy:
    description: "Building project"
    steps:
      - run:
          name: Build
          command: sudo npm run build
      - run:
          name: Deploy
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_DEPLOY_TOKEN --only hosting

executors:
  docker-executor:
    docker:
      - image: "cimg/node:12.14.1"

jobs:
  build_site:
    executor: docker-executor
    working_directory: ~/Darryls-Personal-Site
    steps:
      - update
      - build_deploy

version: 2.1

workflows:
  build_site:
    jobs:
      - build_site:
          filters:
            branches:
              only: master
Steps that I have already completed from other questions:
Used firebase login:ci to obtain a refresh token, and placed it into an environment variable within my CircleCI project settings
Used npm install --save-dev firebase-tools
I think the problem is that you run all your npm commands with sudo except the firebase deploy command.
You should definitely run everything with the current user and not the superuser.
You will see in official tutorials that nothing is run with sudo except for very specific cases.
Also, instead of doing ./node_modules/.bin/firebase deploy you could use npx firebase deploy, which first looks in the local node_modules and then in the global ones.
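A minimal sketch of what the install/build/deploy commands might look like without sudo, reusing the firebase-tools devDependency and the $FIREBASE_DEPLOY_TOKEN variable from the question (the exact step layout is up to you):

# everything runs as the normal circleci user - no sudo anywhere
npm i                                   # installs firebase-tools into ./node_modules
npm run build
npx firebase deploy --token "$FIREBASE_DEPLOY_TOKEN" --only hosting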

Deploy Gatsby to Firebase using Circleci

I followed this blog post to deploy my Gatsby site to Firebase using CircleCI:
https://circleci.com/blog/automatically-deploy-a-gatsby-site-to-firebase-hosting/
The config.yml file is as follows
# CircleCI Firebase Deployment Config
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10
    working_directory: ~/gatsby-site
    steps:
      - checkout
      - restore_cache:
          keys:
            # Find a cache corresponding to this specific package-lock.json
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fallback cache to be used
            - v1-npm-deps-
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: Gatsby Build
          command: npm run build
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
This caused an error
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code exit status 127
CircleCI received exit code 127
I haven't used YAML files or focused on DevOps before, so I did some digging around. I found a few other people with this issue, and there was a suggestion to use workspaces and workflows. So I amended my YAML file to support this:
# CircleCI Firebase Deployment Config
version: 2
jobs:
  # build jobs
  build:
    docker:
      - image: circleci/node:10
    working_directory: ~/gatsby-site
    steps:
      - checkout
      - restore_cache:
          keys:
            # Find a cache corresponding to this specific package-lock.json
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fallback cache to be used
            - v1-npm-deps-
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - persist_to_workspace:
          root: ./
          paths:
            - ./
      - run:
          name: Gatsby Build
          command: npm run build
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  # deploy jobs
  deploy-production:
    docker:
      - image: circleci/node:10
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"

workflows:
  version: 2
  build:
    jobs:
      # build
      - build
      # deploy
      - deploy-production:
          requires:
            - build
Same issue
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code exit status 127
CircleCI received exit code 127
I assume it must be something to do with the paths and it's looking in the wrong directory? Any idea how I can get it to find the required module?
Apparently I can't read. The fix was in the instructions
We’ll also need to install the firebase-tools package locally to our project as a devDependency. This will come in handy later on when integrating with CircleCI, which does not allow installing packages globally by default. So let’s install it right now:
npm install -D firebase-tools
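With firebase-tools declared as a devDependency, the npm install step pulls it into node_modules, so the path the deploy step calls now exists. A quick way to confirm locally (my own check, not from the blog post):

npm install -D firebase-tools      # adds it to devDependencies in package.json
ls ./node_modules/.bin/firebase    # this is the path the Firebase Deploy step runs
npx firebase --version             # npx resolves the same local binary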

Bitbucket Pipeline: Deploy jar artifact to ftp

I'm trying to build and then deploy the artifacts (a JAR) with a Bitbucket pipeline. The build is working, but the deployment of the artifacts doesn't work as I want it to.
When the pipeline is finished, I have all the code files (src/main/java etc.) instead of the JAR on the FTP server.
Do you see where I made the mistake? I also looked for another FTP function but failed.
Pipeline:
# This is a sample build configuration for Java (Maven).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: maven:3.3.9

pipelines:
  default:
    - step:
        name: Build
        caches:
          - maven
        script:
          - apt-get update
          - apt-get install -y openjfx
          - mvn install -DskipTests
        artifacts:
          - /opt/atlassian/pipelines/agent/build/target/**
          - target/**
          # - /**.jar
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp init --user $user --passwd $pw -v sftp://$host:5544/$folder
The source files ended up on the server because git ftp uploads the files tracked in the Git repository, not the artifacts produced in the build step. To solve this problem I added the SSH key to Bitbucket. Then I could do the deployment over SFTP using lftp and Docker images.
pipelines:
  branches:
    master:
      - step:
          name: Build
          image: tgalopin/maven-javafx
          caches:
            - maven
          script:
            - mvn install
          artifacts:
            - target/**
      - step:
          name: Deploy
          image: alpacadb/docker-lftp
          script:
            - lftp sftp://$user:$pw@$host:$port -e "put /my-file; bye"
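If you want to upload the built JAR (or the whole target directory) instead of a single hard-coded file, lftp's reverse mirror works too; a sketch, with the remote folder taken from the $folder variable used earlier and otherwise assumed:

# mirror -R copies local -> remote; here it uploads everything under target/
lftp sftp://$user:$pw@$host:$port -e "mirror -R target/ $folder; bye"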

Saltstack for "configure make install"

I'm getting my feet wet with SaltStack. I've made my first state (a Vim installer with a static configuration) and I'm working on my second one.
Unfortunately, there isn't an Ubuntu package for the application I'd like my state to install. I will have to build the application myself. Is there a "best practice" for doing "configure-make-install" type installations with Salt? Or should I just use cmd?
In particular, if I was doing it by hand, I would do something along the lines of:
wget -c http://example.com/foo-3.4.3.tar.gz
tar xzf foo-3.4.3.tar.gz
cd foo-3.4.3
./configure --prefix=$PREFIX && make && make install
There are state modules to abstract the first two lines, if you wish.
file.managed: http://docs.saltstack.com/ref/states/all/salt.states.file.html
archive.extracted: http://docs.saltstack.com/ref/states/all/salt.states.archive.html
But you could also just run the commands on the target minion(s).
install-foo:
  cmd.run:
    - name: |
        cd /tmp
        wget -c http://example.com/foo-3.4.3.tar.gz
        tar xzf foo-3.4.3.tar.gz
        cd foo-3.4.3
        ./configure --prefix=/usr/local
        make
        make install
    - cwd: /tmp
    - shell: /bin/bash
    - timeout: 300
    - unless: test -x /usr/local/bin/foo
Just make sure to include an unless argument to make the script idempotent.
Alternatively, distribute a bash script to the minion and execute. See:
How can I execute multiple commands using Salt Stack?
As for best practice? I would recommend using fpm to create a .deb or .rpm package and install that. At the very least, copy that tarball to the salt master and don't rely on external resources to be there three years from now.
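As a rough illustration of the fpm route (the package name, version, and staging directory are made up for the example, and DESTDIR works for typical autotools projects):

# stage the compiled files into a throwaway directory instead of installing system-wide
./configure --prefix=/usr/local && make
make install DESTDIR=/tmp/foo-staging

# package the staged tree as a .deb (use -t rpm for an RPM instead)
fpm -s dir -t deb -n foo -v 3.4.3 -C /tmp/foo-staging .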
Let's assume foo-3.4.3.tar.gz is checked into GitHub. Here is an approach that you might pursue in your state file:
git:
  pkg.installed

https://github.com/nomen/foo.git:
  git.latest:
    - rev: master
    - target: /tmp/foo
    - user: nomen
    - require:
      - pkg: git

foo_deployed:
  cmd.run:
    - cwd: /tmp/foo
    - user: nomen
    - name: |
        ./configure --prefix=/usr/local
        make
        make install
    - require:
      - git: https://github.com/nomen/foo.git
Your configuration prefix location could be passed as a salt pillar. If the build process is more complicated, you may consider writing a custom state.
