Concourse CI and build number - Sauce Labs

I'm moving from Jenkins to Concourse CI to run my Sauce Labs e2e tests. Sauce Labs groups together tests that have the same build number string:
name: 'Chrome XS',
browserName: 'chrome',
tunnelIdentifier: process.env.TUNNEL_IDENTIFIER,
build: process.env.JENKINS_BUILD_NUMBER,
platform: 'Windows 10',
shardTestFiles: true,
maxInstances: 20,
How can I pass the build number to my script in an environment variable, as shown above? The Concourse GUI shows builds as name #number. Is there any way to retrieve this number? I tried printing all the environment variables inside the Docker container, but no build number is set by default.

Metadata like the build number/ID is intentionally not provided to tasks. See https://concourse-ci.org/implementing-resources.html#resource-metadata
This sounds like it could be a use case for a Sauce Labs resource?

In Concourse, build metadata is only available to resources, not tasks.
One example of using build metadata with resources is including it in build-result notification emails. The following blog entry contains more information about it:
http://lmpsilva.typepad.com/cilounge/2016/10/how-to-insert-build-metadata-into-user-notifications-in-concourse.html
If you really want to use a build number for versioning, you could write your own Concourse resource that returns the version number; however, I would use your code's commit number instead. Another alternative is the semver resource for Concourse: https://github.com/concourse/semver-resource
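As a rough illustration of the semver-resource approach, the sketch below bumps a version, hands the result to the test task, and exports it as an environment variable for the test config to read. All names here (the resource, repo, image, and the SAUCE_BUILD_NUMBER variable) are assumptions, not from the original question:

resources:
- name: version
  type: semver
  source:
    driver: git                                # assumed driver; s3 or gcs also work
    uri: git@github.com:example/my-app.git     # placeholder repo holding the version file
    branch: version
    file: version

jobs:
- name: e2e
  plan:
  - get: version
    params: {bump: patch}                      # each build gets a fresh number
  - task: run-sauce-tests
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: node}             # placeholder image
      inputs:
      - name: version
      run:
        path: sh
        args:
        - -ec
        - |
          # expose the bumped version the way JENKINS_BUILD_NUMBER was used before
          export SAUCE_BUILD_NUMBER="$(cat version/number)"
          echo "Running e2e tests for build ${SAUCE_BUILD_NUMBER}"
          # ...run the Sauce Labs e2e suite here
  - put: version
    params: {file: version/number}

The wdio/Protractor config would then read process.env.SAUCE_BUILD_NUMBER for its build field.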

Related

Run xUnit-Tests from API-Method

I have a solution that includes an xUnit test project and an API web-application project.
In my xUnit project I have a couple of integration/system tests.
Now I want to trigger all the tests via an HTTP request such as http://myservice.com/run-tests instead of via the command line with dotnet test.
Is there any concept for how to achieve this?

How to make azure devops build fail when R linting issues occur

I am using the lintr library in R to find linting issues in the code. I put them into an XML format like this:
<lintsuites>
  <lintissue filename="/home/.../blah.R" line_number="36" column_number="1" type="style" message="Trailing blank lines are superfluous."/>
  <lintissue filename="/home/.../blahblah.R" line_number="1" column_number="8" type="style" message="Only use double-quotes."/>
</lintsuites>
Now I would like to fail the Azure DevOps build when issues like this occur.
I was able to get my tests in a JUnit format like this:
<testsuite name="MB Unit Tests" timestamp="2020-01-22 22:34:07" hostname="0000" tests="29" skipped="0" failures="0" errors="0" time="0.05">
  <testcase time="0.01" classname="1_Unit_Tests" name="1_calculates_correctly"/>
  <testcase time="0.01" classname="1_Unit_Tests" name="2_absorbed_correctly"/>
  ...
</testsuite>
And when I add this step to the Azure pipeline, my build fails if any tests in the test suite fail:
- task: PublishTestResults@2
  displayName: 'Publish Test Results'
  inputs:
    testResultsFiles: '**/*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/fe'
    mergeTestResults: true
    failTaskOnFailedTests: true
I would like something similar for failing the build when there are linting issues. I would also like the users to see what those linting issues are in the build output.
Thanks
It is not possible to achieve a similar result for the lintr XML with PublishTestResults@2.
The workaround you can try is to use a PowerShell task to check the content of your lintr XML file: if it contains lint issues, fail the pipeline from within that task.
The PowerShell task below checks lintr.xml for <lintissue> entries; if any are present, it echoes them to the task log and exits with 1 to fail the task.
- powershell: |
    [xml]$XmlDocument = Get-Content -Path "$(system.defaultworkingdirectory)/lintr.xml"
    if ($XmlDocument.lintsuites.lintissue) {
      # print the issues to the task log, then fail the task
      echo $XmlDocument.OuterXml
      exit 1
    }
  displayName: 'lintr results'
You can also use the statement below in a PowerShell task to upload the lintr XML file to the build summary page, where you can download it:
echo "##vso[task.uploadsummary]$(system.defaultworkingdirectory)/lintr.xml"
You can check here for more logging commands.
Update:
A workaround to show the lintr results in a nicer way is to create a custom extension that displays HTML results in the Azure DevOps pipeline.
You can try creating such a custom extension and producing HTML lint results; please refer to the answer in this thread for an example custom extension that displays HTML.
Other developers have already submitted requests to Microsoft to implement this feature. Please vote it up here or create a new one.

VSTS/Azure DevOps: Auto-Increment NuGet Package Version on Pack

Running the .NET Core Pack task, how do I get the outputted NuGet package version to auto-increment itself?
So, for example, if my current version is 1.0.0, then the next time I call the Pack task, I would like to see 1.0.1.
I'm using environment build variables with Build.BuildNumber and at the moment getting outputs of e.g. 20180913-.2.0, etc. I would like to establish a more traditional versioning system.
From the docs, the Rev:.r variable is the daily build revision count, so the accepted "solution" could end one day at version 1.0.12 and then produce 1.0.1 the next day.
If you want a simple, incremental and unique semver, use 1.0.$(BuildID).
$(BuildID) is an internal, immutable counter for your builds, and thus far cleaner than $(BuildNumber).
BuildID is always incrementing - it never resets.
Thus after a minor bump, you would go from, say, 1.2.123 to 1.3.124.
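For example, a hedged sketch of wiring that value into the pack task (the packageVersion variable name and the csproj glob are assumptions):

variables:
  packageVersion: '1.0.$(Build.BuildId)'   # BuildId never resets, so this only grows

steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet pack'
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'          # placeholder project selection
    versioningScheme: byEnvVar
    versionEnvVar: packageVersion          # pack reads the package version from this variable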
If you want to do this properly, it can be done using npm version or similar, such as pubspec_version for Dart or Flutter builds.
- script: npm version $RELEASE_TYPE
where $RELEASE_TYPE is a variable you can set based on the build (i.e. CI, PR, etc.), with a value of major, minor, patch, prerelease, etc.
- script: npm version $RELEASE_TYPE
  condition: startsWith(variables['build.sourceBranch'], 'refs/heads/release/')
  env:
    RELEASE_TYPE: minor
Update: Bump Repo Version and Use In Build (using npm)
To have the repo version updated, I ended up including npm version as a devDependency, with its precommit hook bumping the project version on any commit.
This technique can be applied to other project types by placing them in a subfolder, although that can lead to complications with server OS requirements.
To use this version in your build, add this bash script task, which gets and exports the version as a task variable:
v=`node -p "const p = require('./package.json'); p.version;"`
echo "##vso[task.setvariable variable=packageVersion]$v"
.NET Core task-only version
Unfortunately, no repo bump.
Workaround 1:
jobs:
- job: versionJob # reads version number from the source file
  steps:
  - powershell: |
      $fv = Get-Content versionFile
      Write-Host ("##vso[task.setvariable variable=versionFromFile;isOutput=true]$fv")
    displayName: 'version from file'
    name: setVersionStep

- job: buildJob # consumes version number, calculates incremental number and set version using assemblyinfo.cs
  dependsOn: versionJob
  variables:
    versionFromFile: $[ dependencies.versionJob.outputs['setVersionStep.versionFromFile'] ] # please note that spaces required between $[ and dependencies
    buildIncrementalNumber: $[ counter(dependencies.versionJob.outputs['setVersionStep.versionFromFile'], 1) ] # can't use $versionFromFile here
  steps:
  - powershell: |
      Write-Host ($env:versionFromFile)
      Write-Host ($env:versionFromFile + '.' + $env:buildIncrementalNumber)
    displayName: 'version from file output'
Workaround 2:
This post describes a couple of other options, using version-prefix and automatically applying the BuildNumber as a version-suffix.
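A minimal sketch of that suffix approach, assuming a VersionPrefix (e.g. 1.0) is already set in the .csproj:

steps:
- script: dotnet pack --version-suffix "$(Build.BuildNumber)" -o "$(Build.ArtifactStagingDirectory)"
  displayName: 'Pack with the build number as version suffix'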
I may have figured it out. For anyone tearing their hair out, try this:
Pack Task:
Automatic Package Versioning: Use an environment variable
Environment variable: Build.BuildNumber
Then, up in the top menu where you have Tasks, Variables, Triggers, Options, click Options and set:
Build number format: 1.0$(Rev:.r)
Save and queue. This will produce e.g. 1.0.1.
(Please Correct me if I am wrong, or if this does not work long-term.)
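In a YAML pipeline, the equivalent of those two classic-editor settings would look roughly like this sketch (the csproj glob is an assumption):

name: 1.0$(Rev:.r)                      # build number format, e.g. 1.0.1

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'       # placeholder project selection
    versioningScheme: byBuildNumber     # use $(Build.BuildNumber) as the package version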
If you're just looking to bump the major, minor or revision version number, using the counter expression in a variable is a simple and elegant approach. It automatically adds one to the current value.
Here's what I use:
variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)] # this will get reset when minor gets bumped. The number after Counter is the seed number (in my case, I started at 1).
  app_version: '$(major).$(minor).$(revision)'
If you would like to see a real-world 4-job pipeline that uses this, I have one here https://github.com/LanceMcCarthy/DevReachCompanion/blob/master/azure-pipelines.yml
For me it's enough to set Build number format on Options tab to
$(date:yyyy).$(date:MMdd)$(rev:.r)
and add the following build arguments:
/p:Version=1.$(Build.BuildNumber) /p:AssemblyVersion=1.$(Build.BuildNumber)
In this case we manage the major version manually, but the minor version and build number are set automatically. It is easy to understand which version you have deployed.
I am using an Azure DevOps pipeline with a YAML build. What I've done is utilize pipeline variables, a counter function, and an inline PowerShell task to create the version number. It auto-increments and has made the entire build process nice.
Another SO Post about something similar
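A hedged sketch of that combination (the variable names, seed, and the choice to overwrite the build number are assumptions, not the poster's actual pipeline):

variables:
  majorMinor: '1.0'
  patch: $[counter(variables['majorMinor'], 0)]   # resets whenever majorMinor changes

steps:
- powershell: |
    $version = "$(majorMinor).$(patch)"
    # make the computed version visible as the build number and to later steps
    Write-Host "##vso[build.updatebuildnumber]$version"
    Write-Host "##vso[task.setvariable variable=appVersion]$version"
  displayName: 'Compute version number'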

Meteor: how to load different files based on CLI parameter?

In my Meteor (1.2) app I've separated files for development and production
e.g.
client/lib/appVars.config.PROD.js
client/lib/appVars.config.CONFIG.js
Ideally the "twin" files have the same variables, functions etc. with little differences but (global) variables and functions which are common to debug and production have the same name.
Is there a way to call meteor run with a command line parameter DEBUG_MODE = true | false so that I cad load either one or the other file, depending on the current mode (debug, production)?
Set different environment variables and run via the CLI with meteor run --settings settings.json
Then you just need a development and a production (and staging?) settings.json
Example of a settings file:
{
  "awsBucket": "my-example-staging",
  "awsAccessKeyId": "AABBCCddEEff12123131",
  "awsSecretKey": "AABBCCddEEff12123131+AABBCCddEEff12123131",
  "public": {
    "awsBucketUrl": "https://my-meteor-example.s3.amazonaws.com",
    "environment": "staging"
  },
  "googleApiKey": "AABBCCddEEff12123131"
}
EDIT ADD:
To access your settings keys, just use
Meteor.settings.awsBucket
Security Update (thanks Dave Weldon)
See https://docs.meteor.com/#/full/structuringyourapp
Re production vs development, you should have two settings.json files: the standard one for production (.config/settings.json) and a development one (.config/development/settings.json), and when you boot outside of production you run meteor --settings .config/development/settings.json
Re the client side, note that if you make a key public, e.g.
{
  "service_id": "...",
  "service_secret": "...",
  "public": {
    "service_name": "..."
  }
}
Then only Meteor.settings.public.service_name will be accessible on the client

PHPUnit + Sonarqube exported coverage do not match with actual xml result

sonarqube: 5.1.2
sonar-runner: 2.2.1
php plugin: 2.6
PHPUnit 4.2.6
We're running PHPUnit over our application, but I'm not able to get the correct coverage % in Sonar as expected.
In my phpunit.xml we have filters defining the only folders we want to cover:
<whitelist addUncoveredFilesFromWhitelist="false">
  <directory>./site-library-php/src/main/php/BabelCentral/Model/Content</directory>
</whitelist>
In my sonar properties
sonar.modules=php-module
php-module.sonar.phpUnit.coverage.analyzeOnly=true
php-module.sonar.phpUnit.analyzeOnly=true
php-module.sonar.php.tests.reportPath=site-main-php/src/test/target/reports/phpunit.xml
php-module.sonar.php.coverage.reportPath=site-main-php/src/test/target/reports/phpunit.coverage.xml
Viewing the log on Jenkins, the tests seem to run fine and end with:
Tests: 1479, Assertions: 4165, Failures: 58, Incomplete: 14, Skipped: 55.
Generating code coverage report in Clover XML format ... done
Generating code coverage report in HTML format ... done
// further down
11:47:51.973 INFO - Analyzing PHPUnit test report: site-main-php/src/test/target/reports/phpunit.xml with org.sonar.plugins.php.phpunit.PhpUnitResultParser#35e7e715
11:47:52.604 INFO - Analyzing PHPUnit unit test coverage report: site-main-php/src/test/target/reports/phpunit.coverage.xml with PHPUnit Unit Test Coverage Result Parser
All seems to be working well. However, Sonar reports a different result, covering the entire source folder. These are the numbers shown on the dashboard:
Unit Tests Coverage 1.8%
line coverage 1.8%
Unit test success 95.9%
Is there any way to have Sonar respect PHPUnit's configured filter?
Note:
I'm able to achieve the desired numbers on coverage if I explicitly set
php-module.sonar.sources
to the directories/files that I want. It's just that a comma-separated config is harder to manage than PHPUnit's XML config.
Defining the set of source files which should be analysed by SonarQube is crucial: it's on this set of files that coding rules are applied and that metrics are computed.
sonar.sources is therefore a mandatory property and is the main way to configure the set of source files.
Other properties may be used to refine the set of source files:
You may configure sonar.tests: test files are automatically excluded from sonar.sources.
You may have to exclude some other PHP files like dependencies or generated code with sonar.exclusions.
You should also be aware that several plugins may contribute to the global coverage of the project: if you have installed the JavaScript plugin and sonar.sources contains JavaScript files, they will also be taken into account when computing the coverage metrics.
If you're only interested in adjusting coverage, you can exclude files from coverage metrics with sonar.coverage.exclusions.
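For instance, a sketch of how those properties might look for this project, mirroring the module prefix used above (the exact paths and patterns are assumptions to adapt):

php-module.sonar.sources=site-library-php/src/main/php/BabelCentral/Model/Content
php-module.sonar.tests=site-main-php/src/test
php-module.sonar.exclusions=**/vendor/**
php-module.sonar.coverage.exclusions=**/Generated/**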
