As TravisCI.org is no longer free for small open-source projects, I am trying to set up CircleCI and Codecov.
Creating the coverage report in CircleCI seems to work, but uploading to Codecov fails, claiming the report cannot be found:
I followed the instructions at https://circleci.com/docs/2.0/code-coverage/#codecov
Used orb codecov/codecov@1.0.2
Allowed uncertified orbs
Using CircleCI 2.1
Generating coverage with phpdbg
I tried with store_artifacts and without; it is unclear to me whether this is needed with Codecov, but both fail.
That's my config.yml:
# PHP CircleCI 2.0 configuration file
# See: https://circleci.com/docs/2.0/language-php/
version: 2.1
orbs:
codecov: codecov/codecov@1.0.2
# Define a job to be invoked later in a workflow.
# See: https://circleci.com/docs/2.0/configuration-reference/#jobs
jobs:
build:
# Specify the execution environment. You can specify an image from Dockerhub or use one of our Convenience Images from CircleCI's Developer Hub.
# See: https://circleci.com/docs/2.0/configuration-reference/#docker-machine-macos-windows-executor
docker:
# Specify the version you desire here
- image: circleci/php:7.2-node-browsers
# Specify service dependencies here if necessary
# CircleCI maintains a library of pre-built images
# documented at https://circleci.com/docs/2.0/circleci-images/
# Using the RAM variation mitigates I/O contention
# for database intensive operations.
# - image: circleci/mysql:5.7-ram
#
# - image: redis:2.8.19
# Add steps to the job
# See: https://circleci.com/docs/2.0/configuration-reference/#steps
steps:
- checkout
- run: sudo apt update && sudo apt install -y zlib1g-dev libsqlite3-dev
- run: sudo docker-php-ext-install zip
# Download and cache dependencies
- restore_cache:
keys:
# "composer.lock" can be used if it is committed to the repo
- v1-dependencies-{{ checksum "composer.json" }}
# fallback to using the latest cache if no exact match is found
- v1-dependencies-
- run: composer install -n --prefer-dist
- save_cache:
key: v1-dependencies-{{ checksum "composer.json" }}
paths:
- ./vendor
# run tests with phpunit or codecept
#- run: ./vendor/bin/phpunit
- run:
name: "Run tests"
command: phpdbg -qrr vendor/bin/phpunit --coverage-html build/coverage-report
- codecov/upload:
file: build/coverage-report
Here is the failing build:
https://app.circleci.com/pipelines/github/iwasherefirst2/laravel-multimail/25/workflows/57e6a71c-7614-4a4e-a7cc-53f015b3d437/jobs/35
Codecov is not able to process HTML coverage reports. You should ask phpunit to output an XML report as well, by changing your command or appending --coverage-clover coverage.xml to it.
You can view a list of the supported and unsupported coverage formats at https://docs.codecov.com/docs/supported-report-formats
[1] Archived copy: https://web.archive.org/web/20220113142241/https://docs.codecov.com/docs/supported-report-formats
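A minimal sketch of the adjusted steps, assuming the clover report is written to build/coverage.xml (the path is an arbitrary choice):
- run:
    name: "Run tests"
    command: phpdbg -qrr vendor/bin/phpunit --coverage-clover build/coverage.xml
- codecov/upload:
    file: build/coverage.xml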
I need to build a Docker image for the Windows platform containing an ASP.NET Framework 4.8 application, using a GitHub Action.
The build step throws an error:
Building Docker image *** with tags latest...
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 0.2s done
#1 creating container buildx_buildkit_builder-b7ba0de1-52b2-435b-aa7d-8311039928160 done
#1 ERROR: Error response from daemon: Windows does not support privileged mode
------
> [internal] booting buildkit:
------
ERROR: Error response from daemon: Windows does not support privileged mode
In the post
Github Action for Windows platform "Windows does not support privileged mode"
the author says he solved the problem by changing the build action to mr-smithers-excellent/docker-build-push@v5, but it didn't work for me.
My script:
build:
runs-on: windows-latest
steps:
- uses: actions/checkout@v3
- name: Setup MSBuild
uses: microsoft/setup-msbuild@v1
- name: Docker Setup Buildx
uses: docker/setup-buildx-action@v2.0.0
with:
install: true
- name: Build Docker images
uses: mr-smithers-excellent/docker-build-push@v5.8
with:
image: ******
tags: latest
registry: *****
dockerfile: *****
My Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
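For context, setup-buildx-action with install: true switches builds to the docker-container driver, which boots BuildKit in a privileged container; Windows hosts cannot run privileged containers, which is exactly what the error above reports. One possible workaround to sketch (the image name, registry and secret names below are placeholders, not values from this project) is to drop the Buildx setup and build with the classic builder on the Windows runner:
- name: Build and push Docker image (classic builder)
  run: |
    docker build -t myregistry.example.com/myapp:latest .
    docker login myregistry.example.com -u ${{ secrets.REGISTRY_USER }} -p ${{ secrets.REGISTRY_PASSWORD }}
    docker push myregistry.example.com/myapp:latest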
On my server, if I run
sudo ./svc.sh status
I get this:
It says the status is active, but "Runner listener exited with error code null".
On my GitHub account's Actions runners page, the runner is offline.
As far as I know, the runner should be Idle.
This is my workflow:
name: Node.js CI
on:
push:
branches: [ dev ]
jobs:
build:
runs-on: self-hosted
strategy:
matrix:
node-version: [12.x]
# See supported Node.js release schedule at https://nodejs.org/en/about/releases/
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
Here I don't have any build scripts because I push the build folder directly to the GitHub repo.
How do I fix this error?
I ran into this issue when I first installed the runner application as $USER1 but then configured it as a service while logged in as $USER2.
To fix it, I ran cd /home/$USER1/actions-runner && sudo ./svc.sh uninstall to remove the service.
Then I changed the owner of all the files in /home/$USER1/actions-runner/ with
sudo chown -R $USER2 /home/$USER1/actions-runner.
Next, I ran ./config.sh remove --token $HASHNUBER (you can get this token by following this page) to remove the runner application.
I also removed it from the runners page in the GitHub settings.
In the end, I installed everything again as the same user, and it worked.
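For reference, a sketch of the clean reinstall, run as one and the same user throughout (OWNER/REPO and YOUR_TOKEN are placeholders):
cd ~/actions-runner
./config.sh --url https://github.com/OWNER/REPO --token YOUR_TOKEN   # register the runner again
sudo ./svc.sh install    # install the service for the current user
sudo ./svc.sh start
sudo ./svc.sh status     # should report active, and the runner should show as Idle on GitHub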
I ran into this same issue and it turned out to be a disk space problem. Make sure your server has enough allocated storage.
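If you suspect the same cause, a quick check on a Linux host (the second path assumes the runner's default _work directory):
df -h                           # overall free disk space
df -h ~/actions-runner/_work    # where the runner checks out and builds jobs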
Setup:
Upon merge to master, a Codefresh build job builds the image and pushes it to the Docker registry.
The Codefresh test-run job picks up the new image and runs the tests.
At the end of the test-run CF job, the Allure report building step runs.
Results:
The 3rd step fails with the message in the title, but only if the job ran all the way through the pipeline.
It passes fine if I rerun the job manually (steps 1 and 2 are not executed in this case).
Notes:
Manually adding that tag does not help
Test execution pipeline:
stages:
- "clone"
- "create"
- "run"
- "get_results"
- "clean_up"
steps:
clone:
title: "Cloning repository"
type: "git-clone"
repo: "repo/repo"
# CF_BRANCH value is auto set when pipeline is triggered
revision: "${{CF_BRANCH}}"
git: "github"
stage: "clone"
create:
title: "Spin up ec2 server on aws"
image: mesosphere/aws-cli
working_directory: "${{clone}}" # Running command where code cloned
commands:
- export AWS_ACCESS_KEY_ID="${{AWS_ACCESS_KEY_ID}}"
- export AWS_SECRET_ACCESS_KEY="${{AWS_SECRET_ACCESS_KEY}}"
- export AWS_DEFAULT_REGION="${{AWS_REGION}}"
- aws cloudformation create-stack --stack-name yourStackName --template-body file://cloudformation.yaml --parameters ParameterKey=keyName,ParameterValue=qaKeys
stage: "create"
run:
title: "Wait for results"
image: mesosphere/aws-cli
working_directory: "${{clone}}" # Running command where code cloned
commands:
# wait for results in s3
- apk update
- apk upgrade
- apk add bash
- export AWS_ACCESS_KEY_ID="${{AWS_ACCESS_KEY_ID}}"
- export AWS_SECRET_ACCESS_KEY="${{AWS_SECRET_ACCESS_KEY}}"
- export AWS_DEFAULT_REGION="${{AWS_REGION}}"
- chmod +x ./wait-for-aws.sh
- ./wait-for-aws.sh
# copy results objects from s3
- aws s3 cp s3://${S3_BUCKETNAME}/ ./ --recursive
- cp -r -f ./_result_/allure-raw $CF_VOLUME_PATH/allure-results
- cat test-result.txt
stage: "run"
get_results:
title: Generate test reporting
image: codefresh/cf-docker-test-reporting
tag: "${{CF_BRANCH_TAG_NORMALIZED}}"
working_directory: '${{CF_VOLUME_PATH}}/'
environment:
- BUCKET_NAME=yourName
- CF_STORAGE_INTEGRATION=integrationName
stage: "get_results"
clean_up:
title: "Remove cf stack and files from s3"
image: mesosphere/aws-cli
working_directory: "${{clone}}" # Running command where code cloned
commands:
# wait for results in s3
- apk update
- apk upgrade
- apk add bash
- export AWS_ACCESS_KEY_ID="${{AWS_ACCESS_KEY_ID}}"
- export AWS_SECRET_ACCESS_KEY="${{AWS_SECRET_ACCESS_KEY}}"
- export AWS_DEFAULT_REGION="${{AWS_REGION}}"
# delete stack
- aws cloudformation delete-stack --stack-name stackName
# remove all files from s3
# - aws s3 rm s3://bucketName --recursive
stage: "clean_up"```
Adding CF_BRANCH_TAG_NORMALIZED as a tag won't help in that case.
CF_BRANCH_TAG_NORMALIZED needs to be set as an environment variable for this step.
Taking a look at the source code of codefresh/cf-docker-test-reporting,
https://github.com/codefresh-io/cf-docker-test-reporting/blob/master/config/index.js
env: {
// bucketName - only bucket name, with out subdir path
bucketName: ConfigUtils.getBucketName(),
// bucketSubPath - parsed path to sub folder inside bucket
bucketSubPath: ConfigUtils.getBucketSubPath(),
// originBucketName - origin value that can contain subdir need to use it in some cases
originBucketName: process.env.BUCKET_NAME,
apiKey: process.env.CF_API_KEY,
buildId: process.env.CF_BUILD_ID,
volumePath: process.env.CF_VOLUME_PATH,
branchNormalized: process.env.CF_BRANCH_TAG_NORMALIZED,
storageIntegration: process.env.CF_STORAGE_INTEGRATION,
logLevel: logLevelsMap[process.env.REPORT_LOGGING_LEVEL] || INFO,
sourceReportFolderName: (allureDir || 'allure-results').trim(),
reportDir: ((reportDir || '').trim()) || undefined,
reportIndexFile: ((reportIndexFile || '').trim()) || undefined,
reportWrapDir: _.isNumber(reportWrapDir) ? String(reportWrapDir) : '',
reportType: _.isString(reportType) ? reportType.replace(/[<>]/g, 'hackDetected') : 'default',
allureDir,
clearTestReport
},
you can see that CF_BRANCH_TAG_NORMALIZED is taken directly from the environment.
My assumption is that whatever triggers your build normally does not set this environment variable. It is usually set automatically when you have a git trigger, e.g. from GitHub.
When you start your pipeline manually, you probably set the variable, and that's why it runs then.
You should check how your pipelines are usually triggered and whether the variable is set (automatically or manually).
Here's some more documentation about these variables:
https://codefresh.io/docs/docs/codefresh-yaml/variables/#system-provided-variables
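If your trigger really does not provide it, one option is to pass the variable to the reporting step yourself. A sketch only, assuming CF_BRANCH is available (as in the clone step above) and that the plain branch name is acceptable without further normalization:
get_results:
  title: Generate test reporting
  image: codefresh/cf-docker-test-reporting
  working_directory: '${{CF_VOLUME_PATH}}/'
  environment:
    - BUCKET_NAME=yourName
    - CF_STORAGE_INTEGRATION=integrationName
    # assumption: the branch name contains no characters that need normalizing
    - CF_BRANCH_TAG_NORMALIZED=${{CF_BRANCH}}
  stage: "get_results"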
I have a successful Bitbucket pipeline calling out to AWS CodeDeploy, but I'm wondering if I can add a step that checks and waits for CodeDeploy to succeed, and otherwise fails the pipeline. Would this be possible with a script that loops over a CodeDeploy call to monitor the status of the deployment? Any idea which CodeDeploy call that would be?
bitbucket-pipelines.yml
image: pitech/gradle-awscli
pipelines:
branches:
develop:
- step:
caches:
- gradle
script:
- gradle build bootRepackage
- mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
- pip install awscli --upgrade --user
- aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
- aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE
appspec.yml
version: 0.0
os: linux
files:
- source: thejar.jar
destination: /home/ec2-user/the-server/
permissions:
- object: /
pattern: "**"
owner: ec2-user
group: ec2-user
hooks:
ApplicationStop:
- location: scripts/server_stop.sh
timeout: 60
runas: ec2-user
ApplicationStart:
- location: scripts/server_start.sh
timeout: 60
runas: ec2-user
ValidateService:
- location: scripts/server_validate.sh
timeout: 120
runas: ec2-user
Unfortunately it doesn't seem like Bitbucket waits for the ValidateService hook to complete, so I'd need a way in Bitbucket to confirm it before marking the build a success.
The AWS CLI already has a deployment-successful waiter, which checks the status of a deployment every 15 seconds. You just need to feed the output of create-deployment to it.
In your specific case, it should look like this:
image: pitech/gradle-awscli
pipelines:
branches:
develop:
- step:
caches:
- gradle
script:
- gradle build bootRepackage
- mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
- pip install awscli --upgrade --user
- aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
- aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE > deployment.json
- aws deploy wait deployment-successful --cli-input-json file://deployment.json
aws deploy create-deployment is an asynchronous call, and Bitbucket has no way of knowing about the success of your deployment on its own. Adding a script to your CodeDeploy application will have no effect on Bitbucket knowing about your deployment.
You have one (maybe two) options to fix this issue.
#1 Include a script that waits for your deployment to finish
You need to add a script to your Bitbucket pipeline that checks whether your deployment has finished. You can either use SNS notifications or poll the CodeDeploy service directly.
The pseudocode would look something like this:
loop
check_if_deployment_complete
if false, wait and retry
if true && deployment successful, return 0 (success)
if true && deployment failed, return non-zero (failure)
You can use the AWS CLI or your favorite scripting language. Add it at the end of your bitbucket-pipelines.yml script, as in the sketch below. Make sure you wait between calls to CodeDeploy when checking the status.
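A rough shell sketch of that loop with the AWS CLI (DEPLOYMENT_ID is whatever your create-deployment call returned; the 15-second sleep is an arbitrary choice):
while true; do
  STATUS=$(aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" \
    --query 'deploymentInfo.status' --output text)
  case "$STATUS" in
    Succeeded) exit 0 ;;          # deployment finished, mark the step green
    Failed|Stopped) exit 1 ;;     # fail the pipeline
    *) sleep 15 ;;                # Created/Queued/InProgress: keep waiting
  esac
done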
#2 (the maybe) Use the Bitbucket AWS CodeDeploy integration directly
Bitbucket integrates with AWS CodeDeploy directly, so you might be able to use their integration rather than your own script. I don't know if this is supported or not.
For my tests I need a dummy server which serves sample/test code. I use a Node HTTP server for that and start it before my scripts with node ./test/server.js.
I can start it, but the issue is that it blocks the instance, so the tests can't run.
So the question is: how can I run the server in the background or in a new instance so it doesn't conflict with the tests? I do stop the server in after_script so I don't have to terminate it.
This is my Travis config so far:
language: node_js
node_js:
- "6.1"
cache:
directories:
- node_modules
install:
- npm install
before_script:
- sh -e node ./test/server.js
script:
- ./node_modules/mocha-phantomjs/bin/mocha-phantomjs ./test/browser/jelly.html
- ./node_modules/mocha-phantomjs/bin/mocha-phantomjs ./test/browser/form.html
- ./node_modules/mocha/bin/mocha ./test/node/jelly.js
after_script:
- curl http://localhost:5555/close
You can background the process by appending an &:
before_script:
- node ./test/server.js &
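If the first test occasionally starts before the server is listening, a short wait after backgrounding is a simple addition (a sketch; the 3 seconds are arbitrary and port 5555 is taken from the after_script above):
before_script:
  - node ./test/server.js &
  - sleep 3   # give the dummy server time to start listening on port 5555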