CodeDeploy Bitbucket - How to Fail Bitbucket on CodeDeploy Failure

I have a successful Bitbucket pipeline calling out to AWS CodeDeploy, but I'm wondering if I can add a step that waits for CodeDeploy to succeed and otherwise fails the pipeline. Would this be possible with a script that loops over a CodeDeploy call to monitor the status of the deployment? Any idea which CodeDeploy call that would be?
bitbucket-pipelines.yml
image: pitech/gradle-awscli
pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE
appspec.yml
version: 0.0
os: linux
files:
  - source: thejar.jar
    destination: /home/ec2-user/the-server/
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStop:
    - location: scripts/server_stop.sh
      timeout: 60
      runas: ec2-user
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 60
      runas: ec2-user
  ValidateService:
    - location: scripts/server_validate.sh
      timeout: 120
      runas: ec2-user
Unfortunately it doesn't seem like Bitbucket waits for the ValidateService hook to complete, so I'd need a way in Bitbucket to confirm the deployment succeeded before marking the build as a success.

The AWS CLI already has a deployment-successful waiter, which checks the status of a deployment every 15 seconds. You just need to feed the output of create-deployment into deployment-successful.
In your specific case, it should look like this:
image: pitech/gradle-awscli
pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE > deployment.json
            - aws deploy wait deployment-successful --cli-input-json file://deployment.json
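If relying on --cli-input-json feels indirect, an equivalent sketch (assuming only the CLI's built-in --query filter) is to capture the deployment ID explicitly and hand it to the waiter; the waiter exits non-zero when the deployment fails, which is what makes the pipeline step fail:
            # Capture the deployment ID and wait on it explicitly
            - DEPLOYMENT_ID=$(aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE --query deploymentId --output text)
            - aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"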

aws deploy create-deployment is an asynchronous call, and Bitbucket has no idea that it should wait for and check the result of your deployment. Adding a script to your CodeDeploy application will have no effect on Bitbucket knowing about your deployment.
You have one (maybe two) options to fix this issue.
#1 Include a script that waits for your deployment to finish
You need to add a script to your Bitbucket pipeline that waits for your deployment to finish and checks its status. You can either use SNS notifications or poll the CodeDeploy service directly.
The pseudocode would look something like this:
loop
check_if_deployment_complete
if false, wait and retry
if true && deployment successful, return 0 (success)
if true && deployment failed, return non-zero (failure)
You can use the AWS CLI or your favorite scripting language. Add it at the end of your bitbucket-pipelines.yml script. Make sure you wait between calls to CodeDeploy when checking the status.
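A minimal shell sketch of that loop, assuming the deployment ID from create-deployment is available in DEPLOYMENT_ID (an illustration, not the only way to do it):
# Poll CodeDeploy until the deployment reaches a terminal state
while true; do
  STATUS=$(aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" --query deploymentInfo.status --output text)
  case "$STATUS" in
    Succeeded) echo "Deployment succeeded"; exit 0 ;;
    Failed|Stopped) echo "Deployment ended with status $STATUS"; exit 1 ;;
    *) echo "Deployment status: $STATUS"; sleep 15 ;;   # wait between calls to CodeDeploy
  esac
done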
#2 (the maybe) Use the Bitbucket AWS CodeDeploy integration directly
Bitbucket integrates with AWS CodeDeploy directly, so you might be able to use that integration rather than your own script. I don't know if this is supported or not.

Related

CircleCI upload to CodeCov cannot find coverage report?

As TravisCI.org is no longer free for small open source projects, I am trying to set up CircleCI and Codecov.
Creating the coverage report in CircleCI seems to work, but uploading to Codecov fails, claiming the report cannot be found.
I followed the instructions at https://circleci.com/docs/2.0/code-coverage/#codecov:
- Used orb codecov/codecov@1.0.2
- Allowed uncertified orbs
- Using CircleCI 2.1
- Generating the coverage report with phpdbg
- I tried with store_artifacts and without (it's unclear to me whether Codecov needs it), but both fail
That's my config.yml:
# PHP CircleCI 2.0 configuration file
# See: https://circleci.com/docs/2.0/language-php/
version: 2.1
orbs:
  codecov: codecov/codecov@1.0.2
# Define a job to be invoked later in a workflow.
# See: https://circleci.com/docs/2.0/configuration-reference/#jobs
jobs:
  build:
    # Specify the execution environment. You can specify an image from Dockerhub or use one of our Convenience Images from CircleCI's Developer Hub.
    # See: https://circleci.com/docs/2.0/configuration-reference/#docker-machine-macos-windows-executor
    docker:
      # Specify the version you desire here
      - image: circleci/php:7.2-node-browsers
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # Using the RAM variation mitigates I/O contention
      # for database intensive operations.
      # - image: circleci/mysql:5.7-ram
      #
      # - image: redis:2.8.19
    # Add steps to the job
    # See: https://circleci.com/docs/2.0/configuration-reference/#steps
    steps:
      - checkout
      - run: sudo apt update && sudo apt install -y zlib1g-dev libsqlite3-dev
      - run: sudo docker-php-ext-install zip
      # Download and cache dependencies
      - restore_cache:
          keys:
            # "composer.lock" can be used if it is committed to the repo
            - v1-dependencies-{{ checksum "composer.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: composer install -n --prefer-dist
      - save_cache:
          key: v1-dependencies-{{ checksum "composer.json" }}
          paths:
            - ./vendor
      # run tests with phpunit or codecept
      # - run: ./vendor/bin/phpunit
      - run:
          name: "Run tests"
          command: phpdbg -qrr vendor/bin/phpunit --coverage-html build/coverage-report
      - codecov/upload:
          file: build/coverage-report
Here is the failing build:
https://app.circleci.com/pipelines/github/iwasherefirst2/laravel-multimail/25/workflows/57e6a71c-7614-4a4e-a7cc-53f015b3d437/jobs/35
Codecov is not able to process HTML coverage reports. You should ask PHPUnit to output XML as well, either by changing your command or by appending --coverage-clover coverage.xml to it.
You can view a list of the supported and unsupported coverage formats at https://docs.codecov.com/docs/supported-report-formats
[1] Saved https://web.archive.org/web/20220113142241/https://docs.codecov.com/docs/supported-report-formats
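Applied to the config above, the two relevant steps would then change roughly like this (a sketch; coverage.xml is just an example filename):
      - run:
          name: "Run tests"
          command: phpdbg -qrr vendor/bin/phpunit --coverage-clover coverage.xml
      - codecov/upload:
          file: coverage.xml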

AWS credentials not found for celery-k8s deployment

I'm trying to run Dagster using celery-k8s, starting from examples/celery-k8s. Upon running the pipeline from the playground I get:
Initialization of resources [s3, io_manager] failed.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have configured AWS credentials in environment variables as mentioned in the documentation:
deployments:
  - name: "user-code-deployment-test"
    image:
      repository: "somasays/dagster-usercode-example"
      tag: "0.5"
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/workspace/repo.py"
    port: 3030
    env:
      AWS_ACCESS_KEY_ID: AAAAAAAAAAAAAAAAAAAAAAAAA
      AWS_SECRET_ACCESS_KEY: qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
      AWS_DEFAULT_REGION: eu-central-1
I can see these values set in the pod's environment variables, and after pip install awscli I can access the S3 location with aws s3 ls (see the screenshot below). The job pod, however, throws Unable to locate credentials.
Please help
The deployment configuration applies to the user code servers. Meanwhile, the celery executor runs your pipeline code in separate Kubernetes jobs. To provide your secrets there, you will want to configure the env_secrets field of the celery-k8s executor in your pipeline run config.
See https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-k8s/dagster_k8s/job.py#L321-L327 for details on the config.
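As a rough sketch (the secret name below is hypothetical, and the exact schema depends on your Dagster version), the run config would carry the secrets like this:
execution:
  celery-k8s:
    config:
      env_secrets:
        - aws-credentials-secret   # hypothetical Kubernetes Secret holding AWS_ACCESS_KEY_ID etc.
The key/value pairs of that Secret are then injected as environment variables into the step jobs that the celery-k8s executor launches.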

Github actions Runner listener exited with error code null

On my server, if I run
sudo ./svc.sh status
I get this:
It says the status is active, but "Runner listener exited with error code null".
In my GitHub account's Actions page, the runner is offline.
The runner should be Idle as far as I know.
This is my workflow
name: Node.js CI
on:
  push:
    branches: [ dev ]
jobs:
  build:
    runs-on: self-hosted
    strategy:
      matrix:
        node-version: [ 12.x ]
    # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
I don't have any build scripts here because I push the build folder directly to the GitHub repo.
How do I fix this error?
I ran into this issue when I first installed the runner application under $USER1 but configured it as a service when I was $USER2.
First, I ran cd /home/$USER1/actions-runner && sudo ./svc.sh uninstall to remove the service.
Then I changed the owner of all files in /home/$USER1/actions-runner/ with
sudo chown -R $USER2 /home/$USER1/actions-runner.
Next I ran ./config.sh remove --token $HASHNUMBER (you can get this token by following this page) to remove the runner configuration.
I also removed it from the GitHub settings runner page.
In the end, I installed everything again as the same user, and it worked out.
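The reinstall, done consistently as one user, then looks roughly like this (OWNER/REPO and $TOKEN are placeholders, and svc.sh install can optionally be given the user to run the service as):
cd ~/actions-runner
./config.sh --url https://github.com/OWNER/REPO --token $TOKEN
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status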
I ran into this same issue and it turned out to be a disk space problem. Make sure your server has enough allocated storage.

Airflow live executor logs with DaskExecutor

I have an Airflow installation (on Kubernetes). My setup uses the DaskExecutor. I also configured remote logging to S3. However, while a task is running I cannot see its log, and I get this error instead:
*** Log file does not exist: /airflow/logs/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Fetching from: http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-74d75ccd98-6g9h5', port=8793): Max retries exceeded with url: /log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7d0668ae80>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Once the task is done, the log is shown correctly.
I believe what Airflow is doing is:
- for finished tasks, read logs from S3
- for running tasks, connect to the executor's log server endpoint and show that
Looks like Airflow is using celery.worker_log_server_port to connect to my dask executor to fetch logs from there.
How to configure DaskExecutor to expose log server endpoint?
my configuration:
[core]
remote_logging = True
remote_base_log_folder = s3://some-s3-path
executor = DaskExecutor
[dask]
cluster_address = 127.0.0.1:8786
[celery]
worker_log_server_port = 8793
What I verified:
- the log file exists and is being written to on the executor while the task is running
- netstat -tunlp on the executor container does not show any extra port exposed where logs could be served from
UPDATE
Have a look at the serve_logs Airflow CLI command - I believe it does exactly the same thing.
We solved the problem by simply starting a Python HTTP server on the worker.
Dockerfile:
RUN mkdir -p $AIRFLOW_HOME/serve
RUN ln -s $AIRFLOW_HOME/logs $AIRFLOW_HOME/serve/log
worker.sh (run by Docker CMD):
#!/usr/bin/env bash
cd $AIRFLOW_HOME/serve
python3 -m http.server 8793 &
cd -
dask-worker "$@"
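With that in place the worker serves its log directory on port 8793, which you can sanity-check from another pod before relying on the UI (the path mirrors the one Airflow requests above):
curl http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log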

Travis start server and continue with scripts

For my tests I need a dummy server that outputs sample/test code. I use a Node HTTP server for that and start it before my scripts with node ./test/server.js.
I can start it, but the issue is that it takes over the instance, so the tests can't run.
So the question is: how can I run the server in the background, or in a new instance, so it doesn't conflict with the tests? I do stop the server in after_script, so I don't have to terminate it manually.
This is my travis-config so far:
language: node_js
node_js:
  - "6.1"
cache:
  directories:
    - node_modules
install:
  - npm install
before_script:
  - sh -e node ./test/server.js
script:
  - ./node_modules/mocha-phantomjs/bin/mocha-phantomjs ./test/browser/jelly.html
  - ./node_modules/mocha-phantomjs/bin/mocha-phantomjs ./test/browser/form.html
  - ./node_modules/mocha/bin/mocha ./test/node/jelly.js
after_script:
  - curl http://localhost:5555/close
You can background the process by appending a &:
before_script:
  - node ./test/server.js &
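One caveat (an assumption about your server's startup time, not something Travis requires): backgrounding returns immediately, so you may want to give the dummy server a moment to bind its port before the tests start:
before_script:
  - node ./test/server.js &
  - sleep 3   # crude wait; adjust, or poll the port until the server responds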
