I'm trying to test artifactory-resource by running through the example pipeline.
groups:
  - name: all
    jobs:
      - set-pipeline
      - trigger-when-new-file-is-added-to-artifactory

jobs:
  - name: set-pipeline
    serial: true
    plan:
      - in_parallel:
          - get: ea-terraform-module-aws-rds
            trigger: true
      - set_pipeline: deploying-rds-instance-from-jfrog-artifact
        file: ea-terraform-module-aws-rds/examples/concourse/ea-terraform-module-aws-rds.yml

  - name: trigger-when-new-file-is-added-to-artifactory
    plan:
      - get: ea-rds-jfrog-repo
      - task: use-new-file
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: ubuntu
          inputs:
            - name: ea-rds-jfrog-repo
          run:
            path: cat
            args:
              - "./ea-rds-jfrog-repo/ea-terraform-module-aws-rds*.zip"

resource_types:
  - name: artifactory
    type: docker-image
    source:
      repository: pivotalservices/artifactory-resource

resources:
  - name: ea-rds-jfrog-repo
    type: artifactory
    check_every: 1m
    source:
      endpoint: https://xxx.jfrog.io/artifactory
      repository: "ea-terraform-module-aws-rds-1.4.0.zip"
      regex: "ea-terraform-module-aws-rds-(?<version>.*).zip"
      username: ${JF_USER}
      password: ${JF_PASSWORD}

  - name: ea-terraform-module-aws-rds
    type: git
    source:
      private_key: ((github_private_key))
      uri: git#github.com:xxx/xxx
      branch: SAAS-27134
Concourse error (screenshot): pipeline path -> deploying-rds-instance-from-jfrog-artifact/ea-rds-jfrog-repo
Repo on JFrog Artifactory (screenshot)
I also tried adding a version parameter.
The error indicates that the Concourse resource is making a call to the Artifactory API but, instead of a JSON structure, it receives a null response. The resource then passes that null response to the jq utility, which expects an iterable object.
So why does it get a null response from the API?
It looks like, at minimum, the repository: setting in the ea-rds-jfrog-repo resource definition is incorrect.
Based on the second screenshot, I'm going to guess it should be set to:
repository: "/ea-terraform-module-aws-rds/terraform-module/aws"
I recommend using https://github.com/spring-io/artifactory-resource, which is in active development, instead of the unmaintained one from pivotalservices.
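Putting that together, a minimal sketch of the corrected resource definition; the repository path is only my guess from the screenshot, while the endpoint, regex, and credentials are carried over unchanged from the question:

resources:
  - name: ea-rds-jfrog-repo
    type: artifactory
    check_every: 1m
    source:
      endpoint: https://xxx.jfrog.io/artifactory
      # repository should point at the folder path inside Artifactory,
      # not at a single artifact file name
      repository: "/ea-terraform-module-aws-rds/terraform-module/aws"
      # the regex still extracts the version from matching file names
      regex: "ea-terraform-module-aws-rds-(?<version>.*).zip"
      username: ${JF_USER}
      password: ${JF_PASSWORD}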
Is there a way to instruct CodeDeploy to move on to the next deployment lifecycle event if a script execution fails in one of the steps defined in appspec.yml?
For example, if the script stop_service.sh called by the ApplicationStop event fails, I would like the error to be ignored and start_service.sh in ApplicationStart to be executed instead.
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/custody_register/platform/3streamer/dev
    overwrite: true
file_exists_behavior: OVERWRITE
permissions:
  - object: /deployment/
    pattern: "**"
    owner: root
    group: root
    mode: 777
    type:
      - file
hooks:
  ApplicationStop:
    - location: ../../../../../custody_register/platform/3streamer/dev/deployment/stop_service.sh
      timeout: 40
      runas: root
      allow_failure: true
  ApplicationStart:
    - location: ../../../../../custody_register/platform/3streamer/dev/deployment/start_service.sh
      timeout: 40
      runas: root
      allow_failure: true
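One hedged workaround for the ApplicationStop case is to create the deployment with the --ignore-application-stop-failures option; another is to have the hook call a small wrapper that swallows the failure so the lifecycle event always reports success. A sketch of the wrapper approach follows; stop_service_safe.sh is an illustrative name, not part of the original appspec:

hooks:
  ApplicationStop:
    # stop_service_safe.sh wraps the real script, e.g. a one-liner:
    #   ./stop_service.sh || exit 0
    # so the hook exits 0 even when stopping the service fails
    - location: ../../../../../custody_register/platform/3streamer/dev/deployment/stop_service_safe.sh
      timeout: 40
      runas: root
  ApplicationStart:
    - location: ../../../../../custody_register/platform/3streamer/dev/deployment/start_service.sh
      timeout: 40
      runas: root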
I installed datadog-agent using helm upgrade --install and provided the -f datadog.yaml parameter. The datadog.yaml contains this entry:
...
agents:
  enabled: true
  useConfigMap: true
  customAgentConfig:
    # Autodiscovery for Kubernetes
    listeners:
      - name: kubelet
    config_providers:
      - name: kubelet
        polling: true
      - name: docker
        polling: true
    apm_config:
      enabled: false
      apm_non_local_traffic: true
    dogstatsd_mapper_profiles:
      - name: airflow
        prefix: "airflow."
        mappings:
          - match: "airflow.*_start"
            name: "airflow.job.start"
            tags:
              job_name: "$1"
          - match: "airflow.*_end"
            name: "airflow.job.end"
            tags:
              job_name: "$1"
          - match: "airflow.operator_failures_*"
...
But I don't see the DD_DOGSTATSD_MAPPER_PROFILES env variable in the datadog-agent pod.
How can I inject this env variable into the datadog-agent pods?
Update 2/24/2022: I do see it has been added as a ConfigMap, but it does not look like it is being mounted into the datadog-agent pod.
Update 3/4/2022: This yaml is working and I see the metrics in the Datadog dashboard. I do see it got mounted on the datadog-agent pod as a ConfigMap.
Use this in your Datadog Helm chart values:
datadog:
  env:
    - name: DD_DOGSTATSD_MAPPER_PROFILES
      value: >
        [{"prefix":"airflow.","name":"airflow","mappings":[{"name":"airflow.job.start","match":"airflow.*_start","tags":{"job_name":"$1"}},{"name":"airflow.job.end","match":"airflow.*_end","tags":{"job_name":"$1"}},{"name":"airflow.job.heartbeat.failure","match":"airflow.*_heartbeat_failure","tags":{"job_name":"$1"}},{"name":"airflow.operator_failures","match":"airflow.operator_failures_*","tags":{"operator_name":"$1"}},{"name":"airflow.operator_successes","match":"airflow.operator_successes_*","tags":{"operator_name":"$1"}},{"match_type":"regex","name":"airflow.dag_processing.last_runtime","match":"airflow\\.dag_processing\\.last_runtime\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dag_processing.last_run.seconds_ago","match":"airflow\\.dag_processing\\.last_run\\.seconds_ago\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dag.loading_duration","match":"airflow\\.dag\\.loading-duration\\.(.*)","tags":{"dag_file":"$1"}},{"name":"airflow.dagrun.first_task_scheduling_delay","match":"airflow.dagrun.*.first_task_scheduling_delay","tags":{"dag_id":"$1"}},{"name":"airflow.pool.open_slots","match":"airflow.pool.open_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.queued_slots","match":"pool.queued_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.running_slots","match":"pool.running_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.used_slots","match":"airflow.pool.used_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.starving_tasks","match":"airflow.pool.starving_tasks.*","tags":{"pool_name":"$1"}},{"match_type":"regex","name":"airflow.dagrun.dependency_check","match":"airflow\\.dagrun\\.dependency-check\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dag.task.duration","match":"airflow\\.dag\\.(.*)\\.([^.]*)\\.duration","tags":{"dag_id":"$1","task_id":"$2"}},{"match_type":"regex","name":"airflow.dag_processing.last_duration","match":"airflow\\.dag_processing\\.last_duration\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dagrun.duration.success","match":"airflow\\.dagrun\\.duration\\.success\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dagrun.duration.failed","match":"airflow\\.dagrun\\.duration\\.failed\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dagrun.schedule_delay","match":"airflow\\.dagrun\\.schedule_delay\\.(.*)","tags":{"dag_id":"$1"}},{"name":"airflow.scheduler.tasks.running","match":"scheduler.tasks.running"},{"name":"airflow.scheduler.tasks.starving","match":"scheduler.tasks.starving"},{"name":"airflow.sla_email_notification_failure","match":"sla_email_notification_failure"},{"match_type":"regex","name":"airflow.dag.task_removed","match":"airflow\\.task_removed_from_dag\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dag.task_restored","match":"airflow\\.task_restored_to_dag\\.(.*)","tags":{"dag_id":"$1"}},{"name":"airflow.task.instance_created","match":"airflow.task_instance_created-*","tags":{"task_class":"$1"}},{"name":"airflow.ti.start","match":"ti.start.*.*","tags":{"dagid":"$1","taskid":"$2"}},{"name":"airflow.ti.finish","match":"ti.finish.*.*.*","tags":{"dagid":"$1","state":"$3","taskid":"$2"}}]}]
I was setting up Argo in my k8s cluster, in the argo namespace.
I also installed MinIO as an artifact repository (https://github.com/argoproj/argo-workflows/blob/master/docs/configure-artifact-repository.md).
I am configuring a workflow which tries to access that non-default artifact repository:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
    - name: artifact-example
      steps:
        - - name: generate-artifact
            template: whalesay
        - - name: consume-artifact
            template: print-message
            arguments:
              artifacts:
                # bind message to the hello-art artifact
                # generated by the generate-artifact step
                - name: message
                  from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"

    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello world | tee /tmp/hello_world.txt"]
      outputs:
        artifacts:
          # generate hello-art artifact from /tmp/hello_world.txt
          # artifacts can be directories as well as files
          - name: hello-art
            path: /tmp/hello_world.txt
            s3:
              endpoint: argo-artifacts-minio.argo:9000
              bucket: my-bucket
              key: /my-output-artifact.tgz
              accessKeySecret:
                name: argo-artifacts-minio
                key: accesskey
              secretKeySecret:
                name: argo-artifacts-minio
                key: secretkey

    - name: print-message
      inputs:
        artifacts:
          # unpack the message input artifact
          # and put it at /tmp/message
          - name: message
            path: /tmp/message
            s3:
              endpoint: argo-artifacts-minio.argo:9000
              bucket: my-bucket
              accessKeySecret:
                name: argo-artifacts-minio
                key: accesskey
              secretKeySecret:
                name: argo-artifacts-minio
                key: secretkey
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["cat /tmp/message"]
I created the workflow in the argo namespace by running:
argo submit --watch artifact-passing-nondefault-new.yaml -n argo
But the workflow fails with an error:
STEP PODNAME DURATION MESSAGE
✖ artifact-passing-z9g64 child 'artifact-passing-z9g64-150231068' failed
└---⚠ generate-artifact artifact-passing-z9g64-150231068 12s failed to save outputs: Get https://argo-artifacts-minio.argo:9000/my-bucket/?location=: http: server gave HTTP response to HTTPS client
Can someone help me to solve this error?
Since the MinIO setup runs without TLS configured, the workflow has to specify that it is connecting to an insecure artifact repository.
Including the field insecure: true in the s3 definition section of the workflow solves the issue.
s3:
  endpoint: argo-artifacts-minio.argo:9000
  insecure: true
  bucket: my-bucket
  key: /my-output-artifact.tgz
  accessKeySecret:
    name: argo-artifacts-minio
    key: accesskey
  secretKeySecret:
    name: argo-artifacts-minio
    key: secretkey
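The print-message template's input artifact points at the same plain-HTTP endpoint, so presumably its s3 block needs the same flag; a sketch based on the workflow above:

inputs:
  artifacts:
    - name: message
      path: /tmp/message
      s3:
        endpoint: argo-artifacts-minio.argo:9000
        insecure: true
        bucket: my-bucket
        accessKeySecret:
          name: argo-artifacts-minio
          key: accesskey
        secretKeySecret:
          name: argo-artifacts-minio
          key: secretkey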
We are managing web sites with SaltStack. These sites run on PHP-FPM, and we have several FPM pools. Each pool is configured with a dedicated file in the php-fpm.d/ directory.
Currently, we have a file.managed state with check_cmd: php-fpm -ty to check whether the configuration is valid.
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package

fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf

fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
It works fine until a mistake is made in a pool file (say, fpm-pool-a). Though the fpm-conf state blocks the update to the main FPM config file, a.conf has already been contaminated with the erroneous configuration.
Is there a way to prevent this from happening? It seems that check_cmd can't be used in this case.
How can I guarantee that a series of files are all valid before updating any of them?
One workaround is to recover the original pool files if any mistake is made.
Here is an example; I'd suggest moving to Jinja if this state starts to get any larger.
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package

fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-a-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/a.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf

fpm-pool-b-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/b.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
Notice the - backup: minion addition; this backs up the file locally to /var/cache/salt/minion/file_backup/...
So in case the main config check fails, fpm-pool-a-recover and fpm-pool-b-recover will fire and restore the most recent backup of the original files.
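As a rough sketch of the Jinja direction suggested above (only the pool names a and b from the example are assumed), the per-pool states and their recover states could be generated from a loop so new pools stay consistent:

{% for pool in ['a', 'b'] %}
fpm-pool-{{ pool }}:
  file.managed:
    - name: /etc/php-fpm.d/{{ pool }}.conf
    - source: salt://php/template/fpm-{{ pool }}.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - backup: minion
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf

fpm-pool-{{ pool }}-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/{{ pool }}.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
{% endfor %}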