I installed datadog-agent using helm upgrade --install and passed the -f datadog.yaml parameter. The datadog.yaml contains this entry:
...
agents:
  enabled: true
  useConfigMap: true
  customAgentConfig:
    # Autodiscovery for Kubernetes
    listeners:
      - name: kubelet
    config_providers:
      - name: kubelet
        polling: true
      - name: docker
        polling: true
    apm_config:
      enabled: false
      apm_non_local_traffic: true
    dogstatsd_mapper_profiles:
      - name: airflow
        prefix: "airflow."
        mappings:
          - match: "airflow.*_start"
            name: "airflow.job.start"
            tags:
              job_name: "$1"
          - match: "airflow.*_end"
            name: "airflow.job.end"
            tags:
              job_name: "$1"
          - match: "airflow.operator_failures_*"
...
But I don't see the DD_DOGSTATSD_MAPPER_PROFILES env variable in the datadog-agent pod.
How can I inject this env variable into the datadog-agent pods?
Update 2/24/2022: I do see it has been added as a ConfigMap, but it does not look like it is being mounted into the datadog-agent pod.
Update 3/4/2022: This YAML is working and I can see the metrics in the Datadog dashboard. It did get mounted on the datadog-agent pod as a ConfigMap.
Use this in your Datadog Helm chart values:
datadog:
  env:
    - name: DD_DOGSTATSD_MAPPER_PROFILES
      value: >
        [{"prefix":"airflow.","name":"airflow","mappings":[{"name":"airflow.job.start","match":"airflow.*_start","tags":{"job_name":"$1"}},{"name":"airflow.job.end","match":"airflow.*_end","tags":{"job_name":"$1"}},{"name":"airflow.job.heartbeat.failure","match":"airflow.*_heartbeat_failure","tags":{"job_name":"$1"}},{"name":"airflow.operator_failures","match":"airflow.operator_failures_*","tags":{"operator_name":"$1"}},{"name":"airflow.operator_successes","match":"airflow.operator_successes_*","tags":{"operator_name":"$1"}},{"match_type":"regex","name":"airflow.dag_processing.last_runtime","match":"airflow\\.dag_processing\\.last_runtime\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dag_processing.last_run.seconds_ago","match":"airflow\\.dag_processing\\.last_run\\.seconds_ago\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dag.loading_duration","match":"airflow\\.dag\\.loading-duration\\.(.*)","tags":{"dag_file":"$1"}},{"name":"airflow.dagrun.first_task_scheduling_delay","match":"airflow.dagrun.*.first_task_scheduling_delay","tags":{"dag_id":"$1"}},{"name":"airflow.pool.open_slots","match":"airflow.pool.open_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.queued_slots","match":"pool.queued_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.running_slots","match":"pool.running_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.used_slots","match":"airflow.pool.used_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.starving_tasks","match":"airflow.pool.starving_tasks.*","tags":{"pool_name":"$1"}},{"match_type":"regex","name":"airflow.dagrun.dependency_check","match":"airflow\\.dagrun\\.dependency-check\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dag.task.duration","match":"airflow\\.dag\\.(.*)\\.([^.]*)\\.duration","tags":{"dag_id":"$1","task_id":"$2"}},{"match_type":"regex","name":"airflow.dag_processing.last_duration","match":"airflow\\.dag_processing\\.last_duration\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dagrun.duration.success","match":"airflow\\.dagrun\\.duration\\.success\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dagrun.duration.failed","match":"airflow\\.dagrun\\.duration\\.failed\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dagrun.schedule_delay","match":"airflow\\.dagrun\\.schedule_delay\\.(.*)","tags":{"dag_id":"$1"}},{"name":"airflow.scheduler.tasks.running","match":"scheduler.tasks.running"},{"name":"airflow.scheduler.tasks.starving","match":"scheduler.tasks.starving"},{"name":"airflow.sla_email_notification_failure","match":"sla_email_notification_failure"},{"match_type":"regex","name":"airflow.dag.task_removed","match":"airflow\\.task_removed_from_dag\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dag.task_restored","match":"airflow\\.task_restored_to_dag\\.(.*)","tags":{"dag_id":"$1"}},{"name":"airflow.task.instance_created","match":"airflow.task_instance_created-*","tags":{"task_class":"$1"}},{"name":"airflow.ti.start","match":"ti.start.*.*","tags":{"dagid":"$1","taskid":"$2"}},{"name":"airflow.ti.finish","match":"ti.finish.*.*.*","tags":{"dagid":"$1","state":"$3","taskid":"$2"}}]}]
I'm trying to connect Dapr with NATS with JetStream functionality enabled.
I want to start everything with docker-compose. The NATS service starts, and when I run the NATS CLI command nats -s "nats://localhost:4222" server check jetstream, I get OK JetStream | memory=0B memory_pct=0%;75;90 storage=0B storage_pct=0%;75;90 streams=0 streams_pct=0% consumers=0 consumers_pct=0%, indicating NATS with JetStream is working fine.
Unfortunately, Dapr first returns a warning and then an error:
warning: error creating pub sub %!s(*string=0xc0000ca020) (pubsub.jetstream/v1): couldn't find message bus pubsub.jetstream/v1" app_id=conversation-api1 instance=50b51af8e9a8 scope=dapr.runtime type=log ver=1.3.0
error: process component conversation-pubsub error: couldn't find message bus pubsub.jetstream/v1" app_id=conversation-api1 instance=50b51af8e9a8 scope=dapr.runtime type=log ver=1.3.0
I followed the instructions on the official site.
docker-compose.yaml
version: '3.4'

services:
  conversation-api1:
    image: ${DOCKER_REGISTRY-}conversationapi1
    build:
      context: .
      dockerfile: Conversation.Api1/Dockerfile
    ports:
      - "5010:80"

  conversation-api1-dapr:
    container_name: conversation-api1-dapr
    image: "daprio/daprd:latest"
    command: [ "./daprd", "--log-level", "debug", "-app-id", "conversation-api1", "-app-port", "80", "--components-path", "/components", "-config", "/configuration/conversation-config.yaml" ]
    volumes:
      - "./dapr/components/:/components"
      - "./dapr/configuration/:/configuration"
    depends_on:
      - conversation-api1
      - redis
      - nats
    network_mode: "service:conversation-api1"

  nats:
    container_name: "Nats"
    image: nats
    command: [ "-js", "-m", "8222" ]
    ports:
      - "4222:4222"
      - "8222:8222"
      - "6222:6222"

  # OTHER SERVICES...
conversation-pubsub.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: conversation-pubsub
  namespace: default
spec:
  type: pubsub.jetstream
  version: v1
  metadata:
    - name: natsURL
      value: "nats://host.docker.internal:4222" # already tried with nats for host
    - name: name
      value: "conversation"
    - name: durableName
      value: "conversation-durable"
    - name: queueGroupName
      value: "conversation-group"
    - name: startSequence
      value: 1
    - name: startTime # in Unix format
      value: 1630349391
    - name: deliverAll
      value: false
    - name: flowControl
      value: false
conversation-config.yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: config
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
The problem was the old Dapr version. I was using 1.3.0, and JetStream support was introduced in 1.4.0. Pulling the latest version of daprio/daprd fixed my problem. Also, there is no need for nats://host.docker.internal:4222; nats://nats:4222 works as expected.
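If you prefer pinning a version over relying on :latest, the only compose change needed is the image tag (1.4.0 being the first release with JetStream support); the rest of the service definition stays as above:
conversation-api1-dapr:
  container_name: conversation-api1-dapr
  image: "daprio/daprd:1.4.0"  # JetStream pub/sub requires Dapr 1.4.0 or later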
My issue is that I cannot set up basic authentication for my frontend app through Traefik.
This is how I have configured Traefik.
traefik.yml
global:
  checkNewVersion: true
  sendAnonymousUsage: false

entryPoints:
  https:
    address: :443
  http:
    address: :80
  traefik:
    address: :8080

tls:
  options:
    foo:
      minVersion: VersionTLS12
      cipherSuites:
        - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
        - "TLS_RSA_WITH_AES_256_GCM_SHA384"

providers:
  providersThrottleDuration: 2s
  docker:
    watch: true
    endpoint: unix:///var/run/docker.sock
    exposedByDefault: false
    network: web

api:
  insecure: true
  dashboard: true

log:
  level: INFO

certificatesResolvers:
  default:
    acme:
      storage: /acme.json
      httpChallenge:
        entryPoint: http
docker-compose.yml
version: '3'

services:
  traefik:
    image: traefik:v2.0
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/srv/traefik/traefik.yml:/etc/traefik/traefik.yml"
      - "/srv/traefik/acme.json:/acme.json"
    networks:
      - web

networks:
  web:
    external: true
And here is where I have my frontend app running behind the Traefik Docker provider, and where I have my basic auth label:
version: '3.7'

services:
  frontend:
    image: git.xxxx.com:7000/dockerregistry/registry/xxxx
    restart: "always"
    networks:
      - web
    volumes:
      - "/srv/config/api.js:/var/www/htdocs/api.js"
      - "/srv/efs/workspace:/var/www/htdocs/stock"
    labels:
      - traefik.enable=true
      - traefik.http.routers.frontend-http.rule=Host(`test.xxxx.com`)
      - traefik.http.routers.frontend-http.service=frontend
      - traefik.http.routers.frontend-http.entrypoints=http
      - traefik.http.routers.frontend.tls=true
      - traefik.http.routers.frontend.tls.certresolver=default
      - traefik.http.routers.frontend.entrypoints=http
      - traefik.http.routers.frontend.rule=Host(`test.xxxx.com`)
      - traefik.http.routers.frontend.service=frontend
      - traefik.http.middlewares.frontend.basicAuth.users=test:$$2y$$05$$c45HvbP0Sq9EzcfaXiGNsuuWMfPhyoFZVYgiTylpMMLtJY2nP1P6m
      - traefik.http.services.frontend.loadbalancer.server.port=8080

networks:
  web:
    external: true
I cannot get the login prompt, so I'm wondering if I'm missing some container label for this.
Thanks in advance! Joaquin
Firstly, the labels should be wrapped in quotation marks ("...").
Secondly, I think you are missing a label on the frontend app.
Basic auth takes two steps, and the labels should look like this:
- "traefik.http.routers.frontend.middlewares=frontend-auth"
- "traefik.http.middlewares.frontend-auth.basicauth.users=test:$$2y$$05$$c45HvbP0Sq9EzcfaXiGNsuuWMfPhyoFZVYgiTylpMMLtJY2nP1P6m"
In your Docker Compose file, don't add the "middlewares" label for Traefik. Instead, do it in a config file passed via the providers.file option, where you define the routers, services, middlewares, etc. In that "providers file" you set the middlewares under http.routers.traefik. This may sound super confusing at the beginning, but it's not that hard, trust me.
Let's do a YAML case (you can convert it to TOML if you prefer).
This example assumes you have a Docker Compose file specifically for Traefik – I haven't tried using the same Docker Compose file with any other services in it (like Wordpress, databases or whatever) since I already have a different path for those files.
docker-compose.yml
version: '3.1'

services:
  reverse-proxy:
    image: traefik:v2.4
    [ ... ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Map the dynamic conf into the container
      - ./traefik/config.yml:/etc/traefik/config.yml:ro
      # Map the static conf into the container
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
    # Note you don't use "traefik.http.routers.<service>.middlewares etc." here
    [ ... ]
In this case I set/get the config files for Traefik in ./traefik (relative to the docker-compose.yml file).
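So the layout assumed here is:
.
├── docker-compose.yml
└── traefik/
    ├── config.yml   # dynamic configuration (routers, middlewares, ...)
    └── traefik.yml  # static configuration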
./traefik/config.yml
http:
  routers:
    traefik:
      middlewares: "basicauth"
      [ ... ]

  middlewares:
    basicauth:
      basicAuth:
        removeHeader: true
        users:
          - <user>:<password>
          # password should be generated using `htpasswd` (md5, sha1 or bcrypt)

[ ... ]
Here you can set the basicauth name as you wish (since that's the middleware name you'll see in the Dashboard), so you could do:
http:
  routers:
    traefik:
      middlewares: "super-dashboard-auth"
      [ ... ]

  middlewares:
    super-dashboard-auth:
      basicAuth:
        removeHeader: true
        users:
          - <user>:<password>
          # password should be generated using `htpasswd` (md5, sha1 or bcrypt)

[ ... ]
Note that basicAuth must remain as is. Also, here you don't need to use the "double dollar" method to escape it (as in the label approach), so after creating the user password you should enter it exactly as htpasswd created it.
# BAD
user:$$2y$$10$$nRLqyZT.64JI/CD/ym65UGDn8HaY0D6CBTxhe6JXf9u4wi5bEMdh.
# GOOD
user:$2y$10$nRLqyZT.64JI/CD/ym65UGDn8HaY0D6CBTxhe6JXf9u4wi5bEMdh.
Of course, you may want to get this data from an .env file rather than hardcode those strings. In that case you need to pass the environment variables from the docker-compose.yml using environment, like this:
services:
  reverse-proxy:
    image: traefik:v2.4
    container_name: traefik
    [ ... ]
    environment:
      TRAEFIK_DASHBOARD_USER: "${TRAEFIK_DASHBOARD_USER}"
      TRAEFIK_DASHBOARD_PWD: "${TRAEFIK_DASHBOARD_PWD}"
      # And any other env. var. you may need
    [ ... ]
and use them like this in your traefik/config.yml file:
[ ... ]
middlewares:
  super-dashboard-auth:
    basicAuth:
      removeHeader: true
      users:
        - "{{env "TRAEFIK_DASHBOARD_USER"}}:{{env "TRAEFIK_DASHBOARD_PWD"}}"
[ ... ]
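The matching .env file next to docker-compose.yml would then just hold the two values, for example (user name is illustrative, hash reused from the sample above; depending on your Compose version you may still need to double the $ signs there):
TRAEFIK_DASHBOARD_USER=user
TRAEFIK_DASHBOARD_PWD=$2y$10$nRLqyZT.64JI/CD/ym65UGDn8HaY0D6CBTxhe6JXf9u4wi5bEMdh.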
After that, include the previous file in providers.file.filename:
./traefik/traefik.yml
[ ... ]
api:
  dashboard: true
  insecure: false

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    [ ... ]
  file:
    filename: /etc/traefik/config.yml
    watch: true
[ ... ]
And then simply run docker-compose up -d.
I configure it this way:
Generate a password with apache2-utils, e.g.
htpasswd -nb admin secure_password
Set up traefik.toml:
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"

  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[certificatesResolvers.lets-encrypt.acme]
  email = "your_email@your_domain"
  storage = "acme.json"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

[providers.docker]
  watch = true
  network = "web"

[providers.file]
  filename = "traefik_dynamic.toml"
Set up traefik_dynamic.toml:
[http.middlewares.simpleAuth.basicAuth]
  users = [
    "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
  ]

[http.routers.api]
  rule = "Host(`monitor.your_domain`)"
  entrypoints = ["websecure"]
  middlewares = ["simpleAuth"]
  service = "api@internal"
  [http.routers.api.tls]
    certResolver = "lets-encrypt"
Set up the traefik service:
services:
  reverse-proxy:
    image: traefik:v2.3
    restart: always
    command:
      - --api.insecure=true
      - --providers.docker
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
      - ./acme.json:/acme.json
Regarding this part of the documentation: if you are using Docker labels for your settings, configure them as follows. For example:
labels:
  - "traefik.http.middlewares.foo-add-prefix.addprefix.prefix=/foo"
  - "traefik.http.routers.router1.middlewares=foo-add-prefix@docker"
I had the same issue, and I was missing the provider namespace @docker in the middleware name.
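Applied to the setup from the question, that label (using the middleware name from the earlier answer) would become:
  - "traefik.http.routers.frontend.middlewares=frontend-auth@docker"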
I'm playing around with SaltStack and wanted to use the apache-formula from github.com/saltstack-formulas.
My pillar looks like the following:
top.sls
base:
  'ubuntu-xenial-salt':
    - systems.ubuntu-xenial-salt
systems/ubuntu-xenial-salt.sls
include:
  - setups.apache.prod

apache:
  sites:
    ubuntu-salt-xenial:
      enabled: True
      template_file: salt://apache/vhosts/standard.tmpl
      template_engine: jinja
      interface: '*'
      port: '80'
      exclude_listen_directive: True # Do not add a Listen directive in httpd.conf
      ServerName: ubuntu-salt-xenial
      ServerAlias: ubuntu-salt-xenial
      ServerAdmin: minion@ubuntu-salt-xenial.com
      LogLevel: debug
      ErrorLog: /var/log/apache2/example.com-error.log
      CustomLog: /var/log/apache2/example.com-access.log
      DocumentRoot: /var/www/ubuntu-salt-xenial/
      Directory:
        default:
          Options: -Indexes +FollowSymLinks
          Require: all granted
          AllowOverride: None
setups/apache/prod.sls
include:
  - applications.apache

# ``apache`` formula configuration:
apache:
  register-site:
    # any name as an array index, and you can duplicate this section
    UNIQUE_VALUE_HERE:
      name: 'PROD'
      path: 'salt://path/to/sites-available/conf/file'
      state: 'enabled'
      # Optional - use managed file as Jinja Template
      #template: true
      #defaults:
      #  custom_var: "default value"

  modules:
    enabled: # List modules to enable
      - rewrite
      - ssl
    disabled: # List modules to disable
      - ldap

  # KeepAlive: Whether or not to allow persistent connections (more than
  # one request per connection). Set to "Off" to deactivate.
  keepalive: 'On'

  security:
    # can be Full | OS | Minimal | Minor | Major | Prod
    # where Full conveys the most information, and Prod the least.
    ServerTokens: Prod

  # ``apache.mod_remoteip`` formula additional configuration:
  mod_remoteip:
    RemoteIPHeader: X-Forwarded-For
    RemoteIPTrustedProxy:
      - 10.0.8.0/24
      - 127.0.0.1

  # ``apache.mod_security`` formula additional configuration:
  mod_security:
    crs_install: True
    # If not set, default distro's configuration is installed as is
    manage_config: True
    sec_rule_engine: 'On'
    sec_request_body_access: 'On'
    sec_request_body_limit: '14000000'
    sec_request_body_no_files_limit: '114002'
    sec_request_body_in_memory_limit: '114002'
    sec_request_body_limit_action: 'Reject'
    sec_pcre_match_limit: '15000'
    sec_pcre_match_limit_recursion: '15000'
    sec_debug_log_level: '3'
    rules:
      enabled:
      modsecurity_crs_10_setup.conf:
        rule_set: ''
        enabled: True
      modsecurity_crs_20_protocol_violations.conf:
        rule_set: 'base_rules'
        enabled: False
    custom_rule_files:
      # any name as an array index, and you can duplicate this section
      UNIQUE_VALUE_HERE:
        file: 'PROD'
        path: 'salt://path/to/modsecurity/custom/file'
        enabled: True
applications/apache.sls
apache:
  lookup:
    version: '2.4'
    default_charset: 'UTF-8'
    global:
      AllowEncodedSlashes: 'On'
    name_virtual_hosts:
      - interface: '*'
        port: 80
      - interface: '*'
        port: 443
Using this pillar configuration and calling highstate for my minion ubuntu-xenial-salt runs without any error; however, the setup is not the same as declared in the pillar.
For example:
the enabled rewrite module is not there.
the virtual host config is not set up as declared in the pillar.
everything seems to be pretty much the standard configuration from pillar.example.
When I call
salt 'ubuntu-xenial-salt' pillar.data
I get the pillar data just as I modified it... I can't understand what is happening...
ubuntu-xenial-salt:
    ----------
    apache:
        ----------
        keepalive:
            On
        lookup:
            ----------
            default_charset:
                UTF-8
            global:
                ----------
                AllowEncodedSlashes:
                    On
            name_virtual_hosts:
                |_
                  ----------
                  interface:
                      *
                  port:
                      80
                |_
                  ----------
                  interface:
                      *
                  port:
                      443
            version:
                2.4
        mod_remoteip:
            ----------
            RemoteIPHeader:
                X-Forwarded-For
            RemoteIPTrustedProxy:
                - 10.0.8.0/24
                - 127.0.0.1
        mod_security:
            ----------
            crs_install:
                True
            custom_rule_files:
                ----------
                UNIQUE_VALUE_HERE:
                    ----------
                    enabled:
                        True
                    file:
                        PROD
                    path:
                        salt://path/to/modsecurity/custom/file
            manage_config:
                True
            rules:
                ----------
                enabled:
                    None
                modsecurity_crs_10_setup.conf:
                    ----------
                    enabled:
                        True
                    rule_set:
                modsecurity_crs_20_protocol_violations.conf:
                    ----------
                    enabled:
                        False
                    rule_set:
                        base_rules
            sec_debug_log_level:
                3
            sec_pcre_match_limit:
                15000
            sec_pcre_match_limit_recursion:
                15000
            sec_request_body_access:
                On
            sec_request_body_in_memory_limit:
                114002
            sec_request_body_limit:
                14000000
            sec_request_body_limit_action:
                Reject
            sec_request_body_no_files_limit:
                114002
            sec_rule_engine:
                On
        modules:
            ----------
            disabled:
                - ldap
            enabled:
                - ssl
                - rewrite
        register-site:
            ----------
            UNIQUE_VALUE_HERE:
                ----------
                name:
                    PROD
                path:
                    salt://path/to/sites-available/conf/file
                state:
                    enabled
        security:
            ----------
            ServerTokens:
                Prod
        sites:
            ----------
            ubuntu-salt-xenial:
                ----------
                CustomLog:
                    /var/log/apache2/example.com-access.log
                Directory:
                    ----------
                    default:
                        ----------
                        AllowOverride:
                            None
                        Options:
                            -Indexes +FollowSymLinks
                        Require:
                            all granted
                DocumentRoot:
                    /var/www/ubuntu-salt-xenial/
                ErrorLog:
                    /var/log/apache2/example.com-error.log
                LogLevel:
                    debug
                ServerAdmin:
                    minion@ubuntu-salt-xenial.com
                ServerAlias:
                    ubuntu-salt-xenial
                ServerName:
                    ubuntu-salt-xenial
                enabled:
                    True
                exclude_listen_directive:
                    True
                interface:
                    *
                port:
                    80
                template_engine:
                    jinja
                template_file:
                    salt://apache/vhosts/standard.tmpl
Does someone know what's happening here and can help me get it running?
We are managing web sites with SaltStack. These sites run on PHP-FPM, and we have several FPM pools. Each pool is configured with a dedicated file in the php-fpm.d/ directory.
Currently, we have a file.managed state with check_cmd: php-fpm -ty to check whether the configuration is valid.
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package

fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf

fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
It works fine until a mistake is made in a pool file (say, fpm-pool-a). Though the fpm-conf state blocks the update to the main FPM config file, a.conf has already been contaminated with the erroneous configuration.
Is there a way to prevent this from happening? It seems that check_cmd can't be used in this case.
How can I guarantee that a series of files are all valid before updating?
One workaround is to recover the original pool files if any mistakes were made.
Here is an example; I'd suggest switching to Jinja if this state starts to get any larger (see the sketch after the example).
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package

fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-a-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/a.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf

fpm-pool-b-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/b.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
Notice the - backup: minion addition; this will back up the file locally to /var/cache/salt/minion/file_backup/...
So in case the main config check fails, fpm-pool-a-recover and fpm-pool-b-recover will fire and restore the most recent backup of the original files.
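As for the Jinja suggestion above, a minimal sketch of the per-pool states generated in a loop (pool names assumed to stay a and b) could look like this:
{% for pool in ['a', 'b'] %}
fpm-pool-{{ pool }}:
  file.managed:
    - name: /etc/php-fpm.d/{{ pool }}.conf
    - source: salt://php/template/fpm-{{ pool }}.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-{{ pool }}-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/{{ pool }}.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
{% endfor %}
Adding a new pool then only means appending its name to the loop list, and the matching recover state comes along for free.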