Deploy multiple files through SaltStack only if all files are valid - salt-stack

We are managing web sites with SaltStack. These sites run on PHP-FPM, and we have several FPM pools. Each pool is configured with a dedicated file in the php-fpm.d/ directory.
Currently, we have a file.managed state with check_cmd: php-fpm -ty to check whether the configuration is valid.
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package

fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf

fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
It works fine until a mistake is made in a pool file (say, fpm-pool-a). Although the fpm-conf state blocks the update of the main FPM config file, a.conf has already been contaminated with the erroneous configuration.
Is there a way to prevent this from happening? It seems that check_cmd can't be used in this case.
How can I guarantee that a series of files are all valid before updating any of them?

One workaround is to restore the original pool files if any mistake was made.
Here is an example; I'd suggest switching to a Jinja loop if this state grows any larger (see the sketch after this example).
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package

fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion

fpm-pool-a-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/a.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf

fpm-pool-b-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/b.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
Notice the - backup: minion addition: it backs up the managed file locally to /var/cache/salt/minion/file_backup/...
So if the check on the main config fails, fpm-pool-a-recover and fpm-pool-b-recover will fire and restore the most recent backup of each pool file.
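As a sketch of that Jinja suggestion (the pool list below is illustrative, not from the original post), the per-pool and recovery states can be generated from a single list, so adding a pool only takes one new entry:

{# Hypothetical pool list; adjust names and template paths to your setup #}
{% set pools = ['a', 'b'] %}

{% for pool in pools %}
fpm-pool-{{ pool }}:
  file.managed:
    - name: /etc/php-fpm.d/{{ pool }}.conf
    - source: salt://php/template/fpm-{{ pool }}.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - backup: minion
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf

fpm-pool-{{ pool }}-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/{{ pool }}.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
{% endfor %}

The fpm-conf state and its check_cmd stay exactly as above; only the per-pool boilerplate is generated.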

Related

How can I create a public single-user jupyter notebook-server?

I have set up a JupyterHub running on K8s.
It authenticates users and launches private user notebook servers (pods) in the K8s cluster.
But these pods are private to the K8s network, and I want to connect to them from local VSCode via its remote kernel connection.
I tried to find resources, but there isn't much available that matches my setup; can anyone point me to the right approach? I'm also attaching the jupyterhub-config.yaml I am currently using to create single-user pods as notebook servers.
singleuser:
  extraContainers:
    - name: "somename"
      image: "{{ jupyter_notebook_image_name }}:{{ jupyter_notebook_tag }}"
      command: ["/usr/local/bin/main.sh"]
      securityContext:
        runAsUser: 0
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "cp copy.json copy.json"]
      env:
        - name: JUPYTERHUB_USER
          value: '{unescaped_username}'
      volumeMounts:
        - name: projects
          mountPath: /.sols/
        - name: home-projects-dir
          mountPath: /home/jovyan/projects/
        - name: kernels-path
          mountPath: /usr/local/share/jupyter/kernels/
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "cp copy.json copy.json"]
  uid: 0
  storage:
    capacity: 1Gi
    homeMountPath: /home/jovyan/{username}
    extraVolumes:
      - name: projects
        persistentVolumeClaim:
          claimName: projects--hub-pvc
      - name: home-projects-dir
      - name: kernels-path
    extraVolumeMounts:
      - name: projects
        mountPath: /.sols/
      - name: home-projects-dir
        mountPath: /home/jovyan/projects/
      - name: kernels-path
        mountPath: /usr/local/share/jupyter/kernels/
    dynamic:
      storageClassName: jupyter
      pvcNameTemplate: '{username}--hub-pvc'
      volumeNameTemplate: '{username}--hub-pv'
      storageAccessModes: [ReadWriteMany]
  image:
    name: {{ jupyter_notebook_image_name }}
    tag: {{ jupyter_notebook_tag }}
    pullSecrets:
      xxxkey
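As a hedged sketch of one possible approach (assuming you have kubectl access to the cluster and that the Hub lives in a namespace called jupyterhub — both assumptions, not part of the post), you can port-forward the single-user pod and attach VSCode to it as an existing Jupyter server:

# List the single-user server pods (names usually start with "jupyter-")
kubectl -n jupyterhub get pods

# Forward the notebook server port (8888 by default) to your workstation
kubectl -n jupyterhub port-forward pod/jupyter-<username> 8888:8888

# Then point VSCode's "connect to an existing Jupyter server" option at the
# forwarded address; the exact base path and token depend on your Hub auth setup.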

SaltStack master reactors take no action

I want to schedule Linux patching and restart the minions if the updates succeed. I have created a state for the OS update process that sends an event to the event bus. Then I created a reactor to listen for the event tag and reboot the server on success, but the reactor does not react to anything and takes no action.
# /srv/salt/os/updates/linuxupdate.sls
uptodate:
  pkg.uptodate:
    - refresh: True

event-completed:
  event.send:
    - name: 'salt/minion-update/success'
    - require:
      - pkg: uptodate

event-failed:
  event.send:
    - name: 'salt/minion-update/failed'
    - onfail:
      - pkg: uptodate

# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/minion-update/success':
    - /srv/reactor/reboot.sls

# /srv/reactor/reboot.sls
reboot_server:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - arg:
      - os.updates.reboot-server
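A few standard checks usually narrow this down (a hedged sketch; these are generic Salt commands and the tag matches the snippets above). In particular, the reactor configuration is only read when the master starts, so the master must be restarted after adding reactor.conf, and you can watch the event bus to confirm the minion's event actually arrives:

# Reactor config is loaded at master startup; restart after editing it
systemctl restart salt-master

# Confirm the master sees the reactor mapping
salt-run reactor.list

# Watch the event bus on the master while the update state runs
salt-run state.event pretty=True

# Fire the event manually from a minion to test the reactor in isolation
salt-call event.send 'salt/minion-update/success'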

I can't exclude a directory from my pre-commit

My goal is to exclude migrations/ from my pre-commit.
I have my .pre-commit-config.yaml, which begins with
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: check-yaml
      - id: debug-statements
      - id: end-of-file-fixer
      - id: trailing-whitespace
        exclude: ^(tests/fixtures/|migrations/)
  - repo: https://github.com/asottile/reorder_python_imports
    rev: v3.8.4
    hooks:
      - id: reorder-python-imports
        args: [--application-directories, '.:src', --py36-plus]
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.1.0
    hooks:
      - id: pyupgrade
        args: [--py36-plus]
  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black
        args: [--line-length=119]
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
        args: [--max-line-length=119]
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.982
    hooks:
      - id: mypy
        exclude: ^(docs/|example-plugin/|migrations/)
But my pre-commit still goes into migrations/ and validates my rubbish code.
I get:
mypy.....................................................................Failed
- hook id: mypy
- exit code: 1
support/ticket/migrations/0001_initial.py:10: error: Need type annotation
for "dependencies" (hint: "dependencies: List[<type>] = ...")
Found 1 error in 1 file (checked 30 source files)
What should I do?
Use pass_filenames: false in your pre-commit config and set entry (or args) to the exact mypy command you want to run.
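A sketch of that change against the mirrors-mypy block above (the src argument is illustrative; point it at your real package directory):

- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.982
  hooks:
    - id: mypy
      # Don't pass the staged filenames to mypy; run it on the source tree instead
      pass_filenames: false
      # "src" is a placeholder for your actual package directory
      args: [src]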

Basic auth is not working for Traefik v2.1

My issue is that I cannot set up basic authentication for my frontend app through Traefik.
This is how I have configured Traefik.
traefik.yml
global:
  checkNewVersion: true
  sendAnonymousUsage: false
entryPoints:
  https:
    address: :443
  http:
    address: :80
  traefik:
    address: :8080
tls:
  options:
    foo:
      minVersion: VersionTLS12
      cipherSuites:
        - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
        - "TLS_RSA_WITH_AES_256_GCM_SHA384"
providers:
  providersThrottleDuration: 2s
  docker:
    watch: true
    endpoint: unix:///var/run/docker.sock
    exposedByDefault: false
    network: web
api:
  insecure: true
  dashboard: true
log:
  level: INFO
certificatesResolvers:
  default:
    acme:
      storage: /acme.json
      httpChallenge:
        entryPoint: http
docker-compose.yml
version: '3'
services:
  traefik:
    image: traefik:v2.0
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/srv/traefik/traefik.yml:/etc/traefik/traefik.yml"
      - "/srv/traefik/acme.json:/acme.json"
    networks:
      - web
networks:
  web:
    external: true
And here is where I have my frontend app running behind the Traefik Docker provider, with my basic auth label:
version: '3.7'
services:
  frontend:
    image: git.xxxx.com:7000/dockerregistry/registry/xxxx
    restart: "always"
    networks:
      - web
    volumes:
      - "/srv/config/api.js:/var/www/htdocs/api.js"
      - "/srv/efs/workspace:/var/www/htdocs/stock"
    labels:
      - traefik.enable=true
      - traefik.http.routers.frontend-http.rule=Host(`test.xxxx.com`)
      - traefik.http.routers.frontend-http.service=frontend
      - traefik.http.routers.frontend-http.entrypoints=http
      - traefik.http.routers.frontend.tls=true
      - traefik.http.routers.frontend.tls.certresolver=default
      - traefik.http.routers.frontend.entrypoints=http
      - traefik.http.routers.frontend.rule=Host(`test.xxxx.com`)
      - traefik.http.routers.frontend.service=frontend
      - traefik.http.middlewares.frontend.basicAuth.users=test:$$2y$$05$$c45HvbP0Sq9EzcfaXiGNsuuWMfPhyoFZVYgiTylpMMLtJY2nP1P6m
      - traefik.http.services.frontend.loadbalancer.server.port=8080
networks:
  web:
    external: true
I cannot get the login prompt, so I'm wondering if I'm missing some container label.
Thanks in advance! Joaquin

Firstly, the labels should be wrapped in quotation marks. Secondly, I think you are missing a label in the frontend app: basic auth takes two steps, so it should look like this:
- "traefik.http.routers.frontend.middlewares=frontend-auth"
- "traefik.http.middlewares.frontend-auth.basicauth.users=test:$$2y$$05$$c45HvbP0Sq9EzcfaXiGNsuuWMfPhyoFZVYgiTylpMMLtJY2nP1P6m"
In your Docker Compose file, don't add the "middlewares" label for Traefik; instead do it in a traefik.yml file passed via the providers.file option, where you define the routers, services, middlewares, etc. In that "providers file" you set middlewares under http.routers.traefik. This may sound super confusing at the beginning, but it's not that hard, trust me.
Let's do the YAML case (you can convert it to TOML if you prefer).
This example assumes you have a Docker Compose file specifically for Traefik. I haven't tried using the same Docker Compose file with any other services in it (like WordPress, databases or whatever), since I already keep a different path for those files.
docker-compose.yml
version: '3.1'
services:
  reverse-proxy:
    image: traefik:v2.4
    [ ... ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Map the dynamic conf into the container
      - ./traefik/config.yml:/etc/traefik/config.yml:ro
      # Map the static conf into the container
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
      # Note you don't use "traefik.http.routers.<service>.middlewares etc." here
    [ ... ]
In this case I set/get the config files for Traefik in ./traefik (relative to the docker-compose.yml file).
./traefik/config.yml
http:
  routers:
    traefik:
      middlewares: "basicauth"
      [ ... ]
  middlewares:
    basicauth:
      basicAuth:
        removeHeader: true
        users:
          - <user>:<password>
          # password should be generated using `htpasswd` (md5, sha1 or bcrypt)
[ ... ]
Here you can set the basicauth name as you wish (since that's the middleware name you'll see in the Dashboard), so you could do:
http:
  routers:
    traefik:
      middlewares: "super-dashboard-auth"
      [ ... ]
  middlewares:
    super-dashboard-auth:
      basicAuth:
        removeHeader: true
        users:
          - <user>:<password>
          # password should be generated using `htpasswd` (md5, sha1 or bcrypt)
[ ... ]
Note that basicAuth must remain as is. Also, here you don't need to use the "double dollar" method to escape it (as in the label approach), so after creating the user password you should enter it exactly as htpasswd created it.
# BAD
user:$$2y$$10$$nRLqyZT.64JI/CD/ym65UGDn8HaY0D6CBTxhe6JXf9u4wi5bEMdh.
# GOOD
user:$2y$10$nRLqyZT.64JI/CD/ym65UGDn8HaY0D6CBTxhe6JXf9u4wi5bEMdh.
Of course you may want to get this data from an .env file rather than hardcoding those strings; in that case you need to pass the environment variables from the docker-compose.yml using environment, like this:
services:
  reverse-proxy:
    image: traefik:v2.4
    container_name: traefik
    [ ... ]
    environment:
      TRAEFIK_DASHBOARD_USER: "${TRAEFIK_DASHBOARD_USER}"
      TRAEFIK_DASHBOARD_PWD: "${TRAEFIK_DASHBOARD_PWD}"
      # And any other env. var. you may need
    [ ... ]
and use them like this in your traefik/config.yml file:
[ ... ]
middlewares:
  super-dashboard-auth:
    basicAuth:
      removeHeader: true
      users:
        - "{{env "TRAEFIK_DASHBOARD_USER"}}:{{env "TRAEFIK_DASHBOARD_PWD"}}"
[ ... ]
After that, include the previous file via providers.file.filename:
./traefik/traefik.yml
[ ... ]
api:
  dashboard: true
  insecure: false
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    [ ... ]
  file:
    filename: /etc/traefik/config.yml
    watch: true
[ ... ]
And then simply docker-compose up -d
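To check that the middleware is actually applied (a hedged sketch; the hostname and credentials are the placeholders used elsewhere in this thread):

# Without credentials: expect HTTP 401
curl -I https://monitor.your_domain/dashboard/

# With the htpasswd user: expect HTTP 200
curl -I -u admin:secure_password https://monitor.your_domain/dashboard/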
I configure it this way:
Generate a password with apache2-utils, e.g.:
htpasswd -nb admin secure_password
Set up traefik.toml:
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[certificatesResolvers.lets-encrypt.acme]
  email = "your_email@your_domain"
  storage = "acme.json"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

[providers.docker]
  watch = true
  network = "web"

[providers.file]
  filename = "traefik_dynamic.toml"
Set up traefik_dynamic.toml:
[http.middlewares.simpleAuth.basicAuth]
  users = [
    "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
  ]

[http.routers.api]
  rule = "Host(`monitor.your_domain`)"
  entrypoints = ["websecure"]
  middlewares = ["simpleAuth"]
  service = "api@internal"

  [http.routers.api.tls]
    certResolver = "lets-encrypt"
Set up the Traefik service:
services:
  reverse-proxy:
    image: traefik:v2.3
    restart: always
    command:
      - --api.insecure=true
      - --providers.docker
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
      - ./acme.json:/acme.json
Regarding this part of the documentation: if you are configuring Traefik through Docker labels, configure it as follows. For example:
labels:
  - "traefik.http.middlewares.foo-add-prefix.addprefix.prefix=/foo"
  - "traefik.http.routers.router1.middlewares=foo-add-prefix@docker"
I had the same issue, and I was missing the provider namespace @docker in the middleware name.

Multiple file.line in single state in Salt

I would like to have a Salt state for managing my SSH config file. This requires multiple file.line operations. How can I do this?
UPDATE: See bottom of the question for my current workaround
What I have is this:
Secure SSH:
  file:
    - name: /etc/ssh/sshd_config
    - line:
      - match: "^PasswordAuthentication "
      - content: "PasswordAuthentication no"
      - mode: ensure
    - line:
      - match: "^PubkeyAuthentication "
      - content: "PubkeyAuthentication yes"
      - mode: ensure
    - line:
      - match: "^Port "
      - content: "Port 8888"
      - mode: ensure
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
but this fails with
Data failed to compile:
----------
No function declared in state 'file' in SLS u'xyz'
Actually my first attempt was this:
Secure SSH:
  file.line:
    - name: /etc/ssh/sshd_config
    - match: "^PasswordAuthentication "
    - content: "PasswordAuthentication no"
    - mode: ensure
  file.line:
    - name: /etc/ssh/sshd_config
    - match: "^PubkeyAuthentication "
    - content: "PubkeyAuthentication yes"
    - mode: ensure
  file.line:
    - name: /etc/ssh/sshd_config
    - match: "^Port "
    - content: "Port 8888"
    - mode: ensure
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
but this fails with
Data failed to compile:
----------
Rendering SLS 'base:xyz' failed: Conflicting ID 'file.line'
I understand this error, since every state function is a dictionary key, but the attempted syntax would have looked very clean.
The Salt documentation is very unhelpful here, because it does not say anything about what to do when you want to modify multiple things in one file, and it conveniently only gives very trivial examples.
UPDATE:
I got it to work by using a separate state for each line (I also changed file.line to file.replace, but that was another issue). I think this is rather unwieldy; plus, isn't the service reloaded after every step?
Disallow SSH password authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PasswordAuthentication .*
    - repl: PasswordAuthentication no
    - append_if_not_found: True
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config

Allow SSH public key authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PubkeyAuthentication .*
    - repl: PubkeyAuthentication yes
    - append_if_not_found: True
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config

Set SSH port:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^Port .*
    - repl: Port 8888
    - append_if_not_found: True
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
Separating the file.replace into multiple states is the way to go.
To avoid redundancy you should move the service.running into its own state as well. Plus: when using watch (or watch_in) you need to reference the state you are watching by its state ID (for example, - service: ssh_service).
The result will look like this:
Disallow SSH password authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PasswordAuthentication .*
    - repl: PasswordAuthentication no
    - append_if_not_found: True
    - watch_in:
      - service: ssh_service

Allow SSH public key authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PubkeyAuthentication .*
    - repl: PubkeyAuthentication yes
    - append_if_not_found: True
    - watch_in:
      - service: ssh_service

Set SSH port:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^Port .*
    - repl: Port 8888
    - append_if_not_found: True
    - watch_in:
      - service: ssh_service

ssh_service:
  service.running:
    - name: sshd
I would recommend checking out listen instead of watch. watch will restart sshd three times, once each time the file is changed.
If you use listen, it will only restart it once at the very end. But you have to put the service.running at the very end with its own state ID and listen for all of the changes.
Disallow SSH password authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PasswordAuthentication .*
    - repl: PasswordAuthentication no
    - append_if_not_found: True

Allow SSH public key authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PubkeyAuthentication .*
    - repl: PubkeyAuthentication yes
    - append_if_not_found: True

Set SSH port:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^Port .*
    - repl: Port 8888
    - append_if_not_found: True

Start SSHD:
  service.running:
    - name: sshd
    - listen:
      - file: /etc/ssh/sshd_config
You might also find it worthwhile to check out the augeas state. It makes changes like this a lot easier and looks better in the state files.
sshd_config:
  augeas.change:
    - context: /files/etc/ssh/sshd_config
    - changes:
      - set Port 8888
      - set PasswordAuthentication yes
      - set PubkeyAuthentication yes
  service.running:
    - name: sshd
    - listen:
      - augeas: sshd_config
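Whichever variant you choose, a dry run shows what would change before sshd is touched (a generic Salt check; the SLS name here is a placeholder for whatever you call this state file):

# Preview the pending changes without applying them or restarting sshd
salt '*' state.apply ssh test=True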
