How to trigger an action upon change of state? - salt-stack

I am testing salt as a management system, using ansible so far.
How can I trigger an action (specifically, a service reload) when a state has changed?
In Ansible this is done via notify, but browsing the Salt documentation I cannot find anything similar.
I found watch, which works the other way round: "watch something, and if it changed, do this and that".
There is also listen, which seems closer to my needs (the documentation mentions a service reload), but I cannot put the pieces together.
To give an example, how would the following scenario work in Salt: check out a git repo (clone it if it does not exist, pull from it otherwise) and, if it has changed, reload a service? The Ansible equivalent is:
- name: clone my service
  git:
    clone: yes
    dest: /opt/myservice
    repo: http://git.example.com/myservice.git
    version: master
    force: yes
  notify:
    - restart my service if needed

- name: restart my service if needed
  systemd:
    name: myservice
    state: restarted
    enabled: True
    daemon_reload: yes

Your example:
ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice
  service.running:
    - name: myservice    # the state ID is not the actual service name, so set it explicitly
    - watch:
      - git: http://git.example.com/myservice.git
When there is a change in the repo (a first-time clone, an update, etc.), the state is marked as "having changes", so the dependent states - service.running in this case - react to it; for a service that means a restart.
What you are asking is covered in the Salt quickstart.
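If you prefer the listen requisite mentioned in the question, a roughly equivalent sketch could look like the following (the service name myservice is an assumption; with reload: True the triggered action is a reload rather than a full restart):

clone my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice

myservice:
  service.running:
    - enable: True
    - reload: True
    - listen:
      - git: clone my service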

Related

How do I add an nginx load balancer to a kubernetes cluster on Jelastic?

I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above Kubernetes jps installation creates a topology of its own, so there is no way I can simply add the above env section. How can I add a new node to the topology created by the Jelastic Kubernetes jps? I found addNodes, but it does not seem to allow defining what goes into the bl node group.
In the Jelastic API I was able to find the EditNodeGroup method, which I believe would solve my problem. However, the documentation is not very clear; it is missing an example from which I could work out how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for that problem. I think, currently, my best option is to fork the jelastic-jps/kubernetes and adapt the beforeinstall for my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it's created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer

actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...

  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the "addNodes" action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all the fields that can be used.
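For illustration, a sketch of the addBalancer action with addNodes filled in, reusing the node parameters from the question (the nodeType, tag and cloudlet values are assumptions; check the addNodes reference above for the exact parameter layout):

addBalancer:
  - install:
      envName: ${settings.envName}
      jps:
        type: update
        name: Add Balancer Node
        onInstall:
          - addNodes:
              nodeGroup: bl
              nodeType: nginx-dockerized
              tag: 1.16.1
              displayName: Node balancing
              count: 1
              fixedCloudlets: 1
              cloudlets: 4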
The latest published version of K8s for Jelastic is v1.16.6, so you could use it in your manifest.
But please note that via this Balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality via that public IP instead of the Shared Balancers.
In a nutshell, the Jelastic Balancer instance currently does not provide Kubernetes LoadBalancer service functionality, if that is what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to the "cp" worker can then be used automatically for LoadBalancers created inside the Kubernetes cluster. We expect this functionality to be added in 1.16.8+.
Please let us know if you have any further questions.

How to restart a systemd service with salt?

I am trying to build an .sls file which will always restart a service:
systemd-resolved:
  service.running:
    - restart: True
When deployed, this gives
          ID: systemd-resolved
    Function: service.running
      Result: True
     Comment: The service systemd-resolved is already running
     Started: 23:46:49.999789
    Duration: 53.068 ms
     Changes:
This is correct, the service is already running. What I was trying to convey with this command is to restart it. How can I do that?
Note: I would like to avoid, if possible, an explicit command being run (I feel it is not very Salt-like; this should rather be handled by the appropriate module):
'systemctl restart systemd-resolved':
  cmd.run
If you want your service to reload, you need to set reload: True instead.
Besides, if you only want to restart the service when there is a change in some other state, you need to use watch instead.
For instance:
systemd-resolved:
  service.running:
    - enable: True
    - reload: True
    - watch:
      - pkg: <abc>
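A fuller sketch of the watch approach, assuming you also manage the resolver configuration with Salt (the salt:// source path is hypothetical):

/etc/systemd/resolved.conf:
  file.managed:
    - source: salt://resolved/resolved.conf    # hypothetical path in your fileserver

systemd-resolved:
  service.running:
    - enable: True
    - watch:
      - file: /etc/systemd/resolved.conf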

What is the proper way to upgrade installed packages with Ansible

Let's assume I have the following simple Ansible playbook:
---
tasks:
  - name: Upgrade installed packages
    become: true
    apt:
      upgrade: safe

  - name: Install NGINX web server
    become: true
    apt:
      name: nginx
      state: latest
    notify:
      - Restart NGINX

handlers:
  - name: Restart NGINX
    become: true
    service:
      name: nginx
      state: restarted
As you see, I upgrade the installed APT packages first and only then make sure I have the latest Nginx version. The problem is that if there is an update for Nginx, it will be installed in the first task; if so, the second task won't be marked as changed and the handler won't be fired. Is that true? Or is Ansible clever enough to fire this handler only when Nginx was upgraded in the first task?
I wonder about the best practice for this case. Is there a better way than moving all the separate installation tasks (which should fire handlers on change) before the task that upgrades all the installed packages?
Thanks!
This is not "The Ansible way", but it is an option.
One way you can do it is by using lsof to find all the PIDs which need a restart and passing this information to systemd to get the service name for each PID, then going over the list of services and restarting each one of them.
Someone already wrote a Perl script like that; see the example here: https://rwmj.wordpress.com/2014/07/10/which-services-need-restarting-after-an-upgrade/
Another option along the same lines is the restart-services script from the debian-goodies repo/package.
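For illustration only, a rough Ansible sketch of that lsof-based idea (the lsof/awk filter and the ps unit lookup are assumptions and may need tuning for your distribution):

- name: Find systemd units holding deleted shared libraries after the upgrade
  become: true
  shell: |
    lsof +c 15 2>/dev/null \
      | awk '/DEL/ && $NF ~ /lib/ {print $2}' \
      | sort -u \
      | xargs -r -n1 ps -o unit= -p \
      | sort -u | grep '\.service$' || true
  register: units_to_restart
  changed_when: false

- name: Restart the affected services
  become: true
  service:
    name: "{{ item }}"
    state: restarted
  loop: "{{ units_to_restart.stdout_lines }}"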

Reload nginx config from salt state, only if the configtest passes

I've recently written a Salt state which handles the nginx config for a number of servers from some static variables in pillar. I want to roll this out to all the servers, but before I do I want to make sure the config is tested before it is applied on a server.
Nginx has an inbuilt configtest which I use frequently on command line, and I found that salt has an nginx module which can be used to run configtest.
I have the following in my state file:
reload-nginx:
  service.running:
    - enabled: True
    - reload: True
    - watch:
      - pkg: nginx
      - file: /etc/nginx/sites-available/*
      - file: /etc/nginx/nginx.conf
This should reload nginx if the config files change, or if the nginx install is upgraded/changed. I believe I can run a config test using the following in my state file (untested):
nginx-config-test:
  module.run:
    - name: nginx.configtest
And I believe if I add this state to the watch in the reload-nginx state it would reload if the configtest passed.
However, I want the reload to happen only if either of the config files have changed AND the config test passes, or if nginx changes AND the configtest passes. I see I can use onlyif to run a state if ALL of the listed checks are true, and from experience you can't have multiple uses of the same method (so I can't have 3 different onlyifs - correct me if I am wrong).
But I don't see any way to reload nginx only if the config files have changed (or nginx has been updated) and the configtest has passed.
Is this possible?
Have the reload state watch the config-test state; have the config-test state watch the config files state and the pkg state. The test will only run if something changes, and the reload will only occur if the test runs and passes.
Caveat: Structurally this will work, but I've never used nginx.configtest, so I can't promise it behaves the way you think.
You will also need to use module.wait rather than module.run; watch statements don't work with .run. Reference here.
So that becomes:
reload-nginx:
  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - module: nginx-config-test

nginx-config-test:
  module.wait:
    - name: nginx.configtest
    - watch:
      # watch the config file states and the nginx package, as described above
      - pkg: nginx
      - file: /etc/nginx/sites-available/*
      - file: /etc/nginx/nginx.conf

What are "states" when using SaltStack?

I'm trying SaltStack after using Puppet for a while, but I can't understand their use of the word "state".
My understanding is that, for example, a light switch has 2 possible states - on or off. When I write my SLS configuration I am describing what state a server should be in. When I ask SaltStack to provision a server I issue the command salt '*' state.highstate. I understand that a server can be in a highstate (as described in my config) or not. All good so far.
But this page describes other states. It describes lowstate, highstate and overstate (amongst others) as layers. Does this mean a server passes through several states to get to a highstate? Or all states are maintained simultaneously as layers? Or can I configure multiple possible states in my SLS and have SaltStack switch between them? Or are they just layers to SaltStack that have 'state' in the name and I'm confused?
I'm probably missing something obvious, if anyone can nudge me in the right direction I think a lot of the documentation will become clear to me!
Here is top.sls, which contains:
# cat top.sls
base:
  '*':
    - httpd_require
and,
# cat httpd_require.sls
install_httpd:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - file: install_httpd
  file.managed:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
High state:
We can see all the aspects of the high state system while working with state files (.sls). There are three specific components:
High data
SLS files
High State
Each individual State represents a piece of high data (a pkg.installed: block, for example). Salt compiles all relevant SLS files referenced in top.sls; when these files are tied together using includes and glued together for use inside an environment via the top.sls file, they form a High State.
# salt 'remote_minion' state.show_highstate --out yaml
remote_minion:
  install_httpd:
    __env__: base
    __sls__: httpd_require
    file:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
    - managed
    - order: 10002
    pkg:
    - name: httpd
    - installed
    - order: 10000
    service:
    - name: httpd
    - enable: true
    - require:
      - file: install_httpd
    - running
    - order: 10001
First, an order is declared. All States that are set to run first will have their order adjusted accordingly. Salt then adds 10000 to the last defined number (which is 0 by default) and appends any States that are not explicitly ordered.
Salt also adds some variables that it uses internally: which environment (__env__) to execute the State in, and which SLS file (__sls__) the State declaration came from. Remember that the order is still no more than a starting point; the actual High State will be executed based first on requisites, and then on order.
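For example, a state can be pushed to the front of the ordering explicitly (a hypothetical snippet; explicitly ordered states are numbered before the 10000-based defaults are assigned):

run_me_first:
  cmd.run:
    - name: echo "I run before any unordered states"
    - order: 1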
"In other words, "High" data refers generally to data as it is seen by the user."
Low States:
""Low" data refers generally to data as it is ingested and used by Salt."
Once the final High State has been generated, it will be sent to the State compiler. This will reformat the State data into a format that Salt uses internally to evaluate each declaration, and feed data into each State module (which will in turn call the execution modules, as necessary). As with high data, low data can be broken into individual components:
Low State
Low chunks
State module
Execution module(s)
# salt 'remote_minion' state.show_lowstate --out yaml
remote_minion:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  enable: true
  fun: running
  name: httpd
  order: 10001
  require:
  - file: install_httpd
  state: service
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: managed
  group: root
  mode: 644
  name: /var/www/html/index.html
  order: 10002
  require:
  - pkg: install_httpd
  source: salt://index1.html
  state: file
  user: root
Together, all this comprises a Low State. Each individual item is a Low Chunk. The first Low Chunk on this list looks like this:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
Each low chunk maps to a State module (in this case, pkg) and a function inside that State module (in this case, installed). An ID is also provided at this level (__id__). Salt maps relationships (that is, requisites) between States using a combination of the state and the __id__. If a name has not been declared by the user, Salt automatically uses the __id__ as the name. Once a function inside a State module has been called, it will usually map to one or more execution modules which actually do the work.
salt '\*' state.highstate
'*' refers to all the minions connected to the master.
'state.highstate' is used to run all the states mentioned in the top.sls defined on the master.
To invoke a specific state file on all minions, use the following salt command, where the state information for apache is defined in apache.sls in the example given below.
salt '\*' state.sls apache
To invoke the above salt call only on a specific minion, use the below command.
salt 'minion-name' state.sls apache
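For reference, a minimal apache.sls that such a call could apply might look like this (a hypothetical sketch; the package/service name is httpd on RHEL-family systems and apache2 on Debian/Ubuntu):

apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - pkg: apache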
I don't know all the levels of state, but when you run:
salt '*' state.highstate
SaltStack applies the states you provide in /srv/salt/top.sls.
If you write nothing in it, you can't apply a highstate.
You can apply another state with this command:
salt '*' state.sls state.example
A highstate is just the collection of states that is applied to your server. There is a process in the background where Salt's "state compiler" goes through several stages preparing the data in order to produce the highstate, but you don't really need to worry about those.
Things like the lowstate can help with debugging, but aren't necessary for day to day usage. The highstate is only applied once.
