Restart process on event using Salt Stack and beacons - nginx

I have successfully configured a Salt master (with a reactor) and a minion (with a beacon). On the minion I have nginx and a beacon configuration that watches the process:
beacons:
  service:
    nginx:
      onchangeonly: True
      uncleanshutdown: /run/nginx.pid
The event is sent and the reactor receives it. I try to restart nginx:
{% set nginx_running = data['data']['nginx']['running'] %}
{% if not nginx_running %}
restart_nginx:
  local.cmd.run:
    - tgt: {{ data['data']['id'] }}
    - arg:
      - 'pkill nginx'
      - 'systemctl restart nginx'
{% endif %}
Problems:
Is that the right way to do it?
I want to send pkill because if only the root nginx process is killed, the worker processes keep running.
I get the error: "ERROR: Specified cmd 'pkill nginx' either not absolute or does not exist"
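A possible approach (a sketch, not from the original post): instead of shelling out with cmd.run, let the service module do the restart. A reactor SLS can call any execution module as local.<module>.<function>, so something like this should avoid the cmd.run argument error; when nginx runs under systemd, restarting the unit normally also stops leftover worker processes in its cgroup:
{% if not data['data']['nginx']['running'] %}
restart_nginx:
  local.service.restart:
    - tgt: {{ data['data']['id'] }}
    - arg:
      - nginx
{% endif %}
If pkill really is needed, the two shell commands can instead be combined into a single string passed to cmd.run (with python_shell enabled via kwarg) rather than as two separate arg entries.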


Not able to compile data with SaltStack state file

I am trying to use Salt state files to configure network devices. I will briefly describe my current setup:
I have a pillar file ntp.sls saved as /etc/salt/pillar/ntp.sls, and it looks like this:
ntp.servers:
  - 11.1.1.1
  - 2.2.2.2
Then I have a Jinja template saved as /etc/salt/states/ntp/templates/ntp.jinja that looks like this:
{%- for server in servers %}
ntp {{ server }}
{%- endfor %}
Finally I have a state file saved as /etc/salt/states/ntp/init.sls:
ntp_example:
  netconfig.managed:
    - template_name: salt://ntp/templates/ntp.jinja
    - debug: true
    - servers: {{ salt.pillar.get('ntp.servers') }}
I am getting the following error when I run sudo salt sw state.sls ntp, where sw is the proxy minion:
sw:
    Data failed to compile:
    ID ntp.servers in SLS ntp is not a dictionary
Getting the data from the pillar works; the command sudo salt sw pillar.get ntp.servers outputs:
sw:
    - 11.1.1.1
    - 2.2.2.2
Any suggestions on what could be wrong and how to fix it?
Thanks
I think you should declare something like this in /etc/salt/pillar/ntp.sls:
ntp-servers:
  - 11.1.1.1
  - 2.2.2.2
and then load these values with:
- servers: {{ salt.pillar.get('ntp-servers') }}
The . acts as a separator in SaltStack (for example, in SLS names it maps to a directory separator), so it is best avoided in pillar key names.
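Another option (an assumption sketched here, not part of the answer above) is to nest the pillar data and use the colon-delimited lookup that pillar.get supports, which avoids dots in key names altogether:
ntp:
  servers:
    - 11.1.1.1
    - 2.2.2.2
and then in the state file:
    - servers: {{ salt.pillar.get('ntp:servers') }}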

Accessing Mine data immediately after install

I am deploying a cluster via SaltStack (on Azure). I've installed the client, which initiates a reactor that runs an orchestration to push a Mine config, do an update, and restart salt-minion (I upgraded that to restarting the box).
After all of that, I can't access the mine data until I restart the minion.
/srv/reactor/startup_orchestration.sls
startup_orchestrate:
  runner.state.orchestrate:
    - mods: orchestration.startup
orchestration.startup
orchestration.mine:
  salt.state:
    - tgt: '*'
    - sls:
      - orchestration.mine

saltutil.sync_all:
  salt.function:
    - tgt: '*'
    - reload_modules: True

mine.update:
  salt.function:
    - tgt: '*'

highstate_run:
  salt.state:
    - tgt: '*'
    - highstate: True
orchestration.mine
{% if salt['grains.get']('MineDeploy') != 'complete' %}

/etc/salt/minion.d/globalmine.conf:
  file.managed:
    - source: salt://orchestration/files/globalmine.conf

MineDeploy:
  grains.present:
    - value: complete
    - require:
      - service: rabbit_running

sleep 5 && /sbin/reboot:
  cmd.run

{%- endif %}
How can I push a mine update via a reactor and then get the data shortly afterwards?
I deploy my mine_functions from pillar so that I can update the functions on the fly.
Then you just have to run salt <target> saltutil.refresh_pillar and salt <target> mine.update to get your mine info on a new host.
Example:
/srv/pillar/my_mines.sls
mine_functions:
  aws_cidr:
    mine_function: grains.get
    delimiter: '|'
    key: ec2|network|interfaces|macs|{{ mac_addr }}|subnet_ipv4_cidr_block
  zk_pub_ips:
    - mine_function: grains.get
    - ec2:public_ip
You would then make sure your pillar's top.sls targets the appropriate minions, then do the saltutil.refresh_pillar/mine.update to get your mine functions updated & mines supplied with data. After taking in the above pillar, I now have mine functions called aws_cidr and zk_pub_ips I can pull data from.
One caveat to this method is that mine_interval has to be defined in the minion config, so that parameter wouldn't be doable via pillar. Though if you're ok with the default 60-minute interval, this is a non-issue.
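For illustration (the target glob below is an assumption, not from the answer), the refresh looks like:
salt 'newnode*' saltutil.refresh_pillar
salt 'newnode*' mine.update
and the collected data can then be consumed elsewhere, for example in a Jinja template:
{% for host, cidr in salt['mine.get']('*', 'aws_cidr').items() %}
# {{ host }} -> {{ cidr }}
{% endfor %}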

Prevent a Salt state from running when a condition is not met

I have a Salt state that I only want to be executed when the target operating system is not RedHat; if the OS is RedHat, then I'd like to return just an error message.
In order to do that, I've been adding this at the top of the .sls file:
{% if grains['os'] == 'RedHat' %}
RedHat not supported
{% endif %}
The above works because the message I've inserted is not a valid entry, so the file fails to compile when the target operating system is RedHat. But I feel this is just a hack; I'd like to know if there's a more elegant solution to this problem. Any ideas?
With the code below, RedHat servers will only run the test.succeed_without_changes state.
The test.succeed_without_changes state ensures that the minion has executed this job and reports no changes, which is useful in your logging.
Only servers with another OS execute the real states in your state file.
Code:
{% if grains['os'] == 'RedHat' %}
RedHat-server-logging-state:
  test.succeed_without_changes:
    - name: RedHat OS detected
{% else %}
Execution-state-1:
  test.succeed_with_changes:
    - name: State 1 executed on non RedHat server
Execution-state-2:
  test.succeed_with_changes:
    - name: State 2 executed on non RedHat server
{% endif %}
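If you would rather have the run report an actual failure on RedHat, as the question suggests, a sketch using the test.fail_without_changes state (an alternative, not part of the answer above) would be:
{% if grains['os'] == 'RedHat' %}
unsupported-os:
  test.fail_without_changes:
    - name: RedHat is not supported by this state
{% else %}
# ... the real states go here ...
{% endif %}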

Salt: Condition based on systemd being available or not

I want to install this file via SaltStack.
# /etc/logrotate.d/foo
/home/foo/log/foo.log {
    compress
    # ...
    postrotate
        systemctl restart foo.service
    endscript
}
Unfortunately there are some old machines which don't have systemd yet.
For those machines I need this postrotate script:
/etc/init.d/foo restart
How do I get this done in Salt?
I guess I need something like this:
postrotate
    {% if ??? %}
    /etc/init.d/foo restart
    {% else %}
    systemctl restart foo.service
    {% endif %}
endscript
But how do I implement the ??? part?
We can discover this by taking advantage of the service module, which is a virtual module that is ultimately implemented by the specific module appropriate for the machine.
From the command line we can discover the specific module being used with test.provider. Here is an example:
$ sudo salt 'some.*' test.provider service
some.debian.8.machine:
    systemd
some.debian.7.machine:
    debian_service
some.redhat.5.machine:
    rh_service
To discover this in a template we can use:
{{ salt["test.provider"]("service") }}
So, you could use something like:
postrotate
    {% if salt["test.provider"]("service") != "systemd" %}
    /etc/init.d/foo restart
    {% else %}
    systemctl restart foo.service
    {% endif %}
endscript
NOTE:
The possible return values of test.provider vary across platforms. From the source, these appear to be the currently available providers:
$ cd salt/modules && grep -l "__virtualname__ = 'service'" *.py
debian_service.py
freebsdservice.py
gentoo_service.py
launchctl.py
netbsdservice.py
openbsdrcctl.py
openbsdservice.py
rest_service.py
rh_service.py
smf.py
systemd.py
upstart.py
win_service.py
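As a side note (this relies on the init grain being available on your minions, which is an assumption about your Salt version and platform), the same decision can be made from grains:
postrotate
    {% if grains.get('init') == 'systemd' %}
    systemctl restart foo.service
    {% else %}
    /etc/init.d/foo restart
    {% endif %}
endscript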
I'd just call out directly to Salt's service module which will do the right thing based on the OS.
postrotate
    salt-call service.restart foo
endscript
A more "salty" way of doing this would be something like this:
my_file:
  file.managed:
    - source: salt://logrotate.d/foo
    - name: /etc/logrotate.d/foo
    - watch_in:
      - service: my_foo_service

my_foo_service:
  service.running:
    - name: foo
This will lay down the file for you and then restart the foo service if any changes are made.
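As a usage sketch (the SLS path and minion target are assumptions), if this is saved as logrotate/foo.sls it can be applied with:
salt 'web*' state.apply logrotate.foo
Because of the watch_in requisite, any later change to the managed file triggers a restart of foo on the next run.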

Salt changing /etc/hosts, but still caching old one?

Is Salt caching /etc/hosts?
I'm in a situation where I change /etc/hosts so that the FQDN points to the external IP address instead of 127.0.0.1.
The problem is that on the first run the fqdn_ip4 grain stays 127.0.0.1, and I need to rerun salt '*' state.highstate to get the right values. This leads to knock-on problems which cost me a lot of time.
Is Salt rendering everything before execution (or caching DNS)? How do I address this problem?
The state file looks like this:
127.0.0.1:
  host.absent:
    - name: {{ nodename }}
    - ip: 127.0.0.1

127.0.1.1:
  host.absent:
    - name: {{ nodename }}
    - ip: 127.0.1.1

{% for minion, items in salt['mine.get']('environment:' + environment, 'grains.item', expr_form='grain')|dictsort %}
{{ minion }}:
  host.present:
    - ip: {{ items['ip_addr'] }}
    - names:
      - {{ minion }}
      - {{ minion.split('.')[0] }}
{% endfor %}
And the code that uses the IP looks like this:
{% set ipv4 = salt['config.get']('fqdn_ip4') -%}
# IP Address that Agent should listen on
listening_ip={{ ipv4[0] }}
Salt is caching the values of grains. Therefore salt['config.get']('fqdn_ip4') will return the value from the beginning of the run.
Use the following in your state file to refresh the grain information:
refreshgrains:
  module.run:
    - name: saltutil.sync_grains
Salt will render the state before executing it, so you might not be able to use any new grain information inside the state file itself.
But you will be able to use the new grain values in Jinja templates for files. I assume the second code snippet is from a template that is used by Salt's file.managed, so you should be safe here.
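Depending on your Salt version (an assumption; check whether your minions provide it), saltutil.refresh_grains can be used instead of sync_grains, and giving the state an early order ensures it runs before anything that depends on the grains:
refreshgrains:
  module.run:
    - name: saltutil.refresh_grains
    - order: 1
As noted above, the SLS itself is still rendered before execution, so Jinja in the same state file keeps the old values, while templates rendered later through file.managed pick up the refreshed ones.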
