I've just inherited an Nginx proxy/app server setup that makes use of Consul and Consul Template for service discovery and registration. The Nginx proxy has a config file with an entry like this to register the downstream app servers:
<snip>
upstream appservers {
  {{ range service "my-app-servers" }}
  server {{ .Address }}:{{ .Port }};
  {{ end }}
}
<snip>
I have consul-template running in the background to catch any updates to my-app-servers, update nginx.conf accordingly, and then reload the nginx config. This all works great, and we're able to add and remove app servers from the mix as needed. However, if no app servers are available at all, we end up with an empty upstream block, and that causes the nginx reload to fail.
Is there a way in consul-template to have "if service my-app-servers exists, then..." and "if not, then..." logic? I'd like my nginx.conf to have one configuration when upstream servers exist, and a contingency setup that serves error pages when they do not. I'm still getting up to speed on consul-template and haven't seen any examples that show the syntax for such logic. Any help?
You can achieve this by storing the result of the service lookup in a variable, then using a conditional that only outputs the upstream block if the variable is not empty.
{{- $upstream_services := service "my-app-servers" -}}
{{- if $upstream_services -}}
upstream appservers {
  {{- range $upstream_services }}
  server {{ .Address }}:{{ .Port }};
  {{- end }}
}
{{- end }}
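If you also want the contingency behaviour described in the question, the same variable can drive an else branch. The sketch below is one way to do it, not the only one: it keeps the appservers upstream defined either way so that a proxy_pass http://appservers; elsewhere in nginx.conf keeps working, and the fallback port 8503 and maintenance page path are placeholders chosen for illustration.
{{- $upstream_services := service "my-app-servers" -}}
{{- if $upstream_services -}}
upstream appservers {
  {{- range $upstream_services }}
  server {{ .Address }}:{{ .Port }};
  {{- end }}
}
{{- else }}
# No healthy my-app-servers instances in Consul: keep the upstream defined,
# but point it at a local server block that only serves an error page.
upstream appservers {
  server 127.0.0.1:8503;
}

server {
  listen 127.0.0.1:8503;
  root /usr/share/nginx/html;
  error_page 503 /maintenance.html;

  location / {
    return 503;
  }

  location = /maintenance.html {
    # Reached via the error_page internal redirect above.
  }
}
{{- end }}
consul-template will re-render and reload nginx again as soon as healthy instances reappear, so the fallback only lasts as long as the service list is empty.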
I am trying to use Salt state files to configure network devices. I will briefly describe my current setup:
I have a pillar file saved as /etc/salt/pillar/ntp.sls, and it looks like this:
ntp.servers:
  - 11.1.1.1
  - 2.2.2.2
Then I have a Jinja template saved as /etc/salt/states/ntp/templates/ntp.jinja that looks like this:
{%- for server in servers %}
ntp {{ server }}
{%- endfor %}
Finally, I have a state file saved as /etc/salt/states/ntp/init.sls:
ntp_example:
  netconfig.managed:
    - template_name: salt://ntp/templates/ntp.jinja
    - debug: true
    - servers: {{ salt.pillar.get('ntp.servers') }}
I get the following error when I run sudo salt sw state.sls ntp (where sw is the proxy minion):
sw:
    Data failed to compile:
        ID ntp.servers in SLS ntp is not a dictionary
Getting the data straight from the pillar does work; the command sudo salt sw pillar.get ntp.servers produces this output:
sw:
    - 11.1.1.1
    - 2.2.2.2
Any suggestions as to what could be wrong and how to fix it?
Thanks
I think you should declare something like this in /etc/salt/pillar/ntp.sls:
ntp-servers:
  - 11.1.1.1
  - 2.2.2.2
and then load these values with:
- servers: {{ salt.pillar.get('ntp-servers') }}
The . is a directory separator in SaltStack, so it is best avoided in the key name.
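If you would rather keep the ntp grouping than rename the key, a nested layout is another option. This is only a sketch; note that pillar.get walks nested keys with a colon, not a dot:
# /etc/salt/pillar/ntp.sls
ntp:
  servers:
    - 11.1.1.1
    - 2.2.2.2
and in the state file:
- servers: {{ salt.pillar.get('ntp:servers') }}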
I want to install this file via salt-stack.
# /etc/logrotate.d/foo
/home/foo/log/foo.log {
    compress
    # ...
    postrotate
        systemctl restart foo.service
    endscript
}
Unfortunately there are some old machines which don't have systemd yet.
For those machines I need this postrotate script:
/etc/init.d/foo restart
How can I get this done in Salt?
I guess I need something like this:
postrotate
    {% if ??? %}
    /etc/init.d/foo restart
    {% else %}
    systemctl restart foo.service
    {% endif %}
endscript
But how do I implement the ??? part?
We can discover which init system is in use by taking advantage of the service module, which is a virtual module that is ultimately implemented by the specific module appropriate for the machine.
From the command line we can discover the specific module being used with test.provider. Here is an example:
$ sudo salt 'some.*' test.provider service
some.debian.8.machine:
    systemd
some.debian.7.machine:
    debian_service
some.redhat.5.machine:
    rh_service
To discover this in a template we can use:
{{ salt["test.provider"]("service") }}
So, you could use something like:
postrotate
    {% if salt["test.provider"]("service") != "systemd" %}
    /etc/init.d/foo restart
    {% else %}
    systemctl restart foo.service
    {% endif %}
endscript
NOTE:
The possible return values of test.provider vary across platforms. From the source, these appear to be the currently available providers:
$ cd salt/modules && grep -l "__virtualname__ = 'service'" *.py
debian_service.py
freebsdservice.py
gentoo_service.py
launchctl.py
netbsdservice.py
openbsdrcctl.py
openbsdservice.py
rest_service.py
rh_service.py
smf.py
systemd.py
upstart.py
win_service.py
I'd just call out directly to Salt's service module, which will do the right thing based on the OS.
postrotate
    salt-call service.restart foo
endscript
A more "salty" way of doing this would be something like this:
my_file:
  file.managed:
    - source: salt://logrotate.d/foo
    - name: /etc/logrotate.d/foo
    - watch_in:
      - service: my_foo_service

my_foo_service:
  service.running:
    - name: foo
This will lay down the file for you and then restart the foo service if any changes are made.
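If the logrotate file itself still needs the init-system conditional from the earlier answer, the managed file can be rendered as a Jinja template. A sketch, where the source name salt://logrotate.d/foo.jinja is an assumption for illustration:
my_file:
  file.managed:
    - name: /etc/logrotate.d/foo
    - source: salt://logrotate.d/foo.jinja    # hypothetical template path
    - template: jinja
    - watch_in:
      - service: my_foo_service

my_foo_service:
  service.running:
    - name: foo
The template can then use the test.provider check shown above to pick between systemctl and /etc/init.d in the postrotate script.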
Suppose I have different credentials in two different environments, but that's the only thing that differs between them, and I don't want to make extra pillar files for a single item.
Suppose I attack the problem like this:
{%- set deployment = grains.get('deployment') %}
{%- load_yaml as credentials %}
prod: prodpassword
test: testpassword
dev: devpassword
{%- endload %}

some_app:
  user: someuser
  password: {{ credentials[deployment] }}
  ...more configuration here...
This works as expected. But can a minion in test theoretically get the password for prod? That depends on whether the dict lookup happens before or after data is sent to the client, I think, which in turn depends on when the jinja is rendered. Does the master render it first and then send the resulting data, or does the minion receive the pillar file as-is, then render it itself?
Pillar data is always rendered on the master, never the minion. The master does have access to the minion's grains, however, which is why your example works.
Given a Pillar SLS file with the following contents:
test: {{ grains['id'] }}
The following pillar data will result:
# salt testminion pillar.item test
testminion:
    ----------
    test:
        testminion
Source: I'm a SaltStack core developer.
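Applied to the pillar in the question, the credentials lookup happens on the master during rendering, so a minion whose deployment grain is test only ever receives its own resolved value. Roughly (the minion ID and output below are illustrative, not from a real run):
# salt testminion pillar.item some_app
testminion:
    ----------
    some_app:
        ----------
        password:
            testpassword
        user:
            someuser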
I successfully configured the Salt master (with a reactor) and a minion (with a beacon). On the minion I have nginx and a beacon configuration that watches the process:
beacons:
  service:
    nginx:
      onchangeonly: True
      uncleanshutdown: /run/nginx.pid
The event is sent and the reactor receives it. I try to restart nginx:
{% set nginx_running = data['data']['nginx']['running'] %}
{% if not nginx_running %}
restart_nginx:
  local.cmd.run:
    - tgt: {{ data['data']['id'] }}
    - arg:
      - 'pkill nginx'
      - 'systemctl restart nginx'
{% endif %}
Problems:
Is that the right way to do it?
I want to send pkill first because if nginx dies uncleanly, only the root process is gone and the worker processes keep running.
I get the error: ERROR: Specified cmd 'pkill nginx' either not absolute or does not exist
Is salt caching /etc/hosts?
I'm in a situation where I change /etc/hosts so that the FQDN points to the external IP address instead of 127.0.0.1.
The problem is that on the first run fqdn_ipv4 stays 127.0.0.1, and I need to rerun salt '*' state.highstate to get the right values. This leads to problems like this, which cost me a lot of time.
Does Salt render everything before execution (or cache DNS)? How do I address this problem?
The state file looks like this:
127.0.0.1:
  host.absent:
    - name: {{ nodename }}
    - ip: 127.0.0.1

127.0.1.1:
  host.absent:
    - name: {{ nodename }}
    - ip: 127.0.1.1

{% for minion, items in salt['mine.get']('environment:' + environment, 'grains.item', expr_form='grain')|dictsort %}
{{ minion }}:
  host.present:
    - ip: {{ items['ip_addr'] }}
    - names:
      - {{ minion }}
      - {{ minion.split('.')[0] }}
{% endfor %}
And the code that uses the IP looks like this:
{% set ipv4 = salt['config.get']('fqdn_ip4') -%}
# IP Address that Agent should listen on
listening_ip={{ ipv4[0] }}
Salt caches grain values. Therefore salt['config.get']('fqdn_ip4') will return the value from the beginning of the run.
Use the following in your state file to refresh the grain information:
refreshgrains:
  module.run:
    - name: saltutil.sync_grains
Salt will render the state before executing it, so you might not be able to use any new grain information inside the state file itself.
But you will be able to use the new grain values in Jinja templates for files. I assume the second code snippet is from a template that is used by Salt's file.managed, so you should be safe here.
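To make that ordering explicit, the file state can require the grain refresh. A sketch, in which the state ID agent_config, the target path, and the template name are invented for illustration:
refreshgrains:
  module.run:
    - name: saltutil.sync_grains

agent_config:
  file.managed:
    - name: /etc/agent/agent.conf                    # hypothetical target file
    - source: salt://agent/files/agent.conf.jinja    # hypothetical template
    - template: jinja
    - require:
      - module: refreshgrains
With the require in place, the template is rendered only after the grains have been refreshed, so the listening_ip line picks up the new fqdn_ip4 value on the same run.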