SaltStack: is it possible to allow only state.apply for minions? - salt-stack

I know there is a blackout mode for salt-minions (https://docs.saltstack.com/en/latest/topics/blackout/index.html), but it is configured on the master side. My need is to configure the minion itself so that it refuses any commands coming from the salt-master (anything like salt '*' test.ping or salt '*' file.mkdir "/tmp/foobar") and only allows salt '*' state.apply with local state files.
Is it possible?

While exploring the Salt codebase I found that a solution for my issue already exists upstream: minion_blackout can now be set in grains, not only in pillar. That means I can configure it on the minion itself and allow only state.apply via minion_blackout_whitelist.
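For illustration, a sketch of what that minion-side configuration could look like, assuming the grains-based option uses the same names as the pillar-based one in the blackout documentation (verify against your Salt version):
# /etc/salt/minion on the minion (restart the salt-minion service afterwards)
grains:
  minion_blackout: True
  minion_blackout_whitelist:
    - state.apply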

Related

Standalone minion to Master minion

I have a host with a standalone minion configured. It has all the required configuration in /srv/salt, and executing it with salt-call works as expected. Now I want a master to have control over this minion.
So I created a salt master on another host, updated the /etc/salt/minion configuration file on the minion host to point to the master, then restarted the minion and accepted its key on the master.
Now I can do some basic checks like salt 'minion-host' test.ping, but salt 'minion-host' state.highstate fails with the minion not responding. How can I execute the minion with its own configuration from the master?
What is the proper way to run salt-call on the minion, using the minion configuration already available on the minion host?
If salt minion test.ping works, that's a good start. You could also use:
salt-run manage.up
That should give a list of minions currently up.
Keep in mind that the minion needs to run the salt agent (https://docs.saltstack.com/en/getstarted/system/communication.html)
Test with this:
salt minion state.highstate test=True
Or get more verbose output using:
salt minion state.highstate -l debug
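If the highstate is actually running but just takes longer than the CLI's default return timeout, a couple of hedged things to try (the timeout value is arbitrary):
# give the return up to 5 minutes instead of the default timeout
salt minion state.highstate -t 300
# see whether a state run is still executing on the minion
salt minion saltutil.running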

Using cloud-init to change resolv.conf

I want my OpenStack setup to work such that when I boot a new instance, 8.8.8.8 is added to dns-nameservers.
This is my old /etc/resolv.conf (in a new VM spawned in OpenStack):
nameserver 10.0.0.2
search openstacklocal
And this is the new resolv.conf that I want:
nameserver 8.8.8.8
nameserver 10.0.0.2
search openstacklocal
I followed this tutorial and added the necessary resolv.conf information to my cloud-init config file (/etc/cloud/cloud.cfg):
manage_resolv_conf: true
resolv_conf:
  nameservers: ['8.8.4.4', '8.8.8.8']
  searchdomains:
    - foo.example.com
    - bar.example.com
  domain: example.com
  options:
    rotate: true
    timeout: 1
These changes were made in the /etc/cloud/cloud.cfg file on the OpenStack host.
However, the changes don't seem to be reflected.
Any suggestions?
It will not work because cloud-init performs network configuration too early in the boot process.
See the cloud-init boot stages: https://cloudinit.readthedocs.io/en/latest/topics/boot.html
Network configuration is done in the "Local" stage, but the user-data from OpenStack is only downloaded in the "Config" stage, after the network is already up. At that point, network configuration in user-data is ignored.
Instead, you need to edit the networking files and then bring the interfaces up by passing commands to cloud-init with runcmd, as sketched below.
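A minimal sketch of that runcmd approach, assuming the 8.8.8.8 nameserver from the question and that nothing later rewrites /etc/resolv.conf:
#cloud-config
runcmd:
  # prepend Google DNS ahead of the existing nameserver entries
  - sed -i '1i nameserver 8.8.8.8' /etc/resolv.conf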
Cloud-init also overwrites the entries in /etc/sysconfig/network as well as resolv.conf. To disable this, create a custom rule for the cloud-init config by adding a file /etc/cloud/cloud.cfg.d/custom-network-rule.cfg containing:
network: {config: disabled}

Salt grains reported by minion differ from those reported to master

I have a minion with id app1. It has the following grains, as reported locally via salt-call -g and corroborated by its minion config file:
id: app1
datacenter: dc1
master: saltmaster1
prototypes: application
...
where datacenter and prototypes are custom grains.
From saltmaster1, I run salt 'app1' cmd.run "echo 'yo' | wall" to make sure that I am talking to the correct minion, and I see the wall message on app1. I then check that the minion responds with salt 'app1' test.ping, and it returns True. Now I run salt 'app1' grains.items from saltmaster1 and it displays the following values:
id: app1
master: saltmaster1
prototypes: application
...
The datacenter grain is missing! Why?
I have already restarted the salt-minion service and waited a few minutes.
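One hedged thing to try from the master (an assumption, not a confirmed fix) is to force the minion to re-read its grains and then query the single grain:
# ask the minion to reload its grains
salt 'app1' saltutil.refresh_grains
# then query the grain directly
salt 'app1' grains.get datacenter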

saltstack: dynamically update /etc/hosts

How do I dynamically update the /etc/hosts file with SaltStack?
There is an example that works great with Ansible, but I don't know how to do the same with Salt:
http://xmeblog.blogspot.fr/2013/06/ansible-dynamicaly-update-etchosts.html
- name: add hostname in /etc/hosts
  lineinfile: dest=/etc/hosts regexp='.*{{ item }}$' line="{{ hostvars[item]['ansible_default_ipv4']['address'] }} {{item}}" state=present
  when: hostvars[item]['ansible_default_ipv4']['address'] is defined
  with_items: groups['all']
This updates /etc/hosts with the IP address and hostname of every Ansible host available in the inventory file.
How can this be done with Salt?
I want to collect the IP address and hostname of every minion and write them into /etc/hosts on all minions.
minion1 => ip (192.168.1.1) hostname is (example1.net)
minion2 => ip (192.168.1.2) hostname is (example2.net)
minion3 => ip (192.168.1.3) hostname is (example3.net)
In every minion's /etc/hosts file, the entries should look like:
127.0.0.1 localhost
::1 localhost
192.168.1.1 example1.net
192.168.1.2 example2.net
192.168.1.3 example3.net
Please take a look at https://github.com/saltstack-formulas/hostsfile-formula; hopefully it suits your needs.
This particular formula 'automagically' creates /etc/hosts records for all known minions.
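If you would rather hand-roll a small state than pull in the whole formula, a sketch along the same lines might look like the following; it assumes a mine function named network.ip_addrs is published by every minion and simply takes the first address returned:
# /etc/salt/minion (or pillar) on every minion -- publish addresses to the mine
mine_functions:
  network.ip_addrs:
    - eth0

# /srv/salt/hosts.sls -- one host.present entry per minion known to the mine
{% for minion_id, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
hosts_entry_{{ minion_id }}:
  host.present:
    - name: {{ minion_id }}
    - ip: {{ addrs[0] }}
{% endfor %}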
Please note: I've noticed that the formula's link to the Formula Documentation is broken; try this one instead: Salt Formulas installation and usage instructions.
Salt Formulas explained
Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring and starting a service, setting up users or permissions, and many other common tasks.

How to communicate with salt-master

I am trying to connect my salt-minion to the salt-master, but the minion's key never shows up on the salt-master.
On my VM I installed salt-master, and on my Windows machine I installed salt-minion. I set the master's IP address on the minion in salt\conf\minion:
master: master ip address
Then I ran the command below:
c:\salt\salt-minion.exe -l debug -c c:\salt\conf
I get output like this:
[DEBUG ] Reading configuration from c:\salt\conf\minion
[INFO ] Using cached minion ID from c:\salt\conf\minion_id: HoroppaLabs
[DEBUG ] Configuration file path: c:\salt\conf\minion
[INFO ] Setting up the Salt Minion "HoroppaLabs"
[DEBUG ] Created pidfile: c:\salt\var\run\salt-minion.pid
[DEBUG ] Reading configuration from c:\salt\conf\minion
[DEBUG ] Attempting to authenticate with the Salt Master at 172.31.16.131
[DEBUG ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[DEBUG ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[WARNING ] SaltReqTimeoutError: Waited 60 seconds
[INFO ] Waiting for minion key to be accepted by the master.
[DEBUG ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[WARNING ] SaltReqTimeoutError: Waited 60 seconds
[INFO ] Waiting for minion key to be accepted by the master.
[DEBUG ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
I didn't get anything else, just the above.
On the master, I ran the command below:
sudo salt-key -L
Accepted Keys:
Unaccepted Keys:
Rejected Keys:
There were no keys on the master to accept.
Can anyone help? How can I get the minion communicating with the salt-master?
This could be because the default incoming ports for Salt communication on the master (4505 and 4506) are blocked.
Since minions connect to masters, the only firewall configuration that must be done is on the master. By default, the master must be able to accept incoming connections on ports 4505 and 4506.
If your master is on CentOS or RHEL, try the commands below to open the ports in your firewall settings:
firewall-cmd --get-active-zones
It will say either public, dmz, or something else; only apply the rule to the zones that require it.
firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp
firewall-cmd --reload
This opens ports 4505 and 4506.
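As an additional check (a hedged suggestion beyond the original answer), confirm on the master that salt is actually listening on those ports:
# the salt-master publisher (4505) and request server (4506) should show up here
ss -tlnp | grep -E '4505|4506'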
You need to accept your salt minion's key on the master. To do that, run the following command on your master:
salt-key -a <your_minions_hostname_or_ip>
For example, in my case I ran:
salt-key -a virtual#192.168.56.101
For reference have a look here.
