I'd like to assign a role to various minions based on a process that is running. For example I can do this
salt '*' cmd.run 'ps ax |grep [n]amed'
This returns servers that are running bind, but since it runs against everyone (*), not only do I get the DNS servers, I also get everyone else, albeit with blank return data. Is there a way to only return the servers where this is true, and then pipe it to grains.setval role nameserver?
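Something like the following is roughly what I have in mind (just a sketch; it assumes pgrep exists on the minions and that calling salt-call --local from within cmd.run is acceptable):
salt '*' cmd.run 'pgrep named >/dev/null && salt-call --local grains.setval role nameserver || true'
The && only fires on hosts where a named process is found, so only those minions end up with the role grain.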
Thanks!
Related
I have to manage a cluster of ~600 ubuntu (16.04-20.04) servers using Saltstack 3002.
I decided to install a multi-master setup for load distribution and fault tolerance. salt-syndic did not seem to be the right choice for me; instead I thought the salt-minions should pick a master from the list at random at minion start. So my config looks as follows (excerpts):
master:
auto_accept: True
master_sign_pubkey: True
master_use_pubkey_signature: True
minion:
master:
- saltmaster001
- saltmaster002
- saltmaster003
verify_master_pubkey_sign: True
retry_dns: 0
master_type: failover
random_master: True
(three salt masters as you can see). I basically followed this tutorial: https://docs.saltstack.com/en/latest/topics/tutorials/multimaster_pki.html
Now, it doesn't really work well, for various reasons:
salt 'tnscass*' test.ping
tnscass011.mo-mobile-prod.ams2.cloud:
True
tnscass010.mo-mobile-prod.ams2.cloud:
True
tnscass004.mo-mobile-prod.ams2.cloud:
True
tnscass005.mo-mobile-prod.ams2.cloud:
Minion did not return. [Not connected]
tnscass003.mo-mobile-prod.ams2.cloud:
Minion did not return. [Not connected]
tnscass007.mo-mobile-prod.ams2.cloud:
Minion did not return. [Not connected]
Salt runs from a master only work if the targeted minions happen to be connected to the master on which you issue the salt command, and not to any other master. In the above example the response would be True for different minions if you ran it on a different master.
So the only way is to use salt-call on a particular minion, which is not very useful. And even that does not work well, e.g.:
root@minion:~# salt-call state.apply
[WARNING ] Master ip address changed from 10.48.40.93 to 10.48.42.32
[WARNING ] Master ip address changed from 10.48.42.32 to 10.48.42.35
So the minion decides to switch to another master, and the salt-call takes ages. The rules that determine under which conditions a minion decides to switch are not explained (at least I couldn't find anything). Is it the load on the master? The number of connected minions?
Another problem is the salt mines. I'm using code as follows:
salt.saltutil.runner('mine.get', tgt='role:mopsbrokeraggr', fun='network.get_hostname', tgt_type='grain')
Unfortunately, the mine values differ badly from minion to minion, so the mines are unusable as well.
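For reference, mine data for a call like this is typically populated on the minions with something along these lines (a sketch; mine_functions can live in the minion config or in pillar):
mine_functions:
  network.get_hostname: []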
I should mention that my masters are big machines with 16 cores and 128GB RAM, so this is not a matter of resource shortage.
To me, the scenario described in https://docs.saltstack.com/en/latest/topics/tutorials/multimaster_pki.html simply does not work at all.
So could anybody tell me how to create a proper setup with 3 salt masters for load distribution?
Is salt-syndic actually the better approach?
Can salt-syndic be used with randomly assigning the minions to the masters based on load or whatever?
What is the purpose of the mentioned tutorial? Or have I just overlooked something?
There are a couple of statements worth noticing in the documentation about this method. Quoting from the link in the question:
The first master that accepts the minion, is used by the minion. If the master does not yet know the minion, that counts as accepted and the minion stays on that master.
Then
A test.version on the master the minion is currently connected to should be run to test connectivity.
So this seems to indicate that the minion is connected to one master at a time. Which means only that master can run test.version on that minion (and not any other master).
One of the primary objectives of your question can be met with a different method of multi-master setup: https://docs.saltproject.io/en/latest/topics/tutorials/multimaster.html
In a nutshell, you configure more than one master with the same PKI keypair. In the explanation below I have a multi-master setup with 2 servers, and I copy the following files from my first/primary server to the second server:
/etc/salt/pki/master/master.pub
/etc/salt/pki/master/master.pem
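One way to get them onto the second master (a sketch; the hostname is a placeholder and root SSH access between the masters is assumed):
scp /etc/salt/pki/master/master.pub /etc/salt/pki/master/master.pem root@master2:/etc/salt/pki/master/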
Then configure salt-minion for multiple masters in /etc/salt/minion:
master:
- master1
- master2
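On systemd-based systems the restarts would look something like this (a sketch):
systemctl restart salt-master    # on both masters
systemctl restart salt-minion    # on every minion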
Once the respective services have been restarted, you can check that all minions are available on both masters with salt-key -L:
# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
minion1
minion2
minion3
...
Rejected Keys:
Once all minions' keys are accepted on both masters, we can run salt '*' test.version from either of the masters and reach all minions.
There are other considerations on how to keep the file_roots, pillar_roots, minion keys, and configuration consistent between the masters in the link referenced above.
I modified my Salt architecture from a single salt master to multiple salt-master/syndic nodes.
I set up a high-level master of masters to which the syndics connect, via syndic_master.
It works well: when I run salt '*' test.ping, minions from different masters are returned.
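For reference, the relevant pieces of that (working) single-MoM layout look roughly like this (a sketch; the IP is the one used below):
# /etc/salt/master on the master of masters
order_masters: True
# /etc/salt/master on each syndic node
syndic_master: 10.30.2.37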
Now I would like to add a second master of masters; my syndic config now looks like this:
id: salt-syndic1
syndic_master:
- 10.30.2.37
- 10.30.2.38
If I now run salt '*' test.ping on both masters of masters, the returns seem to be split: one MoM returns the minions from one syndic, and the other MoM returns the minions from the other syndic. For the minions which did not respond, I get this error:
Minion did not return. [No response]
The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
salt-run jobs.lookup_jid 20201119145521842618
So we can see that the command is indeed sent to all minions from both MoMs, but only one syndic returns results per MoM.
I set the master_id config only on the master of masters servers.
I also tried sharing the job cache between the MoMs, without success.
You can try this in your syndic config
syndic_forward_all_events: True
Ref: https://docs.saltstack.com/en/latest/ref/configuration/master.html#syndic-forward-all-events
I might be asking a dumb question and I am apologizing for it.
Suppose I have two Unix machines, e.g. machine1 and machine2.
First, I have logged into machine1 using userid1.
I am connecting to machine2 using the SSH command below and it is successful.
ssh userid2@machine2
On machine2, I want to know from which userid of machine1 the SSH connection was made,
i.e. I want to know the userid1 value or name.
Based on which userid1 logged in to machine2, I have to run particular commands, for example:
if [ "$parent_user" = "userid1" ]; then   # parent_user is a placeholder for the userid1 value I am after
    source ./temp_file
fi
So, is there a way to implement this?
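The only way I can imagine this working is to pass the originating username along explicitly, since sshd on machine2 only sees the account I log in as (userid2). A rough sketch, assuming bash on both machines:
ssh userid2@machine2 "ORIG_USER=$(whoami) bash -s" <<'EOF'
if [ "$ORIG_USER" = "userid1" ]; then
    source ./temp_file
fi
EOF
Here $(whoami) is expanded on machine1 before the connection is made, so the remote shell receives the literal userid1 value.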
Thanking you in advance.
I am trying to run a multi-master setup in our dev environment.
The idea is that every dev team has their own salt master. However, all minions in the entire dev environment should be able to receive salt commands from all salt master servers.
Since not every team needs their salt master 24/7, most of them are turned off for several days during the week.
I'm running 2016.11.4 on the masters, as well as on the minions.
However, I run into the following problem: if one of the hosts listed in the minion's config file is shut down, the minion will not always report back on a 'test.ping' command (not even with -t 60).
My experience is that the more master servers are offline, the longer the minion's lag in answering requests.
Especially if you execute a 'test.ping' on MasterX while the minion's log is at this point:
2017-05-19 08:31:44,819 [salt.minion ][DEBUG ][5336] Connecting to master. Attempt 4 (infinite attempts)
If I trigger a 'test.ping' at this point, chances are 50/50 that I will get a 'minion did not return' on my master.
Obviously though, I always want a return to my 'test.ping', regardless from which master I send it.
Can anybody tell me if what I am trying is feasible with Salt? All the articles about Salt multi-master setups that I could find only say: 'put a list of master servers into the minion config and that's it!'
The comment from gtmanfred solved my question:
That is not really the way multi master is meant to work. It is supposed to be used more for failover and not for separating out teams.
I am aware that nodes can be started from the shell. What I am looking for is a way to start a remote node from within a module. I have searched, but have been able to find nothing.
Any help is appreciated.
There's a pool(3) facility:
pool can be used to run a set of
Erlang nodes as a pool of
computational processors. It is
organized as a master and a set of
slave nodes..
pool:start/1,2 starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started. The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names, Args is used to specify command line arguments.
With pool you get a load-distribution facility for free.
Master node may be started this way:
erl -sname poolmaster -rsh ssh
The -rsh flag specifies an alternative to rsh for starting a slave node on a remote host; here we use SSH. Make sure your box has working SSH keys and that you can authenticate to the remote hosts using them.
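For example (a sketch; the user and host names are placeholders):
ssh-keygen -t ed25519
ssh-copy-id someuser@remotehost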
If there are no hosts in the file .hosts.erlang, then no slave nodes are started, and you can use slave:start/2,3 to start slave nodes manually passing arguments if needed.
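Putting the pieces together, a minimal sketch (assuming ~/.hosts.erlang lists the remote host names, passwordless SSH works to each of them, and the cookie is a placeholder):
Nodes = pool:start(worker, "-setcookie mycookie"),
io:format("Pool nodes: ~p~n", [Nodes]),
%% pick the node with the lowest expected load and run something there
Best = pool:get_node(),
rpc:call(Best, erlang, node, []).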
You could, for example, start a remote node (H, Name and M being the target host, the node name and the mnesia directory):
Arg = "-mnesia dir '\"" ++ M ++ "\"'",
{ok, Node} = slave:start(H, Name, Arg).
Ensure epmd(1) is up and running on the remote boxes in order to start Erlang nodes.
Hope that helps.
A bit more low-level than pool is the slave(3) module; pool builds upon the functionality in slave.
Use slave:start to start a new slave.
You should probably also specify -rsh ssh on the command-line.
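A minimal sketch (the host name and cookie are placeholders; the calling node must itself be distributed, e.g. started as erl -sname master -rsh ssh):
{ok, Node} = slave:start("remotehost", worker, "-setcookie mycookie"),
rpc:call(Node, erlang, node, []).
slave:start/3 blocks until the remote node is up and returns its full name, so the rpc:call afterwards can talk to it directly.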
So use pool if you need the kind of functionality it offers, if you need something different you can build it yourself out of slave.