I am trying to test salt-cloud saltify to deploy/install salt-minions on target machines.
I created three vagrant machines and named them master, minion-01 and minion-02.
All the machines were identical:
root@master:/home/vagrant# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
Then on the master I followed http://repo.saltstack.com/#ubuntu
to install salt-master (manually, of course).
Then on the master I added these three files.
In /etc/salt/cloud.providers.d:
root@master:/etc/salt/cloud.providers.d# cat bare_metal.conf
my-saltify-config:
  minion:
    master: 192.168.33.10
  driver: saltify
In /etc/salt/cloud.profiles.d:
root@master:/etc/salt/cloud.profiles.d# cat saltify.conf
make_salty:
  provider: my-saltify-config
  script_args: git v2016.3.1
In /etc/salt/saltify-map:
root@master:/etc/salt# cat saltify-map
make_salty:
  - minion-01:
      ssh_host: 192.168.33.11
      ssh_username: vagrant
      password: vagrant
  - minion-02:
      ssh_host: 192.168.33.12
      ssh_username: vagrant
      password: vagrant
Then on the master I ran salt-cloud -m /etc/salt/saltify-map.
It was very slow, but it ran without errors.
The keys of both minion-01 and minion-02 were accepted by the salt master.
I could do this:
root@master:/home/vagrant# salt 'minion*' test.ping
minion-01:
True
minion-02:
True
and this:
root@master:/home/vagrant# salt-key
Accepted Keys:
minion-01
minion-02
Denied Keys:
Unaccepted Keys:
Rejected Keys:
The problem:
Now when I executed salt-cloud -m /etc/salt/saltify-map again,
salt-master re-ran the whole execution, and then I had this:
root@master:/home/vagrant# salt 'minion*' test.ping
minion-02:
Minion did not return. [No response]
minion-01:
Minion did not return. [No response]
and this:
root@master:/etc/salt# salt-key
Accepted Keys:
minion-01
minion-02
Denied Keys:
minion-01
minion-02
Unaccepted Keys:
Rejected Keys:
In short, salt-cloud is not acting idempotently.
What am I doing wrong?
The second problem: although the first run of salt-cloud -m /etc/salt/saltify-map installs salt on minion-01 and minion-02 and accepts their keys on the salt-master, the minion machines end up with all of these installed along with salt-minion:
root@minion-02:/home/vagrant# salt
salt salt-call salt-cp salt-master salt-proxy salt-ssh salt-unity
salt-api salt-cloud salt-key salt-minion salt-run salt-syndic
How do I make sure that only salt-minion gets installed?
Thanks.
PS:
root@master:/etc/salt# salt-master --version
salt-master 2016.3.1 (Boron)
You write: "It was very slow":
You have set script_args to values that install everything from GitHub from source. You might want to remove the parameters (or use different ones) to get a fast installation of a pre-packaged version. Please see https://github.com/saltstack/salt-bootstrap, and specifically bootstrap-salt.sh, for the available options.
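For example, a minimal sketch of a profile that installs a packaged stable release instead of building from git; the exact arguments accepted depend on your bootstrap-salt.sh version, so treat this as an assumption to verify against the script's help output:
make_salty:
  provider: my-saltify-config
  script_args: stable 2016.3.1
The stable install type pulls pre-built packages from the repositories rather than building from git, which is usually much faster and, by default, installs only the minion.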
You write: "salt-cloud is not acting idempotent":
You're doing everything correctly. salt-cloud is not idempotent. As far as I know it is not designed to be idempotent.
You write: "the minion machines have all these things installed along with salt-minion"
That is likely because the git parameters install Salt from source, which pulls in the full Salt package with all of its command-line tools. Please try a pre-packaged version of Salt instead (see the profile sketch above).
Vagrant doesn't destroy the machines between runs, does it?
By the looks of it, on the second run the salt minions started with new keys and re-registered with the master. Because they have the same names, it has all become confused.
Try "vagrant destroy" before running, so that the machines are fresh each time?
FWIW, this is also being tracked as a bug in Salt, though at present it cannot be duplicated: https://github.com/saltstack/salt/issues/34687
It appears it's a saltify bug, so I can't do anything about this question. The bug is reported and hopefully it will be fixed in a future release.
Is there any way to suppress these log messages in Symfony 4:
cache.WARNING: Failed to save key "%5B%5BC%5DApp%5CController%5CAgencyController%23about%5D%5B1%5D" '(integer) {"key":"%5B%5BC%5DApp%5CController%5CAgencyController%23about%5D%5B1%5D","type":"integer","exception":"[object] (ErrorException(code: 0): touch(): Utime failed: Operation not permitted at /mnt/c/Users/...../vendor/symfony/cache/Traits/FilesystemCommonTrait.php:95)"} []
There are hundreds of them in the log (monolog) per request, which is really annoying! I have tried changing permissions to 777 as similar questions' answers suggested, but that has no effect (maybe because I'm on WSL). Also, I do not have APC installed.
Are you sure you are using PHP 7+? It seems like the file you are accessing is on a Windows filesystem. touch() will fail with PHP 5.4 (or 5.3, I don't remember) on Windows filesystems. Also, try changing the owner of your cache files (not just 777) so they are owned by your webserver user: sudo chown -R user:usergroup directory/
Are you using vagrant?
I answered the same here
I had the same problem.
All you need to do is change the type of the synced_folder to nfs, but that option only works with Mac hosts.
To be able to use it on Windows, you need to install vagrant-winnfsd:
$ vagrant plugin install vagrant-winnfsd
Then change the type of the synced folder in your Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/var/www", type: "nfs"
end
The documentation says you also need to change the network type to dhcp, but I didn't need to do that to solve my problem:
config.vm.network "private_network", type: "dhcp"
I hope this helped
I created a simple SaltStack cluster with a master and a minion. Then I manually added a custom grain to the minion in the file /etc/salt/grains:
mykey: hello-key
I could see this key when running salt '*' grains.items on the master:
...
localhost:
    ip-172-31-24-109.us-west-2.compute.internal
lsb_distrib_codename:
    CentOS Linux 7 (Core)
lsb_distrib_id:
    CentOS Linux
machine_id:
    b30d0f2110ac3807b210c19ede3ce88f
manufacturer:
    Xen
master:
    ec2-54-186-104-181.us-west-2.compute.amazonaws.com
mdadm:
mem_total:
    15883
mykey:
    hello-key
...
Now the weird part: when I try to target this minion through my custom grain, it doesn't work, while every other way works!
[root@ip-172-31-28-130 ~]# salt '*' saltutil.refresh_modules
ip-172-31-24-109.us-west-2.compute.internal:
True
[root@ip-172-31-28-130 ~]# salt '*' test.ping
ip-172-31-24-109.us-west-2.compute.internal:
True
[root@ip-172-31-28-130 ~]# salt -G 'mem_total:*' test.ping
ip-172-31-24-109.us-west-2.compute.internal:
True
[root@ip-172-31-28-130 ~]# salt -G 'mykey:hello-key' test.ping
ip-172-31-24-109.us-west-2.compute.internal:
Minion did not return. [No response]
Does anyone have any ideas or suggestions?
It seems that you really have a connectivity problem, not a targeting one.
Your targeting is OK; if it were not, you would get a message like:
$ salt 'minion' test.ping
No minions matched the target. No command was sent, no jid was assigned.
ERROR: No return received
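If it is a connectivity or timing issue, a couple of standard Salt commands can help narrow it down (the 60-second timeout and the <jid> placeholder are only illustrative):
salt -v -t 60 -G 'mykey:hello-key' test.ping   # verbose run with a longer timeout; prints the job id (jid)
salt-run jobs.lookup_jid <jid>                 # check whether the minion eventually returned for that jid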
First the facts:
Debian GNU/Linux 8.6 (jessie)
salt-master 2016.3.3 (Boron)
salt-minion 2016.3.3 (Boron)
I tried to use nodegroups as described here.
nodegroups:
  web: 'salt-master1,salt-master2'
If I run ...
salt -N web test.ping
... it results in:
No minions matched the target. No command was sent, no jid was assigned.
ERROR: No return received
Changed my nodegroup to:
nodegroups:
  web: 'salt-master1'
Voila ...
salt-minion1:
True
I also tried the other notations for defining a nodegroup described in the linked documentation.
How do I get it to work with more than one host?
I recognized my mistake.
I did not realize that the L@ notation is needed for an explicit list of hosts.
Solution for anyone interested:
nodegroups:
  web: 'L@salt-master1,salt-master2'
results in:
salt-minion1:
True
salt-minion2:
True
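For reference, 2015.8.0 and later (so including the 2016.3.3 used here) also accept a nodegroup defined as a YAML list of matchers, which avoids the L@ prefix for this simple case; this is a sketch based on the nodegroup documentation, so verify it against your version:
nodegroups:
  web:
    - salt-master1
    - salt-master2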
I am new to salt-ssh and I have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address. When I try to run salt-ssh commands, I have to fight with the initial login process before eventually it just works. I am trying to narrow down what is causing me to have to fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's ssh key to the root user's authorized_keys on the vagrant vm. I have verified that I can log into the system using ssh without any issues:
sudo ssh root@192.168.33.10
Here are what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output:
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the vagrant vm has the ssh key for the user I am executing salt-ssh as, why am I being told that permission is denied, especially when I have verified I can ssh into the system without using salt-ssh?
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to set, since an ssh key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set priv in my roster to the RSA key that I use to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
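For reference, a sketch of the resulting roster entry (the same values as in the question; adjust the key path to your own):
managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa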
I am a total beginner with SaltStack, but I have managed to set up some states on a machine and run them on a minion.
What I have right now is one Debian machine set up as a salt-master and another Debian machine set up as a salt-minion.
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
Is there a command I can run to apply the states on the master? (so far I was unable to find it)
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Thanks!
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
You can do that by following these steps (a command-level sketch follows the list):
Install salt-minion on your development machine
Edit /etc/salt/minion to point to your master (vi /etc/salt/minion and change master: salt to master: 127.0.0.1)
(optional) Edit /etc/salt/minion_id to something that is meaningful to you
Start up your salt-minion
Use salt-key to accept your minion's key
Use your salt-master to control your minion as if it were any other salt-minion
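A minimal command-level sketch of those steps on a Debian/Ubuntu master (the package name, the dev-master minion id, and the service command are assumptions; adjust them for your setup):
sudo apt-get install salt-minion                                     # install the minion
sudo sed -i 's/^#*\s*master:.*/master: 127.0.0.1/' /etc/salt/minion  # point it at the local master
echo 'dev-master' | sudo tee /etc/salt/minion_id                     # optional: pick a meaningful id
sudo service salt-minion restart                                     # start the minion
sudo salt-key -a dev-master                                          # accept its key on the master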
Is there a command I can run to apply the states on the master?
The salt-master doesn't really run the state files; the salt-minions do. If you followed the above steps, you can target your salt-master to run highstate with the following command:
salt 'the_value_of_/etc/salt/minion_id' state.highstate
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Yup. I think you have an idea as to what you need to do and just need a push in the right direction.
Install both a Minion and a Master on a single node
I call such a node a Master Minion. No steps provided; you already know them based on the question.
Some conceptual info instead:
In short, the Master never applies states. Instead, it triggers Minions (the local Master Minion in this case).
Salt Minion and Master are two separate services with independent runtime and configuration.
While both instances use common software, at runtime they talk over the network (location-independent).
If you can apply states on a remote Minion, the same mechanism is used for the local Minion as well.
Additional info
There are two ways to apply states:
Master-side salt command to "push" states to multiple remote minions.
rpm -qf $(which salt)
salt-master-2015.5.3-4.fc22.noarch
Minion-side salt-call command to "pull" states on a single local minion.
rpm -qf $(which salt-call)
salt-minion-2015.5.3-4.fc22.noarch
As long as only one minion is involved, it's better to use salt-call for the same effect:
salt-call state.highstate
Minion-side salt-call provides advantages, especially for testing, isolation, and troubleshooting:
It makes network issues (if any) more obvious.
It safely applies states only on the single local minion (there is no way to specify more than one).
It shows debug output directly in the local terminal:
salt-call -l debug test.ping
One last point: salt-call --local can also be used in a masterless setup, with no network at all.
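For example, a masterless highstate run (assuming your state tree lives under the default /srv/salt):
salt-call --local state.highstate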
Now it's near the end of 2015. Let's review some more possibilities for salt master self-control:
Install a minion alongside the salt master on the same box
This one has been widely discussed in the two answers above.
Use salt-ssh + salt-run state.orchestrate
Setup steps:
Step 1: install salt-ssh
Step 2: modify the roster file (e.g. /etc/salt/roster on CentOS 6). The default installation already provides some examples. Since you probably ssh into the salt master anyway, the username / password / private key setup should not be a problem for you. For example, to control the salt master vagrant box, this sample should do:
localhost:
  host: 127.0.0.1
  user: vagrant
  passwd: vagrant
  sudo: True
Now, steal from the official tutorial with a little twist:
# /srv/salt/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: 'localhost'
    - ssh: 'true'
    - arg:
      - touch /tmp/test.txt
And run it with:
salt-run state.orchestrate orch.cleanfoo
Check the /tmp directory of your salt master vagrant box to see whether test.txt is there.
This approach should also work for states. Either way you need to install something. I prefer the second way since, in general, having the salt master control itself (to provision some work) is just a step before I actually call minions to process other states.