SaltStack: is it possible to apply states on the master and if so, how?

I am a total beginner with SaltStack but I have managed to setup some states on a machine and run them on a minion.
What I have right now is a Debian machine setup with salt-master as well as another Debian setup as salt-minion.
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
Is there a command I can run to apply the states on the master? (so far I was unable to find it)
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Thanks!

Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
You can do that with the following steps (a command sketch follows the list):
Install salt-minion on your development machine
Edit /etc/salt/minion to point to your master (open /etc/salt/minion and change master: salt to master: 127.0.0.1)
(optional) Edit /etc/salt/minion_id to something that is meaningful to you
Start up your salt-minion
Use salt-key to accept your minion's key
Use your salt-master to control your minion as if it were any other salt-minion
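For reference, here is a minimal command sketch of those steps on a Debian master; the minion id saltmaster-dev is just an example name, and package/service names are assumed from the standard Debian packaging:

apt-get install salt-minion                                      # install the minion next to the master
sed -i 's/^#\?master:.*/master: 127.0.0.1/' /etc/salt/minion     # point the minion at the local master
echo 'saltmaster-dev' > /etc/salt/minion_id                      # optional: give it a meaningful id
service salt-minion restart                                      # (re)start the minion
salt-key -a saltmaster-dev                                       # on the master, accept the new key
salt 'saltmaster-dev' test.ping                                  # verify the master can reach it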
Is there a command I can run to apply the states on the master?
The salt-master doesn't really run the state files; the salt-minions do. If you followed the above steps then you can target your salt-master to run highstate with the following command:
salt 'the_value_of_/etc/salt/minion_id' state.highstate
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Yup. I think you have an idea as to what you need to do and just need a push in the right direction.

Install both Minion and Master on single node
I call such a node a Master Minion. No steps provided - you already know them based on the question.
Some conceptual info instead:
In short, Master never applies states. Instead, it triggers Minions (the local Master Minion in this case).
Salt Minion and Master are two separate services with independent runtime & configuration.
While both instances use the same software, at runtime they talk to each other over the network (so they are location-independent).
If you can apply states on a remote Minion, the same mechanism is used for a local Minion as well.
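For instance, on a combined box you can see the two services running side by side (assuming systemd; on older init systems use service salt-master status and service salt-minion status instead):

systemctl status salt-master    # the master service
systemctl status salt-minion    # the minion service, configured to talk to 127.0.0.1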
Additional info
There are two ways to apply states:
Master-side salt command to "push" states to multiple remote minions.
rpm -qf $(which salt)
salt-master-2015.5.3-4.fc22.noarch
Minion-side salt-call command to "pull" states on a single local minion.
rpm -qf $(which salt-call)
salt-minion-2015.5.3-4.fc22.noarch
As long as only one minion is involved, it's better to use salt-call for the same effect:
salt-call state.highstate
Minion-side salt-call provides advantages, especially for testing, isolation and troubleshooting:
It makes network issues (if any) more obvious.
It safely applies states only on the single local minion (there is no way to specify more than one).
It shows debug output directly in the local terminal:
salt-call -l debug test.ping
On the last point: salt-call --local can also be used in a masterless setup, using no network at all.
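For example, a quick masterless check (assuming your states live under the local file_roots) looks like this:

salt-call --local test.ping
salt-call --local state.highstate    # applies states without contacting any master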

Now it's near the end of 2015. Let's review some more possibilities for salt master self-control:
Install a minion alongside the salt master on the same box
This one has been discussed at length in the two answers above.
Use salt-ssh + salt-run state.orchestrate
Setup steps:
Step 1: install salt-ssh
Step 2: modify the roster file (e.g. /etc/salt/roster in CentOS 6). The default installation already provides some examples. Since you probably already SSH into the salt master, the username / password / private key setup should not be a problem for you. For example, to control a salt master Vagrant box, this sample should do:
localhost:
  host: 127.0.0.1
  user: vagrant
  passwd: vagrant
  sudo: True
Now, borrow from the official tutorial with a little twist:
# /srv/salt/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: 'localhost'
    - ssh: 'true'
    - arg:
      - touch /tmp/test.txt
And run it with:
salt-run state.orchestrate orch.cleanfoo
Check the /tmp directory on your salt master Vagrant box to see if the test.txt file is there.
This approach should also work for salt.state. Either way you need to install something extra. I prefer the second way since, in general, having the salt master control itself (to provision some work) is just a step before I actually call the minions to process other state(s).
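For instance, a hedged sketch of the salt.state variant (the orchestration file name and the foo sls are placeholders) could look like this, run again with salt-run state.orchestrate orch.applyfoo:

# /srv/salt/orch/applyfoo.sls
apply_foo_over_ssh:
  salt.state:
    - tgt: 'localhost'
    - ssh: 'true'
    - sls:
      - foo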

Related

Ubuntu (Oracle VM) - Mounted Samba shares hang indefinitely

I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS through the ZeroTier tunnel, but I can't make it work.
I have used mount and mount.cifs plenty of times, with automounting too; this time it acts very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
if trying to access one of the shares (with ls or lsof or cd or any other command), it succeeds for only one of the shares (always the same one), but only the first time any command is given:
$ ls /temp
folder1 folder2 folder3
any other following command just "hangs" as if the system is working on something, but it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
for the remaining two "mounted" shares, neither responds to any command, not even the very first one; they just hang like the share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares in another client / node within the ZeroTier network and it works just fine; also browsing from Windows and Android file manager apps with and without ZeroTier works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related to this in logs, htop or anything else. During the "hanging" sessions there is no spike in CPU, RAM or network usage on either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively and reconfigured users groups permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the share correctly, as it doesn't show as such anywhere. It also seems to be an issue not related to folder/file permissions, but rather something related to networking?
A note on something that may or may not be related to this: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it remains OFFLINE. The node goes ONLINE only when IPv6 is enabled, but I must say that this is the only node in my ZT network that shows an IPv6 as its public IP in the ZT web GUI - the other nodes show IPv4 public addresses.
If anyone has any clue on this, I'll be happy to support and reproduce any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add the port 445 to /etc/iptables/rules.v4 just under the SSH setup like below:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT (like this)
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different. Use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's, such as 100.0.0.0/24, which is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
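If you'd rather not reboot, a hedged sketch of applying the same changes immediately (assuming iptables-persistent and systemd are in place) would be:

sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT   # open SMB now
sudo netfilter-persistent save                                                    # persist to /etc/iptables/rules.v4
sudo systemctl restart smbd                                                       # pick up the new interfaces line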

Camera (/dev/video0) dependencies in systemd service Ubuntu 16.04

I need to run some services on boot up which I have successfully accomplished using systemd services. (Lots of answers already available).
Now, one of my services requires access to /dev/video0 at boot, when a certain user is logged in (I am doing auto-login, which is working fine).
So how do I check whether /dev/video0 is available before starting my systemd service at boot?
I came across something called udev for doing this and followed this link,
but I am not getting the desired output: after editing the /lib/udev/rules.d/99-systemd.rules file as mentioned in the link and starting my service manually, it does not start. Any help is appreciated.
Finally, after struggling for a day, I found the answer.
I made a unit file in /etc/systemd/system which contains:
[Unit]
Description='some description of my file write according to you'
[Service]
Type=forking
ExecStart='path to script'
[Install]
WantedBy=multi-user.target
and it executes a script which contains:
#!/bin/bash
modprobe uvcvideo
Now after rebooting, all the services are running properly.
The modprobe uvcvideo command loads the video driver at boot so that the camera device is available for my systemd process.
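As an alternative to loading the module from a script, a hedged sketch of expressing the camera dependency directly in the unit (assuming a udev rule that tags the device for systemd, so that dev-video0.device exists) could look like this:

# /etc/udev/rules.d/99-video0-systemd.rules  (hypothetical file name)
SUBSYSTEM=="video4linux", KERNEL=="video0", TAG+="systemd"

# relevant lines of the service unit
[Unit]
Description=Camera service
Requires=dev-video0.device
After=dev-video0.device
[Service]
ExecStart='path to script'
[Install]
WantedBy=multi-user.target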
Thanks

salt-master salt-cloud not acting idempotent

I am trying to test salt-cloud saltify to deploy/install salt-minions on target machines.
I created three Vagrant machines and named them master, minion-01 and minion-02.
All the machines were the same, like this:
root@master:/home/vagrant# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
Then on master I followed http://repo.saltstack.com/#ubuntu
to install salt-master (manually, of course).
Then on master I added these three files:
In /etc/salt/cloud.providers.d:
root@master:/etc/salt/cloud.providers.d# cat bare_metal.conf
my-saltify-config:
  minion:
    master: 192.168.33.10
  driver: saltify
In /etc/salt/cloud.profiles.d:
root@master:/etc/salt/cloud.profiles.d# cat saltify.conf
make_salty:
  provider: my-saltify-config
  script_args: git v2016.3.1
In /etc/salt/saltify-map:
root@master:/etc/salt# cat saltify-map
make_salty:
  - minion-01:
      ssh_host: 192.168.33.11
      ssh_username: vagrant
      password: vagrant
  - minion-02:
      ssh_host: 192.168.33.12
      ssh_username: vagrant
      password: vagrant
Then on master I ran salt-cloud -m /etc/salt/saltify-map.
It was very slow, but it ran without errors.
The keys of both minion-01 and minion-02 were accepted by the salt master.
I could do this:
root@master:/home/vagrant# salt 'minion*' test.ping
minion-01:
True
minion-02:
True
and this;
root@master:/home/vagrant# salt-key
Accepted Keys:
minion-01
minion-02
Denied Keys:
Unaccepted Keys:
Rejected Keys:
The Problem;
Now when I executed salt-cloud -m /etc/salt/saltify-map again,
it re-ran the whole provisioning and then I had this:
root@master:/home/vagrant# salt 'minion*' test.ping
minion-02:
Minion did not return. [No response]
minion-01:
Minion did not return. [No response]
and this;
root@master:/etc/salt# salt-key
Accepted Keys:
minion-01
minion-02
Denied Keys:
minion-01
minion-02
Unaccepted Keys:
Rejected Keys:
In short, salt-cloud is not acting idempotent.
What am I doing wrong?
The second problem: although on the first run salt-cloud -m /etc/salt/saltify-map installs salt-minion and accepts the keys of minion-01 and minion-02 on the salt master, the minion machines have all of these installed along with salt-minion:
root@minion-02:/home/vagrant# salt
salt salt-call salt-cp salt-master salt-proxy salt-ssh salt-unity
salt-api salt-cloud salt-key salt-minion salt-run salt-syndic
How do I make sure that only salt-minion gets installed?
Thanks.
PS:
root@master:/etc/salt# salt-master --version
salt-master 2016.3.1 (Boron)
You write: "It was very slow":
You have set script_args to values that install everything from source via GitHub. You might want to remove the parameters (or use different ones) to get a fast installation of a pre-packaged version. Please see https://github.com/saltstack/salt-bootstrap and specifically bootstrap-salt.sh for the available options.
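For example, a hedged tweak to the profile (the version string is only an illustration) that installs a pre-packaged release via the bootstrap script:

make_salty:
  provider: my-saltify-config
  script_args: stable 2016.3    # or drop script_args entirely to use the bootstrap default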
You write: "salt-cloud is not acting idempotent":
You're doing everything correctly. salt-cloud is not idempotent. As far as I know it is not designed to be idempotent.
You write: "the minion machines have all these things installed along with salt-minion"
That might be the case because of the git parameters, which install from source. Please try a pre-packaged version of Salt.
Vagrant doesn't destroy the machines between runs, does it?
By the looks of it, on the second run the salt minions have started with new keys and re-registered with the master. Because they've got the same names, it has all got confused.
Try "vagrant destroy" before running, so that the machines are fresh each time?
FWIW, this is also being tracked as a bug in Salt though at present, it cannot be duplicated: https://github.com/saltstack/salt/issues/34687
It appears it's a saltify bug, so nothing more can be done about this question for now. The bug has been reported and hopefully it will get solved in a future release.

Installing Java/Tomcat via SaltStack using non-root user on Ubuntu

I am able to do the following:
install salt master and minion (using the root user)
log in to the master machine and execute salt commands to install Java / Tomcat on the minion server
Result: Java / Tomcat is installed via the root user
What I want to do is:
install Java / Tomcat on the minion server as the user 'tomcatuser'
As per my understanding, the only way of doing this is to install my minion as tomcatuser.
Is my understanding correct?
Any other way?
I think you are mixing up the SaltStack controller and how it controls the application configuration.
For the salt master and minion to communicate, you need to start both services as root to control most of the configuration process. From there on, you can specify the user and group for application deployment inside your sls configuration.
Now, coming to your Tomcat/Java/whatever package, you can refer to the SaltStack configuration to specify your own user and group for the configuration and even the startup (with other modifications), e.g.:
Deploy foo configuration:
  file.managed:
    - name: /etc/foo.conf
    - source:
      - salt://foo.conf
    - user: foo
    - group: users
    - mode: 644
Then to start up your Tomcat you can do something similar by using a crontab and specifying the user you want (as long as it does not need to bind to a service port below 1024). Or you can check whether salt.states.tomcat is helpful for starting the service: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.tomcat.html
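As a rough illustration (the /opt/tomcat path is a placeholder, and depending on your Salt version the argument is runas or, on older releases, user), a state that starts Tomcat as tomcatuser might look like:

start_tomcat:
  cmd.run:
    - name: /opt/tomcat/bin/startup.sh
    - runas: tomcatuser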

Why is my Symfony 2.0 site running slowly on Vagrant with Linux host?

I've got a Symfony 2.0 application running using Vagrant with a Linux guest and host O/S (Ubuntu). However, it runs slowly (e.g. several seconds for a page to load, often more than 10s) and I can't work out why. My colleagues who are running the site locally rather than on Vagrant VM have it running faster.
I've read elsewhere that Vagrant VMs run very slowly if NFS isn't enabled, but I have enabled that. I'm also using the APC cache to try and speed things up, but still the problems remain.
I ran xdebug against my site using the instructions at http://webmozarts.com/2009/05/01/speedup-performance-profiling-for-your-symfony-app, but I'm not clear where to start with analysing the data from this. I've got as far as opening it in KCacheGrind and looking for the high numbers under "Incl." and "Self", but this has just shown that php::session_start takes quite a long time.
Any suggestions as to what I should be trying here? Sorry for the slightly broad question, but I'm stumped!
I've seen a similar problem on my OS X host - I had forgotten to enable NFS!
On a Windows host, the performance impact is less pronounced...
For my very small website, I quickly have 12649 files... So the 1000+ files limit is quite easily reached.
So my two cents: enable NFS like this in your Vagrantfile:
config.vm.share_folder "v-root", "/vagrant", ".." , :nfs => true
And from the experts:
It’s a long known issue that VirtualBox shared folder performance degrades quickly as the number of files in the shared folder increases. As a project reaches 1000+ files, doing simple things like running unit tests or even just running an app server can be many orders of magnitude slower than on a native filesystem (e.g. from 5 seconds to over 5 minutes).
If you’re seeing this sort of performance drop-off in your shared folders, NFS shared folders can offer a solution. Vagrant will orchestrate the configuration of the NFS server on the host and will mount of the folder on the guest for you.
Note: NFS is not supported on Windows hosts. According to VirtualBox, shared folders on Windows shouldn’t suffer the same performance penalties as on unix-based systems. If this is not true, feel free to use our support channels and maybe we can help you out.
Edit:
On Windows, I have found another solution: I use symlinks (ln -fs) on the vendor folders within my projects that link to non-shared folders. This reduces the number of files seen by the Windows host, the antivirus, etc.
Where I work, we've tried out two solutions to the problem of Vagrant + Symfony being slow. I recommend the second one (nfs and bind mounts).
The rsync approach
To begin with, we used rsync. Our approach was slightly different to that outlined in AdrienBrault's answer. Rather, we had code like the following in our Vagrantfile:
config.vm.define :myproj01 do |myproj|
  # Networking & Port Forwarding
  myproj.vm.network :private_network, type: "dhcp"

  # NFS Share
  myproj.vm.synced_folder ".", "/home/vagrant/current", type: 'rsync', rsync__exclude: [
    "/.git/",
    "/vendor/",
    "/app/cache/",
    "/app/logs/",
    "/app/uploads/",
    "/app/downloads/",
    "/app/bootstrap.php.cache",
    "/app/var",
    "/app/config/parameters.yml",
    "/composer.phar",
    "/web/bundles",
    "/web/uploads",
    "/bin/behat",
    "/bin/doctrine*",
    "/bin/phpunit",
    "/bin/webunit",
  ]

  # update VM sooner after files changed
  # see https://github.com/smerrill/vagrant-gatling-rsync#working-with-this-plugin
  config.gatling.latency = 0.5
end
As you might notice from the above, we kept files in sync using the Vagrant gatling rsync plugin.
The improved NFS approach, using bind mounts (recommended solution)
The rsync approach solves the speed issue, but we found some problems with it. In particular, the one-way nature of it (as opposed to sharing folders) was annoying when files (like composer.lock or Doctrine migrations) were generated on the VM, or when we wanted to access code in /vendor. We had to SFTP in to copy things back - and, in the case of new files, do it before they were cleared by the next run of the gatling plugin!
Therefore, we moved to a solution which uses bind mounts to handle folders like cache and logs differently. Not having those shared increased the speed dramatically.
The relevant bits of the Vagrantfile are as follows:
# Binding mounts for folders with dynamic data in them
# This must happen before provisioning, and on every subsequent reboot, hence run: "always"
config.vm.provision "shell",
inline: "/home/vagrant/current/bin/bind-mounts",
run: "always"
The bind-mounts script referenced above looks like this:
#!/bin/bash
mkdir -p ~vagrant/current/app/downloads/
mkdir -p ~vagrant/current/app/uploads/
mkdir -p ~vagrant/current/app/var/
mkdir -p ~vagrant/current/app/cache/
mkdir -p ~vagrant/current/app/logs/
mkdir -p ~vagrant/shared/app/downloads/
mkdir -p ~vagrant/shared/app/uploads/
mkdir -p ~vagrant/shared/app/var/
mkdir -p ~vagrant/shared/app/cache/
mkdir -p ~vagrant/shared/app/logs/
sudo mount -o bind ~vagrant/shared/app/downloads/ ~/current/app/downloads/
sudo mount -o bind ~vagrant/shared/app/uploads/ ~/current/app/uploads/
sudo mount -o bind ~vagrant/shared/app/var/ ~/current/app/var/
sudo mount -o bind ~vagrant/shared/app/cache/ ~/current/app/cache/
sudo mount -o bind ~vagrant/shared/app/logs/ ~/current/app/logs/
NFS + binding mounts is the approach I'd recommend.
ATM, basically, do not put your website code in the /vagrant shared folder.
As it's shared between your VM and the host OS, it's slower, and I didn't find any efficient solution to get good performance.
The solution we're using is to serve our development apps from the classic /var/www, and keep them in sync with our local copy using rsync.
Following the instructions in the article Speedup Symfony2 on Vagrant boxes helped me solve this issue, reducing page loads from 6-10 seconds to 1 second on my Symfony2 project. Basically, the whole fix is to set the sync type between the host and the guest (the Vagrant VM box) to NFS instead of using the VirtualBox shared folder system, which is very slow.
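For reference, a minimal sketch of an NFS synced folder in newer Vagrantfile syntax (the guest path and IP address are placeholders) could look like:

config.vm.synced_folder ".", "/var/www/project", type: "nfs"
config.vm.network "private_network", ip: "192.168.50.4"    # NFS in Vagrant requires a private network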
Also, adding the code below to AppKernel.php in the Symfony2 project changes the cache and log directories to the shared memory directory (/dev/shm) on the Vagrant box instead of writing them to the NFS share, which improves page load speed even further.
<?php
class AppKernel extends Kernel
{
    // ...

    public function getCacheDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/cache/' . $this->getEnvironment();
        }

        return parent::getCacheDir();
    }

    public function getLogDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/logs';
        }

        return parent::getLogDir();
    }
}
I use sshfs for sharing directories between the host OS and the VM (ExpanDrive for Windows).
It is much faster than native VirtualBox directory sharing.
