How can I configure custom grains for a Windows VM? I tried a few things, as shown below, but none of them work. It works fine for Linux servers: I have many Linux servers with custom grains in a separate file. It just doesn't work for Windows servers.
I tried configuring custom grains in a separate file, and it's not working.
c:\salt\grains:
environment: production
I also tried setting the grains in the minion config and restarted salt-minion, yet I can't see the custom grains when I run salt 'wintest-001' grains.items on the master server.
C:\salt\minion:
hash_type: sha256
id: wintest-001
ipc_mode: tcp
log_level: info
master: 10.10.10.10
master_finger: 51:81:19:f0:dd:30:35:4c:8c:26:5d:c7:a6:0d
multiprocessing: false
pki_dir: /conf/pki/minion
root_dir: c:\salt
grains:
  environment: production
Any idea what the proper way is to define custom grains for Windows?
This is a known bug in 3005.x onedir Windows installations.
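If you hit that bug, one workaround sketch (an assumption, not confirmed against the bug report; it relies on the minion still answering master commands, which your grains.items run suggests it does) is to set the grain remotely with the grains.setval execution function, which writes it into whichever grains file the minion actually reads:
salt 'wintest-001' grains.setval environment production
salt 'wintest-001' grains.item environment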
I have a virtual machine that is supposed to be the host, which can receive and send data. The first picture is the error that I'm getting on my main machine (from which I'm trying to send data). The second picture is the mosquitto log on my virtual machine. Also, I'm using the default config, which as far as I know can't cause these problems, at least from what I have seen in other examples. I have very little understanding of how all of this works, so any help is appreciated.
What I have tried on the host machine:
Disabling Windows defender
Adding firewall rules for "mosquitto.exe"
Installing mosquitto on a Linux machine
Starting with the release of Mosquitto version 2.0.0 (you are running v2.0.2), the default config will only bind to localhost as a move to a more secure default posture.
If you want to be able to access the broker from other machines you will need to explicitly edit the config files to either add a new listener that binds to the external IP address (or 0.0.0.0) or add a bind entry for the default listener.
By default it will also only allow anonymous connections (without username/password) from localhost; to allow anonymous connections from remote clients, add:
allow_anonymous true
More details can be found in the 2.0 release notes here
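A minimal config sketch putting both changes together (the 0.0.0.0 bind is just one option; you can bind to a specific external address instead):
listener 1883 0.0.0.0
allow_anonymous true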
You have to run with
mosquitto -c mosquitto.conf
mosquitto.conf, which lives in the same folder as the executable (C:\Program Files\mosquitto etc.), has to include the following line:
listener 1883 ip_address_of_the_machine(192.168.1.1 etc.)
By default, the Mosquitto broker will only accept connections from clients on the local machine (the server hosting the broker).
Therefore, a custom configuration needs to be used with your instance of Mosquitto in order to accept connections from remote clients.
On your Windows machine, run a text editor as administrator and paste the following text:
listener 1883
allow_anonymous true
This creates a listener on port 1883 and allows anonymous connections. By default the number of connections is infinite. Save the file to "C:\Program Files\Mosquitto" using a file name with the ".conf" extension such as "your_conf_file.conf".
Open a terminal window and navigate to the mosquitto directory. Run the following command:
mosquitto -v -c your_conf_file.conf
where
-c : specify the broker config file.
-v : verbose mode - enable all logging types. This overrides any logging options given in the config file.
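To confirm the broker now accepts remote clients, a quick smoke test (192.168.1.1 stands in for the broker's address; run the subscriber first and leave it waiting):
mosquitto_sub -h 192.168.1.1 -t test/topic
mosquitto_pub -h 192.168.1.1 -t test/topic -m hello
The subscriber should print hello.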
I found I had to add not only bind_address ip_address but also allow_anonymous true before devices could connect successfully to MQTT. Of course I understand that a better option would be to set a user and password on each device, but that's a next step after everything actually works in the minimum configuration.
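For reference, that minimal pair in current 2.x syntax looks like the following sketch (the address is a placeholder for your broker's LAN IP; the listener form replaces the older bind_address option):
listener 1883 192.168.1.50
allow_anonymous true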
For those who use mosquitto with homebrew on Mac.
Adding these two lines to /opt/homebrew/Cellar/mosquitto/2.0.15/etc/mosquitto/mosquitto.conf fixed my issue.
allow_anonymous true
listener 1883
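Then restart the service so it picks up the edited config:
brew services restart mosquitto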
You can run it with the included 'no-auth' config file like so:
mosquitto -c /mosquitto-no-auth.conf
I had the same problem while running it inside a Docker container (generated with docker-compose).
In the docker-compose.yml file this is done with:
command: mosquitto -c /mosquitto-no-auth.conf
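For context, a minimal compose service sketch (the image tag and port mapping are assumptions; the no-auth config file ships inside the official eclipse-mosquitto image):
services:
  mosquitto:
    image: eclipse-mosquitto:2
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - "1883:1883"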
I am trying to create a simple hosting platform for my clients. I am deploying all of my apps via Docker on a VPS behind nginx-proxy. For WordPress applications I want to be able to limit disk space so that my clients do not use too much and affect other applications. I bind mount all volumes to a single directory so that I can back up easily with cron.
I've changed the storage driver to overlay2 and am on CentOS 7.
[root@my-ip ~]# docker info
Server:
Containers: 12
Running: 12
Paused: 0
Stopped: 0
Images: 11
Server Version: 19.03.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
When I run a wordpress container with the --storage-opt size=10G I get the following error:
docker: Error response from daemon: --storage-opt is supported only for overlay over xfs with 'pquota' mount option.
This is an example of the bind mount I am using:
-v /DOCKER_VOLUMES/wordpress/appname/www/html:/var/www/html
How do I fix this? Can you please provide a full list of instructions to enable it?
From the docs:
This (size) will allow to set the container rootfs size to 120G at creation time. This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For the devicemapper, btrfs, windowsfilter and zfs graph drivers, user cannot pass a size less than the Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing fs is xfs and mounted with the pquota mount option. Under these conditions, user can pass any size less than the backing fs size.
So pquota should be enabled on your system.
you can edit the file /etc/default/grub like so, and restart your machine:
GRUB_CMDLINE_LINUX_DEFAULT="rootflags=uquota,pquota"
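Before rebooting, two follow-ups worth noting (assumptions based on a stock CentOS 7 install): the variable in that file is usually GRUB_CMDLINE_LINUX rather than GRUB_CMDLINE_LINUX_DEFAULT, and the grub config has to be regenerated for the change to take effect:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# after reboot, confirm project quotas are active on the backing filesystem
mount | grep prjquota
If /var/lib/docker lives on a separate XFS mount rather than the root filesystem, adding pquota to that mount's options in /etc/fstab is the simpler route.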
and try to rerun your command with --storage-opt size=10G
Scenario:
salt-minion original version: salt-minion 2015.8.8.2 (Beryllium)
salt-minion updated version: salt-minion 2016.11.2 (Carbon)
running a state which uses grains.host breaks
Checks:
salt 'minion' grains.item host originally returned the hostname configured in /etc/hostname (e.g. minion)
After the update, it returns localhost
I tried restarting the minion (as I had to change the master URL anyway), and also tried the undocumented sanitized=True, which only hides the problem.
Any help is appreciated, couldn't find anything on the site.
I found recently that your settings in /etc/hosts sometimes do matter.
I had a set of hosts with FQDNs like host01.company.com. And despite the same settings, one of them was having the same issues you describe. I went to /etc/hosts and removed a record like
127.0.0.1 host01 host01.company.com
And after this and a minion restart, everything was fine immediately.
Looks like there is some kind of reverse-lookup happening to set some grains.
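For reference, a layout like the following sketch avoided the problem for me (the 10.x address is a placeholder for the host's real IP):
127.0.0.1 localhost
10.0.0.5 host01.company.com host01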
Hope this helps.
I am able to do the following:
install salt master, minion (using root user)
log in to the master machine and execute a salt command to install Java / Tomcat onto the minion server
Result: Java/Tomcat is installed via the root user
What I want to do is
install Java / Tomcat on the minion server as user 'tomcatuser'
As per my understanding, the only way of doing this is to install my minion via tomcatuser.
Is my understanding correct ?
Any other way ?
I think you are mixing up the SaltStack controller and how it controls the application configuration.
For the salt master and minion to communicate, you need to start both services as root, to control most of the configuration process. From there on, you can specify the user and group for application deployment inside your sls configuration.
Now, coming to your Tomcat/Java/whatever package: you can use the state configuration to specify your own user and group for the configuration files and even the startup (with some modification), e.g.
Deploy foo configuration:
  file.managed:
    - name: /etc/foo.conf
    - source:
      - salt://foo.conf
    - user: foo
    - group: users
    - mode: 644
Then to start up your Tomcat, you can do something similar by using a crontab and specifying the user you want (as long as it does not need to bind a service port below 1024). Or you can check whether salt.states.tomcat is helpful to start the services: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.tomcat.html
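For the startup itself, a small sketch of an alternative (the path and user name are assumptions; cmd.run's runas argument runs the command as that user):
start-tomcat:
  cmd.run:
    - name: /opt/tomcat/bin/startup.sh
    - runas: tomcatuser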
I am a total beginner with SaltStack but I have managed to setup some states on a machine and run them on a minion.
What I have right now is a Debian machine setup with salt-master as well as another Debian setup as salt-minion.
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
Is there a command I can run to apply the states on the master? (so far I was unable to find it)
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Thanks!
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
You can do that with the following steps:
Install salt-minion on your development machine
Edit /etc/salt/minion to point to your master (vi /etc/salt/minion and change master: salt to master: 127.0.0.1)
(optional) Edit /etc/salt/minion_id to something that is meaningful to you
Start up your salt-minion
Use salt-key to accept your minion's key
Use your salt-master to control your minion as if it were any other salt-minion
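Condensed into commands, the steps above look roughly like this (Debian package names and systemd are assumed):
apt-get install salt-minion
# point the minion at the local master (the default 'master: salt' line is commented out)
echo 'master: 127.0.0.1' >> /etc/salt/minion
systemctl restart salt-minion
# on the master, list and accept the new minion's key
salt-key -L
salt-key -a <minion_id>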
Is there a command I can run to apply the states on the master?
The salt-master doesn't really run the state files, the salt-minions do. If you followed the above steps then you can target your salt-master to run highstate with the following command:
salt 'the_value_of_/etc/salt/minion_id' state.highstate
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Yup. I think you have an idea as to what you need to do and just need a push in the right direction.
Install both Minion and Master on single node
I call such a node a Master Minion. No steps provided - you already know them based on the question.
Some conceptual info instead:
In short, Master never applies states. Instead, it triggers Minions (the local Master Minion in this case).
Salt Minion and Master are two separate services with independent runtime & configuration.
While both instances use common software, at runtime they talk over the network (location-independent).
If you can apply states on a remote Minion, the same mechanism will work for the local Minion as well.
Additional info
There are two ways to apply states:
Master-side salt command to "push" states to multiple remote minions.
rpm -qf $(which salt)
salt-master-2015.5.3-4.fc22.noarch
Minion-side salt-call command to "pull" states on single local minion.
rpm -qf $(which salt-call)
salt-minion-2015.5.3-4.fc22.noarch
As long as only one minion is involved, it's simpler to use salt-call for the same effect:
salt-call state.highstate
Minion-side salt-call provides advantages, especially for testing, isolation, and troubleshooting:
It makes network issues (if any) more obvious.
It safely applies states only on the single local minion (there is no way to specify more than one).
It shows debug output directly in the local terminal:
salt-call -l debug test.ping
On the last point: salt-call --local can also be used in a masterless setup with no network at all.
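A minimal masterless invocation, assuming your states live under the default /srv/salt file_roots:
salt-call --local state.highstate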
Now it's near the end of 2015. Let's review some more possibilities for salt master self-control:
Install a minion alongside the salt master on the same box
This one has been widely discussed in the two answers above.
Use salt-ssh + salt-run state.orchestrate
Setup steps:
Step 1: install salt-ssh
Step 2: modify the roster file (e.g. /etc/salt/roster on CentOS 6). The default installation already provides some examples. Since you probably ssh into the salt master anyway, the username / password / private key setup should not be a problem for you. For example, to control a salt master vagrant box, this sample should do:
localhost:
  host: 127.0.0.1
  user: vagrant
  passwd: vagrant
  sudo: True
Now, steal from the official tutorial with a little twist:
# /srv/salt/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: 'localhost'
    - ssh: 'true'
    - arg:
      - touch /tmp/test.txt
And run it with:
salt-run state.orchestrate orch.cleanfoo
Check your salt master vagrant box's /tmp directory to see whether test.txt is there.
This approach should also work for states. Either way you need to install something; I prefer the second way, since calling the salt master to control itself (to provision some work) is usually just a step before I actually call minions to process other states.
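A sketch of the state-applying variant using salt.state (the sls name foo is a placeholder for a state file under your file_roots):
# /srv/salt/orch/applyfoo.sls
apply_foo:
  salt.state:
    - tgt: 'localhost'
    - ssh: 'true'
    - sls:
      - foo
Run it the same way: salt-run state.orchestrate orch.applyfoo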