restrict commands that salt-minion is able to publish - salt-stack

I configured the salt-stack environment as below:
machine1 -> salt-master
machine2 -> salt-minion
machine3 -> salt-minion
This setup is working for me and I can publish, e.g., the command "ls -l /tmp/" from machine2 to machine3 with
salt-call publish.publish 'machine3' cmd.run 'ls -l /tmp/'
How is it possible to restrict the commands that can be published?
In the current setup it is possible to execute any command on machine3, which would be very risky. I looked in the salt-stack documentation but unfortunately didn't find any example of how to configure this.
SOLUTION:
On machine1, create the file /srv/salt/_modules/testModule.py
and insert some code like:
#!/usr/bin/python

def test():
    # __salt__ is injected by the Salt loader once the module is synced
    # to a minion, so this simply wraps cmd.run
    return __salt__['cmd.run']('ls -l /tmp/')
To distribute the new module to the minions, run:
salt '*' saltutil.sync_modules
On machine2, run:
salt-call publish.publish 'machine3' testModule.test
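To verify the module actually reached the minion, it can also be called directly from the master first (assuming the sync above succeeded):
salt 'machine3' testModule.test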

The peer configuration in the salt master config can limit what commands certain minions can publish, e.g.
peer:
  machine2:
    machine1:
      - test.*
      - cmd.run
    machine3:
      - test.*
      - disk.usage
      - network.interfaces
This will allow minion machine2 to publish test.* and cmd.run commands to machine1, and test.*, disk.usage, and network.interfaces to machine3.
P.S. Allowing minions to publish the cmd.run command is generally not a good idea; it is included here only as an example.
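For example, with the peer configuration above, machine2 can publish a whitelisted function to machine3, while a non-whitelisted one simply returns nothing (a sketch; exact output depends on the Salt version):
salt-call publish.publish 'machine3' disk.usage               # allowed by the whitelist
salt-call publish.publish 'machine3' cmd.run 'ls -l /tmp/'    # not whitelisted for machine3, returns empty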

Related

cloud-init instance cloud config runcmd commands not executed in openstack

I am creating an instance in OpenStack with a hardened CentOS 8 image. The configuration script is as follows:
#cloud-config
users:
  - name: clouduser
    password: password
    sudo: ['ALL=(ALL) ALL']
    groups: sudo
    shell: /bin/bash
    ssh_pwauth: True
    lock_passwd: False
    plain_text_passwd: password
runcmd:
  - mkdir /run/test
Here the user is created and I am able to log in to the instance, but the commands in runcmd are not executed. Even though /var/log/cloud-init.log shows runcmd ran successfully, no folder is created in /run/, and /etc/cloud/cloud.cfg is unchanged (the runcmd module in cloud-config and scripts-user in cloud-final are there and executed successfully), yet no commands actually ran. The same commands work fine if I run them inside the instance. Commands in bootcmd also work, but not in runcmd. I can't figure out why it's not being executed.

How do you access Airflow Web Interface?

Hi, I am taking a DataCamp class on how to use Airflow, and it shows how to create DAGs once you have access to an Airflow web interface.
Is there an easy way to create an account in the Airflow web interface? I am very lost on how to do this. Or is this an enterprise tool where they provide you access once you pay?
You must do this in the terminal. Run these commands:
export AIRFLOW_HOME=~/airflow
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
airflow standalone
In its output you will see the username and password that were generated.
Then open your browser and go to:
http://localhost:8080
and enter that username and password.
Airflow also has a web interface by default, and the default username/password is airflow/airflow.
You can run it using:
airflow webserver --port 8080
then open the link : http://localhost:8080
If you want to create a new user, use this command:
airflow create_user [-h] [-r ROLE] [-u USERNAME] [-e EMAIL] [-f FIRSTNAME]
[-l LASTNAME] [-p PASSWORD] [--use_random_password]
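Note that the create_user form above is the Airflow 1.x CLI. If you installed Airflow 2.x (as in the answer above), the equivalent is airflow users create; a sketch with placeholder values (it prompts for a password if -p is omitted):
airflow users create \
    --username admin \
    --firstname First \
    --lastname Last \
    --role Admin \
    --email admin@example.com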
learn more about Running Airflow locally
You should install it; it is a Python package, not a website to register on.
The easiest way to install Airflow is:
pip install apache-airflow
If you need extra packages with it:
pip install 'apache-airflow[postgres,gcp]'
Finally, run the webserver and the scheduler in separate terminals:
airflow webserver   # listens on port 8080 by default
airflow scheduler
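On a fresh install you typically also need to initialize Airflow's metadata database once before starting the webserver (Airflow 2.x; on 1.x the command was airflow initdb):
airflow db init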

How to seamlessly rename a minion?

I am considering moving to Salt (currently using Ansible) to manage a set of standalone IoT devices (Raspberry Pis, in practical terms).
The devices would be installed with a generic image, on top of which I would add the installation of salt (client side) as well as a configuration file pointing to the salt-master, which is going to serve state files to be consumed by the minions.
The state files include an HTTP query for a name, which is then applied to the device (as its hostname). The obvious problem is that at that stage the minion has already registered with salt-master under the previous (generic) name.
How to handle such a situation? Specifically: how do I propagate the new hostname to salt-master? (Just changing the hostname and rebooting did not help; I assume the hostname is bundled, on the server, with the ID of the minion.)
The more general question would be whether salt is the right product for such a situation (where setting the state of the minion changes its name, among other things).
Your Minion ID is based on the hostname during the installation. When you change the hostname after you installed the salt-minion, the Minion ID will not change.
The Minion ID is specified in /etc/salt/minion_id. When you change the Minion ID:
The Minion will identify itself with the new ID to the Master and stop listening to the old ID.
The Master will detect the new Minion ID as a new Minion and show a new key under Unaccepted Keys.
After accepting the key on the Master you will be able to use the Minion with the new key only. The old key is still accepted on the Master but doesn't work anymore.
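On the master, that sequence looks something like this (a sketch; oldid and newid are placeholders):
salt-key -L            # the new ID shows up under Unaccepted Keys
salt-key -a newid -y   # accept the new key
salt-key -d oldid -y   # remove the stale old key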
I can come up with two solutions for your situation:
Use salt-ssh to provision your minions. The Master will connect to your Raspberry Pi using SSH. It will set up the correct hostname, then install and configure salt-minion. After this is finished, your minion will connect to the master with the correct ID. But this requires the Master to know when and where a minion is available...
You mentioned the state where the hostname is set. Change the Minion ID and restart the minion service in the same state. This will change the Minion ID but you need to accept the new key afterwards. Note that the minion will never report a state as successfully finished when you restart the salt-minion service in it.
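A minimal sketch of such a state (newname is a placeholder; in practice it would come from your HTTP query, e.g. via pillar):
set minion id:
  file.managed:
    - name: /etc/salt/minion_id
    - contents: newname

restart minion:
  service.running:
    - name: salt-minion
    - watch:
      - file: set minion id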
Here is a short script to change the hostname/minion_id. It should also work well for batch jobs. Simply call the script like so: sudo ./change-minion-id oldminionid newminionid
change-minion-id:
#!/bin/bash
# Usage: sudo ./change-minion-id oldminionid newminionid
echo $2
salt "$1" cmd.run "echo $2 > /etc/hostname && hostname $2 && hostname > /etc/salt/minion_id" && \
salt "$1" service.restart "salt-minion" && \
salt-key -d "$1" -y && salt-key -a "$2" -y
My answer is a direct rip-off of user deput_d's. I modified it a bit for what I needed.
Even my code will return a Minion did not return. [No response] error (due to the salt-minion restart), but this error should be ignored; just let it run. I wait 40 seconds to be sure that the minion has re-connected:
#!/bin/bash
echo "salt rename $1->$2"
salt "$1" cmd.run "echo $2 > /etc/hostname && hostname $2 && hostname > /etc/salt/minion_id" && \
salt "$1" cmd.run "/etc/init.d/salt-minion restart &" || true
salt-key -d "$1" -y && echo "Will sleep for 40 seconds while minion reconnects" && sleep 40 && salt-key -a "$2" -y

how to run a shell script on remote using salt-ssh

My web server generates a shell script with more than 100 lines of code based on complex user selections. I need to orchestrate this over several machines using salt-ssh. What I need is to copy this shell script to the remote machines and execute it there, for all devices. How can I achieve this with salt-ssh? I cannot install minions on the remote devices.
Just as with a normal minion: write a state...
add script:
  file.managed:
    - name: /tmp/file.sh
    - source: salt://path/to/file.sh
    - mode: 755

run script:
  cmd.run:
    - name: /tmp/file.sh
...and apply it (state.apply takes the state name without the .sls extension):
salt-ssh 'minions*' state.apply mystate
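Note that salt-ssh targets come from a roster file rather than accepted minion keys; a minimal /etc/salt/roster sketch (hosts and credentials are placeholders):
minion1:
  host: 192.0.2.10
  user: root
minion2:
  host: 192.0.2.11
  user: root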

Salt minion returns no response after being accepted

After setting up the salt-master and one minion, I am able to accept the key on the master. Running sudo salt-key -L shows that it is accepted. However, when I try the test.ping command, the master shows:
Minion did not return. [No response]
On the master, the log shows:
[ERROR ][1642] Authentication attempt from minion-01 failed, the public key in pending did not match. This may be an attempt to compromise the Salt cluster.
On the minion, the log shows:
[ERROR ][1113] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
I have tried disconnecting and reconnecting, including rebooting both boxes in between.
Minion did not return. [No response]
This makes me think the salt-minion process is not running. (Those other two errors are expected behavior until you accept the key on the master.)
Check if salt-minion is running with (depending on your OS) something like
$ systemctl status salt-minion
or
$ service salt-minion status
If it was not running, start it and try your test again.
$ sudo systemctl start salt-minion
or
$ sudo service salt-minion start
Depending on your installation method, the salt-minion may not have been registered to start upon system boot, and you may run into this issue again after a reboot.
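On systemd-based systems you can register it to start at boot with:
$ sudo systemctl enable salt-minion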
Now, if your salt-minion was in fact running, and you are still getting No response, I would stop the process and restart the minion in debug so you can watch.
$ sudo systemctl stop salt-minion
$ sudo salt-minion -l debug
Another quick test you can run to test communication between your minion and master is to execute your test from the minion:
$ sudo salt-call test.ping
salt-call does not need the salt-minion process to be running to work. It fetches the states from the master and executes them ad hoc. So, if that works and returns:
local:
    True
you can eliminate the handshake between minion and master as the issue.
I just hit this problem when moving the salt master to a new server; to fix it I had to do these things in this order (Debian 9):
root@minion:~# service salt-minion stop
root@master:~# salt-key -d minion
root@minion:~# rm /etc/salt/pki/minion/minion_master.pub
root@minion:~# service salt-minion start
root@master:~# salt-key -a minion
Please note which server (minion or master) each command above is run on.
If you are confident that you are connecting to a valid Salt Master, then
remove the master public key and restart the Salt Minion.
The master public key can be found at:
/etc/salt/pki/minion/minion_master.pub
The minion's public key on the master is stored under:
/etc/salt/pki/master/minions
Just compare it with the minion's own public key (under /etc/salt/pki/minion/minion.pub).
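A quick way to compare the two is by checksum (a sketch; replace <minion-id> with the minion's name):
md5sum /etc/salt/pki/master/minions/<minion-id>   # on the master
md5sum /etc/salt/pki/minion/minion.pub            # on the minion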
If it is not the same, execute
salt-key -d '*'
to delete the minion's public key from the master (note: '*' matches all minions; quote it so the shell doesn't expand it, or pass a specific minion ID instead),
then execute service salt-minion restart to restart the salt-minion on the minion client.
After this, the Master can control the minion.
I got the same error message.
I changed user: root to user: ubuntu in /etc/salt/minion.
I stopped salt-minion and ran salt-minion -l debug as the ubuntu user; the salt master could get the salt-minion's response.
But when I ran salt-minion with systemctl start salt-minion, the salt master got the error again (no response).
When I ran salt-minion as the root user with systemctl start salt-minion, it worked.
I don't know if it is a bug.
I had this exact same problem when I was moving minions from one master to another. Here are the steps I took to resolve it.
Remove the salt-master and salt-minion on the master server.
rm -rf /etc/salt on the master host
Save your minion config if it has any defined variables
Remove salt-minion on the minion hosts
rm -rf /etc/salt on all minion hosts
Reinstall salt-master and salt-minion
Start salt-master then salt-minions
My problem was solved after doing this. I know it is not really a solution, but it's likely a conflict between keys that is causing this issue.
Run salt-minion in debug mode to see if this is a handshake issue.
If so, adding the salt-master port (4506, or whatever is configured) to the public zone with firewall-cmd should help.
firewall-cmd --permanent --zone=public --add-port=4506/tcp
firewall-cmd --reload
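Note that the master uses two ports by default, 4505 (publisher) and 4506 (request server), so if 4505 is blocked as well, open it the same way:
firewall-cmd --permanent --zone=public --add-port=4505/tcp
firewall-cmd --reload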
Your salt-master keys on the minion seem to be invalid (maybe due to a master IP or name update).
Steps to troubleshoot:
From the minion, check if the master is reachable (a simple ping test).
Delete the old master key present on the minion (/etc/salt/pki/minion/minion_master.pub).
Try again from the master to ping the minion; new valid keys will be auto-populated.
