How to restart salt-minion daemon automatically after machine restart? - salt-stack

I've installed salt-minion on CentOS-7. I started the minion using its own user salt and the command salt-minion -d.
Once the machine was restarted, salt-minion was not started automatically.
Suggest a clean solution.

On CentOS 7, run the following command to ensure the salt-minion starts on boot:
systemctl enable salt-minion.service
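To start it right away as well and confirm it will come back after a reboot, a minimal sketch (assuming the minion was installed from packages that ship the systemd unit):
sudo systemctl enable --now salt-minion.service
sudo systemctl is-enabled salt-minion.service   # should print "enabled"
sudo systemctl status salt-minion.service
If the minion really must run as its own salt user instead of root, one option is a drop-in override (sudo systemctl edit salt-minion) setting User=salt under [Service]; the stock unit runs as root, so treat that override as something to test rather than the packaged default.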

Related

az acr login raises DOCKER_COMMAND_ERROR with message docker daemon not running

Windows 11 with WSL2 Ubuntu-22.04.
In Windows Terminal I open a PowerShell window and start WSL with the command:
wsl
Then I start the docker daemon in this window with the following command:
sudo dockerd
It prompts for the admin password, which I enter and then it starts the daemon.
Next I open a new PowerShell window in Windows Terminal, run wsl and run a container to verify everything is working. So far so good.
Now I want to login to Azure Container Registry with the following command:
az acr login -n {name_of_my_acr}
This returns the following error:
You may want to use 'az acr login -n {name_of_my_acr} --expose-token' to get an access token,
which does not require Docker to be installed.
An error occurred: DOCKER_COMMAND_ERROR
error during connect: This error may indicate that the docker daemon is not running.:
Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json":
open //./pipe/docker_engine: The system cannot find the file specified.
The error suggests the daemon is not running, but since I can run a container I assume the daemon is running; otherwise I would not be able to run a container either, right? What can I do to narrow down or resolve this issue?
Docker version info using docker -v command:
Docker version 20.10.12, build 20.10.12-0ubuntu4
An error occurred: DOCKER_COMMAND_ERROR error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json": open //./pipe/docker_engine: The system cannot find the file specified.
The above error can occur when Docker is disabled from starting on boot or at login.
The following suggestions may help:
Open PowerShell and run dockerd, which will start the daemon.
Run Docker Desktop as administrator and switch the daemon with the command below:
& "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
Check the version of WSL2; if it is old, that might be the problem, so download the latest WSL2 Linux kernel update package for x64 machines on Windows 11.
Reference:
Manual installation steps for older versions of WSL | Microsoft Docs
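If the Azure CLI here is the Windows install while the Docker daemon only runs inside WSL, another option, already hinted at in the error message, is to fetch a token and log in from inside WSL. A minimal sketch, with {name_of_my_acr} as a placeholder for your registry name and the documented all-zeros GUID username for ACR token login:
TOKEN=$(az acr login -n {name_of_my_acr} --expose-token --output tsv --query accessToken)
docker login {name_of_my_acr}.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< "$TOKEN"
Run this in the same WSL shell where dockerd is reachable; it avoids the Windows named pipe entirely.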

OpenVAS installation and running errors

I've installed Greenbone Security Assistant Version 9.0.1 (OpenVAS) following these instructions on my VirtualBox Ubuntu 20.04.
sudo apt install postgresql
sudo add-apt-repository ppa:mrazavi/gvm
sudo apt install gvm
greenbone-nvt-sync
sudo greenbone-scapdata-sync
sudo greenbone-certdata-sync
Unfortunately, it does not work.
When I try to create a task with the Wizard, the task completes in a moment with an empty log. And that's all.
I've tried three commands:
systemctl status ospd-openvas # scanner
systemctl status gvmd # manager
systemctl status gsad # web ui
Everything is okay, except ospd-openvas. The status is green and active, but there are some errors too:
Jul 20 15:00:27 alex-VirtualBox ospd-openvas[833]: OSPD - openvas:
ERROR: (ospd_openvas.daemon) Failed to create feed lock file
/var/run/ospd/feed-update.lock. [Errno 2] No such file or directory:
'/var/run/ospd/feed-update.lock'
From the error message it looks like the directory /var/run/ospd/ does not exist.
Create the directory and try to restart the service.
In Ubuntu 20.04, /var/run points to /run, which is a temporary file system. That means that if you create the directory /var/run/ospd manually, it will be gone after the next reboot. To fix it permanently (in case the missing directory is the issue), please refer to this post.
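One common way to make the directory survive reboots is a systemd-tmpfiles drop-in; a minimal sketch, assuming ospd-openvas runs as user and group gvm (adjust if your install uses a different account):
# /etc/tmpfiles.d/ospd.conf: recreate /run/ospd at every boot
d /run/ospd 0755 gvm gvm -
Apply it immediately without rebooting:
sudo systemd-tmpfiles --create /etc/tmpfiles.d/ospd.conf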
This may help some people with some of the issues I've been facing:
mkdir -p /var/run/ospd/
touch /var/run/ospd/feed-update.lock
chown gvm:gvm /var/run/ospd/feed-update.lock

Running airflow worker gives error: Address already in use

I am running Airflow with CeleryExecutor. I am able to run the commands airflow webserver and airflow scheduler but trying to run airflow worker gives the error: socket.error: [Errno 98] Address already in use.
The traceback:
In the Docker container running the Airflow server, a process was already running on port 8793, which is the default value of the worker_log_server_port setting in airflow.cfg. I changed the port to 8795 and the airflow worker command worked.
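A minimal sketch of that change in airflow.cfg (in Airflow 1.x the setting lives under [celery]; newer releases may place it under [logging], so check your version's config reference):
[celery]
worker_log_server_port = 8795
Restart the worker afterwards so it binds to the new port.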
Or you can check the process listening on 8793 with lsof -i:8793, and if you don't need that process you can kill it with kill $(lsof -t -i:8793). I was running an Ubuntu container in Docker, so I had to install lsof first:
apt-get update
apt-get install lsof
See if there is a serve_logs process running (it will look like the line below); if so, kill it and try again.
/usr/bin/python2 /usr/bin/airflow serve_logs
I had the same problem, and Javed's answer about changing the worker_log_server_port in airflow.cfg works for me.

Salt minion returns no response after being accepted

After setting up the salt-master and one minion, I am able to accept the key on the master. Running sudo salt-key -L shows that it is accepted. However, when I try the test.ping command, the master shows:
Minion did not return. [No response]
On the master, the log shows:
[ERROR ][1642] Authentication attempt from minion-01 failed, the public key in pending did not match. This may be an attempt to compromise the Salt cluster.
On the minion, the log shows:
[ERROR ][1113] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
I have tried disconnecting and reconnecting, including rebooting both boxes in between.
Minion did not return. [No response]
That makes me think the salt-minion process is not running. (The other two errors are expected behavior until you accept the key on the master.)
Check if salt-minion is running with (depending on your OS) something like
$ systemctl status salt-minion
or
$ service salt-minion status
If it was not running, start it and try your test again.
$ sudo systemctl start salt-minion
or
$ sudo service salt-minion start
Depending on your installation method, the salt-minion may not have been registered to start upon system boot, and you may run into this issue again after a reboot.
Now, if your salt-minion was in fact running and you are still getting No response, I would stop the process and restart the minion in debug mode so you can watch.
$ sudo systemctl stop salt-minion
$ sudo salt-minion -l debug
Another quick test you can run to test communication between your minion and master is to execute your test from the minion:
$ sudo salt-call test.ping
salt-call does not need the salt-minion process to be running to work. It fetches the states from the master and executes them ad hoc. So, if that works (returns
local:
True
) you can eliminate the handshake between minion and master as the issue.
I just hit this problem when moving the salt master to a new server; to fix it I had to do these things in this order (Debian 9):
root@minion:~# service salt-minion stop
root@master:~# salt-key -d minion
root@minion:~# rm /etc/salt/pki/minion/minion_master.pub
root@minion:~# service salt-minion start
root@master:~# salt-key -a minion
Please note which of the commands above run on the minion and which on the master.
If you are confident that you are connecting to a valid Salt Master, then
remove the master public key and restart the Salt Minion.
The master public key can be found at:
/etc/salt/pki/minion/minion_master.pub
The minion's public key on the master is located under
/etc/salt/pki/master/minions
Compare it with the minion's own public key (under /etc/salt/pki/minion/minion.pub).
If they are not the same, execute
salt-key -d *
to delete the minion's public key from the master,
then execute service salt-minion restart to restart the salt-minion on the minion client.
After this, the master can control the minion.
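A quick way to verify from the master (my-minion-id is a placeholder for your minion's actual ID):
sudo salt 'my-minion-id' test.ping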
I got the same error message.
I changed user: root to user: ubuntu in /etc/salt/minion.
I stopped salt-minion and ran salt-minion -l debug as the ubuntu user; the salt master could get the salt-minion's response.
But when I started salt-minion with systemctl start salt-minion, the salt master got an error (no response).
When I run salt-minion as the root user and start it with systemctl start salt-minion, it works.
I don't know if it is a bug.
I had this exact same problem when I was moving minions from one master to another. Here are the steps I took to resolve it.
Remove the salt-master and salt-minion on the master server.
rm -rf /etc/salt on the master hosts
Save your minion config if it has any defined variables
Remove salt-minion on the minion hosts
rm -rf /etc/salt on all minion-hosts
Reinstall salt-master and salt-minion
Start salt-master then salt-minions
My problem was solved after doing this. I know it is not a solution really, but it's likely a conflict with keys that is causing this issue.
Run salt-minion in debug mode to see if this is a handshake issue.
If so, adding the salt-master port (4506, or whatever is configured) to the public zone with firewall-cmd should help.
`firewall-cmd --permanent --zone=public --add-port=4506/tcp`
`firewall-cmd --reload`
The salt-master key cached on the minion seems to be invalid (possibly due to a master IP or hostname update).
Steps to troubleshoot (a command sketch follows the list):
From the minion, check if the master is reachable (a simple ping test).
Delete the old master key present on the minion (/etc/salt/pki/minion/minion_master.pub).
Try again from the master to ping the minion; a new valid key will be auto-populated.
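A minimal command sketch of those steps; salt-master.example.com and my-minion-id are placeholders, and systemd is assumed:
# on the minion: check that the master is reachable
ping -c 3 salt-master.example.com
# on the minion: remove the stale cached master key and restart
sudo rm /etc/salt/pki/minion/minion_master.pub
sudo systemctl restart salt-minion
# on the master: accept the freshly submitted key, then test
sudo salt-key -a my-minion-id
sudo salt 'my-minion-id' test.ping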

How to gracefully reload a spawn-fcgi script for nginx

My stack is nginx running python web.py FastCGI scripts via spawn-fcgi. I am using runit to keep the process alive as a daemon, and Unix sockets for the spawned FastCGI processes.
Below is my runit script, called myserver, in /etc/sv/myserver, with the run file at /etc/sv/myserver/run.
exec spawn-fcgi -n -d /home/ubuntu/Servers/rtbTest/ -s /tmp/nginx9002.socket -u www-data -f /home/ubuntu/Servers/rtbTest/index.py >> /var/log/mylog.sys.log 2>&1
I need to push changes to the scripts to the production servers. I use paramiko to ssh into the box and update the index.py script.
My question is: how do I gracefully reload index.py, following best practice, so it picks up the new code?
Do I use:
sudo /etc/init.d/nginx reload
Or do I restart the runit script:
sudo sv start myserver
Or do I use both:
sudo /etc/init.d/nginx reload
sudo sv start myserver
Or none of the above?
Basically you have to restart the process that has loaded your Python script. That is spawn-fcgi, not nginx itself. nginx only communicates with spawn-fcgi via the Unix socket and will happily re-connect if the connection is lost due to a restart of the spawn-fcgi process.
Therefore I'd suggest a simple sudo sv restart myserver. No need to re-start/re-load nginx itself.
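A minimal sketch of what that deploy step could look like, assuming the paths from the question and a placeholder host prod reachable over SSH as user ubuntu:
scp index.py ubuntu@prod:/home/ubuntu/Servers/rtbTest/index.py
ssh ubuntu@prod 'sudo sv restart myserver'
The same two actions can be driven from paramiko just as well; the key point is that only the runit-supervised spawn-fcgi process needs the restart.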
