Salt Stack if/then/else for service

I have a Windows service which I deploy from Salt Stack. The state looks like this:

create_executable:
  module.run:
    - name: service.create
    - m_name: SVC1
    - bin_path: d:\svc1.exe
    - start_type: auto
Now, when I redeploy the service, it says "Exception: Service Already Exists: SVC1". Is there a way to test if the service is running? If the service is running, I would like to stop it, disable it, delete it and then recreate it.

Maybe module.wait together with a cmd.run that uses the unless feature might help in this case.
You need a reliable way to check that the service is not installed and use that in the unless test. The command itself might be something like echo - it does not really matter. You can then use watch or watch_in to make the module.wait depend on the cmd.run.
Untested draft (note that I don't know the Windows CLI at all):

check_if_service_installed:
  cmd.run:
    - name: echo 'not installed'
    - unless: 'if exist d:\svc1.exe (exit 0) else (exit 1)'

create_executable:
  module.wait:
    - name: service.create
    - m_name: SVC1
    - bin_path: d:\svc1.exe
    - start_type: auto
    - watch:
      - cmd: check_if_service_installed
After writing this, I think it looks a little complicated for what you are trying to achieve. Maybe a small batch or PowerShell script that is written so it can safely be run again is more straightforward. But this is a matter of taste, IMO.
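For completeness: newer Salt releases accept unless/onlyif as global state arguments on any state, and on Windows, sc query exits non-zero when the queried service does not exist. So a sketch like this (untested, and dependent on your Salt version) might skip the create step whenever the service already exists:

create_executable:
  module.run:
    - name: service.create
    - m_name: SVC1
    - bin_path: d:\svc1.exe
    - start_type: auto
    - unless: 'sc query SVC1'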

Related

Symfony logging with Monolog, confused about STDERR

I am trying to align my logging with the best practice of using STDERR.
So, I would like to understand what happens with the logs sent to STDERR.
Symfony official docs (https://symfony.com/doc/current/logging.html):
In the prod environment, logs are written to STDERR PHP stream, which
works best in modern containerized applications deployed to servers
without disk write permissions.
If you prefer to store production logs in a file, set the path of your
log handler(s) to the path of the file to use (e.g. var/log/prod.log).
This time I want to follow the STDERR stream option.
When I was writing to a specific file, I knew exactly where to look for that file, open it and check the logged messages.
But with STDERR, I don't know where to look for my logs.
So, using Monolog, I have this configuration:

monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
            excluded_http_codes: [404, 405]
        nested:
            type: stream
            path: "php://stderr"
            level: debug
Suppose next morning I want to check the logs. Where would I look?
Several hours of reading docs later, my understanding is as follows:
First, STDERR is preferred over STDOUT for errors because it is not buffered (STDOUT may gather output while waiting for the script to end), so errors are written to the STDERR stream immediately. This also keeps normal output from getting mixed with errors.
Secondly, the immediately intuitive usage is when running a shell script, because in the terminal you will directly see the STDOUT and STDERR messages (by default, both streams print to the screen).
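As a quick illustration (the script name is hypothetical), the two streams can be separated at the shell:

php myscript.php > output.log 2> errors.log

Normal output lands in output.log, while anything written to STDERR lands in errors.log.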
But the non-intuitive usage of STDERR is when logging for a website/API. We want to log the errors, and we want to be able to monitor errors that have already occurred, that is, to come back later and check those errors. Traditional practice stores errors in custom-defined log files. More modern practice recommends sending errors to STDERR. Regarding Symfony, Fabien Potencier (the creator of Symfony) says:
in production, stderr is a better option, especially when using
Docker, SymfonyCloud, lambdas, ... So, I now recommend to use
php://stderr
(https://symfony.com/blog/logging-in-symfony-and-the-cloud).
And he further recommends using STDERR even for development.
Now, what I believe to be missing from the picture (at least for me, as a non-expert) is guidance on HOW to access and check the error logs. Okay, we send the errors to STDERR, and then? Where am I going to check the errors the next morning? I get that containerized platforms (clouds, Docker, etc.) have specific tools to easily and nicely monitor logs (tools that intercept STDERR and parse the messages in order to organize them in specific files/DBs), but that's not the case on a simple server, be it a local server or a regular hosting.
Therefore, my understanding is that sending errors to STDERR is a good standardization when:
Resorting to a third-party tool for log monitoring (like ELK, Graylog, Sentry, Rollbar, etc.)
Knowing exactly where your web server is storing the STDERR logs. For instance, if you try (I defined a new STD_ERR constant to avoid any pre-configs):
define('STD_ERR', fopen('php://stderr', 'wb'));
fputs(STD_ERR, "ABC error message.");
you can find the "ABC error message" at:
XAMPP Apache default (Windows):
..\xampp\apache\logs\error.log
Symfony5 server (Windows):
C:\Users\your_user\.symfony5\log\ [in the most recent folder, as the logs rotate]
Symfony server (Linux):
/home/your_user/.symfony/log/ [in the most recent folder, as the logs rotate]
For Symfony server, you can actually see the logs paths when starting the server, or by command "symfony server:log".
One immediate advantage is that these STDERR logs are stored outside of the app folders, so you do not need to maintain extra writable folders or deal with permissions. Of course, when developing/hosting multiple sites/apps, you need to configure the error log (the STDERR storage) location per app (in Apache that would be inside each <VirtualHost> conf; with the Symfony server, I am not sure). Personally, without a third-party tool for monitoring logs, I would stick with custom-defined log files (no STDERR), but feel free to contradict me.
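For the Apache case, a minimal per-app sketch (assuming mod_php, as in the XAMPP example above; the hostname and paths are hypothetical):

<VirtualHost *:80>
    ServerName app1.example.com
    DocumentRoot /var/www/app1/public
    # Anything PHP writes to php://stderr ends up in this vhost's error log
    ErrorLog /var/log/apache2/app1-error.log
</VirtualHost>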

How to create new commands in Zuul CI to trigger custom jobs

Similar to the recheck command, if I want to create custom trigger commands in Zuul CI, what's the best possible way to implement them?
For example, I want to have the following two commands defined in Zuul to trigger specific tests:
/test-e2e - This would trigger a specific job that runs e2e tests.
/test-conformance - This would trigger a specific job that runs conformance tests.
Can somebody please advise?
Assuming that you are using GitHub to manage your source code, you could define a custom regex trigger for your check pipeline.
See the documentation of pipeline.trigger.<github source>.comment.
Something like this:
- pipeline:
    name: check
    trigger:
      github:
        - event: pull_request
          action: comment
          comment: (?i)^\s*/test-e2e\s*$
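The /test-conformance command could be handled the same way, either as an additional comment regex on the same trigger or in a separate pipeline. An untested sketch along the same lines (the pipeline name is an assumption):

- pipeline:
    name: conformance
    trigger:
      github:
        - event: pull_request
          action: comment
          comment: (?i)^\s*/test-conformance\s*$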

How to run the execution module as a specific user?

I would like to manage user settings interfaced by GSettings in Salt. I have Python code that can manage the GSettings, but it needs to do that as a specific user.
Salt execution modules (and everything else, actually) by default run as the user that started salt-minion, which is root by default. I couldn't find information in the documentation on how to run a specific module as another user.
I can work around it by executing a shell with su -l <username>, which in turn would call my Python code, but I hope there is an elegant built-in way, like a user: <username> option in the module.
There are at least two ways to run a command as a specified user:
From a state, you can do something like this (docs):

mycommand:
  cmd.run:
    - name: python my_gsettings_script.py
    - runas: alternate_user
There is also the cmd.run execution module, which provides the same runas option.
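For example, the same thing ad hoc from the master (the minion ID is hypothetical):

salt 'my-minion' cmd.run 'python my_gsettings_script.py' runas=alternate_user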

Increase behat performance with drush driver

I am running Behat inside Vagrant on a Drupal installation.
When I use the drush driver, in order to authenticate an admin for example, the test runs extremely slowly (around 2 minutes).
My behat.yml is:

default:
  suites:
    default:
      contexts:
        - FeatureMinkContext
        - FeatureContext:
            - "/vagrant/images/behat"
            - 813
            - 1855
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MinkContext
        - Drupal\DrupalExtension\Context\MessageContext
        - Drupal\DrupalExtension\Context\DrushContext
  extensions:
    Behat\MinkExtension:
      selenium2: ~
      javascript_session: 'selenium2'
      browser_name: firefox
      base_url: http://solar.com/  # Replace with your site's URL
    Drupal\DrupalExtension:
      blackbox: ~
      region_map:
        search: ".form-search"
      api_driver: 'drush'
      drush:
        root: /vagrant/drupal
      selectors:
        message_selector: '.messages'
        error_message_selector: '.messages.messages-error'
        success_message_selector: '.messages.messages-status'
      drupal:
        # Replace with your real Drupal root.
        drupal_root: "/vagrant/drupal"
Test structure:

#javascript #api
Feature: Tests google maps and pois

  #maps
  Scenario: My tests
    Given I am logged in as a user with the "administrator" role
    ...
I haven't used drush, but:
The first thing you need to do is identify the bottleneck: maybe the server is slow, maybe you are using some method that is slow, or maybe you have some waits that are not right.
Something is definitely wrong if an admin authentication takes 2 minutes.
Run the scenario and debug until you narrow it down to the method with the issue.
Some other things you could do are:
never use blind waits, only conditional waits (see the sketch after this list)
if you have to fill large forms, try using a JavaScript method for the fill; it will be very fast
try different browsers; in my case Chrome is slightly faster
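To illustrate the point about waits, a minimal sketch from inside a Behat/Mink context class (the timeout and the JavaScript condition are just examples):

// Blind wait: always burns the full five seconds.
sleep(5);

// Conditional wait: Mink's Session::wait() returns as soon as the
// JavaScript condition evaluates to true (or when the timeout expires).
$this->getSession()->wait(5000, "document.readyState === 'complete'");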
Is it a local site setup? If yes, then Behat scripts for Drupal do run a bit slower locally compared to running them against the actual site. I have faced this in almost every project. For me, the first step, which is the admin authentication, takes up to 5 minutes at times. Increasing the RAM size reduces this problem to some extent.

Monitoring SaltStack

Is there anything out there to monitor SaltStack installations besides Halite? I have it installed, but it's not really what we are looking for.
It would be nice if we could have a web GUI or even a daily email that showed the status of all the minions. I'm pretty handy with scripting, but I don't know what to script.
Anybody have any ideas?
In case by monitoring you mean operating salt, you can try one of the following:
SaltStack Enterprise GUI
Foreman
SaltPad
Molten
Halite (DEPRECATED by SaltStack)
These GUIs will let you do more than just check whether minions are alive: they will allow you to operate on them in the same manner you could with the salt client.
And in case by monitoring you mean just checking whether the salt master and salt minions are up and running, you can use general-purpose monitoring solutions like:
Icinga
Naemon
Nagios
Shinken
Sensu
In fact, these tools can monitor arbitrary services on the hosts they know about. A host can be any machine that has an IP address, and a service can be any resource that can be queried via the underlying OS. Examples of hosts: a server, a router, a printer... Examples of services: memory, disk, a process, ...
Not an absolute answer, but we're developing SaltPad, which is a replacement for and improvement on Halite. One of its features is to display the status of all your minions. You can give it a try: SaltPad project page on GitHub.
You might look into Consul. While it isn't specifically for SaltStack, I use it to monitor that salt-master and salt-minion are running on the hosts they should be.
Another simple test would be to run something like:
salt --output=json '*' test.ping
And compare the results between runs. It's not amazing monitoring, but at least it shows that your minions are up and communicating with your master.
Another option might be to use the salt.runners.manage functions, which include a status function.
In order to print the status of all known salt minions, you can run this on your salt master:
salt-run manage.status
salt-run manage.status tgt="webservers" expr_form="nodegroup"
I had to write my own. To my knowledge, there is nothing out there which will do this, and Halite didn't work for what I needed.
If you know Python, it's fairly easy to write an application to monitor Salt. For example, my app had a thread which refreshed the list of hosts from the salt keys from time to time, and a few threads that ran various commands against that list to verify they were up. The monitor threads updated a dictionary with a timestamp and success/fail for each host after they ran. It had a hacked-together HTML display, color-coded to reflect the status of each node. It took me about half a day to write.
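A minimal sketch of that idea (not the poster's actual app; it shells out to the salt CLI, and assumes a Salt version where the salt command exits non-zero when a minion fails to respond):

import json
import subprocess
import threading
import time

status = {}  # minion id -> (timestamp, is_up)

def list_minions():
    # Read the accepted minion ids from salt-key's JSON output.
    out = subprocess.run(["salt-key", "--out=json", "-l", "accepted"],
                         capture_output=True, text=True).stdout
    return json.loads(out).get("minions", [])

def monitor(interval=60):
    while True:
        for minion in list_minions():
            result = subprocess.run(["salt", minion, "test.ping"],
                                    capture_output=True)
            status[minion] = (time.time(), result.returncode == 0)
        time.sleep(interval)

# Run in the background; an HTML view or report generator would read status.
threading.Thread(target=monitor, daemon=True).start()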
If you don't want to use Python, you could, painfully, do something similar with this inefficient, quick, untested hack using command-line tools in bash:

# List the accepted minion ids from salt-key's text output,
# drop the label, and split the comma-separated list.
minion_list=$(salt-key --out=txt | grep '^minions:' | sed 's/^minions: *//' | tr ',' ' ')
for minion in ${minion_list}; do
    salt "${minion}" test.ping
    if [ $? -ne 0 ]; then
        echo "${minion} is down."
    fi
done
It would be easy enough to modify it to write a file or send an alert.
Halite was deprecated in favour of the paid UI version - sad, but true; still, SaltStack does the job. I'd guess your best monitoring will be the one you write yourself. Happily, there's the salt-api project (which I believe was part of Halite, though I'm not sure about that). I'd recommend using it with Tornado, as it's better than the CherryPy version.
So if you want a nice interface, you might want to work with the API once you set it up. When setting up Tornado, make sure you're OK with authentication (I had some trouble here). Here's how you can check it:
Using Postman/curl/whatever:
Check if the API is alive:
- no POST data (just see if the API is alive)
- GET request to http://masterip:8000/
Log in (you'll need the token returned from here to do most operations):
- POST to http://masterip:8000/login
- (x-www-form-urlencoded data in Postman), raw:
  username: yourUsername
  password: yourPassword
  eauth: pam
I'm using PAM, so I have a user with yourUsername and yourPassword added on my master server (as a regular user; that's how PAM works).
Get minions: http://masterip:8000/minions (you'll need to send the token from the login operation).
Get all jobs: http://masterip:8000/jobs (you'll need to send the token from the login operation).
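For reference, roughly the same checks with curl (the master address, credentials, and token placeholder are hypothetical):

# Is the API alive?
curl -sS http://masterip:8000/

# Log in; the response contains the token
curl -sS http://masterip:8000/login \
  -d username=yourUsername -d password=yourPassword -d eauth=pam

# List minions, passing the token in the X-Auth-Token header
curl -sS http://masterip:8000/minions -H 'X-Auth-Token: <token>'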
So basically, if you want to do anything with SaltStack monitoring, just play with that salt-api and get what you want. SaltStack has output formatters, so you can get all the data even as JSON (if your frontend is JavaScript-like) - it lets you run commands or whatever you want, and the monitoring is left to you (unless you switch from the community to the pro version), or unless you want to use the mentioned SaltPad (which, sorry guys, was last updated a year ago according to the repo).
BTW, you might need to change that 8000 port to something else, depending on your version of SaltStack/Tornado/your config.
Basically, if you want an output where you can check the status of all the minions, you can run a command like:

salt '*' test.ping
salt --output=json '*' test.ping  # To get the output in JSON format
salt-run manage.up  # Lists the minions that are up
Or, if you want to visualize the same with a dashboard, you can look at some of the available options like Foreman, SaltPad, etc.
