I need to run a service; here's the code snippet for that:
/etc/init.d/collect-node:
  file.managed:
    - source: salt://scripts/collect_node.sh.j2
    - template: jinja
    - mode: 755
  service.running:
    - name: collect-node
    - enable: True
    - restart: True
    - watch:
      - file.managed: /etc/collect/node-config.json
      - file.managed: /etc/init.d/collect-node
    - require:
      - service.running: xvfb
      - user.present: collect
The node is managed by Vagrant. So when I vagrant up the node, it calls state.highstate, but the service is not running afterwards; yet when I explicitly call salt-call state.highstate in the console, the service starts.
What might be the problem here? How can I diagnose it? Thanks.
The problem was in the dependencies: if some other package, script, or similar requisite is not ready yet, the service silently won't be started.
That's why, once everything is installed, state.highstate runs the service.
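To make such silent failures visible, you can run the highstate by hand with debug logging and inspect what the compiled highstate contains. A quick, generic check (nothing here is specific to this state):

# Run the highstate with debug logging; states skipped because a requisite
# failed report it in their Comment field
salt-call -l debug state.highstate

# Show the compiled highstate to verify the service state is included at all
salt-call state.show_highstate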
I am running Airflow 2 with docker-compose (works great), but I cannot make it accessible behind an nginx proxy, using a combination of nginxproxy/nginx-proxy and nginxproxy/acme-companion.
Other projects work fine using that combo (meaning the combo itself works), but it seems that I need to change some Airflow configs to make it work.
The airflow docker-compose includes the following:
x-airflow-common:
  &airflow-common
  build: ./airflow-docker/
  environment:
    AIRFLOW__WEBSERVER__BASE_URL: 'http://abc.def.com'
    AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'true'
    [...]

services:
  [...]
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=abc.def.com
      - LETSENCRYPT_HOST=abc.def.com
      - LETSENCRYPT_EMAIL=some.email@def.com
    networks:
      - proxy_default  # proxy_default is the docker network the nginx-proxy container runs in
      - default
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      [...]
  [...]

[...]

networks:
  proxy_default:
    external: true
Airflow can be reached under the (successfully encrypted) address, but opening that URL results in the "Ooops! Something bad has happened." Airflow error page, more specifically a "sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: session" error, even though everything works fine when not behind the proxy.
What am I missing?
I want SaltStack to reload or restart Apache when the file 000-default-conf is changed, but when I manually edit the file on my Debian 9 system via SSH, nothing happens.
Can anyone help?
The configuration looks like this:
apache2:
  pkg.installed:
    - name: apache2
  service.running:
    - name: apache2
    - enable: True
    - reload: True
    - require:
      - pkg: apache2
    - watch:
      - file: /etc/apache2/sites-available/000-default-conf
      - file: /etc/apache2/sites-available/*
      - pkg: apache2

/etc/apache2/sites-available/000-default-conf:
  file.managed:
    - name: /etc/apache2/sites-available/000-default.conf
    - user: www-data
    - group: www-data
    - mode: 644
    - source: salt://apache-wordpress/files/000-default.conf
    - require:
      - pkg: apache2

a2enmod_rewrite:
  cmd.run:
    - name: a2enmod rewrite
    - require:
      - service: apache2
Manually made changes won't restart the service; the watch requisite only fires when another Salt state reports changes, as mentioned in the Salt documentation:

watch can be used with service.running to restart a service when another state changes (example: a file.managed state that creates the service's config file).

(https://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html)

What you need is beacons and reactors; have a look at the inotify beacon.
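A minimal sketch of that approach, assuming a recent Salt release (list-style beacon syntax) and pyinotify installed on the minion; the reactor SLS path and state names are placeholders:

# /etc/salt/minion.d/beacons.conf -- fire an event when the file is modified
beacons:
  inotify:
    - files:
        /etc/apache2/sites-available/000-default.conf:
          mask:
            - modify
    - disable_during_state_run: True

# /etc/salt/master.d/reactor.conf -- map the beacon event to a reactor SLS
reactor:
  - 'salt/beacon/*/inotify//etc/apache2/sites-available/000-default.conf':
    - /srv/reactor/restart-apache.sls

# /srv/reactor/restart-apache.sls -- restart apache2 on the minion that sent the event
restart-apache:
  local.service.restart:
    - tgt: {{ data['id'] }}
    - arg:
      - apache2

With this in place, an SSH edit to the file triggers the beacon event and the reactor restarts apache2 on the reporting minion; disable_during_state_run avoids a loop when Salt itself manages the file.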
I'm working on a Docker image for a dev environment for a Symfony 4 application. I'm building it on Alpine, PHP-FPM, and nginx.
I have configured the application, but the performance was not great (~700 ms) even for a simple hello-world application, so I thought I could make it faster somehow.
First of all, I went for the mount consistency settings and configured the volumes to use the cached option. Then I moved vendor to a separate volume, as it caused most of the performance issues.
As a second step I wanted to use docker-sync, as the benchmarks looked amazing. I configured it and everything ran smoothly. But then I realized that Docker is not reacting to changes in the code.
First I thought it had something to do with the Symfony 4 cache, so I connected to the PHP container and ran php bin/console cache:clear. The cache was cleared, but Docker did not react to anything. I double-checked the files on both the web and PHP containers, and the files are changed there. I'm wondering if there is something more I need to configure, or why Symfony is not reacting to changes.
UPDATE
Symfony/the container does not react to changes even after a complete image rebuild and the removal of the consistency settings and docker-sync. So basically it's plain Docker with a hello-world Symfony 4 application, and it does not react to changes; the changes are not even synced into the container.
Configuration:
# docker-compose-dev.yml
version: '3'

volumes:
  symfony-sync:
    external: true

services:
  php:
    build: build/php
    expose:
      - 9000
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
  web:
    build: build/nginx
    restart: always
    expose:
      - 80
      - 443
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - php
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor

networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.4.0.0/16
# docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  symfony-sync:
    src: './symfony'
    sync_excludes:
      - '.git'
      - 'composer.lock'
The Makefile I use for running the app:
start:
	docker-sync stop
	docker-sync clean
	cd symfony
	docker volume create --name=symfony-sync
	cd ..
	docker-compose -f docker-compose-dev.yml down
	docker-compose -f docker-compose-dev.yml up -d
	docker-sync start

stop:
	docker-compose stop
	docker-sync stop
I recommend using dinghy instead of Docker for Mac: https://github.com/codekitchen/dinghy
Give this repo a try as an example, too: https://github.com/jorge07/symfony-4-es-cqrs-boilerplate
If this doesn't work, the problem will be in your host or your Dockerfile. Be sure you don't enable opcache for development.
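On the opcache point, a dev image usually ships an extra ini override that disables opcache or forces it to revalidate on every request — a small sketch, with an illustrative file path:

; conf.d/opcache-dev.ini -- illustrative dev-only override
; either disable opcache entirely in dev...
opcache.enable=0
; ...or keep it enabled but recheck file timestamps on every request:
; opcache.validate_timestamps=1
; opcache.revalidate_freq=0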
I am trying to install a Salt minion from the master using salt-ssh.
This is my SLS file:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error:
ID: salt-minion
Function: service.running
Result: False
Comment: One or more requisite failed: install_minion./etc/salt/minion
Started:
Duration:
Changes:
Strangely, it works fine when I run it a second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It will connect to a machine using SSH, run a bootstrap method, and register the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
I have a similar setup running in my Salt/Consul example here. It was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minions using SSH.
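For reference, a saltify setup needs little more than a cloud provider plus a profile. A minimal sketch reusing the host and credentials from the roster in the question (provider/profile names and file locations are just conventions):

# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
vagrant-minion:
  provider: my-saltify
  ssh_username: vagrant
  password: vagrant

# mapfile-vagrant.yml -- one entry per machine to bootstrap
vagrant-minion:
  - minion3:
      ssh_host: 192.168.33.103

Running salt-cloud -m mapfile-vagrant.yml then connects over SSH, runs the bootstrap script, and registers minion3 with the master.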
Solved it.
The state file should be like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This is working for me, though I am not clear on the reason.
I have a remote state called logstash_forwarder that is located at https://github.com/saltstack-formulas/logstash_forwarder-formula.git.
I'm using git as the fileserver_backend.
When I run state.highstate, it does not find the state.
When I run state.sls logstash_forwarder, it works.
Why does it not work for state.highstate?
/etc/salt/minion:

master: localhost
id: lemp
file_client: local
state_events: false
environment: development

grains:
  roles:
    - lemp

file_roots:
  base:
    - /srv/salt/base
  development:
    - /srv/salt/development
    - /srv/salt/base
  production:
    - /srv/salt/production
    - /srv/salt/base

pillar_roots:
  development:
    - /srv/pillar/development
  production:
    - /srv/pillar/production

fileserver_backend:
  - roots
  - git

gitfs_provider: gitpython
gitfs_remotes:
  - https://github.com/saltstack-formulas/logstash_forwarder-formula.git
/srv/salt/base/top.sls:

development:
  '*':
    - system
    - util
    - project
    - logstash_forwarder
  'roles:lemp':
    - match: grain
    - php5
    - nginx
    - mysql
    - laravel5_app
Thanks in advance, have a nice day :)
Remove environment: development from your minion config. Pinning the minion to a single environment restricts what highstate looks at, and the gitfs-served formula presumably only exists in the base environment (gitfs maps a repository's master branch to base by default), so the pinned highstate never sees it.
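To verify this before and after the change, you can inspect what the minion actually resolves — two standard calls, run locally given the masterless setup above:

# Show the compiled top file data (which environments and states highstate will apply)
salt-call state.show_top

# Render the formula's SLS to confirm the fileserver can serve it
salt-call state.show_sls logstash_forwarder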