Gitfs remote state not recognized on highstate - salt-stack

I have a remote state called logstash_forwarder that is located at https://github.com/saltstack-formulas/logstash_forwarder-formula.git.
I'm using git as a fileserver_backend.
When I run state.highstate, it does not find the state.
When I run state.sls logstash_forwarder, it works.
Why does it not work for state.highstate?
/etc/salt/minion:
master: localhost
id: lemp
file_client: local
state_events: false
environment: development
grains:
  roles:
    - lemp
file_roots:
  base:
    - /srv/salt/base
  development:
    - /srv/salt/development
    - /srv/salt/base
  production:
    - /srv/salt/production
    - /srv/salt/base
pillar_roots:
  development:
    - /srv/pillar/development
  production:
    - /srv/pillar/production
fileserver_backend:
  - roots
  - git
gitfs_provider: gitpython
gitfs_remotes:
  - https://github.com/saltstack-formulas/logstash_forwarder-formula.git
/srv/salt/base/top.sls:
development:
  '*':
    - system
    - util
    - project
    - logstash_forwarder
  'roles:lemp':
    - match: grain
    - php5
    - nginx
    - mysql
    - laravel5_app
Thanks in advance, have a nice day :)

Remove environment: development from your minion config.
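One plausible reading: gitfs serves the formula's master branch as the base saltenv (gitfs maps branches and tags to saltenvs, with master becoming base), so a minion pinned to the development environment cannot see it during a highstate. For reference, a minimal sketch of the relevant minion settings with the pin removed (all other values as posted above):

# /etc/salt/minion (sketch; same values as above, minus the environment pin)
master: localhost
id: lemp
file_client: local
fileserver_backend:
  - roots
  - git
gitfs_provider: gitpython
gitfs_remotes:
  - https://github.com/saltstack-formulas/logstash_forwarder-formula.git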

Related

SaltStack watch file restart service won't work

I want SaltStack to reload or restart the service when the file 000-default-conf is changed, but when I manually edit the file on my Debian 9 system via SSH, nothing happens.
Can anyone help?
The configuration looks like this:
apache2:
  pkg.installed:
    - name: apache2
  service.running:
    - name: apache2
    - enable: True
    - reload: True
    - require:
      - pkg: apache2
    - watch:
      - file: /etc/apache2/sites-available/000-default-conf
      - file: /etc/apache2/sites-available/*
      - pkg: apache2

/etc/apache2/sites-available/000-default-conf:
  file.managed:
    - name: /etc/apache2/sites-available/000-default.conf
    - user: www-data
    - group: www-data
    - mode: 644
    - source: salt://apache-wordpress/files/000-default.conf
    - require:
      - pkg: apache2

a2enmod_rewrite:
  cmd.run:
    - name: a2enmod rewrite
    - require:
      - service: apache2
Manually made changes won't restart the service, as mentioned in the Salt documentation:
watch can be used with service.running to restart a service when
another state changes (example: a file.managed state that creates the
service's config file).
(https://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html)
What you need is beacons and reactors; have a look at the inotify beacon.
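A rough sketch of what that can look like (the minion id and reactor file name are made up; the beacon is shown in the older dict syntax, newer Salt releases use a list form, and the minion needs pyinotify installed for the inotify beacon):

# Minion config (e.g. /etc/salt/minion.d/beacons.conf): fire an event when
# the rendered config file is modified outside of a state run
beacons:
  inotify:
    /etc/apache2/sites-available/000-default.conf:
      mask:
        - modify
    disable_during_state_run: True

# Master config: map the beacon event to a reactor SLS
reactor:
  - 'salt/beacon/*/inotify//etc/apache2/sites-available/000-default.conf':
    - /srv/reactor/restart_apache.sls

# /srv/reactor/restart_apache.sls: restart apache2 on the minion
# (target hardcoded here for simplicity)
restart_apache:
  local.service.restart:
    - tgt: 'my-debian-minion'    # hypothetical minion id
    - arg:
      - apache2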

Transfer symfony logfiles with filebeat to graylog in local docker-environment

Description
I am trying to build a configuration in my local Docker environment that matches our production system. After spending some time investigating and rebuilding the Docker container setup, I still can't get it to work: Graylog is not receiving any data.
Overview and interim results
- web, php and db containers are in use for the Symfony-based application
- Symfony runs properly on localhost in the php container and generates logfiles
- Symfony logfiles are located at /var/www/html/var/logs/*.log
- Symfony logfile format is JSON / GELF
- all other containers are also up and running when starting the complete composition
- the filebeat configuration is based on the first link below
- filebeat.yml seems to retrieve any logfile found in any container
- filebeat is configured to transfer data directly to Elasticsearch
- Elasticsearch persists data in MongoDB
- all Graylog-related data is persisted in named volumes in Docker
- additionally I am working with docker-sync on a Mac
The docker-compose.yml is based on the following resources:
https://github.com/jochenchrist/docker-logging-elasticsearch
http://docs.graylog.org/en/2.4/pages/installation/docker.html?highlight=docker
https://www.elastic.co/guide/en/beats/filebeat/6.3/running-on-docker.html
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-reference-yml.html
config.yml
# Monolog Configuration
monolog:
  channels: [graylog]
  handlers:
    graylog:
      type: stream
      formatter: line_formatter
      path: "%kernel.logs_dir%/graylog.log"
      channels: [graylog]
docker-compose.yml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    links:
      - php
    volumes:
      - ./docker-config/nginx.conf:/etc/nginx/conf.d/default.conf
      - project-app-sync:/var/www/html
      - ./docker-config/localhost.crt:/etc/nginx/ssl/localhost.crt
      - ./docker-config/localhost.key:/etc/nginx/ssl/localhost.key
  php:
    build:
      context: .
      dockerfile: ./docker-config/Dockerfile-php
    links:
      - graylog
    volumes:
      - project-app-sync:/var/www/html
      - ./docker-config/php.ini:/usr/local/etc/php/php.ini
      - ./docker-config/www.conf:/usr/local/etc/php-fpm.d/www.conf
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=project
      - MYSQL_USER=project
      - MYSQL_PASSWORD=password
    volumes:
      - ./docker-config/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
      - project-mysql-sync:/var/lib/mysql
  # Graylog / Filebeat
  filebeat:
    build: ./docker-config/filebeat
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - graylog-network
    depends_on:
      - graylog-elasticsearch
  graylog:
    image: graylog/graylog:2.4
    volumes:
      - graylog-journal:/usr/share/graylog/data/journal
    networks:
      - graylog-network
    environment:
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - graylog-mongo:mongo
      - graylog-elasticsearch:elasticsearch
    depends_on:
      - graylog-mongo
      - graylog-elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
  graylog-mongo:
    image: mongo:3
    volumes:
      - graylog-mongo-data:/data/db
    networks:
      - graylog-network
  graylog-elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.10
    ports:
      - "9200:9200"
    volumes:
      - graylog-elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - graylog-network
    environment:
      - cluster.name=graylog
      - "discovery.zen.minimum_master_nodes=1"
      - "discovery.type=single-node"
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
volumes:
  project-app-sync:
    external: true
  project-mysql-sync: ~
  graylog-mongo-data:
    driver: local
  graylog-elasticsearch-data:
    driver: local
  graylog-journal:
    driver: local
networks:
  graylog-network: ~
Dockerfile of filebeat container
FROM docker.elastic.co/beats/filebeat:6.3.1
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
# must run as root to access /var/lib/docker and /var/run/docker.sock
USER root
RUN chown root /usr/share/filebeat/filebeat.yml
# don't run with -e, to disable output to stderr
CMD [""]
filebeat.yml
filebeat.prospectors:
- type: docker
  paths:
    - '/var/lib/docker/containers/*/*.log'
    # path to symfony based logs
    - '/var/www/html/var/logs/*.log'
  containers.ids: '*'
  processors:
    - decode_json_fields:
        fields: ["host","application","short_message"]
        target: ""
        overwrite_keys: true
    - add_docker_metadata: ~
output.elasticsearch:
  # transfer data to elasticsearch container?
  hosts: ["localhost:9200"]
logging.to_files: true
logging.to_syslog: false
Graylog backend
After setting up this Docker composition, I started the Graylog web view and set up a collector and input as described here:
http://docs.graylog.org/en/2.4/pages/collector_sidecar.html#step-by-step-guide
Maybe I have totally misunderstood how this should work. I am not sure whether Beats from Elastic is the same thing as the filebeat container, and whether the sidecar collector is something extra I forgot to add. Maybe I misconfigured the collector and input in Graylog?
I would be thankful for any help or a working example related to my problem ...
Graylog seems to be configured to run on http://127.0.0.1:9000/api, which is inside the container. You might want to run it as http://graylog:9000/api or as http://0.0.0.0:9000/api.
Accessing other containers from within any container has to be done using the service name as defined in the docker-compose.yml file. The URL to graylog-elasticsearch would be something like http://graylog-elasticsearch/.... If you post to localhost, the request stays inside the container itself.
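A sketch of the two settings this points at, assuming the service names from the compose file above (filebeat already joins graylog-network, so the name should resolve):

# filebeat.yml (sketch): send to the Elasticsearch container by service name
output.elasticsearch:
  hosts: ["graylog-elasticsearch:9200"]

# docker-compose.yml, graylog service (sketch): bind the API endpoint
# instead of pointing it at the container's own loopback address
environment:
  - GRAYLOG_WEB_ENDPOINT_URI=http://0.0.0.0:9000/api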
Hope this will help you along in finding the solution.

Error installing salt-minion using salt ssh

I am trying to install a Salt minion from the master using salt-ssh.
This is my SLS file:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error
ID: salt-minion
Function: service.running
Result: False
Comment: One or more requisite failed: install_minion./etc/salt/minion
Started:
Duration:
Changes:
Strangely, it works fine when I run it a second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It will connect to a machine using SSH, run a bootstrap method, and register the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
I have a similar setup running in my Salt/Consul example here. This was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minions using SSH.
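A rough sketch of what a saltify setup can look like (provider and profile names are made up; the host and credentials mirror the roster above, and exact option names may vary between Salt releases):

# /etc/salt/cloud.providers.d/saltify.conf (sketch)
my-saltify-config:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf (sketch)
saltify-vagrant:
  provider: my-saltify-config
  ssh_host: 192.168.33.103
  ssh_username: vagrant
  password: vagrant

# usage would then be something like:  salt-cloud -p saltify-vagrant minion3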
Solved it.
The state file should be like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This is working for me, though I am not clear on the reason.

Using Vagrant with SaltStack, how do I start nginx after providing custom configuration

I have a Vagrant box set up to provision with Salt. When I do a fresh vagrant up (after a vagrant destroy), nginx defaults to port 80 and the default welcome page, despite not being configured to. I can fix it by manually running sudo nginx -s reload inside the guest, but I would prefer not to rely on a manual workaround.
Here's my salt/roots/salt/nginx/init.sls file:
nginx:
  pkg:
    - installed

nginx run:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-available/dotmanca
  require:
    - file: /etc/nginx/sites-enabled/dotmanca
    - file: /etc/nginx/nginx.conf
    - pkg: nginx

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

/etc/nginx/sites-available/dotmanca:
  file:
    - managed
    - source: salt://nginx/dotmanca.conf
    - user: root
    - group: root
    - mode: 644
  require:
    - pkg: nginx

/etc/nginx/sites-enabled/dotmanca:
  file.symlink:
    - target: /etc/nginx/sites-available/dotmanca
    - user: root
    - group: root
    - mode: 644
  require:
    - file: /etc/nginx/sites-available/dotmanca

/etc/nginx/sites-enabled/default:
  file.absent:
    - name: /etc/nginx/sites-enabled/default
  require:
    - pkg: nginx
The nginx server is installed and runs properly after provisioning, and the configuration files show up in the correct locations.
I need to either reload the config in nginx after my custom files get placed, or somehow hold off running the nginx service until the files are in place.
You can always run the restart command automatically; see cmd.run. Just make it depend on the service state.
However, that would be my last resort. Salt is able to use dependencies (or requisites, in Salt's terms) and make sure the proper config file content is in place before the service starts (or restart the service if a config file change is detected).
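For completeness, a sketch of that last-resort approach using cmd.wait, the watch-aware companion of cmd.run (state IDs and watched files mirror the question's SLS; this only fires when one of the watched states reports changes):

reload_nginx:
  cmd.wait:
    - name: nginx -s reload
    - watch:
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-available/dotmanca
    - require:
      - service: nginx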
Apparently, I needed to learn more about the require bits. I had placed them at the level where states should go, not under the states themselves.
My file should have looked like this:
nginx:
  pkg:
    - installed

nginx_run:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-available/dotmanca
    - require:
      - file: /etc/nginx/sites-enabled/dotmanca
      - file: /etc/nginx/nginx.conf
      - pkg: nginx

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

/etc/nginx/sites-available/dotmanca:
  file:
    - managed
    - source: salt://nginx/dotmanca.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: nginx

/etc/nginx/sites-enabled/dotmanca:
  file.symlink:
    - target: /etc/nginx/sites-available/dotmanca
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/nginx/sites-available/dotmanca

/etc/nginx/sites-enabled/default:
  file.absent:
    - name: /etc/nginx/sites-enabled/default
    - require:
      - pkg: nginx

SaltStack is not starting services

I need to run a service; here's the code snippet for that:
/etc/init.d/collect-node:
  file.managed:
    - source: salt://scripts/collect_node.sh.j2
    - template: jinja
    - mode: 755
  service.running:
    - name: collect-node
    - enable: True
    - restart: True
    - watch:
      - file.managed: /etc/collect/node-config.json
      - file.managed: /etc/init.d/collect-node
    - require:
      - service.running: xvfb
      - user.present: collect
The node is managed by Vagrant. When I vagrant up the node, it calls state.highstate, but the service does not run; when I explicitly call salt-call state.highstate in the console, the service starts.
What might be the problem here? How can I diagnose it? Thanks
The problem was in the dependencies: if another package, script, or other requirement is not ready yet, the service silently won't be started.
That's why, once everything is installed, state.highstate starts the service.
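A sketch of making those prerequisites explicit so the first highstate can order them before the service (the xvfb and collect states here are placeholders; how they are actually installed or created is not shown in the question):

# placeholder states for the requisites referenced above
collect:
  user.present: []

xvfb:
  service.running:
    - enable: True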
