I am trying to install the salt-minion on a target machine from the master using salt-ssh.
This is my SLS file:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error
ID: salt-minion
Function: service.running
Result: False
Comment: One or more requisite failed: install_minion./etc/salt/minion
Started:
Duration:
Changes:
Strangely it works fine when I run it for the second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It will connect to a machine using SSH, run a bootstrap method, and register the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
I have a similar setup running in my Salt/Consul example here. This was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minions using SSH.
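If you just want the SSH bootstrap without the full example, a saltify provider and profile can be sketched roughly like this; the file names, the profile names, and the driver: key (older Salt releases used provider: here) are my assumptions, and the host and credentials are simply copied from the roster above:

# /etc/salt/cloud.providers.d/saltify.conf (hypothetical file name)
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf (hypothetical file name)
vagrant-minion:
  provider: my-saltify
  ssh_host: 192.168.33.103
  ssh_username: vagrant
  password: vagrant

Something like salt-cloud -p vagrant-minion minion3 would then bootstrap the box over SSH and register it with the master.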
Solved it.
The state file should be like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This is working for me, though I am not clear on the reason.
I'm stuck at an Ansible HackerRank lab (Fresco Play) that asks to install nginx and postgresql and ensure they are running.
After finishing the code and running the exam, it checks that the nginx server redirects to google.com after a restart.
Has anyone faced this issue?
Below is my code to install the packages and ensure the services are running:
- name: 'To install packages'
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - apt:
        name: "{{item}}"
        state: present
      with_items:
        - nginx
        - postgresql

    - apt: name=nginx state=latest

    - name: start nginx
      service:
        name: nginx
        state: started

    - apt: name=postgresql state=latest

    - name: start postgresql
      service:
        name: postgresql
        state: started
I wrote these as two separate playbooks for now and need help with redirecting nginx to google.com.
You need to write your nginx configuration file (in this case, one that redirects traffic to google.com) and copy it to /etc/nginx/nginx.conf:
- name: write nginx.conf
  template:
    src: <path_to_file>
    dest: /etc/nginx/nginx.conf
After this you should restart the nginx service.
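If you don't want to wire up a handler for that, an explicit follow-up task is enough; a minimal sketch (the task name is mine, not from the answer):

- name: restart nginx after the config change
  service:
    name: nginx
    state: restarted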
Thanks!
The code below worked for me:
Define the port number and the site you want the nginx server to redirect to in a .j2 file in the templates folder under your role.
Include a task in the playbook to copy the template to /etc/nginx/sites-enabled/default, and a notify for the handler defined in the handlers folder (a sketch of such a handler follows the playbook below).
In some cases, if the nginx server doesn't restart, run sudo service nginx restart in the terminal before testing your code.
Ansible-Sibelius (Try it Out- Write a Playbook)
# installing nginx and postgresql
- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx

- name: restart nginx
  service:
    name: nginx
    state: started

- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL

- name: Start PostgreSQL
  service:
    name: postgresql
    state: started

- name: Set the configuration for the template file
  template:
    src: /<path-to-your-roles>/templates/sites-enabled.j2
    dest: /etc/nginx/sites-enabled/default
  notify: restart nginx
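For reference, the notify: restart nginx above needs a matching handler in the role's handlers folder. A minimal sketch, assuming it lives in handlers/main.yml; the sites-enabled.j2 itself would hold an nginx server block listening on your chosen port and returning a 301 redirect to google.com (both assumptions based on the lab description, not shown in the original answer):

# handlers/main.yml (assumed location inside the role)
- name: restart nginx
  service:
    name: nginx
    state: restarted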
I want SaltStack to reload or restart Apache when the file 000-default-conf is changed, but when I manually edit the file on my Debian 9 system via SSH, nothing happens.
Can anyone help?
The configuration looks like this:
apache2:
  pkg.installed:
    - name: apache2
  service.running:
    - name: apache2
    - enable: True
    - reload: True
    - require:
      - pkg: apache2
    - watch:
      - file: /etc/apache2/sites-available/000-default-conf
      - file: /etc/apache2/sites-available/*
      - pkg: apache2

/etc/apache2/sites-available/000-default-conf:
  file.managed:
    - name: /etc/apache2/sites-available/000-default.conf
    - user: www-data
    - group: www-data
    - mode: 644
    - source: salt://apache-wordpress/files/000-default.conf
    - require:
      - pkg: apache2

a2enmod_rewrite:
  cmd.run:
    - name: a2enmod rewrite
    - require:
      - service: apache2
Manually made changes won't restart the service, as mentioned in the Salt documentation:
watch can be used with service.running to restart a service when
another state changes ( example: a file.managed state that creates the
service's config file ).
(https://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html)
What you need are beacons and reactors; have a look at the inotify beacon.
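A rough sketch of that approach, assuming Salt's inotify beacon (which needs pyinotify installed on the minion) and the default beacon event tag format; the exact syntax varies between Salt versions, so treat this as a starting point rather than a drop-in config:

# on the minion (e.g. /etc/salt/minion.d/beacons.conf)
beacons:
  inotify:
    - files:
        /etc/apache2/sites-available/000-default.conf:
          mask:
            - modify
    - disable_during_state_run: True

# on the master (e.g. /etc/salt/master.d/reactor.conf)
reactor:
  - 'salt/beacon/*/inotify//etc/apache2/sites-available/000-default.conf':
    - /srv/reactor/restart_apache.sls

# /srv/reactor/restart_apache.sls: restart Apache on the minion that fired the event
restart_apache:
  local.service.restart:
    - tgt: {{ data['id'] }}
    - arg:
      - apache2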
I have a Vagrant box set up to provision with Salt. When I do a fresh vagrant up (after a vagrant destroy), nginx defaults to port 80 and the default welcome page, despite not being configured to. I can fix it by manually running sudo nginx -s reload inside the guest, but I would prefer not to use a manual workaround.
Here's my salt/roots/salt/nginx/init.sls file:
nginx:
  pkg:
    - installed

nginx run:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-available/dotmanca
  require:
    - file: /etc/nginx/sites-enabled/dotmanca
    - file: /etc/nginx/nginx.conf
    - pkg: nginx

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

/etc/nginx/sites-available/dotmanca:
  file:
    - managed
    - source: salt://nginx/dotmanca.conf
    - user: root
    - group: root
    - mode: 644
  require:
    - pkg: nginx

/etc/nginx/sites-enabled/dotmanca:
  file.symlink:
    - target: /etc/nginx/sites-available/dotmanca
    - user: root
    - group: root
    - mode: 644
  require:
    - file: /etc/nginx/sites-available/dotmanca

/etc/nginx/sites-enabled/default:
  file.absent:
    - name: /etc/nginx/sites-enabled/default
  require:
    - pkg: nginx
The nginx server is installed and runs properly after provisioning, and the configuration files show up in the correct locations.
I need to either reload the config in nginx after my custom files are placed, or somehow hold off starting the nginx service until the files are in place.
You can always run the restart command automatically (see cmd.run); just make it depend on the service state.
However, that would be my last resort. Salt is able to use dependencies (requisites, in Salt's terms) to make sure the proper config file content is in place before the service starts (or to restart the service if config file changes are detected).
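For completeness, the cmd.run fallback mentioned above could look roughly like this; the state ID and the onchanges requisite are my choices, not part of the original answer:

reload_nginx:
  cmd.run:
    - name: nginx -s reload
    - onchanges:
      - file: /etc/nginx/sites-available/dotmanca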
Apparently, I needed to know more about the require bits. I had them where states should go, and not under the states themselves.
My file should have looked like this:
nginx:
  pkg:
    - installed

nginx_run:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-available/dotmanca
    - require:
      - file: /etc/nginx/sites-enabled/dotmanca
      - file: /etc/nginx/nginx.conf
      - pkg: nginx

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://nginx/nginx.conf
    - user: root
    - group: root
    - mode: 644

/etc/nginx/sites-available/dotmanca:
  file:
    - managed
    - source: salt://nginx/dotmanca.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: nginx

/etc/nginx/sites-enabled/dotmanca:
  file.symlink:
    - target: /etc/nginx/sites-available/dotmanca
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/nginx/sites-available/dotmanca

/etc/nginx/sites-enabled/default:
  file.absent:
    - name: /etc/nginx/sites-enabled/default
    - require:
      - pkg: nginx
I need to run a service; here's the code snippet for that:
/etc/init.d/collect-node:
  file.managed:
    - source: salt://scripts/collect_node.sh.j2
    - template: jinja
    - mode: 755
  service.running:
    - name: collect-node
    - enable: True
    - restart: True
    - watch:
      - file.managed: /etc/collect/node-config.json
      - file.managed: /etc/init.d/collect-node
    - require:
      - service.running: xvfb
      - user.present: collect
The node is managed by Vagrant, so when I vagrant up the node it calls state.highstate, but the service is not running; when I explicitly call salt-call state.highstate in the console, the service starts running.
What might be the problem here? How can I diagnose it? Thanks
The problem was in the dependencies: if some other package, script, or resource is not ready yet, the service silently won't be started.
That's why state.highstate starts the service once everything else is installed.
I have a remote state called logstash_forwarder that is located at https://github.com/saltstack-formulas/logstash_forwarder-formula.git.
I'm using git as a fileserver_backend.
When I run state.highstate, it does not find the state.
When I run state.sls logstash_forwarder, it works.
Why does it not work for state.highstate?
/etc/salt/minion:
master: localhost
id: lemp
file_client: local
state_events: false
environment: development

grains:
  roles:
    - lemp

file_roots:
  base:
    - /srv/salt/base
  development:
    - /srv/salt/development
    - /srv/salt/base
  production:
    - /srv/salt/production
    - /srv/salt/base

pillar_roots:
  development:
    - /srv/pillar/development
  production:
    - /srv/pillar/production

fileserver_backend:
  - roots
  - git

gitfs_provider: gitpython

gitfs_remotes:
  - https://github.com/saltstack-formulas/logstash_forwarder-formula.git
/srv/salt/base/top.sls:
development:
  '*':
    - system
    - util
    - project
    - logstash_forwarder
  'roles:lemp':
    - match: grain
    - php5
    - nginx
    - mysql
    - laravel5_app
Thanks in advance, have a nice day :)
Remove environment: development from your minion config.
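A minimal sketch of the change, assuming nothing else in the minion config needs to move:

# /etc/salt/minion, top of the file after the fix
master: localhost
id: lemp
file_client: local
state_events: false
# environment: development   (removed, per the answer above)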