Standalone masterless Salt minion won't pick up matching pillar - salt-stack

I have a standalone minion installed on the web1 server:
me@web1:~$ hostname
web1
me@web1:~$ sudo salt-call network.get_hostname
local:
    web1
me@web1:~$ cat /etc/salt/minion | egrep -v '^#' | egrep -v '^$'
file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
My pillar is set up something like this:
me@web1:~$ cat /srv/pillar/top.sls
base:
  '*':
    - db
  'web1':
    - env.production
me@web1:~$ cat /srv/pillar/db.sls
postgres:
  use_upstream_repo: true
  version: '9.6'
  pkg: 'postgresql-9.6'
  pkg_client: 'postgresql-client-9.6'
me@web1:~$ cat /srv/pillar/env/production.sls
env: production
But when I use pillar.items, I only see this:
me@web1:~$ sudo salt-call pillar.items
local:
    ----------
    postgres:
        ----------
        use_upstream_repo:
            True
        version:
            9.6
        pkg:
            postgresql-9.6
        pkg_client:
            postgresql-client-9.6
The standalone server seems to only be applying pillars in the '*' match-all section, but not the direct-match 'web1' hostname section. What am I doing wrong here?

The top file matches on the minion_id, which may differ from the hostname (for instance, it may be the FQDN, or something you set manually).
Can you check the content of /etc/salt/minion_id? That is the ID Salt will match against in the top file.
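If /etc/salt/minion_id does turn out to contain the FQDN rather than the bare hostname, one fix is to loosen the match in the top file. The sketch below assumes a minion_id like web1.example.com, which is not confirmed by the question; alternatively, set id: web1 explicitly in /etc/salt/minion.

# /srv/pillar/top.sls -- hypothetical adjustment, assuming the minion_id is an
# FQDN such as web1.example.com rather than plain "web1"
base:
  '*':
    - db
  'web1*':
    - env.production

After either change, sudo salt-call pillar.items should show the env pillar as well.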

Related

How to redirect nginx server to google.com after restarting?

I am stuck at an Ansible HackerRank lab (Fresco Play) that asks to install nginx and postgresql and ensure they are running.
But after finishing the code and running the exam, it checks that the nginx server redirects to google.com after a restart.
Has anyone faced this issue?
Below is my code to install and ensure services are running:
- name: 'To install packages'
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - apt:
        name: "{{item}}"
        state: present
      with_items:
        - nginx
        - postgresql

    - apt: name=nginx state=latest
    - name: start nginx
      service:
        name: nginx
        state: started

    - apt: name=postgresql state=latest
    - name: start postgresql
      service:
        name: postgresql
        state: started
I wrote these in two separate playbooks for now and need help with redirecting nginx to google.com.
You need to write your nginx configuration file (in this case, one that redirects traffic to google.com) and copy it to /etc/nginx/nginx.conf:
- name: write nginx.conf
  template:
    src: <path_to_file>
    dest: /etc/nginx/nginx.conf
After this you should restart the nginx service.
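As a rough sketch of what that could look like end to end (illustrative only: the answer assumed a separate template file, whereas this inlines the configuration with the copy module, and the config simply returns a 301 to google.com):

- name: write redirect config
  copy:
    dest: /etc/nginx/nginx.conf
    content: |
      events {}
      http {
        server {
          listen 80 default_server;
          return 301 https://www.google.com;
        }
      }

- name: restart nginx
  service:
    name: nginx
    state: restarted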
Thanks!
The code below worked for me:
Define the port number and the site you wish to redirect the nginx server to in a .j2 file in the templates folder under your role.
Include a task in the playbook to render the template to /etc/nginx/sites-enabled/default. Include a notify for the handler defined in the handlers folder.
In some cases, if the nginx server doesn't restart, run 'sudo service nginx restart' at the terminal before testing your code.
Ansible-Sibelius (Try it Out- Write a Playbook)
#installing nginx and postgresql
- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx

- name: restart nginx
  service:
    name: nginx
    state: started

- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL

- name: Start PostgreSQL
  service:
    name: postgresql
    state: started

- name: Set the configuration for the template file
  template:
    src: /<path-to-your-roles>/templates/sites-enabled.j2
    dest: /etc/nginx/sites-enabled/default
  notify: restart nginx
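The handler that notify: restart nginx refers to is not shown above; a minimal sketch of what it could look like in the role's handlers/main.yml (the file location follows the standard role layout, and the handler name must match the notify exactly):

- name: restart nginx
  service:
    name: nginx
    state: restarted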

How to get root password in Bitnami Wordpress from kubernetes shell?

I have installed WordPress in Rancher (docker.io/bitnami/wordpress:5.3.2-debian-10-r43). I have to make wp-config.php writable, but I get stuck when I open a shell inside this pod and try to log in as root:
kubectl exec -t -i --namespace=annuaire-p-brqcw annuaire-p-brqcw-wordpress-7ff856cd9f-l9gf7 bash
I cannot log in as root; no password matches the Bitnami WordPress installation.
wordpress@annuaire-p-brqcw-wordpress-7ff856cd9f-l9gf7:/$ su root
Password:
su: Authentication failure
What is the default password, or how do I change it?
I really need your help!
The WordPress container has been migrated to a "non-root" user
approach. Previously the container ran as the root user and the Apache
daemon was started as the daemon user. From now on, both the container
and the Apache daemon run as user 1001. You can revert this behavior
by changing USER 1001 to USER root in the Dockerfile.
No writing permissions will be granted on wp-config.php by default.
This means that the only way to run it as the root user is to create your own Dockerfile and change the user to root.
However, running these containers as root is not recommended for security reasons.
The simplest and most native Kubernetes way to change the file content on the Pod's container file system is to create a ConfigMap object from file using the following command:
$ kubectl create configmap myconfigmap --from-file=foo.txt
$ cat foo.txt
foo test
(Check the ConfigMaps documentation for details on how to update them.)
Then mount the ConfigMap into your container to replace the existing file, as follows
(the example requires some adjustments to work with the WordPress image):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: nginx
      volumeMounts:
        - name: volname1
          mountPath: "/etc/wpconfig.conf"
          readOnly: true
          subPath: foo.txt
  volumes:
    - name: volname1
      configMap:
        name: myconfigmap
In the above example, the file from the ConfigMap's data: section replaces the original /etc/wpconfig.conf file (or creates it if it doesn't exist) in the running container, without any need to build a new image.
$ kubectl exec -ti mypod -- bash
root@mypod:/# ls -lah /etc/wpconfig.conf
-rw-r--r-- 1 root root 9 Jun 4 16:31 /etc/wpconfig.conf
root@mypod:/# cat /etc/wpconfig.conf
foo test
Note that the file permissions are 644, which is enough for the file to be readable by a non-root user.
By the way, the Bitnami Helm chart also uses this approach: it relies on an existing ConfigMap in your cluster for adding a custom .htaccess, and on a PersistentVolumeClaim for mounting the WordPress data folder.
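To adapt the example to the WordPress pod from the question, a mount could look roughly like the sketch below. This is assumption-heavy: the mountPath is where Bitnami images typically keep wp-config.php and the ConfigMap name is made up, so verify both against your image before using it.

apiVersion: v1
kind: Pod
metadata:
  name: wordpress-custom-config
spec:
  containers:
    - name: wordpress
      image: docker.io/bitnami/wordpress:5.3.2-debian-10-r43
      volumeMounts:
        - name: wp-config
          mountPath: /opt/bitnami/wordpress/wp-config.php   # assumed path, check your image
          subPath: wp-config.php
  volumes:
    - name: wp-config
      configMap:
        name: wp-config   # e.g. created with: kubectl create configmap wp-config --from-file=wp-config.php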

SaltStack watch file restart service won't work

I want SaltStack to reload or restart the service when the file 000-default.conf is changed, but when I manually edit the file on my Debian 9 system via SSH, nothing happens.
Can anyone help?
The configuration looks like this:
apache2:
  pkg.installed:
    - name: apache2
  service.running:
    - name: apache2
    - enable: True
    - reload: True
    - require:
      - pkg: apache2
    - watch:
      - file: /etc/apache2/sites-available/000-default-conf
      - file: /etc/apache2/sites-available/*
      - pkg: apache2

/etc/apache2/sites-available/000-default-conf:
  file.managed:
    - name: /etc/apache2/sites-available/000-default.conf
    - user: www-data
    - group: www-data
    - mode: 644
    - source: salt://apache-wordpress/files/000-default.conf
    - require:
      - pkg: apache2

a2enmod_rewrite:
  cmd.run:
    - name: a2enmod rewrite
    - require:
      - service: apache2
Manually made changes won't restart the service. As mentioned in the Salt documentation:
watch can be used with service.running to restart a service when
another state changes (for example, a file.managed state that creates the
service's config file).
(https://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html)
What you need are beacons and reactors; have a look at the inotify beacon.
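A minimal sketch of such a beacon on the minion follows (illustrative only: the pyinotify library must be installed on the minion, and the exact option layout varies slightly between Salt versions):

# /etc/salt/minion.d/beacons.conf
beacons:
  inotify:
    - files:
        /etc/apache2/sites-available/000-default.conf:
          mask:
            - modify
    - disable_during_state_run: True

The beacon only emits events onto the event bus; a reactor still has to map those events to a state run that reloads apache2.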

Error installing salt-minion using salt ssh

I am trying to install a Salt minion from the master using salt-ssh.
This is my sls file:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error
          ID: salt-minion
    Function: service.running
      Result: False
     Comment: One or more requisite failed: install_minion./etc/salt/minion
     Started:
    Duration:
     Changes:
Strangely it works fine when I run it for the second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It will connect to a machine using SSH, run a bootstrap method, and register the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
I have a similar setup running in my Salt/Consul example here. This was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minions using SSH.
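For reference, saltify is driven by a cloud provider and profile definition rather than a roster. The sketch below is illustrative only; the file names and profile values are assumptions, reusing the Vagrant host from the roster above:

# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
vagrant-minion:
  provider: my-saltify
  ssh_host: 192.168.33.103
  ssh_username: vagrant
  password: vagrant

It would then be invoked with something like salt-cloud -p vagrant-minion minion3.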
Solved it.
The state file should be like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This is working for me, though I am not clear on the reason.

Nginx cannot restart via Ansible

I have a task in a playbook that tries to restart nginx via a handler as per usual:
- name: run migrations
  command: bash -lc "some command"
  notify: restart nginx
The playbook however breaks on this error:
NOTIFIED: [deploy | restart nginx] ********************************************
failed: [REDACTED] => {"failed": true}
msg: failure 1 running systemctl show for 'nginx.service': Failed to get D-Bus connection: No connection to service manager.
The handler is standard:
- name: restart nginx
  service: name=nginx state=restarted enabled=yes
And the way I've set up nginx is not out of the ordinary either:
- name: install nginx
  apt: name=nginx state=present
  sudo: yes

- name: copy nginx.conf to the server
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  sudo: yes

- name: delete default virtualhost
  file: path=/etc/nginx/sites-enabled/default state=absent
  sudo: yes

- name: add mysite site-available
  template: src=mysite.conf.j2 dest=/etc/nginx/sites-available/mysite.conf
  sudo: yes

- name: link mysite site-enabled
  file: path=/etc/nginx/sites-enabled/mysite src=/etc/nginx/sites-available/mysite.conf state=link
  sudo: yes
This is on a ubuntu-14-04-x64 VPS.
The handler was:
- name: restart nginx
  service: name=nginx state=restarted enabled=yes
It seems that the state and enabled flags cannot both be present. By trimming the above to the following, it worked.
- name: restart nginx
  service: name=nginx state=restarted
Why this is, and why it started breaking suddenly, I do not know.
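A plausible (unconfirmed) explanation is that enabled=yes makes the service module also query the init system's enablement state, and on this host that query went through systemctl even though Ubuntu 14.04 uses Upstart, hence the D-Bus error. If the service should still be enabled at boot, one workaround is to split that into a separate, non-handler task, for example:

- name: enable nginx at boot
  service: name=nginx enabled=yes
  sudo: yes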
