I have an Ansible playbook which worked on a different machine.
But here it fails:
fatal: [coffee-and-sugar.club]: FAILED! => {"changed": false, "msg": "No package matching 'nginx' is available"}
---
- hosts: all
  tasks:
    - name: ensure nginx is at the latest version
      apt: name=nginx state=latest
    - name: start nginx
      service:
        name: nginx
        state: started
What could be wrong?
guettli's answer is correct but you can also make it shorter, calling the apt module only once:
---
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      apt:
        name: nginx
        state: latest
        update_cache: yes
        upgrade: yes
    - name: start nginx
      service:
        name: nginx
        state: started
If the machine was set up just a few seconds ago, then you need to run apt update at least once.
You can do it like this via Ansible:
---
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      apt:
        update_cache: yes
        upgrade: yes
    - name: ensure nginx is at the latest version
      apt: name=nginx state=latest
    - name: start nginx
      service:
        name: nginx
        state: started
Related
I am stuck at an Ansible HackerRank lab (Fresco Play) that asks me to install nginx and postgresql and ensure they are running.
But after finishing the code and running the exam, it checks that the nginx server redirects to google.com after a restart.
Has anyone faced this issue?
Below is my code to install the packages and ensure the services are running:
- name: 'To install packages'
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - apt:
        name: "{{item}}"
        state: present
      with_items:
        - nginx
        - postgresql

    - apt: name=nginx state=latest
    - name: start nginx
      service:
        name: nginx
        state: started

    - apt: name=postgresql state=latest
    - name: start postgresql
      service:
        name: postgresql
        state: started
I wrote these in two separate playbooks for now, and I need help making nginx redirect to google.com.
You need to write your nginx configuration file (in this case, one that redirects traffic to Google) and copy it to /etc/nginx/nginx.conf.
- name: write nginx.conf
  template:
    src: <path_to_file>
    dest: /etc/nginx/nginx.conf
After this you should restart the nginx service.
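As a minimal sketch (assuming you add a notify line to the template task above and define a matching handler in the same play; the handler shown here is an illustration, not part of the original answer):

handlers:                    # at play level, next to tasks:
  - name: restart nginx
    service:
      name: nginx
      state: restarted       # runs only when a notifying task reports a change

The template task would then carry notify: restart nginx, so the restart happens only when the file actually changes.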
Thanks!
The code below worked for me:
Define your port number and the site you wish to redirect the nginx server to in a .j2 file in the templates folder under your role.
Include a task in the playbook to copy the template to /etc/nginx/sites-enabled/default. Include a notify for the handler defined in the handlers folder (a sketch of such a handler follows the tasks below).
In some cases, if the nginx server doesn't restart, run 'sudo service nginx restart' in the terminal before testing your code.
Ansible-Sibelius (Try it Out- Write a Playbook)
# installing nginx and postgresql

- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx

- name: restart nginx
  service:
    name: nginx
    state: started

- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL

- name: Start PostgreSQL
  service:
    name: postgresql
    state: started

- name: Set the configuration for the template file
  template:
    src: /<path-to-your-roles>/templates/sites-enabled.j2
    dest: /etc/nginx/sites-enabled/default
  notify: restart nginx
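The handler referenced by notify: restart nginx is not shown above. A minimal sketch of what it could look like in the role's handlers/main.yml (the file location and handler body are assumptions for illustration, not part of the original answer):

- name: restart nginx
  service:
    name: nginx
    state: restarted    # restarts nginx whenever the template task reports a change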
How to fix Error: must either provide a name or specify --generate-name in Helm
I created a sample Helm chart named mychart and wrote deployment.yaml, service.yaml, and ingress.yaml for an nginx service. After that I ran a command like $ helm install mychart.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - name: main
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
deployment.yaml
apiVersion: extensions/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.13
          ports:
            - containerPort: 80
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    http.port: "443"
spec:
  backend:
    serviceName: nginx
    servicePort: 80
Expected output:
.....
status: DEPLOYED
Just add --generate-name at the end of the helm command.
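For example (assuming the chart directory is ./mychart, as in the question):
helm install ./mychart --generate-name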
According to the helm documentation for v3.x
helm install --help
Usage:
helm install [NAME] [CHART] [flags]
you want to use:
helm install "your release name" chart
For example:
# helm repo add stable https://kubernetes-charts.storage.googleapis.com/
# helm install wordpress-helm-testing stable/wordpress
NAME: wordpress-helm-testing
LAST DEPLOYED: 2019-10-07 15:56:21.205156 -0700 PDT m=+1.763748029
NAMESPACE: default
STATUS: deployed
NOTES:
1. Get the WordPress URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w wordpress-helm-testing'
export SERVICE_IP=$(kubectl get svc --namespace default wordpress-helm-testing --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "WordPress URL: http://$SERVICE_IP/"
echo "WordPress Admin URL: http://$SERVICE_IP/admin"
2. Login with the following credentials to see your blog
echo Username: user
echo Password: $(kubectl get secret --namespace default wordpress-helm-testing -o jsonpath="{.data.wordpress-password}" | base64 --decode)
# helm list
NAME                    NAMESPACE  REVISION  UPDATED                               STATUS    CHART
wordpress-helm-testing  default    1         2019-10-07 15:56:21.205156 -0700 PDT  deployed  wordpress-7.3.9
This is a better operational approach since it eliminates randomness in your release names. You might want to use something like a user name or anything that makes it unique and adds meaning to the release other than the GUID the --generate-name option will give you.
In helm v3 you can use either:
helm install [NAME] [CHART]
or:
helm install [CHART] --generate-name
Examples:
helm install reloader stakater/reloader
helm install stakater/reloader --generate-name
From the help manual:
helm install --help
Usage:
helm install [NAME] [CHART] [flags]
Flags:
-g, --generate-name generate the name (and omit the NAME parameter)
Assuming the chart is in the current directory:
helm install some-name .
Output:
NAME: some-name
LAST DEPLOYED: Sun Jan 5 21:03:25 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
The easiest way to fix this is to append --generate-name to the command used to install the Helm chart.
Add a release name:
helm install test --dry-run --debug .\mychart\
Here, test is the release name.
I want SaltStack to reload or restart the service when the file 000-default-conf is changed, but when I manually edit the file on my Debian 9 system via SSH, nothing happens.
Can anyone help?
The configuration looks like this:
apache2:
  pkg.installed:
    - name: apache2
  service.running:
    - name: apache2
    - enable: True
    - reload: True
    - require:
      - pkg: apache2
    - watch:
      - file: /etc/apache2/sites-available/000-default-conf
      - file: /etc/apache2/sites-available/*
      - pkg: apache2

/etc/apache2/sites-available/000-default-conf:
  file.managed:
    - name: /etc/apache2/sites-available/000-default.conf
    - user: www-data
    - group: www-data
    - mode: 644
    - source: salt://apache-wordpress/files/000-default.conf
    - require:
      - pkg: apache2

a2enmod_rewrite:
  cmd.run:
    - name: a2enmod rewrite
    - require:
      - service: apache2
Manually made changes won't restart the service, as mentioned in the Salt documentation:
watch can be used with service.running to restart a service when another state changes (example: a file.managed state that creates the service's config file).
(https://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html)
What you need is beacons and reactors; have a look at the inotify beacon.
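A minimal sketch of how that could fit together (the file names, paths, and reactor SLS below are assumptions for illustration, not a definitive setup): an inotify beacon on the minion watches the config file, and a reactor on the master restarts apache2 when the beacon event arrives.

# Minion config (e.g. /etc/salt/minion.d/beacons.conf): emit an event when the file is modified
beacons:
  inotify:
    - files:
        /etc/apache2/sites-available/000-default.conf:
          mask:
            - modify
    - disable_during_state_run: True

# Master config: map the beacon event to a reactor SLS
reactor:
  - 'salt/beacon/*/inotify//etc/apache2/sites-available/000-default.conf':
    - /srv/reactor/restart_apache.sls

# /srv/reactor/restart_apache.sls: restart apache2 on the minion that sent the event
restart_apache:
  local.service.restart:
    - tgt: {{ data['id'] }}
    - arg:
      - apache2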
I am trying to install a Salt minion from the master using salt-ssh.
This is my sls file
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error
ID: salt-minion
Function: service.running
Result: False
Comment: One or more requisite failed: install_minion./etc/salt/minion
Started:
Duration:
Changes:
Strangely, it works fine when I run it a second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It will connect to a machine using SSH, run a bootstrap method, and register the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
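A minimal sketch of what that could look like with salt-cloud (the provider and profile names below are made up, and the host and credentials are copied from the roster in the question; treat this as an illustration, not a definitive configuration):

# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
saltify-vagrant:
  provider: my-saltify
  ssh_host: 192.168.33.103
  ssh_username: vagrant
  password: vagrant

You would then bootstrap the host with something like salt-cloud -p saltify-vagrant minion3.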
I have a similar setup running in my Salt/Consul example here. This was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minion using ssh.
Solved it.
The state file should be like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This is working for me, though I am not clear on the reason.
I have a task in a playbook that tries to restart nginx via a handler as per usual:
- name: run migrations
  command: bash -lc "some command"
  notify: restart nginx
The playbook however breaks on this error:
NOTIFIED: [deploy | restart nginx] ********************************************
failed: [REDACTED] => {"failed": true}
msg: failure 1 running systemctl show for 'nginx.service': Failed to get D-Bus connection: No connection to service manager.
The handler is standard:
- name: restart nginx
  service: name=nginx state=restarted enabled=yes
And the way that I've setup nginx is not out of the ordinary as well:
- name: install nginx
  apt: name=nginx state=present
  sudo: yes

- name: copy nginx.conf to the server
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  sudo: yes

- name: delete default virtualhost
  file: path=/etc/nginx/sites-enabled/default state=absent
  sudo: yes

- name: add mysite site-available
  template: src=mysite.conf.j2 dest=/etc/nginx/sites-available/mysite.conf
  sudo: yes

- name: link mysite site-enabled
  file: path=/etc/nginx/sites-enabled/mysite src=/etc/nginx/sites-available/mysite.conf state=link
  sudo: yes
This is on a ubuntu-14-04-x64 VPS.
The handler was:
- name: restart nginx
  service: name=nginx state=restarted enabled=yes
It seems that the state and enabled flags cannot both be present. By trimming the above to the following, it worked.
- name: restart nginx
  service: name=nginx state=restarted
Why this is, and why it started breaking suddenly, I do not know.