How can I install a particular version of Nginx using an Ansible playbook?

My goal is to write an Ansible playbook that installs nginx-1.18.0.
But I couldn't install that specific version of Nginx using the playbook below; I get the error "Failed to install some of the specified packages".
Could you please help me with this?
Thanks in advance.
---
- hosts: localhost
  become: yes
  tasks:
    - name: To install Nginx
      yum:
        name: nginx-1.18.0
        state: present
    - name: To enable and start Nginx
      service:
        name: nginx
        state: started
        enabled: yes
Output:
TASK [To install Nginx] *******************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failures": ["nginx-1.18.0 All matches were filtered out by modular filtering for argument: nginx-1.18.0"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
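The "All matches were filtered out by modular filtering" message comes from dnf on RHEL/CentOS 8, where nginx ships as an AppStream module and versioned package names like nginx-1.18.0 stay hidden until a matching module stream is enabled. A minimal sketch of one way around this, assuming a RHEL/CentOS 8 target on which an nginx:1.18 stream exists (verify with dnf module list nginx):
---
- hosts: localhost
  become: yes
  tasks:
    - name: Enable the nginx 1.18 AppStream module and install its default profile
      dnf:
        name: '@nginx:1.18'   # '@module:stream' syntax; the exact stream name is an assumption
        state: present
    - name: Enable and start Nginx
      service:
        name: nginx
        state: started
        enabled: yes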

Related

Can't reach builds.midonet.org when installing Eucalyptus

As the subject says, I am doing the local install by running bash <(curl -Ls https://get.eucalyptus.cloud), but I am getting the following errors:
[Ansible] Installing Eucalyptus ansible package
Failed to set locale, defaulting to C
http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: [Errno 12] Timeout on http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: (28, 'Connection timed out after 30000 milliseconds')
Trying other mirror.
http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: [Errno 12] Timeout on http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: (28, 'Connection timed out after 30002 milliseconds')
Trying other mirror.
http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: [Errno 12] Timeout on http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
It says it is trying another mirror, but it doesn't appear to do so.
I tried to ping the domain but got no response, and navigating to the root domain in a web browser shows nothing. Is this something on my side, or is the host really down?
This is my first time looking at Eucalyptus.cloud.
If you have a webserver lying around, you can host the packages yourself:
---
- name: make sure destination dir exists
  file:
    path: '/tmp/midonet/'
    state: directory
  tags:
    - localinstall

- name: download a copy of the packages that should have been in the midonet repo
  get_url:
    url: 'http://pxe.server.lan/packages/midonet/{{ item }}'
    dest: '/tmp/midonet/'
  with_items:
    - libzookeeper-3.4.8-4.x86_64.rpm
    - libzookeeper-devel-3.4.8-4.x86_64.rpm
    - lldpd-0.9.5-2.1.x86_64.rpm
    - lldpd-debuginfo-0.9.5-2.1.x86_64.rpm
    - lldpd-devel-0.9.5-2.1.x86_64.rpm
    - midolman-5.2.2-1.0.el7.noarch.rpm
    - midonet-cluster-5.2.2-1.0.el7.noarch.rpm
    - midonet-selinux-1.0-2.el7.centos.noarch.rpm
    - midonet-tools-5.2.2-1.0.el7.noarch.rpm
    - python-midonetclient-5.2.2-1.0.el7.noarch.rpm
    - python-zookeeper-3.4.8-4.x86_64.rpm
    - quagga-0.99.23-0.el7.midokura.x86_64.rpm
    - zkdump-1.05-1.noarch.rpm
    - zkpython-3.4.5-2.x86_64.rpm
    - zookeeper-3.4.8-4.x86_64.rpm
    - zookeeper-debuginfo-3.4.8-4.x86_64.rpm
    - zookeeper-lib-3.4.5-1.x86_64.rpm
  register: lidownlowd
  retries: 3
  delay: 3
  until: lidownlowd is not failed
  tags:
    - localinstall

- name: localinstall all packages from midonet repo
  shell: yum -y localinstall *.rpm
  args:
    chdir: '/tmp/midonet/'
Please refer to unable to install eucalyptus in centos 7.9.
I'll give it a go and let you know the outcome.

Spinnaker - Error fetching artifactory names: 500 Internal Server Error

I want to use Spinnaker with JFrog Artifactory.
I've followed both of these documents:
https://www.spinnaker.io/guides/user/pipeline/triggers-with-artifactsrewrite/artifactory/
http://theblasfrompas.blogspot.com/2019/06/deploy-artifacts-from-jfrog-artifactory.html
hal config
repository:
  artifactory:
    enabled: true
    searches:
      - name: spring-artifactory
        permissions: {}
        baseUrl: https://xxx.jfrog.io/artifactory
        repo: libs-snapshot-local
        groupId: com.example
        repoType: maven
        username: usernamexxx
        password: passwordxxx
maven:
  enabled: true
  accounts:
    - name: spring-artifactory-maven
      repositoryUrl: https://xxx.jfrog.io/artifactory/libs-snapshot-local/
Once deployed, when I add an automated trigger of type Artifactory, I immediately get the following error in red:
Error fetching artifactory names: 500 Internal Server Error
http://192.168.39.83:30808/artifactory/names 500 status: 500, error:
"Internal Server Error", message: "No value present"}
Spinnaker is running on minikube, I changed storage, hypervisor, version, ...
Please advise.
Thanks.
I changed local-artifactory to local-artifactory and it worked.
I can now retrieve the builds in Artifactory.

Need Ansible playbook inorder to calculate number of users currently login into VPN

I am writing an Ansible playbook to count the number of users currently logged in to a VPN, using the Junos modules suggested by our network team. I have installed the software below on my RHEL 7 machine, which has Ansible 2.9 installed.
Junos Ansible Requirements
===============================
--> Install dependencies
# pip install ncclient
# pip install junos-eznc
--> Install the Juniper.junos Galaxy role
# ansible-galaxy install Juniper.junos
--> Have NETCONF enabled on Juniper devices over SSH
# set system services netconf ssh
--> (Optional) Python library for the Juniper console
# pip install junos-netconify
Whenever I run a playbook, I get the error below.
Playbook:
---
- name: Get device uptime
  hosts:
    - dc1
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars_prompt:
    - name: username
      prompt: Junos Username
      private: no
    - name: password
      prompt: Junos Password
      private: yes
  tasks:
    - name: get uptime using galaxy module
      junos_command:
        commands: show system uptime
      register: uptime
    - name: display uptimes
      debug: var=uptime
Error:
PLAY [Get device uptime] **************************************************************************************************************
TASK [get uptime using galaxy module] *************************************************************************************************
fatal: [172.16.130.1]: FAILED! => {"changed": false, "msg": "invalid rpc for running in check_mode"}
PLAY RECAP ****************************************************************************************************************************
172.16.130.1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I was just exploring Ansible networking commands and got the above error. Please suggest what configuration is required to work with Junos.
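One likely cause (an assumption, since the exact command line isn't shown): the "invalid rpc for running in check_mode" error appears when the playbook is launched with --check, which junos_command cannot honor for arbitrary RPCs. A minimal sketch of forcing the task to run normally even under check mode:
    - name: get uptime using galaxy module
      junos_command:
        commands: show system uptime
      register: uptime
      check_mode: no   # run this task for real even when the play runs with --check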
Please find below a playbook that checks the number of users currently logged in to the VPN:
---
- name: Get system users currently logged in
  hosts: all
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  tasks:
    - name: Retrieve facts from device running Junos OS
      juniper_junos_facts:
    - name: Print version
      debug:
        var: junos.fqdn
    - name: Run RPC Commands
      juniper_junos_command:
        commands: "show security dynamic-vpn users"
        format: text
        dest: "{{ junos.fqdn }}.output"
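The playbook above writes the raw text to a file but never actually counts the sessions. A minimal sketch of counting them inline; the stdout key on the registered result and the per-user "User:" lines in the CLI output are assumptions about the Juniper.junos module and the Junos text format, so adjust to what your device actually returns:
    - name: Run RPC command and keep the output
      juniper_junos_command:
        commands: "show security dynamic-vpn users"
        format: text
      register: vpn_users

    - name: Count users currently logged in to the VPN
      debug:
        msg: "{{ vpn_users.stdout | regex_findall('User:') | length }} VPN users currently logged in"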

Nginx fails to restart via Ansible

I have a task in a playbook that tries to restart nginx via a handler, as usual:
- name: restart nginx
  service: name=nginx state=restarted
It gives me the following error:
RUNNING HANDLER [webtier : restart nginx] **************************************
fatal: [vagrant]: FAILED! => {"changed": false, "msg": "Unable to restart service nginx: Failed to restart nginx.service: Connection timed out\nSee system logs and 'systemctl status nginx.service' for details.\n"}
Until recently this worked with sudo: yes, and the above error did not appear.
But this time, adding sudo: yes:
- name: restart nginx
  service: name=nginx state=restarted
  sudo: yes
gives the following error:
ERROR! conflicting action statements: service, sudo
The error appears to be in '/Users/mac/Documents/GitHub/petalandstem/ansible/roles/webtier/handlers/main.yml': line 28, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: restart nginx
^ here
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
How can I restart nginx successfully?
The correct syntax is either INI-style:
- name: restart nginx
  service: name=nginx state=restarted
  become: true
  become_method: sudo
or YAML:
- name: restart nginx
  service:
    name: nginx
    state: restarted
  become: true
  become_method: sudo
See Understanding privilege escalation: become.
Ansible 1.x: sudo: yes
Ansible 2.x: become: yes
That's because the become_method is a choice now but the default is "sudo".
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo),
valid choices: [ sudo | su | pbrun | pfexec | doas | dzdo | ksu | runas | machinectl ]
I was facing the same issue.
This happened to me because httpd was already running on port 80,
so I had to stop the httpd service:
$ service httpd stop
then try the ansible-playbook again.
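The same fix expressed as an Ansible task, assuming httpd really is the service holding port 80 on your target:
- name: stop and disable httpd so nginx can bind to port 80
  service:
    name: httpd    # assumption: httpd is the conflicting service
    state: stopped
    enabled: no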
First, don't edit the files in sites-enabled; create links there and edit the files in sites-available.
For me the problem was in the sites-enabled folder:
when you delete the default site from sites-available, you also need to delete its link from sites-enabled.
After deleting the default link from sites-enabled, it worked for me.
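For the Debian/Ubuntu layout this answer assumes, removing the dangling link from a playbook could look like:
- name: remove the dangling default-site symlink from sites-enabled
  file:
    path: /etc/nginx/sites-enabled/default   # path is an assumption; match your layout
    state: absent
  notify: restart nginx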

SSH connectivity issues with ntc-ansible modules

I am trying to use the ntc-ansible modules with Ansible running on Ubuntu (WSL). I have ssh connectivity to my remote device (Cisco 2960X), and I can run Ansible playbooks against the same remote switch using the built-in Ansible networking modules (ios_command) and it works fine.
Issue:
When I try to run any of the ntc-ansible modules, it fails, unable to connect to the device. It is probably something simple, but I have hit a wall; there is something I am missing about how to use the ntc-ansible modules. Ansible is seeing the modules, as I can view their docs, which was suggested as a test in the readme.
I have ntc-ansible module installed here: /home/melshman/.ansible/plugins/modules/ntc-ansible
I am running my playbooks from here: ~/projects/ansible/
The first time I ran the playbook with the ntc-ansible modules it failed, and based on the error message and some research I installed sshpass (sudo apt-get install sshpass). But I am still having ssh problems using ntc-ansible… (playbook and traceback below)
I hear folks talking about an index file, but I can't find that file. Where does it live and what do I need to do with it?
What is my connection supposed to be set to? local? ssh? netmiko_ssh?
What should I be using for platform? cisco_ios? cisco_ios_ssh?
I appreciate any help I can get. I have been running in circles for hours.
Ansible Version Info:
VTMNB17024:~/projects/ansible $ ansible --version
ansible 2.5.3
config file = /home/melshman/projects/ansible/ansible.cfg
configured module search path = [u'/home/melshman/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Working playbook (ios_command); note: ansible_ssh_pass and ansible_user are in group vars:
- name: Test Net Automation
  hosts: ctil-ios-upgrade
  connection: local
  gather_facts: no
  tasks:
    - name: Grab run config
      ios_command:
        commands:
          - show run
      register: config
    - name: Create backup of running configuration
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "backups/show_run_{{ inventory_hostname }}.txt"
Playbook (not working) using the ntc-ansible module (note: username and password are defined in group vars):
- name: Cisco IOS Automation
  hosts: ctil-ios-upgrade
  connection: local
  gather_facts: no
  tasks:
    - name: GET UPTIME
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version | inc uptime'
        host: "{{ inventory_hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        use_templates: True
        template_dir: /home/melshman/.ansible/plugins/modules/ntc-ansible/ntc-templates/templates
Here is the traceback I get when the error occurs:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: netmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios VTgroup_SW:22
fatal: [VTgroup_SW]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_RJRY9m/ansible_module_ntc_save_config.py\", line 279, in \n main()\n File \"/tmp/ansible_RJRY9m/ansible_module_ntc_save_config.py\", line 251, in main\n device = ntc_device(device_type, host, username, password, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/__init__.py\", line 35, in ntc_device\n return device_class(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/devices/ios_device.py\", line 39, in __init__\n self.open()\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/devices/ios_device.py\", line 55, in open\n verbose=False)\n File \"build/bdist.linux-x86_64/egg/netmiko/ssh_dispatcher.py\", line 178, in ConnectHandler\n File \"build/bdist.linux-x86_64/egg/netmiko/base_connection.py\", line 207, in __init__\n File \"build/bdist.linux-x86_64/egg/netmiko/base_connection.py\", line 693, in establish_connection\nnetmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios VTgroup_SW:22\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is a working solution using ntc_show_command to a Cisco IOS device.
- name: Cisco IOS Automation
  hosts: pynet-rtr1
  connection: local
  gather_facts: no
  tasks:
    - name: GET UPTIME
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version'
        host: "{{ ansible_host }}"
        username: "{{ ansible_user }}"
        password: "{{ ansible_ssh_pass }}"
        use_templates: True
        template_dir: '/home/kbyers/ntc-templates/templates'
If you are going to use ntc-templates, I probably would not include the '| inc uptime' in the 'show version'. In other words, let TextFSM convert the output to structured data first and then grab the uptime from that structured data.
I modified inventory_hostname to ansible_host to be consistent with my inventory format (my inventory_hostname doesn't actually resolve in DNS).
I modified username and password to 'ansible_user' and 'ansible_ssh_pass' to be consistent with my inventory and also to be more consistent with Ansible 2.5/2.6 variable naming.
On your above issue, your exception message does not match your playbook (i.e. are you sure that is the exception you get for that playbook?).
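A minimal sketch of grabbing the uptime from the structured data; the response return key and the uptime field emitted by the cisco_ios_show_version template are assumptions about ntc-ansible/ntc-templates, so verify them against your registered output:
    - name: GET VERSION (TextFSM-parsed)
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version'
        host: "{{ ansible_host }}"
        username: "{{ ansible_user }}"
        password: "{{ ansible_ssh_pass }}"
        use_templates: True
        template_dir: '/home/kbyers/ntc-templates/templates'
      register: version_out

    - name: Print just the uptime from the structured records
      debug:
        var: version_out.response[0].uptime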
Here is my inventory file (I simplified it to remove some unnecessary devices and to hide confidential information):
[all:vars]
ansible_connection=local
ansible_python_interpreter=/home/kbyers/VENV/ansible/bin/python
ansible_user=user
ansible_ssh_pass=password
[local]
localhost ansible_connection=local
[cisco]
pynet-rtr1 ansible_host=cisco1.domain.com
pynet-rtr2 ansible_host=cisco2.domain.com
