(☺ First time posting here; I have huge problems with formatting, so sorry! I really don't understand how to get the code into the grey boxes.)
Hello, so I am supposed to set up a server using Ansible for a high school graduation project. All I have to do is basically install a few programs like htop, httpd, ... and finally set up a WordPress server. I am following this guide.
The problem is that this code:
---
# tasks file for wp-dependencies
- name: Update packages (this is equivalent to yum update -y)
  yum: name=* state=latest

- name: Install dependencies for WordPress
  yum:
    name:
      - php
      - php-mysql
      - MySQL-python
    state: present

- name: Ensure MariaDB is running (and enable it at boot)
  service: name=mariadb state=started enabled=yes

- name: Copy ~/.my.cnf to nodes
  copy: src=.my.cnf dest=/root/.my.cnf

- name: Create MariaDB database
  mysql_db: name={{ wp_mysql_db }} state=present

- name: Create MariaDB username and password
  mysql_user: login_user=root login_password=root name = {{ wp_mysql_user }} password = {{ wp_mysql_password }}
              priv=*.*:ALL
Results in this error:
TASK [wp-dependencies : Create MariaDB username and password] ******************************************
fatal: [192.168.56.101]: FAILED! => {"changed": false, "msg": "missing required arguments: user"}
to retry, use: --limit @/home/Admin/wordpress.retry
Could you tell me what the problem is?
Your task is this:
- name: Create MariaDB username and password
  mysql_user: login_user=root login_password=root name = {{ wp_mysql_user }} password = {{ wp_mysql_password }}
              priv=*.*:ALL
You have spaces between name and password and the values they should take, so the module never receives those arguments. For safe variable handling you should also place quotation marks (") around the variables.
Try this:
- name: Create MariaDB username and password
  mysql_user: login_user=root login_password=root name="{{ wp_mysql_user }}" password="{{ wp_mysql_password }}" priv=*.*:ALL
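As a side note, writing the module arguments in YAML dictionary form (instead of key=value on one line) avoids this kind of parsing problem entirely; a minimal sketch using the same variables:

- name: Create MariaDB username and password
  mysql_user:
    login_user: root
    login_password: root
    name: "{{ wp_mysql_user }}"
    password: "{{ wp_mysql_password }}"
    priv: "*.*:ALL"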
I'm using an Ansible playbook (Ansible 2.9) to install WordPress using the wp-cli tool.
Here's the playbook:
- name: Create WordPress database
  mysql_db: name="{{ db_name }}"
            state=present
            login_user=root
            login_password="{{ mysql_root_password }}"

- name: Create WordPress DB user and grant permissions to WordPress DB
  mysql_user: name="{{ db_user }}"
              password="{{ db_pwd }}"
              priv="{{ db_name }}.*:ALL"
              state=present
              login_user="root"
              login_password="{{ mysql_root_password }}"

- name: Is WordPress downloaded?
  stat: path="/var/www/{{ domain_name }}/html/index.php"
  register: wp_dir

- name: Download WordPress
  command: wp core download
  args:
    chdir: "/var/www/{{ domain_name }}/html/"
  remote_user: "{{ web_user }}"
  when: wp_dir.stat.isdir is not defined

- name: Configure WordPress
  command: wp core config
           --path="/var/www/{{ domain_name }}/html"
           --dbname="{{ db_name }}"
           --dbuser="{{ db_user }}"
           --dbpass="{{ db_pwd }}"
           --dbprefix="{{ db_prefix }}"
  remote_user: "{{ web_user }}"
  when: wp_dir.stat.isdir is not defined

- name: Is WordPress installed?
  command: wp core is-installed
  args:
    chdir: "/var/www/{{ domain_name }}/html/"
  register: wordpress_is_installed
  ignore_errors: True
  remote_user: "{{ web_user }}"

- name: Install WordPress tables
  command: wp core install
           --url="{{ wp_home_url }}"
           --title="{{ wp_site_title }}"
           --admin_user="{{ wp_admin_user }}"
           --admin_password="{{ wp_admin_pwd }}"
           --admin_email="{{ wp_admin_email }}"
  args:
    chdir: "/var/www/{{ domain_name }}/html/"
  when: wordpress_is_installed|failed
  remote_user: "{{ web_user }}"
At the "Download WordPress" task, a fatal error shows up:
"Error: YIKES! It looks like you're running this as root. You probably meant to run this as the user that your WordPress installation exists under."
I run the playbook as a sudo user ("ansible_user" in the hosts file), and I have set up an additional user to manage the WordPress setup (remote_user: "{{ web_user }}").
Any help would be much appreciated!
In the tasks you need to use become and become_user instead of remote_user, as below. (remote_user only controls which account Ansible logs in with over SSH; become/become_user switches to the other account after connecting.)
- name: Download WordPress
  command: wp core download
  args:
    chdir: "/var/www/{{ domain_name }}/html/"
  become: yes
  become_user: "{{ web_user }}"
  when: wp_dir.stat.isdir is not defined
Now a different error is showing up when running the same code:
FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of '/var/tmp/ansible-tmp-1613648876.307028-8235-221563540981220/': Operation not permitted\nchown: changing ownership of '/var/tmp/ansible-tmp-1613648876.307028-8235-221563540981220/AnsiballZ_command.py': Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
I updated Ansible to the latest version available (2.10).
The only solution I've found so far is adding allow_world_readable_tmpfiles = Yes to the ansible.cfg file...
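For reference, that setting goes under the [defaults] section of ansible.cfg, roughly like this (a sketch; the documentation page linked in the error also suggests installing the acl package on the managed host so Ansible can use setfacl on its temporary files, which is generally the cleaner route):

[defaults]
allow_world_readable_tmpfiles = True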
Any ideas?
Thanks
I am currently trying to deploy Log-rhythm into our environment, which consists of 100+ servers, with the help of SaltStack.
While I am able to copy files over to a Windows minion using file.managed, I am having difficulty getting the IP address of the minion server and adding it both to the .ini file and to the cmd.run command.
I would like to be able to do this for each minion that is connected to Salt.
When running salt -G 'roles:logging' state.apply, I get the following error:
Rendering SLS 'base:pacakage-logrhythm' failed: Jinja variable 'dict object' has no attribute 'fqdn_ip4':
UPDATE:
I was able to resolve the issue within the ini files by placing the following:
ClientAddress={{ grains['fqdn_ip4'][0] }}
I am currently having issues passing grains into the cmd.run section of the state:
create_dir:
  file.directory:
    - name: C:\logrhythm

/srv/salt/logrhythm/proxy1.ini:
  file.managed:
    - source: salt://logrhythm/proxy1.ini
    - name: c:\logrhythm\proxy1.ini
    - template: jinja

/srv/salt/logrhythm/proxy2.ini:
  file.managed:
    - source: salt://logrhythm/proxy2.ini
    - name: c:\logrhythm\proxy2.ini
    - template: jinja

LRS_File:
  file.managed:
    - name: c:\logrhythm\LRSystemMonitor_64_7.4.2.8003.exe
    - source: salt://logrhythm/LRSystemMonitor_64_7.4.2.8003.exe

LRS_Install:
  cmd.run:
    - name: 'LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains[''fqdn_ip4''][0] }} CLIENTPORT=0"'
    - cwd: C:\logrhythm
I think it should work. You may have a problem with the multiple quotes you use: single, then double, then single. Try removing the single quotes enclosing the whole command and the doubled single quotes used to access the grains dict:
- name: LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains['fqdn_ip4'][0] }} CLIENTPORT=0"
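If the nested quoting keeps fighting you, another option (a sketch of the same command, untested) is a YAML folded block scalar, which needs no outer quotes at all, so the inner double quotes and the grains lookup can stay exactly as they are:

LRS_Install:
  cmd.run:
    # folded scalar: the lines below are joined with spaces before Salt runs the command
    - name: >
        LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver
        HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains['fqdn_ip4'][0] }} CLIENTPORT=0"
    - cwd: C:\logrhythm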
I am running Ansible version 2.7 on Centos7 using the network_cli connection method.
I have a playbook that:
Instructs a networking device to pull in a new firmware image via TFTP
Instructs the networking device to calculate the md5 hash value
Stores the output of the calculation in .stdout
Has a conditional when: statement that checks for a given md5 value in the .stdout before proceeding with the task block.
No matter what md5 value I give, it always runs the task block.
The conditional statement is:
when: '"new_ios_md5" | string in md5_result.stdout'
Here is the full playbook:
- name: UPGRADE SUP8L-E SWITCH FIRMWARE
  hosts: switches
  connection: network_cli
  gather_facts: no
  vars_prompt:
    - name: "compliant_ios_version"
      prompt: "What is the compliant IOS version?"
      private: no
    - name: "new_ios_bin"
      prompt: "What is the name of the new IOS file?"
      private: no
    - name: "new_ios_md5"
      prompt: "What is the MD5 value of the new IOS file?"
      private: no
    - name: "should_reboot"
      prompt: "Do you want Ansible to reboot the hosts? (YES or NO)"
      private: no
  tasks:
    - name: GATHER SWITCH FACTS
      ios_facts:

    - name: UPGRADE IOS IMAGE IF NOT COMPLIANT
      block:
        - name: COPY OVER IOS IMAGE
          ios_command:
            commands:
              - command: "copy tftp://X.X.X.X/45-SUP8L-E/{{ new_ios_bin }} bootflash:"
                prompt: '[{{ new_ios_bin }}]'
                answer: "\r"
          vars:
            ansible_command_timeout: 1800

        - name: CHECK MD5 HASH
          ios_command:
            commands:
              - command: "verify /md5 bootflash:{{ new_ios_bin }}"
          register: md5_result
          vars:
            ansible_command_timeout: 300

        - name: CONTINUE UPGRADE IF MD5 HASH MATCHES
          block:
            - name: SETTING BOOT IMAGE
              ios_config:
                lines:
                  - no boot system
                  - boot system flash bootflash:{{ new_ios_bin }}
                match: none
                save_when: always

            - name: REBOOT SWITCH IF INSTRUCTED
              block:
                - name: REBOOT SWITCH
                  ios_command:
                    commands:
                      - command: "reload"
                        prompt: '[confirm]'
                        answer: "\r"
                  vars:
                    ansible_command_timeout: 30

                - name: WAIT FOR SWITCH TO RETURN
                  wait_for:
                    host: "{{inventory_hostname}}"
                    port: 22
                    delay: 60
                    timeout: 600
                  delegate_to: localhost

                - name: GATHER ROUTER FACTS FOR VERIFICATION
                  ios_facts:

                - name: ASSERT THAT THE IOS VERSION IS CORRECT
                  assert:
                    that:
                      - compliant_ios_version == ansible_net_version
                    msg: "New IOS version matches compliant version. Upgrade successful."
              when: should_reboot == "YES"
          when: '"new_ios_md5" | string in md5_result.stdout'
      when: ansible_net_version != compliant_ios_version
...
The other two conditionals in the playbook work as expected. I cannot figure out how to get Ansible to fail the when: '"new_ios_md5" | string in md5_result.stdout' conditional and stop the play if the md5 value is wrong.
When you run the play with debug output the value of stdout is:
"stdout": [
".............................................................................................................................................Done!",
"verify /md5 (bootflash:cat4500es8-universalk9.SPA.03.10.02.E.152-6.E2.bin) = c1af921dc94080b5e0172dbef42dc6ba"
]
You can clearly see the calculated md5 in the string but my conditional doesn't seem to care either way.
Does anyone have any advice?
When you write:
when: '"new_ios_md5" | string in md5_result.stdout'
You are looking for the literal string "new_ios_md5" inside the variable md5_result.stdout. Since you actually want to refer to the new_ios_md5 variable, you need to remove the quotes around it:
when: 'new_ios_md5 | string in md5_result.stdout'
Credit goes to zoredache on reddit for the final solution:
BTW, you know that for most of the networking commands like ios_command the results come back as a list, right? So you need to index into the list relative to the command you ran.
Say you had this task:
ios_command:
  commands:
    - command: "verify /md5 bootflash:{{ new_ios_bin }}"
    - command: show version
    - command: show config
register: results
You would have output in the list like this.
# results.stdout[0] = verify
# results.stdout[1] = show version
# results.stdout[2] = show config
So the correct conditional statement would be:
when: 'new_ios_md5 in md5_result.stdout[0]'
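As a general debugging aid (not part of the fix itself), a temporary debug task right after the register makes it obvious what the conditional is actually comparing:

- name: SHOW MD5 COMMAND OUTPUT (temporary)
  debug:
    var: md5_result.stdout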
I am trying to use the ntc-ansible modules with Ansible running on Ubuntu (WSL). I have SSH connectivity to my remote device (Cisco 2960X) and I can run Ansible playbooks against the same remote switch using the built-in Ansible networking modules (ios_command) and it works fine.
Issue:
When I try to run any of the ntc-ansible modules, it fails, unable to connect to the device. Probably something simple, but I have hit a wall. There is something I am missing about how to use the ntc-ansible modules. Ansible is seeing the modules, since I can view their docs, which the README suggests as a test.
I have ntc-ansible module installed here: /home/melshman/.ansible/plugins/modules/ntc-ansible
I am running my playbooks from here: ~/projects/ansible/
The first time I ran the playbook with the ntc-ansible modules it failed, and based on the error message and some research I installed sshpass (sudo apt-get install sshpass). But I am still having SSH problems using ntc-ansible (playbook and traceback below).
I hear folks talking about an index file, but I can't find that file. Where does it live and what do I need to do with it?
What is my connection supposed to be set up as? local? ssh? netmiko_ssh?
What should I be using for platform? cisco_ios? cisco_ios_ssh?
Appreciate any help I can get. I have been running in circles for hours and hours.
Ansible Version Info:
VTMNB17024:~/projects/ansible $ ansible --version
ansible 2.5.3
config file = /home/melshman/projects/ansible/ansible.cfg
configured module search path = [u'/home/melshman/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Working playbook (using ios_command); note: ansible_ssh_pass and ansible_user are in group vars:
- name: Test Net Automation
  hosts: ctil-ios-upgrade
  connection: local
  gather_facts: no
  tasks:
    - name: Grab run config
      ios_command:
        commands:
          - show run
      register: config
    - name: Create backup of running configuration
      copy:
        content: "{{config.stdout[0]}}"
        dest: "backups/show_run_{{inventory_hostname}}.txt"
Playbook (not working) using the ntc-ansible module (note: username and password are defined in group vars):
- name: Cisco IOS Automation
  hosts: ctil-ios-upgrade
  connection: local
  gather_facts: no
  tasks:
    - name: GET UPTIME
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version | inc uptime'
        host: "{{ inventory_hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        use_templates: True
        template_dir: /home/melshman/.ansible/plugins/modules/ntc-ansible/ntc-templates/templates
Here is the traceback I get when the error occurs:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: netmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios VTgroup_SW:22
fatal: [VTgroup_SW]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_RJRY9m/ansible_module_ntc_save_config.py\", line 279, in \n main()\n File \"/tmp/ansible_RJRY9m/ansible_module_ntc_save_config.py\", line 251, in main\n device = ntc_device(device_type, host, username, password, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/__init__.py\", line 35, in ntc_device\n return device_class(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/devices/ios_device.py\", line 39, in __init__\n self.open()\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/devices/ios_device.py\", line 55, in open\n verbose=False)\n File \"build/bdist.linux-x86_64/egg/netmiko/ssh_dispatcher.py\", line 178, in ConnectHandler\n File \"build/bdist.linux-x86_64/egg/netmiko/base_connection.py\", line 207, in __init__\n File \"build/bdist.linux-x86_64/egg/netmiko/base_connection.py\", line 693, in establish_connection\nnetmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios VTgroup_SW:22\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is a working solution using ntc_show_command to a Cisco IOS device.
- name: Cisco IOS Automation
  hosts: pynet-rtr1
  connection: local
  gather_facts: no
  tasks:
    - name: GET UPTIME
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version'
        host: "{{ ansible_host }}"
        username: "{{ ansible_user }}"
        password: "{{ ansible_ssh_pass }}"
        use_templates: True
        template_dir: '/home/kbyers/ntc-templates/templates'
If you are going to use ntc-templates, I probably would not have the '| include uptime' in the 'show version'. In other words, let TextFSM convert the output to structured data first and then grab the uptime from that structured data.
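For example, a sketch of that approach (the response key and the uptime field name are assumptions based on the ntc-ansible README and the ntc-templates show version template, so inspect the registered variable first):

- name: GET VERSION FACTS
  ntc_show_command:
    connection: ssh
    platform: "cisco_ios"
    command: 'show version'
    host: "{{ ansible_host }}"
    username: "{{ ansible_user }}"
    password: "{{ ansible_ssh_pass }}"
    use_templates: True
    template_dir: '/home/kbyers/ntc-templates/templates'
  register: sh_ver

- name: SHOW PARSED OUTPUT
  debug:
    # 'response' should hold the TextFSM-parsed records (assumption); e.g. sh_ver.response[0].uptime
    var: sh_ver.response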
I modified inventory_hostname to ansible_host to be consistent with my inventory format (my inventory_hostname doesn't actually resolve in DNS).
I modified username and password to 'ansible_user' and 'ansible_ssh_pass' to be consistent with my inventory and also to be more consistent with Ansible 2.5/2.6 variable naming.
On your above issue, your exception message does not match your playbook (i.e., are you sure that is the exception you get for that playbook?).
Here is my inventory file (I simplified this to remove some unnecessary devices and to hide confidential information)
[all:vars]
ansible_connection=local
ansible_python_interpreter=/home/kbyers/VENV/ansible/bin/python
ansible_user=user
ansible_ssh_pass=password
[local]
localhost ansible_connection=local
[cisco]
pynet-rtr1 ansible_host=cisco1.domain.com
pynet-rtr2 ansible_host=cisco2.domain.com
I am deploying a cluster via SaltStack (on Azure). I've installed the client, which triggers a reactor that runs an orchestration to push a Mine config, do an update, and restart salt-minion (I upgraded that to restarting the box).
After all of that, I can't access the mine data until I restart the minion.
/srv/reactor/startup_orchestration.sls
startup_orchestrate:
  runner.state.orchestrate:
    - mods: orchestration.startup
orchestration.startup
orchestration.mine:
  salt.state:
    - tgt: '*'
    - sls:
      - orchestration.mine

saltutil.sync_all:
  salt.function:
    - tgt: '*'
    - reload_modules: True

mine.update:
  salt.function:
    - tgt: '*'

highstate_run:
  salt.state:
    - tgt: '*'
    - highstate: True
orchestration.mine
{% if salt['grains.get']('MineDeploy') != 'complete' %}

/etc/salt/minion.d/globalmine.conf:
  file.managed:
    - source: salt:///orchestration/files/globalmine.conf

MineDeploy:
  grains.present:
    - value: complete
    - require:
      - service: rabbit_running

sleep 5 && /sbin/reboot:
  cmd.run

{%- endif %}
How can I push a mine update, via a reactor and then get the data shortly afterwards?
I deploy my mine_functions from pillar so that I can update the functions on the fly. Then you just have to do salt <target> saltutil.refresh_pillar and salt <target> mine.update to get your mine info on a new host.
Example:
/srv/pillar/my_mines.sls
mine_functions:
  aws_cidr:
    mine_function: grains.get
    delimiter: '|'
    key: ec2|network|interfaces|macs|{{ mac_addr }}|subnet_ipv4_cidr_block
  zk_pub_ips:
    - mine_function: grains.get
    - ec2:public_ip
You would then make sure your pillar's top.sls targets the appropriate minions, then do the saltutil.refresh_pillar/mine.update to get your mine functions updated & mines supplied with data. After taking in the above pillar, I now have mine functions called aws_cidr and zk_pub_ips I can pull data from.
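A minimal pillar top.sls sketch for that targeting (the base environment and the '*' match are assumptions; adjust to your own scheme):

base:
  '*':
    - my_mines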
One caveat to this method is that mine_interval has to be defined in the minion config, so that parameter wouldn't be doable via pillar. Though if you're ok with the default 60-minute interval, this is a non-issue.
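If you do need a shorter interval and can touch the minion config, it is a one-line setting there (the value is just an example, in minutes):

mine_interval: 15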