How do I combine two commands in one task? | Ansible - nginx

So, my problem is that I want to check whether nginx is installed on two different OSes that use different package managers.
- name: Verifying nginx installation  # RedHat
  command: "rpm -q nginx"
  when: ansible_facts.pkg_mgr in ["yum", "dnf", "rpm"]  # or (ansible_os_family == "RedHat")

- name: Verifying nginx installation  # Debian
  command: "dpkg -l nginx"
  when: ansible_facts.pkg_mgr in ["dpkg", "apt"]  # or (ansible_os_family == "Debian")
Can I combine these into a single task, and if so, how? I need to register the output and use the result later on, and I can't figure it out.

An alternative solution is to use the package_facts module, like this:
- hosts: localhost
  tasks:
    - package_facts:

    - debug:
        msg: "Nginx is installed!"
      when: "'nginx' in packages"
But you could also register individual variables for your two tasks, and then combine the result:
- hosts: localhost
  tasks:
    - name: Verifying nginx installation  # RedHat
      command: "rpm -q nginx"
      when: ansible_facts.pkg_mgr in ["yum", "dnf", "rpm"]  # or (ansible_os_family == "RedHat")
      failed_when: false
      register: rpm_check

    - name: Verifying nginx installation  # Debian
      command: "dpkg -l nginx"
      when: ansible_facts.pkg_mgr in ["dpkg", "apt"]  # or (ansible_os_family == "Debian")
      failed_when: false
      register: dpkg_check

    - set_fact:
        nginx_result: >-
          {{
            (rpm_check is not skipped and rpm_check.rc == 0) or
            (dpkg_check is not skipped and dpkg_check.rc == 0)
          }}

    - debug:
        msg: "nginx is installed"
      when: nginx_result
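The combined nginx_result fact can then drive any later task in the play. For example, a sketch (the install task below is only an illustration, not part of the original answer):
- name: Install nginx when it is missing
  package:
    name: nginx
    state: present
  when: not nginx_result | bool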

Related

Ansible Nested Loop for Cisco ACL

I'm creating a playbook for an ACL update: the existing ACL needs to be updated, but before adding the new set of IP addresses to that ACL, I need to make sure that the ACL is present and that the IPs haven't already been configured.
Process:
Need to add the below IP addresses
ACL NAME: 11, 13, DATA_TEST, dummy
Check if the list of ACLs is present
commands: "show access-lists {{item}}"
Check if the ACL exists
Q: I can't figure out how to access each item in the result of the first action to see whether the ACL has been configured. For example, we can see from the output that dummy produces no output; how can I exclude it and only process the ACLs that exist? (refer to the code below)
Check if the IP addresses have already been added
Q: What is the best approach here? I'm thinking of using when and comparing the ACL output from stdout with the content of the given variables (e.g. parents/lines)?
Add the set of IP addresses to the target ACL
Q: What is the best approach here? I need to match the ACL name and configure it using the variable.
If somebody is knowledgeable about Ansible, perhaps you could assist me in creating this project? I'm still doing some research, so any assistance you can give would be greatly appreciated. Thanks
My Code:
---
- name: Switch SVU
  hosts: Switches
  gather_facts: False
  vars:
    my_acl_list:
      - 11
      - 13
      - DATA_TEST
      - dummy
    fail: "No such access-list {{item}}"
    UP_ACL11:
      parents:
        - access-list 11 permit 192.168.1.4
        - access-list 11 permit 192.168.1.5
    UP_ACL13:
      parents: access-list 13 permit 10.22.1.64 0.0.0.63
    UP_ACLDATA:
      lines:
        - permit 172.11.1.64 0.0.0.63
        - permit 172.12.2.64 0.0.0.63
      parents: ip access-list standard DATA_TEST
  tasks:
    - name: Check if the ACL Name already exists.
      ios_command:
        commands: "show access-lists {{item}}"
      register: acl_result
      loop: "{{my_acl_list}}"

    - debug: msg="{{acl_result}}"

    - name: Check if ACL Exist
      debug:
        msg: "{{item.stdout}}"
      when: item.stdout.exists
      with_items: "{{acl_result.results}}"
      loop_control:
        label: "{{item.item}}"
      # Pending - Need to know how to match if the ACL name exists in stdout.

    - name: Check if IP addresses already added
      set_fact:
      when:
        # pending - ansible lookup?
        # when var: UP_ACL11, UP_ACL13, UP_ACLDATA IPs are not in ACL then TRUE

    - name: Add the set of IP addresses on target ACL
      ios_config:
        # pending - if it doesn't exist on a particular ACL name then configure using the var: UP_ACL11, UP_ACL13, UP_ACLDATA
Given the simplified data for testing
acl_result:
  results:
    - item: DATA_TEST
      stdout:
        - "Standard ... 10 permit ... 20 permit ..."
      stdout_lines:
        - - "Standard ..."
          - "10 permit ..."
          - "20 permit ..."
    - item: dummy
      stdout:
        - ""
      stdout_lines:
        - - ""
Q: "Check if ACL Exists"
A: If the ACL doesn't exist, the attribute stdout is a list of empty strings. Test it:
- name: Check if ACL Exists
  debug:
    msg: "{{ item.item }} exists: {{ item.stdout|map('length')|select()|length > 0 }}"
  loop: "{{ acl_result.results }}"
  loop_control:
    label: "{{item.item}}"
gives
TASK [Check if ACL Exists] ********************************************
ok: [localhost] => (item=DATA_TEST) =>
msg: 'DATA_TEST exists: True'
ok: [localhost] => (item=dummy) =>
msg: 'dummy exists: False'
Notes:
In the filter select, "If no test is specified, each object will be evaluated as a boolean". The number 0 evaluates to false.
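To make that concrete, for the dummy item the chain evaluates roughly as: [""] | map('length') gives [0], [0] | select() drops the falsy 0 and leaves [], and [] | length > 0 is False. For DATA_TEST the single non-empty line has a non-zero length, so the comparison ends up True.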
Example of a complete playbook for testing
- hosts: localhost
  vars:
    acl_result:
      results:
        - item: DATA_TEST
          stdout:
            - "Standard ... 10 permit ... 20 permit ..."
          stdout_lines:
            - - "Standard ..."
              - "10 permit ..."
              - "20 permit ..."
        - item: dummy
          stdout:
            - ""
          stdout_lines:
            - - ""
  tasks:
    - name: Check if ACL Exists
      debug:
        msg: "{{ item.item }} exists: {{ item.stdout|map('length')|select()|length > 0 }}"
      loop: "{{ acl_result.results }}"
      loop_control:
        label: "{{item.item}}"
The test can be simplified if you're sure stdout is a list with a single line only
msg: "{{ item.item }} exists: {{ item.stdout|first|length > 0 }}"

Airflow secure-file-priv option - cannot insert data from file to database

I have a DAG of three tasks. The first one checks if a csv file exists, the second creates a MySQL table, and the third inserts the data from the csv file into the MySQL table. The first two tasks succeed, but the last one fails, saying that due to the secure-file-priv option I have no rights to insert data from a file into the database.
default_args = {
    "owner": "airflow",
    "start_date": airflow.utils.dates.days_ago(1),
    "retries": 1,
    "retry_delay": timedelta(seconds=5)
}

with DAG('sql_operator_from_csv_to_mysql', default_args=default_args, schedule_interval='@daily', template_searchpath=['/opt/airflow/sql_files_mysql'], catchup=True) as dag:
    t1 = BashOperator(task_id='check_file_exists', bash_command='ls /opt/airflow/store_files/students_data.csv | sha1sum', retries=2, retry_delay=timedelta(seconds=15))
    t2 = MySqlOperator(task_id='create_mysql_table', mysql_conn_id="mysql_conn", sql="create_table.sql")
    t3 = MySqlOperator(task_id='insert_into_table', mysql_conn_id="mysql_conn", sql="insert_into_table.sql")
SQL statement:
LOAD DATA INFILE '/store_files/students_data.csv' INTO TABLE students_db FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' IGNORE 1 ROWS;
The mysql.cnf file is as follows.
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
secure-file-priv= ""
I have tried setting the 'secure-file-priv' option to an empty string and to the location of the csv file, but neither works:
secure-file-priv= ""
secure-file-priv= "/opt/airflow/store_files"
The error message says:
[2022-06-12, 09:44:24 UTC] {taskinstance.py:1774} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/mysql/operators/mysql.py", line 84, in execute
hook.run(self.sql, autocommit=self.autocommit, parameters=self.parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/hooks/dbapi.py", line 205, in run
self._run_command(cur, sql_statement, parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/hooks/dbapi.py", line 229, in _run_command
cur.execute(sql_statement)
File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1290, 'The MySQL server is running with the --secure-file-priv option so it cannot execute this statement')
[2022-06-12, 09:44:24 UTC] {taskinstance.py:1288} INFO - Marking task as FAILED. dag_id=sql_operator_from_csv_to_mysql, task_id=insert_into_table, execution_date=20220612T094413, start_date=20220612T094424, end_date=20220612T094424
[2022-06-12, 09:44:24 UTC] {standard_task_runner.py:98} ERROR - Failed to execute job 10 for task insert_into_table ((1290, 'The MySQL server is running with the --secure-file-priv option so it cannot execute this statement'); 1190)
[2022-06-12, 09:44:24 UTC] {local_task_job.py:154} INFO - Task exited with return code 1
[2022-06-12, 09:44:24 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
Any ideas to solve this problem?
I have tried to make some minor modifications in the docker-compose.yaml file.
My docker-compose.yaml file is as below
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Basic Airflow cluster configuration for CeleryExecutor with Redis and PostgreSQL.
#
# WARNING: This configuration is for local development. Do not use it in a production deployment.
#
# This configuration supports basic configuration using environment variables or an .env file
# The following variables are supported:
#
# AIRFLOW_IMAGE_NAME - Docker image name used to run Airflow.
# Default: apache/airflow:2.3.2
# AIRFLOW_UID - User ID in Airflow containers
# Default: 50000
# Those configurations are useful mostly in case of standalone testing/running Airflow in test/try-out mode
#
# _AIRFLOW_WWW_USER_USERNAME - Username for the administrator account (if requested).
# Default: airflow
# _AIRFLOW_WWW_USER_PASSWORD - Password for the administrator account (if requested).
# Default: airflow
# _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers.
# Default: ''
#
# Feel free to modify this file to suit your needs.
---
version: '3'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.3.2}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./dags:/usr/local/airflow/dags
    - ./store_files:/usr/local/airflow/store_files_airflow
    - ./sql_files_mysql:/usr/local/airflow/sql_files
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always

  redis:
    image: redis:latest
    expose:
      - 6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always

  mysql:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=airflow
    volumes:
      - ./store_files:/var/lib/mysql-files
      - ./mysql.cnf:/etc/mysql/mysql.cnf
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - 8080:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    environment:
      <<: *airflow-common-env
      # Required to handle warm shutdown of the celery workers properly
      # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
      DUMB_INIT_SETSID: "0"
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully
  airflow-init:
    <<: *airflow-common
    entrypoint: /bin/bash
    # yamllint disable rule:line-length
    command:
      - -c
      - |
        function ver() {
          printf "%04d%04d%04d%04d" $${1//./ }
        }
        airflow_version=$$(gosu airflow airflow version)
        airflow_version_comparable=$$(ver $${airflow_version})
        min_airflow_version=2.2.0
        min_airflow_version_comparable=$$(ver $${min_airflow_version})
        if (( airflow_version_comparable < min_airflow_version_comparable )); then
          echo
          echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
          echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
          echo
          exit 1
        fi
        if [[ -z "${AIRFLOW_UID}" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
          echo "If you are on Linux, you SHOULD follow the instructions below to set "
          echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
          echo "For other operating systems you can get rid of the warning with manually created .env file:"
          echo " See: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#setting-the-right-airflow-user"
          echo
        fi
        one_meg=1048576
        mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
        cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
        disk_available=$$(df / | tail -1 | awk '{print $$4}')
        warning_resources="false"
        if (( mem_available < 4000 )) ; then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
          echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
          echo
          warning_resources="true"
        fi
        if (( cpus_available < 2 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
          echo "At least 2 CPUs recommended. You have $${cpus_available}"
          echo
          warning_resources="true"
        fi
        if (( disk_available < one_meg * 10 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
          echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
          echo
          warning_resources="true"
        fi
        if [[ $${warning_resources} == "true" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
          echo "Please follow the instructions to increase amount of resources available:"
          echo " https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#before-you-begin"
          echo
        fi
        mkdir -p /sources/logs /sources/dags /sources/plugins
        chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
        exec /entrypoint airflow version
    # yamllint enable rule:line-length
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
      _PIP_ADDITIONAL_REQUIREMENTS: ''
    user: "0:0"
    volumes:
      - .:/sources
  airflow-cli:
    <<: *airflow-common
    profiles:
      - debug
    environment:
      <<: *airflow-common-env
      CONNECTION_CHECK_MAX_COUNT: "0"
    # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
    command:
      - bash
      - -c
      - airflow

  # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
  # or by explicitly targeted on the command line e.g. docker-compose up flower.
  # See: https://docs.docker.com/compose/profiles/
  flower:
    <<: *airflow-common
    command: celery flower
    profiles:
      - flower
    ports:
      - 5555:5555
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

volumes:
  postgres-db-volume:
And the sql.conf file is as below
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
secure-file-priv= ""

How to check 777 permissions on multiple directories with Ansible

For a single directory my script runs fine, but how can I check the same for multiple directories?
Code for a single directory:
---
- name: checking directory permission
  hosts: test
  become: true
  tasks:
    - name: Getting permission to registered var 'p'
      stat:
        path: /var/SP/Shared/
      register: p

    - debug:
        msg: "permission is 777 for /var/SP/Shared/"
      when: p.stat.mode == "0777" or p.stat.mode == "2777" or p.stat.mode == "4777"
Reading the stat module documentation shows that there is no parameter for recursion, and testing with_fileglob: did not give the expected result.
So it seems you would need to loop over the directories in a way like this:
- name: Get directory permissions
  stat:
    path: "{{ item }}"
  register: result
  with_items:
    - "/tmp/example"
    - "/tmp/test"
  tags: CIS

- name: result
  debug:
    msg:
      - "{{ result }}"
  tags: CIS
but I am sure more advanced solutions can still be found.
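For example, the registered loop result can be evaluated per directory in a follow-up task; a sketch, assuming the register: result from the loop above:
- name: Report directories with mode 777
  debug:
    msg: "permission is 777 for {{ item.item }}"
  when: item.stat.exists and item.stat.mode in ["0777", "2777", "4777"]
  loop: "{{ result.results }}"
  loop_control:
    label: "{{ item.item }}"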

Can't generate lets-encrypt certificate using saltStack

I am trying to generate the lets-encrypt certificate and here are the steps that I followed:
Under /srv/salt/pillars/minion I added the file init.sls
letsencrypt:
  config: |
    email = email
    auth:
      method: standalone
      type: http-01
      port: 8080
    agree-tos = True
    renew-by-default = True
  domainsets:
    mydomain:
      - mydomain.com
After that I updated the salt_pillar:
# . update_salt.sh
# salt 'minion' state.sls letsencrypt
I got this result:
ID: letsencrypt-crontab-mydomain.com
Function: cron.present
Name: /usr/local/bin/renew_letsencrypt_cert.sh mydomain.com
Result: False
Comment: One or more requisite failed: letsencrypt.domains.create-initial-cert-mydomain.com
Started:
Duration:
Changes:
------------
ID: create-fullchain-privkey-pem-for-mydomain.com
Function: cmd.run
Name: cat /etc/letsencrypt/live/mydomain.com/fullchain.pem \
/etc/letsencrypt/live/mydomain.com/privkey.pem \
> /etc/letsencrypt/live/mydomain.com/fullchain-privkey.pem && \
chmod 600 /etc/letsencrypt/live/mydomain.com/fullchain-privkey.pem
Result: False
Comment: One or more requisite failed: letsencrypt.domains.create-initial-cert-mydomain.com
Started:
Duration:
Changes:
What should I modify in my configuration to get the certificate?

Saltstack: ignoring result of cmd.run

I am trying to invoke a command during provisioning via SaltStack. If the command fails, the state fails, and I don't want that (the command's return code doesn't matter).
Currently I have the following workaround:
Run something:
  cmd.run:
    - name: command_which_can_fail || true
Is there any way to make such a state ignore the return code using Salt features? Or maybe I can exclude this state from the logs?
Use check_cmd:
fails:
  cmd.run:
    - name: /bin/false

succeeds:
  cmd.run:
    - name: /bin/false
    - check_cmd:
      - /bin/true
Output:
local:
----------
ID: fails
Function: cmd.run
Name: /bin/false
Result: False
Comment: Command "/bin/false" run
Started: 16:04:40.189840
Duration: 7.347 ms
Changes:
----------
pid:
4021
retcode:
1
stderr:
stdout:
----------
ID: succeeds
Function: cmd.run
Name: /bin/false
Result: True
Comment: check_cmd determined the state succeeded
Started: 16:04:40.197672
Duration: 13.293 ms
Changes:
----------
pid:
4022
retcode:
1
stderr:
stdout:
Summary
------------
Succeeded: 1 (changed=2)
Failed: 1
------------
Total states run: 2
If you don't care what the result of the command is, you can use:
Run something:
  cmd.run:
    - name: command_which_can_fail; exit 0
This was tested in Salt 2017.7.0 but would probably work in earlier versions.
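Another option, if your Salt release is new enough (success_retcodes was added around 2019.2.0, if I remember correctly), is to list the non-zero return codes that should count as success; a sketch, not verified on older versions, and you would need to list every code you expect:
Run something:
  cmd.run:
    - name: command_which_can_fail
    - success_retcodes:
      - 1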
