How to ensure that the DHCP pool name exists before updating options using Ansible - networking

Here is my playbook. When I execute it, the command ip dhcp pool {{ item.name }} doesn't check whether the pool name exists or not. I used the parameter "match: exact", but it doesn't work. So can I use an if statement in an Ansible playbook, or is there a way to check the pool name before executing the commands?
I also used when in the playbook to check whether item.name is defined, but that doesn't work either.
---
- name: "UPDATE DHCP OPTIONS FOR FNAC-FRANCHISE SWITCHES"
  hosts: all
  gather_facts: false
  vars:
    xx: ["ip dhcp pool VOICEC_DHCP","ip dhcp pool DATA","ip dhcp pool VIDEO_DHCP ","ip dhcp pool WIFI_USER"," ip dhcp pool WIFI_ADM", ]
  tasks:
    - name: "CHECK"
      ios_command:
        commands:
          - show run | include ip dhcp pool
      register: output
    - name: DISPLAY THE COMMAND OUTPUT
      debug:
        var: output.stdout_lines
    - name: transform output
      set_fact:
        pools: "{{ item | regex_replace('ip dhcp pool ', '') }}"
      loop: "{{ output.stdout_lines }}"
    - name: "UPDATE DHCP OPTIONS IN POOL DATA & WIFI_USER"
      ios_config:
        lines:
          - dns-server 10.0.0.1
          - netbios-name-server 10.0.0.1
          - netbios-node-type h-node
        parents: ip dhcp pool {{ item.name }}
        match: exact
      loop: "{{ pools }}"
Here is the output that I get:
ok: [ITG] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "stdout": ["ip dhcp pool VIDEO_DHCP\nip dhcp pool WIFI_ADM\nip dhcp pool VOICEC_DHCP\nip dhcp pool DATA\nip dhcp pool WIFI_USER"], "stdout_lines": [["ip dhcp pool VIDEO_DHCP", "ip dhcp pool WIFI_ADM", "ip dhcp pool VOICEC_DHCP", "ip dhcp pool DATA", "ip dhcp pool WIFI_USER"]]}
TASK [DISPLAY THE COMMAND OUTPUT] *********************************************************************************************************************************************
ok: [ITG] => {
    "output.stdout_lines": [
        [
            "ip dhcp pool VIDEO_DHCP",
            "ip dhcp pool WIFI_ADM",
            "ip dhcp pool VOICEC_DHCP",
            "ip dhcp pool DATA",
            "ip dhcp pool WIFI_USER"
        ]
    ]
}
TASK [transform output] *******************************************************************************************************************************************************
ok: [ITG] => (item=[u'ip dhcp pool VIDEO_DHCP', u'ip dhcp pool WIFI_ADM', u'ip dhcp pool VOICEC_DHCP', u'ip dhcp pool DATA', u'ip dhcp pool WIFI_USER']) => {"ansible_facts": {"pools": ["VIDEO_DHCP", "WIFI_ADM", "VOICEC_DHCP", "DATA", "WIFI_USER"]}, "ansible_loop_var": "item", "changed": false, "item": ["ip dhcp pool VIDEO_DHCP", "ip dhcp pool WIFI_ADM", "ip dhcp pool VOICEC_DHCP", "ip dhcp pool DATA", "ip dhcp pool WIFI_USER"]}
There are switches on which the pool WIFI_USER doesn't exist, and I don't want to create it on those switches when the line "parents: ip dhcp pool {{ item.name }}" does not match anything.

If I simulate your return value:
- name: vartest
  hosts: localhost
  vars: # used just to test
    xx: ["ip dhcp pool VOICEC_DHCP","ip dhcp pool DATA","ip dhcp pool VIDEO_DHCP ","ip dhcp pool WIFI_USER"," ip dhcp pool WIFI_ADM", ]
  tasks:
    - name: trap output
      ios_command:
        commands:
          - show run | include ip dhcp pool
      register: output
    - name: transform output
      set_fact:
        pools: "{{ pools | default([]) + [item | regex_replace('ip dhcp pool ', '')] }}"
      loop: "{{ output.stdout_lines[0] }}" # adapt to your result: output.stdout or something else
    - name: "UPDATE DHCP OPTIONS IN POOL DATA & WIFI_USER"
      ios_config:
        commands:
          - dns-server <ip-addr>
          - netbios-name-server <ip-addr>
          - netbios-node-type h-node
        parents: ip dhcp pool {{ item }}
      loop: "{{ pools }}"
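Because pools is built from the device's own running config, the update task only ever touches pools that already exist. If you also want to limit the update to a specific subset, the discovered list can gate a wish list. A minimal sketch (the wanted_pools variable is illustrative, not part of the original playbook):

    - name: "UPDATE DHCP OPTIONS ONLY IN POOLS THAT EXIST ON THE DEVICE"
      ios_config:
        lines:
          - dns-server 10.0.0.1
          - netbios-name-server 10.0.0.1
          - netbios-node-type h-node
        parents: ip dhcp pool {{ item }}
      loop: "{{ wanted_pools }}" # e.g. ["DATA", "WIFI_USER"], illustrative
      when: item in pools        # skip pools absent from the switch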

Related

artifactory docker push error unknown: Method Not Allowed

We are using an Artifactory Pro license and installed Artifactory through Helm on Kubernetes.
When we create a Docker local repo (the Repository Path Method) and push a Docker image,
we get a 405 Method Not Allowed error. (docker login / pull is working normally.)
########## error msg
# docker push art2.bee0dev.lge.com/docker-local/hello-world
e07ee1baac5f: Pushing [==================================================>] 14.85kB
unknown: Method Not Allowed
##########
We are using an HAProxy load balancer for TLS in front of the Nginx Ingress Controller.
(The Nginx Ingress Controller's HTTP NodePort is 31071.)
Please help us figure out how we can solve this problem.
The Artifactory and HAProxy settings are as follows.
########## value.yaml
global:
  joinKeySecretName: "artbee-stg-joinkey-secret"
  masterKeySecretName: "artbee-stg-masterkey-secret"
  storageClass: "sa-stg-netapp8300-bee-blk-nonretain"
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts: ["art2.bee0dev.lge.com"]
  routerPath: /
  artifactoryPath: /artifactory/
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass_header Server;
      proxy_set_header X-JFrog-Override-Base-Url https://art2.bee0dev.lge.com;
  labels: {}
  tls: []
  additionalRules: []
## Artifactory license.
artifactory:
  name: artifactory
  replicaCount: 1
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/artifactory-pro
    # tag:
    pullPolicy: IfNotPresent
  labels: {}
  updateStrategy:
    type: RollingUpdate
  migration:
    enabled: false
  customInitContainersBegin: |
    - name: "init-mount-permission-setup"
      image: "{{ .Values.initContainerImage }}"
      imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - NET_RAW
      command:
        - 'bash'
        - '-c'
        - if [ $(ls -la /var/opt/jfrog | grep artifactory | awk -F' ' '{print $3$4}') == 'rootroot' ]; then
            echo "mount permission=> root:root";
            echo "change mount permission to 1030:1030 " {{ .Values.artifactory.persistence.mountPath }};
            chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }};
          else
            echo "already set. No change required.";
            ls -la {{ .Values.artifactory.persistence.mountPath }};
          fi
      volumeMounts:
        - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
          name: artifactory-volume
  database:
    maxOpenConnections: 80
  tomcat:
    maintenanceConnector:
      port: 8091
    connector:
      maxThreads: 200
      sendReasonPhrase: false
      extraConfig: 'acceptCount="100"'
  customPersistentVolumeClaim: {}
  license:
    ## licenseKey is the license key in plain text. Use either this or the license.secret setting
    licenseKey: "???"
    secret:
    dataKey:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "20Gi"
      cpu: "8"
  javaOpts:
    xms: "1g"
    xmx: "12g"
  admin:
    ip: "127.0.0.1"
    username: "admin"
    password: "!swiit123"
    secret:
    dataKey:
  service:
    name: artifactory
    type: ClusterIP
    loadBalancerSourceRanges: []
    annotations: {}
  persistence:
    mountPath: "/var/opt/jfrog/artifactory"
    enabled: true
    accessMode: ReadWriteOnce
    size: 100Gi
    type: file-system
    storageClassName: "sa-stg-netapp8300-bee-blk-nonretain"
nginx:
  enabled: false
##########
########## haproxy config
frontend cto-stage-http-frontend
    bind 10.185.60.75:80
    bind 10.185.60.76:80
    bind 10.185.60.201:80
    bind 10.185.60.75:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.76:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.201:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    mode http
    option forwardfor
    option accept-invalid-http-request
    acl k8s-cto-stage hdr_end(host) -i -f /etc/haproxy/web-ide/cto-stage
    use_backend k8s-cto-stage-http if k8s-cto-stage
backend k8s-cto-stage-http
    mode http
    redirect scheme https if !{ ssl_fc }
    option tcp-check
    balance roundrobin
    server lgestgbee04v 10.185.60.78:31071 check fall 3 rise 2
##########
The request doesn't seem to be landing at the correct endpoint. Remove the semicolon from the docker command and retry:
docker push art2.bee0dev.lge.com;/docker-local/hello-world
Try executing it like below:
docker push art2.bee0dev.lge.com/docker-local/hello-world

Send the trace data of a website to OpenSearch using Jaeger and OpenTelemetry

I'm working on the observability part of OpenSearch, so I'm trying to collect the trace data of a WordPress website and send it to OpenSearch.
I'm collecting the trace data with the WordPress plugin DecaLog, which sends the data to the Jaeger agent; from Jaeger I send the data to the OpenTelemetry Collector, then to Data Prepper, and finally to OpenSearch.
Jaeger agent service in docker-compose:
jaeger-agent:
  container_name: jaeger-agent
  image: jaegertracing/jaeger-agent:latest
  command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
  ports:
    - "5775:5775/udp"
    - "6831:6831/udp"
    - "6832:6832/udp"
    - "5778:5778/tcp"
  networks:
    - our-network
The "command" ligne got me this error : Err: connection error: desc = "transport: Error while dialing dial tcp: lookup otel-collector on 127.0.0.11:53: server misbehaving"","system":"grpc","grpc_log":true
So I changed otel-collector to the IP of the otel-collector container.
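Worth noting: Compose's embedded DNS (127.0.0.11) only resolves service names of containers attached to the same user-defined network, and a network called our-network declared in two separate compose projects is two different networks. A minimal sketch of sharing one pre-created external network across both compose files (the shared-tracing name is illustrative):

# run once beforehand: docker network create shared-tracing
services:
  jaeger-agent:
    image: jaegertracing/jaeger-agent:latest
    command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
    networks:
      - shared-tracing
networks:
  shared-tracing:
    external: true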
The OpenTelemetry Collector and Data Prepper are installed using docker-compose:
data-prepper:
  restart: unless-stopped
  container_name: data-prepper
  image: opensearchproject/data-prepper:latest
  volumes:
    - ./data-prepper/examples/trace_analytics_no_ssl.yml:/usr/share/data-prepper/pipelines.yaml
    - ./data-prepper/examples/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    - ./data-prepper/examples/demo/root-ca.pem:/usr/share/data-prepper/root-ca.pem
  ports:
    - "21890:21890"
  networks:
    - our-network
  depends_on:
    - "opensearch"
otel-collector:
  container_name: otel-collector
  image: otel/opentelemetry-collector:0.54.0
  command: [ "--config=/etc/otel-collector-config.yml" ]
  working_dir: "/project"
  volumes:
    - ${PWD}/:/project
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    - ./data-prepper/examples/demo/demo-data-prepper.crt:/etc/demo-data-prepper.crt
  ports:
    - "4317:4317"
  depends_on:
    - data-prepper
  networks:
    - our-network
The configuration of otel-collector-config.yml (to send data from the OpenTelemetry Collector to OpenSearch):
receivers:
  jaeger:
    protocols:
      grpc:
exporters:
  otlp/2:
    endpoint: data-prepper:21890
    tls:
      insecure: true
      insecure_skip_verify: true
  logging:
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlp/2]
The configuration for the Data Prepper pipeline:
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ "http://localhost:9200" ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["http://localhost:9200"]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_service_map: true
As of now I'm getting the following errors:
Jaeger agent:
Err: connection error: desc = \"transport: Error while dialing dial tcp otel-collector-container-IP:14250: i/o timeout\"","system":"grpc","grpc_log":true}
OpenTelemetry Collector:
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:86 Starting processors...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:98 Starting receivers...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:102 Exporter is starting... {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info static/strategy_store.go:203 No sampling strategies provided or URL is unavailable, using defaults {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info pipelines/pipelines.go:106 Exporter started. {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info service/collector.go:220 Starting otelcol... {"Version": "0.54.0", "NumCPU": 2}
2022-08-04T15:31:32.683Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-08-04T15:31:32.684Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
    "Addr": "data-prepper:21890",
    "ServerName": "data-prepper:21890",
    "Attributes": null,
    "BalancerAttributes": null,
    "Type": 0,
    "Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp data-prepper-container-ip:21890: connect: connection refused" {"grpc_log": true}
Data Prepper:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.amazon.dataprepper.DataPrepper]: Constructor threw exception; nested exception is java.lang.RuntimeException: No valid pipeline is available for execution, exiting
Followed by this at the end :
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-04T15:23:22,803 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-08-04T15:23:22,806 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
OpenSearch needs a separate tool to support ingestion of OpenTelemetry data. It is called Data Prepper and is part of the OpenSearch project. There is a nice getting-started guide on how to set up trace analytics in OpenSearch.
Data Prepper works similarly to Fluentd or the OpenTelemetry Collector, but has proper support for OpenSearch as a data sink. It pre-processes trace data adequately for the OpenSearch Dashboards tracing plugin. Data Prepper also supports the OpenTelemetry metrics format.
Are you still having issues running Data Prepper? The configuration used in this example has been updated since the latest release and should now be up to date and working (https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl.yml)
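One detail worth double-checking in the pipeline above: inside the data-prepper container, localhost refers to the container itself, so the OpenSearch sink should point at the Compose service name instead. A sketch of the corrected sink (the service name opensearch is taken from the depends_on entry in your compose file, not verified against your setup):

  sink:
    - opensearch:
        hosts: [ "http://opensearch:9200" ] # Compose service name, not localhost
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"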

VNF does not forward packets sent from the Client in OpenStack using a VNF Forwarding Graph

I'm trying to ping 8.8.8.8 from the Client via VNF1, so I use a VNFFG to force the Client's ICMP traffic through VNF1 before it goes out to the internet.
After I apply the VNFFG rule in OpenStack, VNF1 can see the MPLS packet that OpenStack encapsulated from the Client's ICMP packet when I use tcpdump, but VNF1's forwarding table does not receive any packet, so it cannot forward it onwards.
This is the packet seen on VNF1:
09:15:12.161830 MPLS (label 13311, exp 0, [S], ttl 255) IP 12.0.0.58 > 8.8.8.8: ICMP echo request, id 10531, seq 15, length 64
I captured that packet and saw that the content is readable (no encryption) and that the src and dst MAC addresses belong to the Client and VNF1, respectively.
This is my VNFFG template:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Sample VNFFG template
topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.TackerV2
      description: demo chain
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - name: block_icmp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 # CLIENT PORT ID
                ip_proto: 1
            - name: block_udp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 # CLIENT PORT ID
                ip_proto: 17
        path:
          - forwarder: VNF1
            capability: CP1
  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: Traffic to server
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP1]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]
This is my VNF Descriptor:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 2 GB
            disk_size: 20 GB
      properties:
        image: VNF1
        availability_zone: nova
        mgmt_driver: noop
        key_name: my-key-pair
        config: |
          param0: key1
          param1: key2
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: my-private-network
        vendor: Tacker
    FIP1:
      type: tosca.nodes.network.FloatingIP
      properties:
        floating_network: public
      requirements:
        - link:
            node: CP1
I used this command to deploy the VNFFG rule:
tacker vnffg-create --vnffgd-template vnffg_test.yaml forward_traffic
I do not know whether the problem could come from the key I defined for VNF1, because I do not know what param0: key1 and param1: key2 are used for, or where they end up.
How can I make the VNF forward these packets?

Network Interface not enabling when provisioning VM from Ansible

I'm provisioning a VM with an ansible-playbook using a VMware template. The VM is created successfully, but the network interface is not enabled automatically; I have to go to the VMware console and edit the VM's settings to enable it. Please check the playbook tasks below and suggest what I need to correct so that the network interface is enabled when running the playbook.
tasks:
  - name: Create VM from template
    vmware_guest:
      validate_certs: False
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      esxi_hostname: "{{ esxhost }}"
      datacenter: "{{ datacenter_name }}"
      name: "{{ name }}"
      folder: TEST
      template: "{{ vmtemplate }}"
      disk:
        - size_gb: "{{ disk_size | default(32) }}"
          type: thin
          datastore: "{{ datastore }}"
      networks:
        - name: VM Network
          ip: 172.17.254.223
          netmask: 255.255.255.0
          gateway: 172.17.254.1
          device_type: vmxnet3
          state: present
      wait_for_ip_address: True
      hardware:
        memory_mb: "{{ vm_memory | default(2000) }}"
      state: present
    register: newvm
  - name: Changing network adapter
    vmware_guest_network:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      datacenter: "{{ datacenter_name }}"
      name: "{{ name }}"
      validate_certs: no
      networks:
        - name: "VM Network"
          device_type: vmxnet3
          state: present
According to the documentation you can "connect" the network interface via connected: true. There is also a parameter start_connected. So add both to your networks dictionary
networks:
  - name: VM Network
    ip: 172.17.254.223
    netmask: 255.255.255.0
    gateway: 172.17.254.1
    device_type: vmxnet3
    connected: true
    start_connected: true
I can't see a default value in the documentation, but I assume they default to false.
Also, there is no state parameter in the networks dict list.
I have had this issue before. It happens when you're using a vDS and the VM template has open-vm-tools installed instead of vmware-tools.
I was able to fix this issue by applying this workaround:
First, install vmware-tools in the template instead of open-vm-tools.
Make sure in the playbook that the VM gets these parameters:
connected: true
start_connected: true
If you need open-vm-tools afterwards, you can simply run a small playbook that uninstalls vmware-tools and reinstalls open-vm-tools, as sketched below.
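A minimal sketch of that cleanup playbook, assuming both packages are managed by the guest's package manager and that the package names are vmware-tools and open-vm-tools (adjust to your distro):

- hosts: new_vms   # illustrative group name
  become: true
  tasks:
    - name: Remove vmware-tools   # package name assumed
      ansible.builtin.package:
        name: vmware-tools
        state: absent
    - name: Reinstall open-vm-tools
      ansible.builtin.package:
        name: open-vm-tools
        state: present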

Ansible - How to loop with a dict and a module's values

I'm trying to create a task with Ansible (>= 2.5) that will configure network interfaces like this:
- name: Set up network interfaces addr
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: address
    value: "{{ item.addr }}"
  with_items:
    - "{{ network }}"
  when: item.addr is defined
  notify: Restart interface
- name: Set up network interfaces netmask
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: netmask
    value: "{{ item.netmask }}"
  with_items:
    - "{{ network }}"
  when: item.netmask is defined
  notify: Restart interface
- name: Set up network interfaces dns
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: dns-nameservers
    value: "{{ item.dns }}"
  with_items:
    - "{{ network }}"
  when: item.dns is defined
  notify: Restart interface
- name: Set up network interfaces dns-search
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: dns-search
    value: "{{ item.dns_search }}"
  with_items:
    - "{{ network }}"
  when: item.dns_search is defined
  notify: Restart interface
This works, but from my point of view it's not very clean.
So I'm trying to use two loops, which obviously isn't working:
- name: Set up network interfaces
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.iDunnoWhatToPutHere }}"
    iface: "{{ item.iDunnoWhatToPutHere }}"
    state: present
    option: {{ item.option }}
    value: "{{ item.value }}"
  with_together:
    - "{{ network }}"
    - { option: address, value: item.0.addr }
    - { option: netmask, value: item.0.netmask }
    - { option: dns-nameservers, value: item.0.dns }
  when: item.dns_search is defined
  notify: Restart interface
[...]
Edit: This is good, but it's too rigid. It should loop over each option and its value for any option, because I also have bridge options such as vlan_raw_device, bridge_ports, bridge_stp, and so on. So it should just loop blindly over a dict of options and values.
Edit 2: with the variable network:
network:
  - name: admin
    device: admin
    method: static
    address: X.X.X.X/X
    netmask: X.X.X.X
    up:
      net: X.X.X.X/X
      gateway: X.X.X.X/X
    down:
      net: X.X.X.X/X
      gateway: X.X.X.X/X
Why am I trying all this?
Because I need to be able to change any value that has to be changed.
Because I want to restart (ifup, ifdown) only the interface that changed.
Because I'm surprised that I have to use the same module multiple times.
Can you guys help me find out how to do that?
Maybe it's not possible?
Thanks folks!
Here is a task that will hopefully meet your needs. I have replaced interfaces_file with the debug module, just to print the variables you would actually pass to the interfaces_file module. For the sake of the demo, I added a second interface to the network variable.
Playbook with the variable and the task:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    network:
      - name: admin
        device: admin
        method: static
        address: 10.10.10.22
        netmask: 255.255.255.0
        up:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
        down:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
      - name: admin22
        device: admin22
        method: static
        address: 20.20.20.22
        netmask: 255.255.255.192
        up:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
        down:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
  tasks:
    - name: process network config
      debug:
        msg: "dest: {{ item[0].name }}, option: {{ item[1].option }}, value: {{ item[0][item[1].value] }}"
      with_nested:
        - "{{ network }}"
        - [{ option: address, value: address }, { option: netmask, value: netmask }]
result:
TASK [process network config] ******************************************************************************************************************************************************************************************
ok: [localhost] => (item=None) => {
    "msg": "dest: admin, option: address, value: 10.10.10.22"
}
ok: [localhost] => (item=None) => {
    "msg": "dest: admin, option: netmask, value: 255.255.255.0"
}
ok: [localhost] => (item=None) => {
    "msg": "dest: admin22, option: address, value: 20.20.20.22"
}
ok: [localhost] => (item=None) => {
    "msg": "dest: admin22, option: netmask, value: 255.255.255.192"
}
hope it helps
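Regarding the edit about looping blindly over whatever options an interface defines: one way is to pre-build (interface, option) pairs with dict2items and product. A sketch, assuming Ansible 2.6+ (for dict2items) and Jinja2 2.10+ (for the "in" test); the list of non-option keys to skip is illustrative:

    - name: build (interface, option) pairs from arbitrary keys
      set_fact:
        pairs: >-
          {{ pairs | default([])
             + ([item] | product(item | dict2items
                | rejectattr('key', 'in', ['name', 'device', 'method', 'up', 'down'])
                | list) | list) }}
      loop: "{{ network }}"
    - name: process network config for any option
      debug:
        msg: "dest: {{ item.0.device }}, option: {{ item.1.key }}, value: {{ item.1.value }}"
      loop: "{{ pairs }}"

The second task then works for any option present in the dict, including the bridge options mentioned in the edit.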
