This is my configuration array.
tomcatsconfs:
  - {instance: tc-p1_i01_3001, port: 30011, connector: ajp-nio-, connector_port: 30012}
  - {instance: tc-p1_i02_3002, port: 30021, connector: ajp-nio-, connector_port: 30022}
  - {instance: tc-p1_i03_3003, port: 30031, connector: ajp-nio-, connector_port: 30032}
Now I would like to create an nrpe.cfg from a Jinja2 template with this task:
- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root
  with_items:
    - tomcatsconfs
Ansible renders this variable as the following list of dictionaries:
[{u'connector': u'ajp-nio-', u'instance': u'tc-p1_i01_3001', u'connector_port': 30012, u'port': 30011}, {u'connector': u'ajp-nio-', u'instance': u'tc-p1_i02_3002', u'connector_port': 30022, u'port': 30021}, {u'connector': u'ajp-nio-', u'instance': u'tc-p1_i03_3003', u'connector_port': 30032, u'port': 30031}]
And I try to iterate over it in the template with this loop:
{% for key, value in tomcatconfs.iteritems() %}
{{ key }} {{ value }}
{% endfor %}
But I get the error message:
failed: [host] (item=tomcatconfs) => {"failed": true, "item": "tomcatconfs", "msg": "AnsibleUndefinedVariable: 'list object' has no attribute 'iteritems'"}
How can I iterate over this data in the template?
Greetings, niesel
I used this.
---
- name: Run Ansible
  hosts: 127.0.0.1
  connection: local
  gather_facts: true
  vars:
    tomcatsconfs:
      - {instance: tc-p1_i01_3001, port: 30011, connector: ajp-nio-, connector_port: 30012}
      - {instance: tc-p1_i02_3002, port: 30021, connector: ajp-nio-, connector_port: 30022}
      - {instance: tc-p1_i03_3003, port: 30031, connector: ajp-nio-, connector_port: 30032}
  tasks:
    - name: Testing Iteration
      copy:
        dest: /tmp/testtemp
        content: |
          {% for var in tomcatsconfs %}
          instance: {{ var.instance }}
          port: {{ var.port }}
          connector: {{ var.connector }}
          connector_port: {{ var.connector_port }}
          {% endfor %}
OUTPUT:
instance: tc-p1_i01_3001
port: 30011
connector: ajp-nio-
connector_port: 30012
instance: tc-p1_i02_3002
port: 30021
connector: ajp-nio-
connector_port: 30022
instance: tc-p1_i03_3003
port: 30031
connector: ajp-nio-
connector_port: 30032
I think all you need to change is how you are passing the list to with_items. Try changing
- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root
  with_items:
    - tomcatsconfs
to
- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root
  with_items: "{{ tomcatsconfs }}"
I think what is going on is that you are giving with_items a single-element list containing the literal string tomcatsconfs, rather than the contents of the variable. If you change it to what I have in my example, you are passing the list itself.
This fixed it with my simplified sample playbook:
---
- hosts: localhost
  connection: local
  vars:
    tomcatsconfs:
      - {instance: tc-p1_i01_3001, port: 30011, connector: ajp-nio-, connector_port: 30012}
      - {instance: tc-p1_i02_3002, port: 30021, connector: ajp-nio-, connector_port: 30022}
      - {instance: tc-p1_i03_3003, port: 30031, connector: ajp-nio-, connector_port: 30032}
  tasks:
    - debug: var="{{item}}"
      with_items:
        - tomcatsconfs
    - debug: var="{{item['port']}}"
      with_items: "{{ tomcatsconfs }}"
I wrote a Salt state as below, which writes data to config.yml:
file.append:
  - name: /etc/xentrax/config.yml
  - text: |
      tunnel: xentrax
      credentials-file: /roor/.xentrax/xentrax.json
      logfile: /var/log/xentrax.log
      loglevel: info
Now I want to append some sensitive data to this config.yml using a pillar, since the data is sensitive and I want to maintain it in a pillar. The data I want to append is below:
ingress:
  - hostname: shop.xentrax.com
  - keyid: xxxxxxxxxxxxxxxxxxx
    originRequest:
      httpHostHeader: shop.xentrax.com
      originServerName: shop.xentrax.com
    service: https://localhost:443
  - service: http_status:404
How do I write that pillar? I am pretty new to SaltStack, so please help me.
The final data in config.yml after applying the pillar should look like this:
tunnel: xentrax
credentials-file: /roor/.xentrax/xentrax.json
logfile: /var/log/xentrax.log
loglevel: info
ingress:
  - hostname: shop.xentrax.com
  - keyid: xxxxxxxxxxxxxxxxxxx
    originRequest:
      httpHostHeader: shop.xentrax.com
      originServerName: shop.xentrax.com
    service: https://localhost:443
  - service: http_status:404
The pillar definition is straightforward:
xentrax_ingress:
  ingress:
    - hostname: shop.xentrax.com
    - keyid: xxxxxxxxxxxxxxxxxxx
      originRequest:
        httpHostHeader: shop.xentrax.com
        originServerName: shop.xentrax.com
      service: https://localhost:443
    - service: http_status:404
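One detail the question didn't cover: the pillar file still has to be assigned to the minion in the pillar top file, or pillar["xentrax_ingress"] will be empty. A minimal sketch, assuming the pillar above is saved as /srv/pillar/xentrax.sls (that path and the '*' target are placeholder assumptions):

# /srv/pillar/top.sls -- hypothetical wiring; narrow the target to your minions
base:
  '*':
    - xentrax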
Assuming your final output doesn't have to literally be what you said, only that it is valid YAML, this state will work:
/etc/xentrax/config.yml:
  file.append:
    - text: |
        tunnel: xentrax
        credentials-file: /roor/.xentrax/xentrax.json
        logfile: /var/log/xentrax.log
        loglevel: info
        {{ pillar["xentrax_ingress"] | tojson }}
If you can manage whole files instead of appending, then file.serialize would be even better:
/etc/xentrax/config.d/part1.yml:
  file.serialize:
    - serializer: yaml
    - dataset:
        tunnel: xentrax
        credentials-file: /roor/.xentrax/xentrax.json
        logfile: /var/log/xentrax.log
        loglevel: info

/etc/xentrax/config.d/part2.yml:
  file.serialize:
    - serializer: yaml
    - dataset_pillar: xentrax_ingress
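If a single config.yml is required rather than two part files, file.serialize can also merge into an existing file via its merge_if_exists option; a rough, untested sketch (merge behaviour assumes the existing file is already valid YAML):

/etc/xentrax/config.yml:
  file.serialize:
    - serializer: yaml
    - merge_if_exists: true
    - dataset_pillar: xentrax_ingress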
I am trying a helm install for a sample application consisting of two microservices. I have created a solution-level folder called charts and all subsequent Helm-specific resources, as per this example (LINK).
When I execute helm upgrade --install microsvc-poc--release . from C:\Users\username\source\repos\MicroservicePOC\charts\microservice-poc (where values.yml is), I get the error:
Error: template: microservicepoc/templates/service.yaml:8:18: executing "microservicepoc/templates/service.yaml" at <.Values.service.type>: nil pointer evaluating interface {}.type
I am not quite sure what exactly causes this behavior; I have set all possible defaults in values.yaml as below:
payments-app-service:
  replicaCount: 3
  image:
    repository: golide/paymentsapi
    pullPolicy: IfNotPresent
    tag: "0.1.0"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: payments-svc.local
        paths:
          - "/payments-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false

products-app-service:
  replicaCount: 3
  image:
    repository: productsapi_productsapi
    pullPolicy: IfNotPresent
    tag: "latest"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: products-svc.local
        paths:
          - "/products-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
As a check I opened the service.yaml file, and it shows syntax errors which I think may be related to why helm install is failing:
Missed comma between flow control entries
The error is reported on lines 6 and 15 of the service.yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
What am I missing?
I have tried recreating the chart afresh, but when I try helm install I get the exact same error. Moreover, service.yaml keeps showing the same syntax error (I have not edited anything in service.yaml that would otherwise cause linting issues).
As the error describes, helm can't find the service field in the values.yaml file when rendering the template, which causes the rendering to fail.
The services in your values.yaml file are located under the payments-app-service and products-app-service fields, so you have to reach them through those top-level keys. Note that because these keys contain hyphens, Go template dot notation (.Values.products-app-service.service.type) will not parse; use the index function instead, like:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ index .Values "products-app-service" "service" "type" }}
  ports:
    - port: {{ index .Values "products-app-service" "service" "port" }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
To get more out of Helm, I recommend reading the official documentation:
Helm docs
I am trying to find a way to execute a specific state only if the previous one completed successfully, but ONLY when it reports no changes; basically, I need something like a negated onchanges.
start-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-STARTED'

start-patching-{{ minion }}:
  salt.state:
    - tgt: {{ minion }}
    - require:
      - bits-{{ minion }}
    - sls:
      - patching.uptodate

finish-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-FINISHED'
In other words, I want to send the event "finish-event-{{ minion }}" only when "start-patching-{{ minion }}" reports something like:
----------
ID: start-patching-LKA3
Function: salt.state
Result: True
Comment: States ran successfully. No changes made to LKA3.
Started: 11:29:15.906124
Duration: 20879.248 ms
Changes:
----------
I have a custom Kubernetes cluster created through kubeadm. My service is exposed through NodePort. Now I want to use an ingress for my services.
I have deployed one application which is exposed through NodePort.
Below is my deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "demochart.fullname" . }}
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "demochart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "demochart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: cred-storage
              mountPath: /root/
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: cred-storage
          hostPath:
            path: /home/aodev/
            type:
Below is values.yaml:
replicaCount: 3

image:
  repository: REPO_NAME
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 8007

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi

nodeSelector: {}

tolerations: []

affinity: {}
Now I have deployed the nginx ingress controller from the following repository:
git clone https://github.com/samsung-cnct/k2-charts.git
helm install --namespace kube-system --name my-nginx k2-charts/nginx-ingress
Below is the values.yaml file for nginx-ingress; its service is exposed through LoadBalancer:
# Options for ConfigurationMap
configuration:
  bodySize: 64m
  hstsIncludeSubdomains: "false"
  proxyConnectTimeout: 15
  proxyReadTimeout: 600
  proxySendTimeout: 600
  serverNameHashBucketSize: 256
ingressController:
  image: gcr.io/google_containers/nginx-ingress-controller
  version: "0.9.0-beta.8"
  ports:
    - name: http
      number: 80
    - name: https
      number: 443
  replicas: 2
defaultBackend:
  image: gcr.io/google_containers/defaultbackend
  version: "1.3"
  namespace:
  resources:
    memory: 20Mi
    cpu: 10m
  replicas: 1
  tolerations:
  # - key: taintKey
  #   value: taintValue
  #   operator: Equal
  #   effect: NoSchedule
ingressService:
  type: LoadBalancer
  # nodePorts:
  #   - name: http
  #     port: 8080
  #     targetPort: 80
  #     protocol: TCP
  #   - name: https
  #     port: 8443
  #     targetPort: 443
  #     protocol: TCP
  loadBalancerIP:
  externalName:
  tolerations:
  # - key: taintKey
  #   value: taintValue
  #   operator: Equal
kubectl describe svc my-nginx
kubectl describe svc nginx-ingress --namespace kube-system
Name:                     nginx-ingress
Namespace:                kube-system
Labels:                   chart=nginx-ingress-0.1.2
                          component=my-nginx-nginx-ingress
                          heritage=Tiller
                          name=nginx-ingress
                          release=my-nginx
Annotations:              helm.sh/created=1526979619
Selector:                 k8s-app=nginx-ingress-lb
Type:                     LoadBalancer
IP:                       10.100.180.127
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  31378/TCP
Endpoints:                External-IP:80,External-IP:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  32127/TCP
Endpoints:                External-IP:443,External-IP:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
It is not creating an external IP address for nginx-ingress; the status stays pending:
nginx-ingress LoadBalancer 10.100.180.127 <pending> 80:31378/TCP,443:32127/TCP 20s
And my ingress.yaml is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /entity
            backend:
              serviceName: testsvc
              servicePort: 30003
Is it possible to implement ingress in a custom Kubernetes cluster through nginx-ingress-controller?
I'm trying to create a task with Ansible (>= 2.5) that will configure network interfaces, such as:
- name: Set up network interfaces addr
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: address
    value: "{{ item.addr }}"
  with_items:
    - "{{ network }}"
  when: item.addr is defined
  notify: Restart interface

- name: Set up network interfaces netmask
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: netmask
    value: "{{ item.netmask }}"
  with_items:
    - "{{ network }}"
  when: item.netmask is defined
  notify: Restart interface

- name: Set up network interfaces dns
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: dns-nameservers
    value: "{{ item.dns }}"
  with_items:
    - "{{ network }}"
  when: item.dns is defined
  notify: Restart interface

- name: Set up network interfaces dns-search
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.device }}"
    iface: "{{ item.device }}"
    state: present
    option: dns-search
    value: "{{ item.dns_search }}"
  with_items:
    - "{{ network }}"
  when: item.dns_search is defined
  notify: Restart interface
This works, but from my point of view it's not very clean. So I'm trying to use two loops, which obviously isn't working:
- name: Set up network interfaces
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item.iDunnoWhatToPutHere }}"
    iface: "{{ item.iDunnoWhatToPutHere }}"
    state: present
    option: {{ item.option }}
    value: "{{ item.value }}"
  with_together:
    - "{{ network }}"
    - { option: address, value: item.0.addr }
    - { option: netmask, value: item.0.netmask }
    - { option: dns-nameservers, value: item.0.dns }
  when: item.dns_search is defined
  notify: Restart interface
[...]
Edit: This is good, but it's rigid. I'd rather loop over vars that list each option and its value, for any option, because I also have bridge options such as vlan_raw_device, bridge_ports, bridge_stp, and so on. It should just loop blindly over a dict of options and values, as sketched below.
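One way to get that blind loop over arbitrary options (a sketch, assuming each interface's settings are regrouped under an options dict, which is a restructuring of the network variable rather than its current shape) is to iterate interfaces in an outer include and options in an inner with_dict:

# tasks/main.yml -- outer loop over interfaces
- name: Configure each interface
  include_tasks: interface_options.yml
  loop: "{{ network }}"
  loop_control:
    loop_var: iface_cfg

# interface_options.yml -- inner loop over whatever options the interface defines
- name: Set {{ item.key }} on {{ iface_cfg.device }}
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ iface_cfg.device }}"
    iface: "{{ iface_cfg.device }}"
    state: present
    option: "{{ item.key }}"
    value: "{{ item.value }}"
  with_dict: "{{ iface_cfg.options | default({}) }}"
  notify: Restart interface

This would handle bridge options like vlan_raw_device or bridge_ports without a dedicated task per option.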
Edit 2: with the network variable:
network:
  - name: admin
    device: admin
    method: static
    address: X.X.X.X/X
    netmask: X.X.X.X
    up:
      net: X.X.X.X/X
      gateway: X.X.X.X/X
    down:
      net: X.X.X.X/X
      gateway: X.X.X.X/X
Why am I trying all this?
Because I need to be able to change any value that has to be changed.
Because I want to restart (ifup, ifdown) only the interface that changed.
Because I'm surprised that I have to use the same module multiple times.
Can you guys help me figure out how to do this? Maybe it's not possible?
Thanks folks!
Here is a task that will hopefully meet your needs. I have replaced interfaces_file with the debug module, just to print the variables you would actually pass to the interfaces_file module. For the sake of the demo, I added a second interface to the network variable.
Playbook with the variable and the task:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    network:
      - name: admin
        device: admin
        method: static
        address: 10.10.10.22
        netmask: 255.255.255.0
        up:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
        down:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
      - name: admin22
        device: admin22
        method: static
        address: 20.20.20.22
        netmask: 255.255.255.192
        up:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
        down:
          net: X.X.X.X/X
          gateway: X.X.X.X/X
  tasks:
    - name: process network config
      debug:
        msg: "dest: {{ item[0].name }}, option: {{ item[1].option }}, value: {{ item[0][item[1].value] }}"
      with_nested:
        - "{{ network }}"
        - [{ option: address, value: address }, { option: netmask, value: netmask }]
Result:
TASK [process network config] ******************************************************************************************************************************************************************************************
ok: [localhost] => (item=None) => {
"msg": "dest: admin, option: address, value: 10.10.10.22"
}
ok: [localhost] => (item=None) => {
"msg": "dest: admin, option: netmask, value: 255.255.255.0"
}
ok: [localhost] => (item=None) => {
"msg": "dest: admin22, option: address, value: 20.20.20.22"
}
ok: [localhost] => (item=None) => {
"msg": "dest: admin22, option: netmask, value: 255.255.255.192"
}
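For reference, wiring the same loop back into interfaces_file (which the debug task above stands in for) could look roughly like this; the when guard that skips options an interface doesn't define is my addition, not part of the demo:

- name: Set up network interfaces
  interfaces_file:
    dest: "/etc/network/interfaces.d/{{ item[0].device }}"
    iface: "{{ item[0].device }}"
    state: present
    option: "{{ item[1].option }}"
    value: "{{ item[0][item[1].value] }}"
  with_nested:
    - "{{ network }}"
    - [{ option: address, value: address }, { option: netmask, value: netmask }]
  when: item[0][item[1].value] is defined
  notify: Restart interface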
Hope it helps!