When I run the playbook it shows "Variable not defined" - networking

- hosts: switch
  connection: network_cli
  become_method: enable
  gather_facts: no
  vars_prompt:
    - name: vlan_id
      prompt: enter the vlan_id
      private: no
  vars:
    cli:
      username: admin
      password: int123$%^
    vlans:
      100: "CORE"
      200: "MONITORING"
      300: "ACCESS"
      400: "GUEST_WIFI"
    ansible_buffer_read_timeout: 2
  tasks:
    - name: "creating the vlans"
      ios_vlans:
        config:
          - vlan_id: "{{ vlan_id }}"
            mtu: 700
            state: active
            shutdown: disabled
      register: show_vlan
    - debug:
        var: show_vlan.stdout_lines
Output:

enter the vlan_id: 11

PLAY [switch] ****************************************************************

TASK [creating the vlans] ****************************************************
changed: [172.16.1.252]

TASK [debug] *****************************************************************
ok: [172.16.1.252] => {
    "show_vlan.stdout_lines": "VARIABLE IS NOT DEFINED!"
}

PLAY RECAP *******************************************************************
172.16.1.252 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

The ios_vlans module does not have a stdout_lines key in its return values. Please check the documentation here.
So debug show_vlan instead:
- debug:
    var: show_vlan
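As a side note, the vlans map defined under vars is never used by the task. A minimal sketch (untested, assuming the name option of ios_vlans as documented in the cisco.ios collection) of how it could supply the VLAN name for the prompted vlan_id:

# Sketch: look up the name for the prompted VLAN ID in the vlans map from vars
- name: "creating the vlan with a name from the vlans map"
  ios_vlans:
    config:
      - vlan_id: "{{ vlan_id }}"
        # keys in the vlans map are integers, so cast the prompted string;
        # fall back to a hypothetical placeholder name if the ID is unknown
        name: "{{ vlans[vlan_id | int] | default('VLAN_' ~ vlan_id) }}"
        state: active
        shutdown: disabled
  register: show_vlan

- debug:
    var: show_vlan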

Related

artifactory docker push error unknown: Method Not Allowed

We are using an Artifactory Pro license and installed Artifactory through Helm on Kubernetes.
When we create a Docker local repo (the Repository Path Method) and push a Docker image,
we get a 405 Method Not Allowed error. (docker login / pull works normally.)
########## error msg
# docker push art2.bee0dev.lge.com/docker-local/hello-world
e07ee1baac5f: Pushing [==================================================>] 14.85kB
unknown: Method Not Allowed
##########
We are using an HAProxy load balancer for TLS termination in front of the NGINX Ingress Controller
(the NGINX Ingress Controller's HTTP NodePort is 31071).
Please help us figure out how to solve this problem.
The Artifactory and HAProxy settings are as follows.
########## values.yaml
global:
  joinKeySecretName: "artbee-stg-joinkey-secret"
  masterKeySecretName: "artbee-stg-masterkey-secret"
  storageClass: "sa-stg-netapp8300-bee-blk-nonretain"
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts: ["art2.bee0dev.lge.com"]
  routerPath: /
  artifactoryPath: /artifactory/
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass_header Server;
      proxy_set_header X-JFrog-Override-Base-Url https://art2.bee0dev.lge.com;
  labels: {}
  tls: []
  additionalRules: []
## Artifactory license.
artifactory:
  name: artifactory
  replicaCount: 1
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/artifactory-pro
    # tag:
    pullPolicy: IfNotPresent
  labels: {}
  updateStrategy:
    type: RollingUpdate
  migration:
    enabled: false
  customInitContainersBegin: |
    - name: "init-mount-permission-setup"
      image: "{{ .Values.initContainerImage }}"
      imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - NET_RAW
      command:
        - 'bash'
        - '-c'
        - if [ $(ls -la /var/opt/jfrog | grep artifactory | awk -F' ' '{print $3$4}') == 'rootroot' ]; then
            echo "mount permission=> root:root";
            echo "change mount permission to 1030:1030 " {{ .Values.artifactory.persistence.mountPath }};
            chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }};
          else
            echo "already set. No change required.";
            ls -la {{ .Values.artifactory.persistence.mountPath }};
          fi
      volumeMounts:
        - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
          name: artifactory-volume
  database:
    maxOpenConnections: 80
  tomcat:
    maintenanceConnector:
      port: 8091
    connector:
      maxThreads: 200
      sendReasonPhrase: false
      extraConfig: 'acceptCount="100"'
  customPersistentVolumeClaim: {}
  license:
    ## licenseKey is the license key in plain text. Use either this or the license.secret setting
    licenseKey: "???"
    secret:
    dataKey:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "20Gi"
      cpu: "8"
  javaOpts:
    xms: "1g"
    xmx: "12g"
  admin:
    ip: "127.0.0.1"
    username: "admin"
    password: "!swiit123"
    secret:
    dataKey:
  service:
    name: artifactory
    type: ClusterIP
    loadBalancerSourceRanges: []
    annotations: {}
  persistence:
    mountPath: "/var/opt/jfrog/artifactory"
    enabled: true
    accessMode: ReadWriteOnce
    size: 100Gi
    type: file-system
    storageClassName: "sa-stg-netapp8300-bee-blk-nonretain"
nginx:
  enabled: false
##########
########## haproxy config
frontend cto-stage-http-frontend
bind 10.185.60.75:80
bind 10.185.60.76:80
bind 10.185.60.201:80
bind 10.185.60.75:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
bind 10.185.60.76:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
bind 10.185.60.201:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
mode http
option forwardfor
option accept-invalid-http-request
acl k8s-cto-stage hdr_end(host) -i -f /etc/haproxy/web-ide/cto-stage
use_backend k8s-cto-stage-http if k8s-cto-stage
backend k8s-cto-stage-http
mode http
redirect scheme https if !{ ssl_fc }
option tcp-check
balance roundrobin
server lgestgbee04v 10.185.60.78:31071 check fall 3 rise 2
##########
The request doesn't seem to be landing at the correct endpoint. Please remove the semicolon from the docker command and retry:
docker push art2.bee0dev.lge.com;/docker-local/hello-world
Try executing it like this instead:
docker push art2.bee0dev.lge.com/docker-local/hello-world

Ansible filter to extract specific keys from a dict into another dict

Using the nios lookup module, I can get a list of dicts of records:
- set_fact:
    records: "{{ lookup('community.general.nios', 'record:a', filter={'name~': 'abc.com'}) }}"
This returns something like:
- ref: record:a/someBase64:name/view
  name: abc.com
  ipv4addr: 1.2.3.4
  view: default
- ref: record:a/someBase64:name/view
  name: def.abc.com
  ipv4addr: 1.2.3.5
  view: default
- ref: record:a/someBase64:name/view
  name: ghi.abc.com
  ipv4addr: 1.2.3.6
  view: default
I want to convert this into a dict of dicts of {name}: a: {ipv4addr}
abc.com:
  a: 1.2.3.4
def.abc.com:
  a: 1.2.3.5
ghi.abc.com:
  a: 1.2.3.6
So that I can then run a similar lookup to get other record types (e.g. cname) and combine them into the same dict. The items2dict filter seems halfway there, but I want the added a: key underneath.
If you just wanted a dictionary that maps name to an ipv4 address, like:
{
  "abc.com": "1.2.3.4",
  ...
}
You could use a simple json_query expression. Take a look at the
set_fact task in the following example:
- hosts: localhost
  gather_facts: false
  vars:
    data:
      - ref: record:a/someBase64:name/view
        name: abc.com
        ipv4addr: 1.2.3.4
        view: default
      - ref: record:a/someBase64:name/view
        name: def.abc.com
        ipv4addr: 1.2.3.5
        view: default
      - ref: record:a/someBase64:name/view
        name: ghi.abc.com
        ipv4addr: 1.2.3.6
        view: default
  tasks:
    - set_fact:
        name_map: "{{ dict(data|json_query('[].[name, ipv4addr]')) }}"
    - debug:
        var: name_map
Running that playbook will output:
PLAY [localhost] ***************************************************************

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
    "name_map": {
        "abc.com": "1.2.3.4",
        "def.abc.com": "1.2.3.5",
        "ghi.abc.com": "1.2.3.6"
    }
}

PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You could use a similar structure to extract other data (e.g. cname records). This would get you a dictionary per type of data, rather than merging everything into a single dictionary as you've requested, but it might end up being easier to work with.
To get exactly the structure you want, you can use set_fact in a loop, like this:
- hosts: localhost
  vars:
    data:
      - ref: record:a/someBase64:name/view
        name: abc.com
        ipv4addr: 1.2.3.4
        view: default
      - ref: record:a/someBase64:name/view
        name: def.abc.com
        ipv4addr: 1.2.3.5
        view: default
      - ref: record:a/someBase64:name/view
        name: ghi.abc.com
        ipv4addr: 1.2.3.6
        view: default
  gather_facts: false
  tasks:
    - set_fact:
        name_map: "{{ name_map|combine({item.name: {'a': item.ipv4addr}}) }}"
      loop: "{{ data }}"
      vars:
        name_map: {}
    - debug:
        var: name_map
This will produce:
PLAY [localhost] ***************************************************************

TASK [set_fact] ****************************************************************
ok: [localhost] => (item={'ref': 'record:a/someBase64:name/view', 'name': 'abc.com', 'ipv4addr': '1.2.3.4', 'view': 'default'})
ok: [localhost] => (item={'ref': 'record:a/someBase64:name/view', 'name': 'def.abc.com', 'ipv4addr': '1.2.3.5', 'view': 'default'})
ok: [localhost] => (item={'ref': 'record:a/someBase64:name/view', 'name': 'ghi.abc.com', 'ipv4addr': '1.2.3.6', 'view': 'default'})

TASK [debug] *******************************************************************
ok: [localhost] => {
    "name_map": {
        "abc.com": {
            "a": "1.2.3.4"
        },
        "def.abc.com": {
            "a": "1.2.3.5"
        },
        "ghi.abc.com": {
            "a": "1.2.3.6"
        }
    }
}

PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
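As a further note, the same nested shape can likely be produced without a loop by moving the a: key into the JMESPath expression itself. This is only a sketch (untested), building on the json_query approach from the first example:

- set_fact:
    # each projected element is a [name, {"a": ipv4addr}] pair,
    # so dict() assembles the same mapping the combine() loop builds
    name_map: "{{ dict(data | json_query('[].[name, {a: ipv4addr}]')) }}"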

Simple gRPC envoy configuration

I'm trying to set up an Envoy proxy as a gRPC front end and can't get it to work, so I'm trying to get to as simple a test setup as possible and build from there, but I can't get that to work either. Here's what my test setup looks like:
Python server (slightly modified gRPC example code)
# greeter_server.py
from concurrent import futures
import time

import grpc

import helloworld_pb2
import helloworld_pb2_grpc

_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class Greeter(helloworld_pb2_grpc.GreeterServicer):

    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port('[::]:8081')
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()
Python client (slightly modified gRPC example code)
from __future__ import print_function

import grpc

import helloworld_pb2
import helloworld_pb2_grpc


def run():
    # NOTE(gRPC Python Team): .close() is possible on a channel and should be
    # used in circumstances in which the with statement does not fit the needs
    # of the code.
    with grpc.insecure_channel('localhost:9911') as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
    print("Greeter client received: " + response.message)


if __name__ == '__main__':
    run()
And then my two envoy yaml files:
# envoy-hello-server.yaml
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8811
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          access_log:
          - name: envoy.file_access_log
            typed_config:
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: "/dev/stdout"
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                  grpc: {}
                route:
                  cluster: hello_grpc_service
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: hello_grpc_service
    connect_timeout: 0.250s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: hello_grpc_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: hello_grpc_service
                port_value: 8081

admin:
  access_log_path: "/tmp/envoy_hello_server.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8881
and
# envoy-hello-client.yaml
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 9911
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          add_user_agent: true
          access_log:
          - name: envoy.file_access_log
            typed_config:
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: "/dev/stdout"
          stat_prefix: egress_http
          common_http_protocol_options:
            idle_timeout: 0.840s
          use_remote_address: true
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - grpc
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: backend-proxy
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: backend-proxy
    type: logical_dns
    dns_lookup_family: V4_ONLY
    lb_policy: round_robin
    connect_timeout: 0.250s
    http_protocol_options: {}
    load_assignment:
      cluster_name: backend-proxy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: hello_grpc_service
                port_value: 8811

admin:
  access_log_path: "/tmp/envoy_hello_client.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9991
Now, what I expect this to allow is something like: hello_client.py (port 9911) -> envoy (envoy-hello-client.yaml) -> envoy (envoy-hello-server.yaml) -> hello_server.py (port 8081)
Instead, what I get is an error from the python client:
$ python3 greeter_client.py
Traceback (most recent call last):
  File "greeter_client.py", line 35, in <module>
    run()
  File "greeter_client.py", line 30, in run
    response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
  File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 533, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNIMPLEMENTED
    details = ""
    debug_error_string = "{"created":"@1594770575.642032812","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"","grpc_status":12}"
>
And in the envoy client log:
[2020-07-14 16:22:10.407][16935][info][main] [external/envoy/source/server/server.cc:652] starting main dispatch loop
[2020-07-14 16:23:25.441][16935][info][runtime] [external/envoy/source/common/runtime/runtime_impl.cc:524] RTDS has finished initialization
[2020-07-14 16:23:25.441][16935][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:182] cm init: all clusters initialized
[2020-07-14 16:23:25.441][16935][info][main] [external/envoy/source/server/server.cc:631] all clusters initialized. initializing init manager
[2020-07-14 16:23:25.441][16935][info][config] [external/envoy/source/server/listener_manager_impl.cc:844] all dependencies initialized. starting workers
[2020-07-14 16:23:25.441][16935][warning][main] [external/envoy/source/server/server.cc:537] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2020-07-14T23:49:35.641Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 NR 0 0 0 - "10.0.0.56" "grpc-python/1.16.1 grpc-c/6.0.0 (linux; chttp2; gao)" "aa72310a-3188-46b2-8cbf-9448b074f7ae" "localhost:9911" "-"
And nothing in the server log.
Also, weirdly, there is an almost one-second delay between when I run the python client and when the log message shows up in the client envoy.
What am I missing to make these two scripts talk via envoy?
I know I'm a bit late; hope this helps someone. Since your gRPC server is running on the same host, you could specify the hostname as host.docker.internal (the earlier docker.for.mac.localhost is deprecated since Docker v18.03.0).
In your case, if you are running in a dockerized environment, you could do the following:
Envoy version: 1.13+
clusters:
- name: backend-proxy
  type: logical_dns
  dns_lookup_family: V4_ONLY
  lb_policy: round_robin
  connect_timeout: 0.250s
  http_protocol_options: {}
  load_assignment:
    cluster_name: backend-proxy
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: host.docker.internal
              port_value: 8811
hello_grpc_service won't be resolved to an IP in a dockerized environment.
Note: you could enable Envoy's trace log level for more detailed logs.
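One caveat worth adding: host.docker.internal resolves out of the box on Docker Desktop (Mac/Windows) but not on plain Linux. A rough sketch, assuming Docker Engine 20.10+ and a Compose setup (the image tag and file paths here are illustrative, not taken from the question), is to map it to the host gateway when starting the client Envoy:

# docker-compose.yml (illustrative)
services:
  envoy-client:
    image: envoyproxy/envoy:v1.13.1            # hypothetical tag matching "Envoy version: 1.13+"
    volumes:
      - ./envoy-hello-client.yaml:/etc/envoy/envoy.yaml
    ports:
      - "9911:9911"
    extra_hosts:
      - "host.docker.internal:host-gateway"    # requires Docker Engine 20.10+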

Ansible datetime timezone conversion

Is there a way to convert the Ansible date into a different timezone in a "debug" statement in my playbook? I don't want a global timezone setting at the playbook level. I have this:
debug:
  msg: "{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}"
This works fine but displays the time in UTC. I need the time to be displayed in EDT without setting the timezone at the global playbook level. How do I accomplish this?
If you use a command task to run date rather than relying on the ansible_date_time variable, you can set the timezone via an environment variable. E.g. the following playbook:
- hosts: localhost
  vars:
    ansible_python_interpreter: /usr/bin/python
  tasks:
    - command: "date '+%Y-%m-%d %H:%M:%S'"
      register: date_utc
      environment:
        TZ: UTC
    - command: "date '+%Y-%m-%d %H:%M:%S'"
      register: date_us_eastern
      environment:
        TZ: US/Eastern
    - debug:
        msg:
          - "{{ date_utc.stdout }}"
          - "{{ date_us_eastern.stdout }}"
Results in this output:
PLAY [localhost] *****************************************************************************

TASK [Gathering Facts] ***********************************************************************
ok: [localhost]

TASK [command] *******************************************************************************
changed: [localhost]

TASK [command] *******************************************************************************
changed: [localhost]

TASK [debug] *********************************************************************************
ok: [localhost] => {
    "msg": [
        "2020-05-12 15:21:05",
        "2020-05-12 11:21:06"
    ]
}

PLAY RECAP ***********************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
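A related variation, if you would rather not register separate tasks, is to shell out through the pipe lookup with TZ set inline. This is only a sketch: unlike the command tasks above, which run on the target hosts, the pipe lookup always runs on the control node, and it assumes the US/Eastern zone data is installed there:

- debug:
    # runs on the control node; TZ only affects this one date invocation
    msg: "{{ lookup('pipe', 'TZ=US/Eastern date +\"%Y-%m-%d %H:%M:%S\"') }}"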

How to define an Ansible playbook environment with passed in environment dictionary

My problem is that I'm unable to set the environment for the entire playbook by passing in a dict to be set as the environment. Is that possible?
For example, here is my sample ansible playbook:
- hosts: localhost
  vars:
    env_vars: "{{ PLAY_ENVS }}"
  environment: "{{ env_vars }}"
  tasks:
    - name: Here is what you passed in
      debug: msg="env_vars == {{ env_vars }}"
    - name: What is FAKE_ENV
      debug: msg="FAKE_ENV == {{ lookup('env', 'FAKE_ENV') }}"
And I'm passing the command:
/bin/ansible-playbook sample_playbook.yml --extra-vars '{PLAY_ENVS: {"FAKE_ENV":"/path/to/fake/destination"}}'
The response I'm getting is the following:
PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [Here is what you passed in] **********************************************
ok: [localhost] => {
    "msg": "env_vars == {u'FAKE_ENV': u'/path/to/fake/destination'}"
}

TASK [What is FAKE_ENV] ********************************************************
ok: [localhost] => {
    "msg": "FAKE_ENV == "
}

PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
As you can see 'FAKE_ENV' is not being set in the environment. What am I doing wrong?
Lookups in Ansible are executed in the context of the parent ansible process.
You should check your environment with a spawned process, like this:
- hosts: localhost
  vars:
    env_vars:
      FAKE_ENV: foobar
  environment: "{{ env_vars }}"
  tasks:
    - name: Test with spawned process
      shell: echo $FAKE_ENV
And you get the expected result: "stdout": "foobar"
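For the same reason, if the debug task only needs to show the value that was passed in, it can read it from the dict itself instead of the process environment; a small sketch reusing the playbook's own env_vars variable:

- name: What is FAKE_ENV (read from the passed-in dict, not the environment)
  debug:
    msg: "FAKE_ENV == {{ env_vars.FAKE_ENV | default('') }}"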
