Ansible - math operation, subtract

Trying to subtract a number from a variable, which is an int, in Ansible.
vars:
  number: 30
tasks:
  - set_fact: me={{ number - 1 }}
  - debug: var=me
Expectation: me = 29
Result:
fatal: [node1]: FAILED! => {"failed": true, "msg": "Unexpected templating type error occurred on ({{ number - 1 }}): unsupported operand type(s) for -: 'AnsibleUnicode' and 'int'"}

It is a known issue with Ansible/Jinja that the numeric type is not preserved after templating.
Use the int filter inside the {{ ... }} expression:
- set_fact: me={{ number | int - 1 }}
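For reference, here is a minimal, self-contained playbook sketch with the fix applied (localhost and the variable names are just placeholders):
- hosts: localhost
  gather_facts: false
  vars:
    number: 30
  tasks:
    # cast to int before the arithmetic; the templated variable is a string otherwise
    - set_fact:
        me: "{{ number | int - 1 }}"
    - debug:
        var: me
debug then reports me: "29". Note that the value stored by set_fact is rendered back to a string unless Jinja2 native types are enabled.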

Related

How can I store a function's output in a variable in jq?

I have been trying to convert months into days in my script. I managed to make a function that turns a number of months into days. I now want to store the result in a variable to be used in other functions later on.
I have tried:
days: .months|Months_to_Days
I have also tried to include it in an object with the month variable:
{
  months: "12",
  days: .months|Months_to_Days
}
This also doesn't work because when I did:
months: "12"
days: months
as a test, the days returned null. How can I do it?
I have tried:
days: .months|Months_to_Days
This returns:
jq: error: syntax error, unexpected IDENT, expecting $end (Unix shell quoting issues?) at <top-level>, line 65:
days: .months|Months_to_Days
The function Months_to_Days is:
if .|tonumber > 8 then
  ((.|tonumber - 1) * 30) + ((.|tonumber - 1)/2) + 1 - 2 | floor
else
  if .|tonumber < 3 then
    ((.|tonumber - 1) * 30) + (.|tonumber/2) | floor
  else
    ((.|tonumber - 1) * 30) + (.|tonumber/2) - 2 | floor
  end | debug
end | debug
There is no input and it only uses the variables in the script.
The variables were:
{
  months: "12"
}
Before I wanted to store the output of the function in a variable, I would run it with:
.months|Months_to_Days
There isn't an expected output; I just want to store the output of the function to use in other functions.
Please show:
Your input JSON
Your function definition
Your full jq program
Your expected output JSON
Because I don't know what you are trying to do or what's not working. The following jq program works just fine:
$ printf '{"months": 5}' | jq 'def to_days: . * 30; # a very simple dummy impl
(.months|to_days) as $calculated_days # store in variable
| { days: $calculated_days } # use variable'
Variables are explained in Variable / Symbolic Binding Operator: ... as $identifier | ... in the jq manual.

Ansible: split a dictionary with list values to a list of dictionaries with a single item from the list as value

I need to convert a dictionary with list values into a list of dictionaries.
Given:
my_dict:
  key1: ["111", "222"]
  key2: ["444", "555"]
Desired output:
my_list:
  - key1: "111"
    key2: "444"
  - key1: "222"
    key2: "555"
What I've tried:
- set_fact:
    my_list: "{{ my_list | default([]) + [{item.0.key: item.1}] }}"
  loop: "{{ my_dict | dict2items | subelements('value') }}"
And what I've got:
[
  {
    "key1": "111"
  },
  {
    "key1": "222"
  },
  {
    "key2": "444"
  },
  {
    "key2": "555"
  }
]
Thankful for any help and suggestions!
Get the keys and values of the dictionary first
keys: "{{ my_dict.keys()|list }}"
vals: "{{ my_dict.values()|list }}"
gives
keys: [key1, key2]
vals:
- ['111', '222']
- ['444', '555']
Transpose the values
- set_fact:
    tvals: "{{ tvals|d(vals.0)|zip(item)|map('flatten') }}"
  loop: "{{ vals[1:] }}"
gives
tvals:
- ['111', '444']
- ['222', '555']
Create the list of the dictionaries
my_list: "{{ tvals|map('zip', keys)|
map('map', 'reverse')|
map('community.general.dict')|
list }}"
gives
my_list:
  - key1: '111'
    key2: '444'
  - key1: '222'
    key2: '555'
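To see what the chain does, you can apply the same filters to a single row of tvals, e.g.
- debug:
    msg: "{{ tvals.0|zip(keys)|map('reverse')|community.general.dict }}"
zip pairs each value with its key ([['111', 'key1'], ['444', 'key2']]), reverse swaps each pair to [key, value], and community.general.dict turns the pairs into {key1: '111', key2: '444'}.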
Notes
Example of a complete playbook
- hosts: localhost
  vars:
    my_dict:
      key1: ["111", "222"]
      key2: ["444", "555"]
    keys: "{{ my_dict.keys()|list }}"
    vals: "{{ my_dict.values()|list }}"
    my_list: "{{ tvals|map('zip', keys)|
                 map('map', 'reverse')|
                 map('community.general.dict')|
                 list }}"
  tasks:
    - set_fact:
        tvals: "{{ tvals|d(vals.0)|zip(item)|map('flatten') }}"
      loop: "{{ vals[1:] }}"
    - debug:
        var: my_list
You can use a custom filter to transpose the matrix. For example,
shell> cat filter_plugins/numpy.py
# All rights reserved (c) 2022, Vladimir Botka <vbotka#gmail.com>
# Simplified BSD License, https://opensource.org/licenses/BSD-2-Clause

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.errors import AnsibleFilterError
from ansible.module_utils.common._collections_compat import Sequence
import json
import numpy


def numpy_transpose(arr):
    if not isinstance(arr, Sequence):
        raise AnsibleFilterError('First argument for numpy_transpose must be list. %s is %s' %
                                 (arr, type(arr)))
    arr1 = numpy.array(arr)
    arr2 = arr1.transpose()
    return json.dumps(arr2.tolist())


class FilterModule(object):
    ''' Ansible wrappers for Python NumPy methods '''

    def filters(self):
        return {
            'numpy_transpose': numpy_transpose,
        }
Then you can avoid iteration. For example, the playbook below gives the same result
- hosts: localhost
  vars:
    my_dict:
      key1: ["111", "222"]
      key2: ["444", "555"]
    keys: "{{ my_dict.keys()|list }}"
    vals: "{{ my_dict.values()|list }}"
    tvals: "{{ vals|numpy_transpose()|from_yaml }}"
    my_list: "{{ tvals|map('zip', keys)|
                 map('map', 'reverse')|
                 map('community.general.dict')|
                 list }}"
  tasks:
    - debug:
        var: my_list
Transposing explained
Let's start with a 2x2 matrix
vals:
- ['111', '222']
- ['444', '555']
The task below
- set_fact:
    tvals: "{{ tvals|d(vals.0)|zip(item) }}"
  loop: "{{ vals[1:] }}"
gives step by step:
a) Before the iteration starts the variable tvals is assigned the default value vals.0
vals.0: ['111', '222']
b) The task iterates over the list vals[1:]. These are all rows of the matrix except the first one
vals[1:]:
- ['444', '555']
c) The first, and only, iteration zips the first and the second row. This is the result
vals.0|zip(vals.1):
- ['111', '444']
- ['222', '555']
Let's proceed with a 3x3 matrix
vals:
- ['111', '222', '333']
- ['444', '555', '666']
- ['777', '888', '999']
The task below
- set_fact:
    tvals: "{{ tvals|d(vals.0)|zip(item)|map('flatten') }}"
  loop: "{{ vals[1:] }}"
gives step by step:
a) Before the iteration starts the variable tvals is assigned the default value vals.0
vals.0: ['111', '222', '333']
b) The task iterates the list vals[1:]
vals[1:]:
- ['444', '555', '666']
- ['777', '888', '999']
c) The first iteration zips the first and the second row and assigns the result to tvals. The flatten filter has no effect on these rows
vals.0|zip(vals.1)|map('flatten'):
- ['111', '444']
- ['222', '555']
- ['333', '666']
d) The next iteration zips tvals and the third row
tvals|zip(vals.2):
  - - ['111', '444']
    - '777'
  - - ['222', '555']
    - '888'
  - - ['333', '666']
    - '999'
e) The rows must be flattened. This is the result
tvals|zip(vals.2)|map('flatten'):
- ['111', '444', '777']
- ['222', '555', '888']
- ['333', '666', '999']
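The walk-through can be reproduced with a short self-contained playbook (localhost is a placeholder, values as above):
- hosts: localhost
  gather_facts: false
  vars:
    vals:
      - ['111', '222', '333']
      - ['444', '555', '666']
      - ['777', '888', '999']
  tasks:
    # fold the rows one by one: start with the first row, zip in the next row, flatten
    - set_fact:
        tvals: "{{ tvals|d(vals.0)|zip(item)|map('flatten') }}"
      loop: "{{ vals[1:] }}"
    - debug:
        var: tvals
debug prints the transposed matrix from step e).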

Flatten nested dictionary to key/value pairs with Ansible

---
- hosts: localhost
  vars:
    mydict:
      key1: val1
      key2: val2
      key3:
        subkey1: subval1
        subkey2: subval2
  tasks:
    - debug:
        msg: "{{ TODO }}"
How would I make the above debug message print out all key/value pairs from the nested dictionary? Assume the depth is unknown. I would expect the output to be something like:
{
  "key1": "val1",
  "key2": "val2",
  "subkey1": "subval1",
  "subkey2": "subval2"
}
Write a filter plugin and use pandas.json_normalize, e.g.
shell> cat filter_plugins/dict_normalize.py
# json_normalize is available from the pandas top level since pandas 1.0
from pandas import json_normalize


def dict_normalize(d):
    # flatten the nested dictionary; nested keys become dotted column names
    df = json_normalize(d)
    # return the list of column names followed by the value rows (one row for a single dict)
    return [df.columns.values.tolist()] + df.values.tolist()


class FilterModule(object):

    def filters(self):
        return {
            'dict_normalize': dict_normalize,
        }
The filter returns lists of keys and values
- set_fact:
    mlist: "{{ mydict|dict_normalize }}"
gives
mlist:
  - - key1
    - key2
    - key3.subkey1
    - key3.subkey2
  - - val1
    - val2
    - subval1
    - subval2
Create a dictionary, e.g.
- debug:
    msg: "{{ dict(mlist.0|zip(mlist.1)) }}"
gives
msg:
  key1: val1
  key2: val2
  key3.subkey1: subval1
  key3.subkey2: subval2
If the subkeys are unique remove the path
- debug:
    msg: "{{ dict(_keys|zip(mlist.1)) }}"
  vars:
    _regex: '^(.*)\.(.*)$'
    _replace: '\2'
    _keys: "{{ mlist.0|map('regex_replace', _regex, _replace)|list }}"
gives
msg:
  key1: val1
  key2: val2
  subkey1: subval1
  subkey2: subval2
Notes
Install the package, e.g. python3-pandas
The filter might be extended to support all parameters of json_normalize
The greedy regex also works for deeper nested dictionaries
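For reference, the fragments above fit together in a complete playbook along these lines (assuming filter_plugins/dict_normalize.py sits next to the playbook and pandas is installed, as noted above):
- hosts: localhost
  vars:
    mydict:
      key1: val1
      key2: val2
      key3:
        subkey1: subval1
        subkey2: subval2
  tasks:
    # flatten the dictionary into [keys, values] with the custom filter
    - set_fact:
        mlist: "{{ mydict|dict_normalize }}"
    # zip the keys and values back into a flat dictionary
    - debug:
        msg: "{{ dict(mlist.0|zip(mlist.1)) }}"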
Q: "Turn the key3.subkey1 to get the original dictionary."
A: Use json_query. For example, given the dictionary created in the first step
- set_fact:
    mydict_flat: "{{ dict(mlist.0|zip(mlist.1)) }}"
gives
mydict_flat:
  key1: val1
  key2: val2
  key3.subkey1: subval1
  key3.subkey2: subval2
iterate the keys and retrieve the values from mydict
- debug:
    msg: "{{ mydict|json_query(item) }}"
  loop: "{{ mydict_flat|list }}"
gives
msg: val1
msg: val2
msg: subval1
msg: subval2

How to pass map parameter in corda flow start command

flow start TransactionRecoveryFlow report: {O=PartyB, L=New York, C=US=LedgerSyncFindings(missingAtRequester=[24DC1B1C6D8743988C5F4DE6725C64D4354B713D78F27E60CF03398B32657D57, FD3F0A5D8E03A9E8B79229B8271DCEDA691AE106A99F38E5F9F0408FB1F1BAFA, A997737DC3359FE7F3D15CB06E12EF347DA149328F263D2B35F99DA8F363EFCB], missingAtRequestee=[])}
I am trying to pass the report parameter to the flow through the command line; it is a map of type "Map<Party, LedgerSyncFindings>". How do I pass a value to it from the command line?
I am getting multiple syntax errors in the output.
exception: while parsing a flow mapping in 'reader', line 1, column 11: { report: {O=PartyB, L=New York, C=US=Ledg ... ^ expected ',' or '}', but got [ in 'reader', line 1, column 77: ... SyncFindings(missingAtRequester=[24DC1B1C6D8743988C5F4DE6725C64D ... ^ at [Source: (StringReader); line: 1, column: 77] - while parsing a flow mapping in 'reader', line 1, column 11: { report: {O=PartyB, L=New York, C=US=Ledg ... ^ expected ',' or '}', but got [ in 'reader', line 1, column 77: ... SyncFindings(missingAtRequester=[24DC1B1C6D8743988C5F4DE6725C64D ... ^ at [Source: (StringReader); line: 1, column: 77] - while parsing a flow mapping in 'reader', line 1, column 11: { report: {O=PartyB, L=New York, C=US=Ledg ... ^ expected ',' or '}', but got [ in 'reader', line 1, column 77: ... SyncFindings(missingAtRequester=[24DC1B1C6D8743988C5F4DE6725C64D ... ^ [errorCode=1eyuahe, moreInformationAt=https://errors.corda.net/OS/4.5/1eyuahe]
According to this answer, a list is passed like this:
flow start MyFlow listParam: [value1, value2]
Following the above approach, a map should be passed like this:
flow start MyFlow mapParam: [key1:value1, key2:value2]
In your code sample, you're missing the brackets [ ] around the map, and the colon : between your key/value pairs.
Also, pay attention to how you pass objects in the shell (see here).

Converting EDI 837I to XML using BizTalk server

I tried with another file. While converting EDI 837I to XML, the errors below were raised. Are these mandatory fields?
I manually passed values to ISA09, ISA10 and ISA13, and then I receive:
Error encountered during parsing. The X12 interchange with id ' ', with sender id ' ', receiver id ' ' had the following errors:
Error: 4 (Field level error) SegmentID: ISA Position in TS: 1 Data Element ID: ISA09 Position in Segment: 9 Data Value: 8: Invalid Date
Error: 5 (Field level error) SegmentID: ISA Position in TS: 1 Data Element ID: ISA10 Position in Segment: 10 Data Value: 9: Invalid Time
Error: 6 (Field level error) SegmentID: ISA Position in TS: 1 Data Element ID: ISA13 Position in Segment: 13 Data Value: 6: Invalid character in data element
ISA09 is YYMMDD formatted.
ISA10 is HHMM formatted.
ISA13 is n9, meaning 9 digits.
ISA is fixed length, so numbers must be zero-padded and text space-padded.
Have you done the EDI Tutorials for BizTalk?
