I am trying to get an array of all possible paths in a JSON Document.
Given the document:
{
"a": "bar",
"b": [
{"c": 3}, {"d": 6},
{"c": 7}, {"d": 5}
]
}
I'd like the output to be:
["","a","b","b/0","b/0/c","b/1","b/1/d","b/2","b/2/c","b/3","b/3/d"]
I got pretty close; here is a snippet on the JQ Playground.
Try
jq '["", (paths | join("/"))]'
Demo
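Putting the answer together with the sample document (a minimal sketch; this relies on jq 1.6+, whose join/1 converts the numeric array indices to strings):

```shell
# List every path in the document, prefixed with "" for the root.
# join("/") stringifies the numeric array indices (jq 1.6+).
echo '{"a":"bar","b":[{"c":3},{"d":6},{"c":7},{"d":5}]}' \
  | jq -c '["", (paths | join("/"))]'
# → ["","a","b","b/0","b/0/c","b/1","b/1/d","b/2","b/2/c","b/3","b/3/d"]
```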
Related
Given the following JSON:
{ "doc": "foobar",
"pages": [
{ "data": { "keyA": { "foo": "foo", "bar": "bar" }, "keyB": {"A": "123", "B": "c"} },
...
]
}
I am performing the following jq search:
#!/usr/bin/jq -f
.. | .keyA?, .keyB?
Which is great:
{'foo': 'foo', 'bar': 'bar'} # pretend it is formatted
{'A': '123', 'B': 'c'}
But now I'd like to format the output like:
{'key': 'keyA', 'value': {...}}
{'key': 'keyB', 'value': {...}}
However, there doesn't seem to be a reasonable way to use the recursive operator .. and do this.
Instead, I'll have to run N independent searches for the N keys to have them all separated out. This is painful.
Is it possible to reform the data so that the output of the search function gets labeled with the key that yielded the object?
You can use object construction:
$ jq -c '.. | { keyA, keyB }?' sample.json
{"keyA":null,"keyB":null}
{"keyA":null,"keyB":null}
{"keyA":{"foo":"foo","bar":"bar"},"keyB":{"A":"123","B":"c"}}
{"keyA":null,"keyB":null}
{"keyA":null,"keyB":null}
Which you can then filter out:
$ jq -c '.. | { keyA, keyB }? | to_entries[] | select(.value)' sample.json
{"key":"keyA","value":{"foo":"foo","bar":"bar"}}
{"key":"keyB","value":{"A":"123","B":"c"}}
to_entries turns an object into an array of key-value pairs, and the -c flag makes the output "compact". Note that select(.value) also drops legitimate null (and false) values; selecting on the key name explicitly, as in the following variant, finds those too:
#!/usr/bin/env -S jq -c -f
.. | objects | to_entries[] | select(.key == ("keyA", "keyB"))
{"key":"keyA","value":{"foo":"foo","bar":"bar"}}
{"key":"keyB","value":{"A":"123","B":"c"}}
I am trying to parse the JSON file below and store the result in another JSON file. How do I achieve this?
{
"Objects": [
{
"ElementName": "Test1",
"ElementArray": ["abc","bcd"],
"ElementUnit": "4"
},
{
"ElementName": "Test2",
"ElementArray": ["abc","bcde"],
"ElementUnit": "8"
}
]
}
Expected result :
{
"Test1" :[
"abc","bcd"
],
"Test2" :[
"abc","bcde"
]
}
I've tried something along the lines of the below, but I seem to be off:
jq '[.Objects[].ElementName ,(.Objects[]. ElementArray[])]' user1.json
jq ".Objects[].ElementName .Objects[].ElementArray" ruser1.json
Your expected output needs to be wrapped in curly braces in order to be a valid JSON object. That said, use from_entries to create an object from an array of key-value pairs, which can be produced by accordingly mapping the input object's Objects array.
.Objects | map({key: .ElementName, value: .ElementArray}) | from_entries
{
"Test1": [
"abc",
"bcd"
],
"Test2": [
"abc",
"bcde"
]
}
Demo
https://jqplay.org/s/YbjICOd8EJ
You can also use reduce
reduce .Objects[] as $o ({}; .[$o.ElementName]=$o.ElementArray)
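Running the reduce variant on the sample input (a minimal sketch, with ElementUnit omitted since it is not used):

```shell
# reduce folds each object in .Objects into an accumulator,
# adding one ElementName -> ElementArray pair per step.
echo '{"Objects":[{"ElementName":"Test1","ElementArray":["abc","bcd"]},{"ElementName":"Test2","ElementArray":["abc","bcde"]}]}' \
  | jq -c 'reduce .Objects[] as $o ({}; .[$o.ElementName] = $o.ElementArray)'
# → {"Test1":["abc","bcd"],"Test2":["abc","bcde"]}
```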
I'm trying to flatten a JSON file to .csv. I'd like to use jqplay for this instead of programming it in Python, for example.
The example below is an array that also contains arrays.
My desired output is one line per entry of the inner array:
so
OPEN, NR1, ....
CLOSED, NR2, ...
....
Can anyone help me with a good jq command for this?
[
{
"description": "Berendrechtsluis",
"lock_id": "BES",
"longitude_wgs84": 4.28561,
"latitude_wgs84": 51.34414,
"lock_doors": [
{
"state": "OPEN",
"lock_door_id": "NR1",
"operational_state": "NO_DATA",
"state_since_in_utc": "2021-12-29T16:32:23Z",
"longitude_wgs84": 4.28214,
"latitude_wgs84": 51.34426
},
{
"state": "CLOSED",
"lock_door_id": "NR2",
"operational_state": "WORKING",
"state_since_in_utc": "2022-01-12T12:32:52Z",
"operational_state_since_in_utc": "2021-12-22T13:13:57Z",
"longitude_wgs84": 4.28247,
"latitude_wgs84": 51.34424
},
....
Are you looking for something like this?
jq -r '.[].lock_doors[] | [.[]] | @csv'
"OPEN","NR1","NO_DATA","2021-12-29T16:32:23Z",4.28214,51.34426
"CLOSED","NR2","WORKING","2022-01-12T12:32:52Z","2021-12-22T13:13:57Z",4.28247,51.34424
Demo
To add column headers, simply prepend them in an array:
jq -r '["a","b","c"], .[].lock_doors[] | [.[]] | @csv'
"a","b","c"
"OPEN","NR1","NO_DATA","2021-12-29T16:32:23Z",4.28214,51.34426
"CLOSED","NR2","WORKING","2022-01-12T12:32:52Z","2021-12-22T13:13:57Z",4.28247,51.34424
Demo
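One caveat with [.[]]: it emits values in whatever order and number the object happens to have, so rows can end up with different column counts when optional fields (like operational_state_since_in_utc above) are missing. A sketch that lists the fields explicitly, using a trimmed-down copy of the sample data, keeps every row the same width:

```shell
# Selecting fields by name keeps columns aligned across rows,
# even when some doors lack optional fields.
echo '[{"lock_doors":[
        {"state":"OPEN","lock_door_id":"NR1","operational_state":"NO_DATA"},
        {"state":"CLOSED","lock_door_id":"NR2","operational_state":"WORKING"}]}]' \
  | jq -r '.[].lock_doors[] | [.state, .lock_door_id, .operational_state] | @csv'
# → "OPEN","NR1","NO_DATA"
# → "CLOSED","NR2","WORKING"
```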
I would like to change a field in my json file as specified by another json file. My input file is something like:
{"id": 10, "name": "foo", "some_other_field": "value 1"}
{"id": 20, "name": "bar", "some_other_field": "value 2"}
{"id": 25, "name": "baz", "some_other_field": "value 10"}
I have an external override file that specifies how name in certain objects should be overridden, for example:
{"id": 20, "name": "Bar"}
{"id": 10, "name": "foo edited"}
As shown above, the override may be shorter than input, in which case the name should be unchanged. Both files can easily fit into available memory.
Given the above input and the override, I would like to obtain the following output:
{"id": 10, "name": "foo edited", "some_other_field": "value 1"}
{"id": 20, "name": "Bar", "some_other_field": "value 2"}
{"id": 25, "name": "baz", "some_other_field": "value 10"}
Being a beginner with jq, I wasn't really sure where to start. While there are some questions that cover similar ground (the closest being this one), I couldn't figure out how to apply the solutions to my case.
There are many possibilities, but probably the simplest efficient solution uses the built-in function INDEX/2, e.g. as follows:
jq -n --slurpfile dict f2.json '
(INDEX($dict[]; .id) | map_values(.name)) as $d
| inputs
| .name = ($d[.id|tostring] // .name)
' f1.json
This uses inputs with the -n option to read the first file so that each JSON object can be processed in turn.
Since the solution is so short, it should be easy enough to figure it out with the aid of the online jq manual.
Caveat
This solution comes with a caveat: it assumes there are no "collisions" between ids in the dictionary as a result of the use of tostring (e.g. if {"id": 10} and {"id": "10"} both occurred).
If the dictionary does or might have such collisions, then the above solution can be tweaked accordingly, but it is a bit tricky.
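One possible tweak (a sketch, not part of the answer above): key the lookup table by .id|tojson rather than relying on INDEX's implicit tostring, so the number 10 (key "10") and the string "10" (key "\"10\"") can never collide:

```shell
# Sample files: f1.json contains a numeric id 10 and a string id "10",
# f2.json overrides only the numeric one.
cat > f1.json <<'EOF'
{"id": 10, "name": "foo", "some_other_field": "value 1"}
{"id": "10", "name": "bar", "some_other_field": "value 2"}
EOF
cat > f2.json <<'EOF'
{"id": 10, "name": "foo edited"}
EOF
# tojson gives distinct keys for 10 and "10", so only the numeric id is renamed.
jq -nc --slurpfile dict f2.json '
  (INDEX($dict[]; .id|tojson) | map_values(.name)) as $d
  | inputs
  | .name = ($d[.id|tojson] // .name)
' f1.json
# → {"id":10,"name":"foo edited","some_other_field":"value 1"}
# → {"id":"10","name":"bar","some_other_field":"value 2"}
```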
I'm struggling with a JSON file in Robot Framework:
testy.json
"A": {
"AA": "cacaca",
"AB": "cbcbcb"
},
"B": ["ea", "eb"],
"C": "aaa",
"D": "bbb",
"E": "ddd"
I tried to get the type of each value in the JSON file: the 1st is a dict, the 2nd a list, the 3rd a str.
The problem is that when the FOR loop passes the value of "C" ("aaa") in RF, it only passes aaa, skipping the quotation marks, which is an error in Python.
I need the type of each value so I can build an if statement later.
RF code: https://pastebin.com/WdbzXPcW
Cheers!
PS
It's my first question here, so "Hello World!" :D
You are close with your code.
All you have to do is not use the standard way variables are used in RF (${variable}), but use $variable instead:
${json_obj}= Load JSON From File file.json
${dict}= Convert To Dictionary ${json_obj}
${key_list} Get Dictionary Keys ${dict}
FOR ${key} IN @{key_list}
${value}= Set Variable ${dict['${key}']}
${type} Evaluate type($value)
Log To Console ${type}
END
this will give me:
<class 'dict'>
<class 'list'>
<class 'str'>
<class 'str'>
<class 'str'>
if I want a better-looking output, then:
${type} Evaluate type($value).__name__
which will give me:
dict
list
str
str
str
Also note that your json is not a valid json, so I added braces and made it valid (you can always check at https://jsonlint.com/):
{
"A": {
"AA": "cacaca",
"AB": "cbcbcb"
},
"B": ["ea", "eb"],
"C": "aaa",
"D": "bbb",
"E": "ddd"
}