TextFSM: Extracting Data From Different Lines

I am trying to create a TextFSM template to extract data from certain lines in order.
Raw Data
Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
  1    24 CEF720 24 port 1000mb SFP              SS-X6724-SFP       ABC12312253
  2    48 CEF720 48 port 10/100/1000mb Ethernet  SS-X6748-GE-TX     ABC12312FW2
  5     2 Supervisor Engine 720 (Active)         SS-SUP720-3B       ABC12312X5B
  6     2 Supervisor Engine 720 (Hot)            SS-SUP720-3B       ABC123129Y5

Mod MAC addresses                      Hw     Fw           Sw           Status
--- ---------------------------------- ------ ------------ ------------ -------
  1 ffff.4a3e.ffff to ffff.4a3e.ffff   3.1    15.2(18r)S1  18.1(2)SY16  Ok
  2 ffff.584b.ffff to ffff.584b.ffff   2.6    15.2(14r)S5  18.1(2)SY16  Ok
  5 ffff.9df6.ffff to ffff.9df6.ffff   5.3    9.5(4)       18.1(2)SY16  Ok
  6 ffff.c848.ffff to ffff.c848.ffff   5.4    9.4(2)       18.1(2)SY16  Ok

Mod  Sub-Module                  Model              Serial      Hw      Status
---- --------------------------- ------------------ ----------- ------- -------
   1 Centralized Forwarding Card SS-F6700-CFC       ABC12312NDH 4.0     Ok
   2 Centralized Forwarding Card SS-F6700-CFC       ABC12312LKR 4.0     Ok
   5 Policy Feature Card 3       SS-F6K-PFC3B       ABC12312B3D 2.5     Ok
   5 MSFC3 Daughterboard         SS-SUP720          ABC12312B03 2.11    Ok
   6 Policy Feature Card 3       SS-F6K-PFC3B       ABC123123B6 2.3     Ok
   6 MSFC3 Daughterboard         SS-SUP720          ABC1231254K 3.0     Ok

Mod  Online Diag Status
---- -------------------
   1 Pass
   2 Pass
   5 Pass
   6 Pass
Expected Output
[
    {
        "CARDTYPE": "CEF720 24 port 1000mb SFP",
        "FW": "15.2(18r)S1",
        "HW": "3.1",
        "MODEL": "SS-X6724-SFP",
        "MODULE": "1",
        "PORT": "24",
        "SERIAL": "ABC12312253",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    },
    {
        "CARDTYPE": "CEF720 48 port 10/100/1000mb Ethernet",
        "FW": "15.2(14r)S5",
        "HW": "2.6",
        "MODEL": "SS-X6748-GE-TX",
        "MODULE": "2",
        "PORT": "48",
        "SERIAL": "ABC12312FW2",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    },
    {
        "CARDTYPE": "Supervisor Engine 720 (Active)",
        "FW": "9.5(4)",
        "HW": "5.3",
        "MODEL": "SS-SUP720-3B",
        "MODULE": "5",
        "PORT": "2",
        "SERIAL": "ABC12312X5B",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    },
    {
        "CARDTYPE": "Supervisor Engine 720 (Hot)",
        "FW": "9.4(2)",
        "HW": "5.4",
        "MODEL": "SS-SUP720-3B",
        "MODULE": "6",
        "PORT": "2",
        "SERIAL": "ABC123129Y5",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    }
]
Actual Output
[
    {
        "CARDTYPE": "CEF720 24 port 1000mb SFP",
        "FW": "15.2(18r)S1",
        "HW": "3.1",
        "MODEL": "SS-X6724-SFP",
        "MODULE": "1",
        "PORT": "24",
        "SERIAL": "ABC12312253",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    },
    {
        "CARDTYPE": "",
        "FW": "15.2(14r)S5",
        "HW": "2.6",
        "MODEL": "",
        "MODULE": "",
        "PORT": "",
        "SERIAL": "",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    },
    {
        "CARDTYPE": "",
        "FW": "9.5(4)",
        "HW": "5.3",
        "MODEL": "",
        "MODULE": "",
        "PORT": "",
        "SERIAL": "",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    },
    {
        "CARDTYPE": "",
        "FW": "9.4(2)",
        "HW": "5.4",
        "MODEL": "",
        "MODULE": "",
        "PORT": "",
        "SERIAL": "",
        "STATUS": "Ok",
        "SW": "18.1(2)SY16"
    }
]
I used the template below to produce that output. As you can see, CARDTYPE, MODEL, MODULE, PORT, and SERIAL are empty in every record after the first, while the other fields were picked up correctly.
TextFSM Template
Value MODULE (\d+)
Value PORT (\d+)
Value CARDTYPE (.+?)
Value MODEL (\S+)
Value SERIAL (\w+)
Value HW (\S+)
Value FW (\S+)
Value SW (\S+)
Value STATUS (\S+)

Start
 ^Mod\s+Ports\s+Card\s+Type\s+Model\s+Serial -> Info

Info
 ^\s+${MODULE}\s+${PORT}\s+${CARDTYPE}\s+${MODEL}\s+${SERIAL}\s*$$ -> Status

Status
 ^\s+\d+\s+(?:\S+\s+to\s+\S+)\s+${HW}\s+${FW}\s+${SW}\s+${STATUS}\s*$$ -> Continue.Record
 ^Mod\s+Sub-Module -> Start

End
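
For reference, here is a minimal harness that runs a template like this and dumps the records as JSON in the shape shown above (a sketch; the file names show_module.textfsm and show_module.txt are hypothetical, and the textfsm package is assumed to be installed):

# Minimal sketch: run the template with the textfsm package and print the
# parsed records as JSON. File names are placeholders.
import json
import textfsm

with open("show_module.textfsm") as template, open("show_module.txt") as raw:
    fsm = textfsm.TextFSM(template)
    rows = fsm.ParseText(raw.read())

# Map each parsed row onto the Value names declared in the template.
records = [dict(zip(fsm.header, row)) for row in rows]
print(json.dumps(records, indent=4, sort_keys=True))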

Related

Ansible flatten/merge dict but not fully

I have the following dict
dict3: [[[{"x": "1","y": "2"},{"z": 3,"h": 4}],{"w": 5}],[[{"x": "333","y": "444"},{"z": "555","h": "777"}],{"w": "999"}]]
I need to make it like this:
dict": [
{
"x": "1",
"y": "2",
"z": "3",
"h": "4"
"w": 5
}
{
"x": "333",
"y": "444"
"z": 555,
"h": 777
"w": 999
}
]
The following command flattens the structure completely, which I don't want, and it also does not merge x, y, z, h, and w for each group (there can be many groups).
- name: "'unnest' all elements into single list"
  ansible.builtin.debug:
    msg: "all in one list {{ lookup('community.general.flattened', dict) }}"
This is the result with full flattening (and also not merging):
"_raw_params": [
{
"x": "1",
"y": "2"
},
{
"h": 4,
"z": 3
},
{
"w": 5
},
{
"x": "333",
"y": "444"
},
{
"h": "777",
"z": "555"
},
{
"w": "999"
}
]
BTW here is the code
- hosts: localhost
  gather_facts: false
  vars:
    dict: [[[{"x": "1","y": "2"},{"z": 3,"h": 4}],{"w": 5}],[[{"x": "333","y": "444"},{"z": "555","h": "777"}],{"w": "999"}]]
  tasks:
    - name: UNNEST
      set_fact:
        "{{ lookup('community.general.flattened', dict) }}"
Is there a way to do this?
Thanks!
What you want to do is:
flatten each element of your top-level list individually; you then get a list of lists of dicts
combine each element (i.e. each list of dicts) of that result individually, merging all of its dicts into a single one
Both steps are applied to the elements of the top-level list with the map filter.
The following example playbook:
---
- hosts: localhost
  gather_facts: false
  vars:
    dict3: [[[{"x": "1","y": "2"},{"z": 3,"h": 4}],{"w": 5}],[[{"x": "333","y": "444"},{"z": "555","h": "777"}],{"w": "999"}]]
  tasks:
    - debug:
        msg: "{{ dict3 | map('flatten') | map('combine') }}"
Gives:
PLAY [localhost] ***********************************************************************************************************************************************************************************************************************
TASK [debug] ***************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        {
            "h": 4,
            "w": 5,
            "x": "1",
            "y": "2",
            "z": 3
        },
        {
            "h": "777",
            "w": "999",
            "x": "333",
            "y": "444",
            "z": "555"
        }
    ]
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
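
If it helps to trace the two steps outside of Jinja2, here is a rough Python sketch of what map('flatten') | map('combine') does (illustrative only, not part of the Ansible answer):

# Python sketch of "dict3 | map('flatten') | map('combine')".
data = [[[{"x": "1", "y": "2"}, {"z": 3, "h": 4}], {"w": 5}],
        [[{"x": "333", "y": "444"}, {"z": "555", "h": "777"}], {"w": "999"}]]

def flatten(items):
    # Recursively flatten nested lists, like Ansible's flatten filter.
    for item in items:
        if isinstance(item, list):
            yield from flatten(item)
        else:
            yield item

# For each top-level group: flatten it to a list of dicts, then merge the dicts.
merged = [{key: val for d in flatten(group) for key, val in d.items()}
          for group in data]
print(merged)
# [{'x': '1', 'y': '2', 'z': 3, 'h': 4, 'w': 5},
#  {'x': '333', 'y': '444', 'z': '555', 'h': '777', 'w': '999'}]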

How to convert JSON data to tidy format in R

I have never worked with JSON data in R and, unfortunately, I was sent a sample of data like this:
{
"task_id": "104",
"status": "succeeded",
"metrics": {
"requests_made": 2,
"network_errors": 0,
"unique_locations_visited": 0,
"requests_queued": 0,
"queue_items_completed": 2,
"queue_items_waiting": 0,
"issue_events": 9,
"caption": "",
"progress": 100
},
"message": "",
"issue_events": [
{
"id": "1234",
"type": "issue_found",
"issue": {
"name": "policy not enforced",
"type_index": 123456789,
"serial_number": "123456789183923712",
"origin": "https://test.com",
"path": "/robots.txt",
"severity": "low",
"confidence": "certain",
"caption": "/robots.txt",
"evidence": [
{
"type": "FirstOrderEvidence",
"detail": {
"band_flags": [
"in_band"
]
},
"request_response": {
"url": "https://test.com/robots.txt",
"request": [
{
"type": "DataSegment",
"data": "jaghsdjgasdgaskjdgasdgashdgsahdgasjkdgh==",
"length": 313
}
],
"response": [
{
"type": "DataSegment",
"data": "asudasjdgasaaasgdasgaksjdhgasjdgkjghKGKGgKJgKJgKJGKgh==",
"length": 303
}
],
"was_redirect_followed": false,
"request_time": "1234567890"
}
}
],
"internal_data": "jdfhgjhJHkjhdskfhkjhjs0sajkdfhKHKhkj=="
}
},
{
"id": "1235",
"type": "issue_found",
"issue": {
"name": "certificate",
"type_index": 12345845684,
"serial_number": "123456789165637150",
"origin": "https://test.com",
"path": "/",
"severity": "info",
"confidence": "certain",
"description": "The server description a valid, trusted certificate. This issue is purely informational.<br><br>The server presented the following certificates:<br><br><h4>Server certificate</h4><table><tr><td><b>Issued to:</b> </td><td>test.ie, test.com, www.test.com, www.test.ie</td></tr><tr><td><b>Issued by:</b> </td><td>GeoTrust EV RSA CA 2018</td></tr><tr><td><b>Valid from:</b> </td><td>Tue May 12 00:00:00 UTC 2020</td></tr><tr><td><b>Valid to:</b> </td><td>Tue May 17 12:00:00 UTC 2022</td></tr></table><h4>Certificate chain #1</h4><table><tr><td><b>Issued to:</b> </td><td>GeoTrust EV RSA CA 2018</td></tr><tr><td><b>Issued by:</b> </td><td> High Assurance EV Root CA</td></tr><tr><td><b>Valid from:</b> </td><td>Mon Nov 06 12:22:46 UTC 2017</td></tr><tr><td><b>Valid to:</b> </td><td>Sat Nov 06 12:22:46 UTC 2027</td></tr></table><h4>Certificate chain #2</h4><table><tr><td><b>Issued to:</b> </td><td> High Assurance EV Root CA</td></tr><tr><td><b>Issued by:</b> </td><td> High Assurance EV Root CA</td></tr><tr><td><b>Valid from:</b> </td><td>Fri Nov 10 00:00:00 UTC 2006</td></tr><tr><td><b>Valid to:</b> </td><td>Mon Nov 10 00:00:00 UTC 2031</td></tr></table>",
"caption": "/",
"evidence": [],
"internal_data": "sjhdgsajdggJGJHgjfgjhGJHgjhsdgfgjhGJHGjhsdgfjhsgfdsjfg098867hjhgJHGJHG=="
}
},
{
"id": "1236",
"type": "issue_found",
"issue": {
"name": "without flag set",
"type_index": 1254392,
"serial_number": "12345678965616",
"origin": "https://test.com",
"path": "/robots.txt",
"severity": "info",
"confidence": "certain",
"description": "my description text here....",
"caption": "/robots.txt",
"evidence": [
{
"type": "InformationListEvidence",
"request_response": {
"url": "https://test.com/robots.txt",
"request": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 313
}
],
"response": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh=",
"length": 161
},
{
"type": "HighlightSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdf=",
"length": 119
},
{
"type": "DataSegment",
"data": "AasjkdhasjkhkjHKJSDHFJKSDFHKhjkHSKADJFHKhjkhjkh=",
"length": 23
}
],
"was_redirect_followed": false,
"request_time": "178454751191465"
},
"information_items": [
"Other: user_id"
]
}
],
"internal_data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKH=="
}
},
{
"id": "1237",
"type": "issue_found",
"issue": {
"name": "without flag set",
"type_index": 1234567,
"serial_number": "123456789056704",
"origin": "https://test.com",
"path": "/",
"severity": "info",
"confidence": "certain",
"description": "long description here zjkhasdjkh hsajkdhsajkd hasjkdhbsjkdash d",
"caption": "/",
"evidence": [
{
"type": "InformationListEvidence",
"request_response": {
"url": "https://test.com/",
"request": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfhsfdsfdsfdsfdsfdsfsdfdsf",
"length": 303
}
],
"response": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 151
},
{
"type": "HighlightSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh=",
"length": 119
},
{
"type": "DataSegment",
"data": "sdfdsfsdfSDFSDFdSFDS546SDFSDFDSFG657=",
"length": 23
}
],
"was_redirect_followed": false,
"request_time": "123541191466"
},
"information_items": [
"Other: user_id"
]
}
],
"internal_data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsd=="
}
},
{
"id": "1238",
"type": "issue_found",
"issue": {
"name": "parameter pollution",
"type_index": 4137000,
"serial_number": "123456789810290176",
"origin": "https://test.com",
"path": "/robots.txt",
"severity": "low",
"confidence": "firm",
"description": "very long description text here...",
"caption": "/robots.txt [URL path filename]",
"evidence": [
{
"type": "FirstOrderEvidence",
"detail": {
"payload": {
"bytes": "Q3jkeiZkcmg8MQ==",
"flags": 0
},
"band_flags": [
"in_band"
]
},
"request_response": {
"url": "https://test.com/%3fhdz%26drh%3d1",
"request": [
{
"type": "DataSegment",
"data": "W1QOIC8=",
"length": 5
},
{
"type": "HighlightSegment",
"data": "WRMnBGR6JTI2ZHJoJTNkMQ==",
"length": 16
},
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfhcvxxcvklxcvjkxclvjxclkvjxcklvjlxckjvlxckjvklxcjvxcklvjxcklvjxckljvlxckjvxcklvjxckljvxcklvjcklxjvcxkl==",
"length": 298
}
],
"response": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 130
},
{
"type": "HighlightSegment",
"data": "Q4jleiZkcmg9MQ==",
"length": 10
},
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 163
}
],
"was_redirect_followed": false,
"request_time": "51"
}
}
],
"internal_data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh="
}
}
],
"event_logs": [],
"audit_items": []
}
I read it in R using jsonlite:
library(jsonlite)
df_orig <- fromJSON('dast_sample_output.json', flatten = TRUE)
This gives a nested list. I wish to convert it to a data frame in tidy format, with all the arrays and sub-arrays unnested.
If you run str(df_orig), you can see the nested data frames inside.
How do I convert it to a tidy format?
I tried unnest() and purrr but am struggling to get the data into a tidy format for analysis. Any pointers would be highly appreciated.
Cheers,
Use the jsonlite package's fromJSON() function.
Edit: set the option flatten = TRUE.
Edit 2: if the JSON comes from an API response fetched with httr, use content(x, 'text') before flattening.
Here is a full example converting to a data.table:
library(httr)
library(jsonlite)
library(data.table)

get.json <- GET(apicall.text)                 # apicall.text holds the API URL (not shown)
get.json.text <- content(get.json, 'text')    # extract the response body as a string
get.json.flat <- fromJSON(get.json.text, flatten = TRUE)
dt <- as.data.table(get.json.flat)

How do I get the address value for a specific name using JSONPath

JSON
[
    {
        "name": "aaaa",
        "address": "word1",
        "newHouse": false,
        "age": 8
    },
    {
        "name": "bbbb",
        "address": "word2",
        "newHouse": true,
        "age": 9
    },
    {
        "name": "cccc",
        "address": "word3",
        "newHouse": false,
        "age": 12
    }
]
JSONPath
$[*][?(@.name==['aaaa'])].address
Try
$[*][?(@.name=='aaaa')].address
OR
$..[?(@.name=='aaaa')].address
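
If you want to run this filter from code rather than an online evaluator, here is a sketch using Python's jsonpath-ng library (my choice of library, not part of the original answer; filter expressions require its extended parser):

# Sketch with jsonpath-ng (pip install jsonpath-ng); [?(...)] filters
# need the extended parser from jsonpath_ng.ext.
from jsonpath_ng.ext import parse

data = [
    {"name": "aaaa", "address": "word1", "newHouse": False, "age": 8},
    {"name": "bbbb", "address": "word2", "newHouse": True, "age": 9},
    {"name": "cccc", "address": "word3", "newHouse": False, "age": 12},
]

matches = parse('$[?(@.name == "aaaa")].address').find(data)
print([m.value for m in matches])  # ['word1']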

opentsdb api/query downsample result is incorrect

I used the api/query endpoint with downsampling to query the data, but the results of my two requests differ in a way I cannot explain.
My first query:
{
    "start": 1498838400,
    "end": 1501516800,
    "timezone": "Asia/Shanghai",
    "useCalendar": true,
    "delete": false,
    "queries": [
        {
            "aggregator": "sum",
            "metric": "meter.energy.active.forward.z",
            "downsample": "24h-first",
            "rate": false,
            "filters": [
                {
                    "type": "literal_or",
                    "tagk": "deviceId",
                    "filter": "127",
                    "groupBy": true
                }
            ]
        }
    ]
}
The result:
[{
    "metric": "meter.energy.active.forward.z",
    "tags": {
        "deviceTypeId": "1",
        "deviceNo": "340340001750",
        "deviceId": "127",
        "gatewayId": "72"
    },
    "aggregateTags": [],
    "dps": {
        "1498924800": 0.029999999329447746,
        "1499097600": 349577.59375,
        "1499184000": 410578.90625,
        "1499270400": 515834.09375,
        "1499356800": 616553.6875,
        "1499443200": 722792.5,
        "1499529600": 800983.75...}}]
For the second request, I only changed 24h-first to 24h-first-nan, and its result is:
[{
    "metric": "meter.energy.active.forward.z",
    "tags": {
        "deviceTypeId": "1",
        "deviceNo": "340340001750",
        "deviceId": "127",
        "gatewayId": "72"
    },
    "aggregateTags": [],
    "dps": {}
}]
The result I want is:
[{
    "metric": "meter.energy.active.forward.z",
    "tags": {
        "deviceTypeId": "1",
        "deviceNo": "340340001750",
        "deviceId": "127",
        "gatewayId": "72"
    },
    "aggregateTags": [],
    "dps": {
        "1498924800": 0.029999999329447746,
        "1499011200": NaN,
        "1499097600": 349577.59375,
        "1499184000": 410578.90625,
        "1499270400": 515834.09375,
        "1499356800": 616553.6875,
        "1499443200": 722792.5,
        "1499529600": 800983.75...}}]
I also tried deleting "useCalendar", but then the timestamps are not what I want.
Do you see my issue? Can you help? Thank you!
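
For context, queries like the ones above are POSTed to OpenTSDB's /api/query HTTP endpoint. A minimal sketch in Python (host and port are placeholders; requires the requests package):

# POST the downsample query to OpenTSDB's /api/query endpoint.
import requests

query = {
    "start": 1498838400,
    "end": 1501516800,
    "timezone": "Asia/Shanghai",
    "useCalendar": True,
    "queries": [{
        "aggregator": "sum",
        "metric": "meter.energy.active.forward.z",
        "downsample": "24h-first-nan",
        "filters": [{"type": "literal_or", "tagk": "deviceId",
                     "filter": "127", "groupBy": True}],
    }],
}

resp = requests.post("http://opentsdb.example:4242/api/query", json=query)
print(resp.json())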

How to find the last element in a JSON array using a JSONPath expression?

I have a JSON string like the one below:
[
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 0
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 1
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 2
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 3
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 4
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 5
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "test",
        "partition": 0,
        "offset": 6
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "Hi Test",
        "partition": 0,
        "offset": 7
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "Hi Test",
        "partition": 0,
        "offset": 8
    },
    {
        "topic": "inputTopic",
        "key": "0",
        "message": "Hi Test",
        "partition": 0,
        "offset": 9
    }
]
How can I get the last item using a JSONPath expression?
{
    "topic": "inputTopic",
    "key": "0",
    "message": "Hi Test",
    "partition": 0,
    "offset": 9
}
As you can read in the docs, you can either do
$..book[(@.length-1)]
as mentioned by Duncan in his answer, or you can do
$..book[-1:]
Personally I find the first a bit more expressive and the latter a bit quicker to write. The latter also seems to be the intended default. I'd say it's a question of personal taste.
You can use the underlying language's features when using JsonPath, so in JavaScript, for example, the length property is available. You can therefore ask for the last element by requesting length minus one.
Assuming you are using JsonPath from JavaScript, the following query will find the last item:
$.[(@.length-1)]
You can see the query working here on this online JsonPath tester: http://www.jsonquerytool.com/sample/jsonpathlastinarray
What worked for me was:
$..[-1:]
You can test it here: https://jsonpath.com/
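
And if you are querying from Python rather than JavaScript, a sketch with the jsonpath-ng library (an assumed choice, not from the answers above) could look like this:

# Sketch: select the last array element with jsonpath-ng (pip install jsonpath-ng),
# assuming its parser accepts the $[-1:] slice form used above.
import json
from jsonpath_ng import parse

records = json.loads(json_string)  # json_string holds the array shown above
last = [match.value for match in parse('$[-1:]').find(records)]
print(last)  # [{'topic': 'inputTopic', 'key': '0', 'message': 'Hi Test', 'partition': 0, 'offset': 9}]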
