Create a multi-line chart in Kibana Vega with a legend based on inner bucket key values - kibana

I have a query response that provides data in the following format:
{
  "key_as_string": "2022-02-28T00:00:00.000Z",
  "key": 1646006400000,
  "doc_count": 2070,
  "count": {
    "doc_count": 3992,
    "categories_count": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": 1,
          "doc_count": 3070
        },
        {
          "key": 5,
          "doc_count": 316
        },
        {
          "key": 3,
          "doc_count": 178
        },
        {
          "key": 0,
          "doc_count": 26
        },
        {
          "key": 7,
          "doc_count": 26
        },
        {
          "key": 6,
          "doc_count": 20
        },
        {
          "key": 2,
          "doc_count": 10
        }
      ]
    }
  }
}
How do I create a multi-line chart with a legend, where each key value is one line, the y-axis is the doc_count, and the x-axis is the key_as_string time? I also need to handle the fact that the inner buckets omit keys whose count is 0 (missing key values), although the key can only be in the range 0-7.

To create a multi-line chart, refer to the Vega gallery; what you want is a line chart. My advice: look around the gallery for the pieces you want in your visualisation and extract them, and don't forget to refer to the documentation too.
As for the missing 0 buckets, this is something I've encountered before. Try doing the terms aggregation before the date_histogram aggregation: date_histogram produces buckets for 0 counts, whereas terms does not. A rough sketch of both pieces follows below.
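As an illustration only, here is an untested sketch of a Kibana Vega-Lite spec that combines both pieces: the terms aggregation sits outside the date_histogram, the date_histogram keeps empty buckets via min_doc_count, and the color channel produces one line and one legend entry per key. The index name my-index and the fields category and @timestamp are assumptions; adjust them to your mapping. Kibana's Vega editor accepts HJSON, so the comments are allowed.
{
  $schema: "https://vega.github.io/schema/vega-lite/v5.json"
  data: {
    url: {
      // index and field names are assumptions - adjust them to your data
      index: "my-index"
      body: {
        size: 0
        aggs: {
          categories: {
            // terms first, so every key (0-7) gets its own series
            terms: {field: "category", size: 8}
            aggs: {
              per_day: {
                date_histogram: {
                  field: "@timestamp"
                  calendar_interval: "1d"
                  // keep intervals even when the count is 0
                  min_doc_count: 0
                }
              }
            }
          }
        }
      }
    }
    // one input row per terms bucket
    format: {property: "aggregations.categories.buckets"}
  }
  transform: [
    // expand each category's date buckets into one row per (key, day)
    {flatten: ["per_day.buckets"], as: ["day"]}
    {calculate: "datum.day.key_as_string", as: "time"}
    {calculate: "datum.day.doc_count", as: "count"}
  ]
  mark: "line"
  encoding: {
    x: {field: "time", type: "temporal", title: "time"}
    y: {field: "count", type: "quantitative", title: "doc_count"}
    // the color channel is what creates one line and one legend entry per key
    color: {field: "key", type: "nominal", title: "category key"}
  }
}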

Related

Is there a way to transform these 2 arrays by using jq, into a set of objects, like in the example down below?

Example json data:
{
  "data": [
    {
      "place": "FM346",
      "id": [
        "7_day_A",
        "7_day_B",
        "7_day_C",
        "7_day_D"
      ],
      "values": [
        0,
        30,
        23,
        43
      ]
    },
    {
      "place": "LH210",
      "id": [
        "1_day_A",
        "1_day_B",
        "1_day_C",
        "1_day_D"
      ],
      "values": [
        4,
        45,
        100,
        9
      ]
    }
  ]
}
What I need to transform it into:
{
  "data": [
    {
      "place": "FM346",
      "7_day_A": {
        "value": 0
      },
      "7_day_B": {
        "value": 30
      },
      "7_day_C": {
        "value": 23
      },
      "7_day_D": {
        "value": 43
      }
    },
    {
      "place": "LH210",
      "1_day_A": {
        "value": 4
      },
      "1_day_B": {
        "value": 45
      },
      "1_day_C": {
        "value": 100
      },
      "1_day_D": {
        "value": 9
      }
    }
  ]
}
I have tried this:
{
  data: [ .data | .[] |
    {
      place: (.place),
      (.id[]):
        {
          value: (.values[])
        }
    }
  ]
}
(in jqplay: https://jqplay.org/s/f4BBtN9gwmp)
and this:
{
  data: [ .data | .[] |
    {
      place: (.place),
      test: [
        {
          (.id[]):
            {
              value: (.values[])
            }
        }
      ]
    }
  ]
}
(in jqplay: https://jqplay.org/s/pKIvQe1CzgX)
but they aren't grouped the way I wanted, and every value gets assigned to every id rather than to its corresponding one.
I have been trying for some time now, but I'm new to jq and have no idea how to do this transformation. Thanks in advance for any answers.
You can use transpose here, which plays a key role in converting the arrays into key/value pairs:
.data[] |= {place} +
([ .id, .values ] | transpose | map({(.[0]): { value: .[1] } }) | add)
The solution works by transposing the array-of-arrays [ .id, .values ], i.e. converting
[["7_day_A","7_day_B","7_day_C","7_day_D"],[0,30,23,43]]
[["1_day_A","1_day_B","1_day_C","1_day_D"],[4,45,100,9]]
to
[["7_day_A",0],["7_day_B",30],["7_day_C",23],["7_day_D",43]]
[["1_day_A",4],["1_day_B",45],["1_day_C",100],["1_day_D",9]]
With that transformation done, we construct an object whose key is the element at index zero and whose value is an object built from the element at index one, and combine the results together with add.
Demo - jqplay
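If you want to run this outside jqplay, here is a usage sketch from the command line, assuming the input above is saved as input.json:
jq '.data[] |= {place} +
  ([ .id, .values ] | transpose | map({(.[0]): { value: .[1] } }) | add)' input.json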

Best way to retrieve document with nested JSON and limit

Suppose we have a structure:
{
  "nested_items": [
    {
      "nested_sample0": "1",
      "nested_sample1": "test",
      "nested_sample2": "test",
      "nested_sample3": {
        "type": "type"
      },
      "nested_sample": null
    },
    {
      "nested_sample0": "1",
      "nested_sample1": "test",
      "nested_sample2": "test",
      "nested_sample3": {
        "type": "type"
      },
      "nested_sample1": null
    },
    ...
  ],
  "sample1": 1233,
  "id": "ed68ca34-6b59-4687-a557-bdefc9ec2f4b",
  "sample2": "",
  "sample3": "test",
  "sample4": "test",
  "_ts": 1656503348
}
I want to retrieve documents by id with a limit on the "nested_items" field. As far as I know, limit and offset are not supported in subqueries. Is there any way to do this other than splitting it into two queries? Maybe some UDF or something else?
You can use the function ARRAY_SLICE assuming the array is ordered.
Example data:
{
  "name": "John",
  "details": [
    {
      "id": 1
    },
    {
      "id": 2
    },
    {
      "id": 3
    }
  ]
}
Example queries
-- First 2 items from nested array
SELECT c.name, ARRAY_SLICE(c.details, 0, 2) as details
FROM c
-- Last 2 items from nested array
SELECT c.name, ARRAY_SLICE(c.details, ARRAY_LENGTH(c.details) - 2, 2) as details
FROM c
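Applied to the document in the question, a sketch that combines the id filter with the slice; the page size of 10 is only an example value:
-- First 10 nested items of a single document, selected by id
SELECT c.id, ARRAY_SLICE(c.nested_items, 0, 10) AS nested_items
FROM c
WHERE c.id = "ed68ca34-6b59-4687-a557-bdefc9ec2f4b"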

How to retrieve array element from a specific position in CosmosDB document?

Suppose I have a document with the following structure,
{
  "VehicleDetailId": 1,
  "VehicleDetail": [
    {
      "Id": 1,
      "Make": "BMW"
    },
    {
      "Id": 1,
      "Model": "ABDS"
    },
    {
      "Id": 1,
      "Trim": "5.6L/ASMD"
    },
    {
      "Id": 1,
      "Year": 2008
    }
  ]
}
Now I want to retrieve an array element located at a specific position in the VehicleDetail array. For example, I want to retrieve the second element, i.e.,
{
  "Id": 1,
  "Model": "ABDS"
}
or the third,
{
  "Id": 1,
  "Trim": "5.6L/ASMD"
}
How should I write the query to achieve this?
Use the built-in ARRAY_SLICE function, which allows you to select part of an array.
Pass it the array, the starting position, and the number of elements to select.
SELECT ARRAY_SLICE(c.VehicleDetail, 1, 1) As SecondElement
FROM c
Output:
{
  "SecondElement": [
    {
      "Id": 1,
      "Model": "ABDS"
    }
  ]
}
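If you want the object itself rather than a one-element array, indexing directly into the array should also work; treat this as a sketch to verify:
-- Second element as a plain object instead of a one-element array
SELECT c.VehicleDetail[1] AS SecondElement
FROM c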

Mapbox Distance Matrix API returns all zeroes

I'm trying to implement the Mapbox Distance Matrix API as an alternative to Google, since it was becoming too expensive. I've tried to reduce the example to something minimal, with only two values:
{
  code: "Ok",
  distances: [
    [
      0,
      0
    ]
  ],
  durations: [
    [
      0,
      0,
      0,
      0,
      0,
      0
    ]
  ],
  destinations: [
    {
      distance: 404951.186070298,
      name: "",
      location: [
        48.761423,
        5.731594
      ]
    },
    {
      distance: 402983.402982556,
      name: "",
      location: [
        48.761423,
        5.731594
      ]
    }
  ],
  sources: [
    {
      distance: 401905.604376238,
      name: "",
      location: [
        48.761423,
        5.731594
      ]
    }
  ]
}
I see that the returned coordinates are all the same, even though they do not match the input coordinates from my URL, which are 52.08515,4.2826;52.11703,4.28716;52.11736,4.28939. The problem persists with all modes of transportation. Any help would be appreciated!
The format is lon,lat, not lat,lon. I made the same mistake, but the docs are correct. With the coordinates swapped, the request looks like the example below.
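As an illustration using the coordinates from the question swapped into lon,lat order (mapbox/driving and YOUR_TOKEN are placeholders; the annotations parameter requests both the duration and distance matrices):
https://api.mapbox.com/directions-matrix/v1/mapbox/driving/4.2826,52.08515;4.28716,52.11703;4.28939,52.11736?annotations=distance,duration&access_token=YOUR_TOKEN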

Watson doesn't get zeros

Let's say I have a conversation service configured in IBM Watson that is ready to recognize a number given in words and in pieces. For example, the number 1320 can be sent as thirteen twenty or thirteen two zero, etc.
In the first case I'll get something like this from the conversation service:
{
  // ...
  "entities": [
    {
      "entity": "sys-number",
      "location": [
        0,
        5
      ],
      "value": "13",
      "confidence": 1,
      "metadata": {
        "numeric_value": 13
      }
    },
    {
      "entity": "sys-number",
      "location": [
        6,
        12
      ],
      "value": "20",
      "confidence": 1,
      "metadata": {
        "numeric_value": 20
      }
    }
  ]
  // ...
}
In the second case (thirteen two zero):
{
  // ...
  "entities": [
    {
      "entity": "sys-number",
      "location": [
        0,
        5
      ],
      "value": "13",
      "confidence": 1,
      "metadata": {
        "numeric_value": 13
      }
    },
    {
      "entity": "sys-number",
      "location": [
        6,
        14
      ],
      "value": "2",
      "confidence": 1,
      "metadata": {
        "numeric_value": 2
      }
    }
  ]
  // ...
}
The big question here is: Where is my zero?
I know this question has been asked more than once, but none of the answers I found solved my current issues.
I've seen examples where a regular expression could be used, but that is for actual digits; here I have words, and Watson is the one that actually interprets the number.
Is there a way to obtain a third entry in my entities for that zero? Or another workaround? Or some configuration I may be lacking?
I just tried this in Watson Assistant and it now gets the zero. With the @sys-number system entity enabled, my utterance is thirteen two zero and I get these entities back:
entities: [
  0: {
    entity: "sys-number",
    location: [0, 8],
    value: "13",
    confidence: 1,
    metadata: {numeric_value: 13}
  }
  1: {
    entity: "sys-number",
    location: [9, 12],
    value: "2",
    confidence: 1,
    metadata: {numeric_value: 2}
  }
  2: {
    entity: "sys-number",
    location: [13, 17],
    value: "0",
    confidence: 1,
    metadata: {numeric_value: 0}
  }
]
It might be that entity matching has been improved.
